{"citing_id": "2305.03039v1", "cited_id": "1909.09223", "section_title": "Portability And Shareability", "citation": "For instance, INTERPRETML #REFR leverages Jupyter Book [29] to incorporate in-notebook visualizations into its documentation, providing readers with an engaging way to learn about ML model explanations.", "text_before_citation": ["The notebook community has created a vibrant ecosystem to convert notebooks into a wide range of mediums.", "This includes the ability for users to easily publish their notebooks containing interactive visualizations as slides #OTHEREFR , interactive books [29] , and dashboards #OTHEREFR .", "Therefore, given the portability of notebooks, notebook VA tools have the potential to reach a more diverse audience."], "text_after_citation": [], "citing_paper_content": {"title": "Supernova: Design Strategies And Opportunities For Interactive Visualization In Computational Notebooks", "abstract": "Computational notebooks such as Jupyter Notebook have become data scientists' de facto programming environments. Many visualization researchers and practitioners have developed interactive visualization tools that support notebooks. However, little is known about the appropriate design of visual analytics (VA) tools in notebooks. To bridge this critical research gap, we investigate the design strategies in this space by analyzing 161 notebook VA tools and their users' feedback. Our analysis encompasses 64 systems from academic papers and 103 systems sourced from a pool of 55k notebooks containing interactive visualizations that we obtain via scraping 8.6 million notebooks on GitHub. We also examine findings from 15 user studies and user feedback in 379 GitHub issues. Through this work, we identify unique design opportunities and considerations for future notebook VA tools, such as using and manipulating multimodal data in notebooks as well as balancing the degree of visualization-notebook integration. 
Finally, we develop SUPERNOVA, an open-source interactive tool to help researchers explore existing notebook VA tools and search for related work."}, "cited_paper_content": {"title": "Interpretml: A Unified Framework For Machine Learning Interpretability", "abstract": "InterpretML is an open-source Python package which exposes machine learning interpretability algorithms to practitioners and researchers. InterpretML exposes two types of interpretability - glassbox models, which are machine learning models designed for interpretability (ex: linear models, rule lists, generalized additive models), and blackbox explainability techniques for explaining existing systems (ex: Partial Dependence, LIME). The package enables practitioners to easily compare interpretability algorithms by exposing multiple methods under a unified API, and by having a built-in, extensible visualization platform. InterpretML also includes the first implementation of the Explainable Boosting Machine, a powerful, interpretable, glassbox model that can be as accurate as many blackbox models. The MIT licensed source code can be downloaded from this http URL."}, "keywords": ["documentation", "ML model explanations"], "citation_intent": "background"} {"citing_id": "2303.09273v1", "cited_id": "1505.05424", "section_title": "C. Competing Baselines", "citation": "In our work, we use a Gaussian prior distribution with zero mean and unit variance #REFR , and the MC sampling number of 50.", "text_before_citation": ["on Monday, Hist-D would calculate the mean \u00b5_i^t and variance \u03c3_i^t across all previously seen samples of node i at 8 a.m. 
of all Mondays.", "Once the mean and variance for each time slot and node of interest are calculated, these two values can be used to construct the prediction interval for that slot and node with {\u00b5_i^t - \u03c3_i^t, \u00b5_i^t + \u03c3_i^t}.", "Similarly, Hist-W would compute the weekly mean and variances across four weeks of a month to compute the PI.", "Bayesian uncertainty quantification models the uncertainty in the model parameters using a likelihood function constructed by Bayesian modeling #OTHEREFR .", "It also computes the data uncertainty by approximating the probability distribution over the model outputs through sampling and averaging over the resulting models."], "text_after_citation": ["Monte Carlo dropout #OTHEREFR models predictive distributions by randomly switching off neurons in DNNs during testing.", "This generates different model outputs that can be interpreted as samples from a probabilistic distribution, allowing MC dropout to estimate the prediction probability.", "In our work, we added a dropout layer with a rate of 0.3 after the last hidden layer of the base traffic forecasting model and used a sample number of 50.", "Deep Quantile Regression (DQR) can also estimate PIs #OTHEREFR , #OTHEREFR .", "Unlike conventional methods that minimize the averaged residual errors, DQR calculates the prediction errors at a specific quantile of the distribution."], "citing_paper_content": {"title": "Adaptive Modeling Of Uncertainties For Traffic Forecasting", "abstract": "Deep neural networks (DNNs) have emerged as a dominant approach for developing traffic forecasting models. These models are typically trained to minimize error on averaged test cases and produce a single-point prediction, such as a scalar value for traffic speed or travel time. However, single-point predictions fail to account for prediction uncertainty that is critical for many transportation management scenarios, such as determining the best- or worst-case arrival time. 
We present QUANTRAFFIC, a generic framework to enhance the capability of an arbitrary DNN model for uncertainty modeling. QUANTRAFFIC requires little human involvement and does not change the base DNN architecture during deployment. Instead, it automatically learns a standard quantile function during the DNN model training to produce a prediction interval for the single-point prediction. The prediction interval defines a range where the true value of the traffic prediction is likely to fall. Furthermore, QUANTRAFFIC develops an adaptive scheme that dynamically adjusts the prediction interval based on the location and prediction window of the test input. We evaluated QUANTRAFFIC by applying it to five representative DNN models for traffic forecasting across seven public datasets. We then compared QUANTRAFFIC against five uncertainty quantification methods. Compared to the baseline uncertainty modeling techniques, QUANTRAFFIC with base DNN architectures delivers consistently better and more robust performance than the existing ones on the reported datasets."}, "cited_paper_content": {"title": "Weight Uncertainty In Neural Networks", "abstract": "We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. 
We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning."}, "keywords": ["unit variance", "Gaussian prior distribution"], "citation_intent": "method"} {"citing_id": "2304.07199v1", "cited_id": "1905.12892", "section_title": "Experiments", "citation": "In particular, our multi-model bijective network is designed as a multi-scale architecture adopted from #REFR where each scale includes multiple steps of the flow.", "text_before_citation": ["In this section, we first review datasets, implementation, and evaluation benchmarks.", "Then, the ablation studies analyze the effectiveness of our proposed approach.", "Finally, we compare our SOTA results with prior UDA approaches. #OTHEREFR with a MiT-B4 encoder #OTHEREFR .", "Following the UAV protocol of #OTHEREFR , the image size is set to 1024 \u00d7 1024.", "The design of G_x and G_y is identical."], "text_after_citation": ["Every single flow step injected with the domain information is designed as a stack of AcNorm, Invertible 1 \u00d7 1 Convolution, and Residual-style Affine Coupling Layer #OTHEREFR , #OTHEREFR , #OTHEREFR .", "The number of scales and flows in our experiments are set to 4 and 32, respectively.", "The entire framework is optimized by the SGD optimizer on four 48GB-VRAM GPUs, where the batch size of each GPU is set to 8 and the base learning rate is set to 2.5\u00d710^-4 .", "To increase the diversity of training data, several data augmentation techniques #OTHEREFR , #OTHEREFR are adopted in the training process."], "citing_paper_content": {"title": "Crovia: Seeing Drone Scenes From Car Perspective Via Cross-View Adaptation", "abstract": "Understanding semantic scene segmentation of urban scenes captured from the Unmanned Aerial Vehicles (UAV) perspective plays a vital role in building a perception model for 
UAV. With the limitations of large-scale densely labeled data, semantic scene segmentation for UAV views requires a broad understanding of an object from both its top and side views. Adapting from well-annotated autonomous driving data to unlabeled UAV data is challenging due to the cross-view differences between the two data types. Our work proposes a novel Cross-View Adaptation (CROVIA) approach to effectively adapt the knowledge learned from on-road vehicle views to UAV views. First, a novel geometry-based constraint to cross-view adaptation is introduced based on the geometry correlation between views. Second, cross-view correlations from image space are effectively transferred to segmentation space without any requirement of paired on-road and UAV view data via a new Geometry-Constraint Cross-View (GeiCo) loss. Third, the multi-modal bijective networks are introduced to enforce the global structural modeling across views. Experimental results on new cross-view adaptation benchmarks introduced in this work, i.e., SYNTHIA \u2192 UAVID and GTA5 \u2192 UAVID, show the State-of-the-Art (SOTA) performance of our approach over prior adaptation methods."}, "cited_paper_content": {"title": "Alignflow: Cycle Consistent Learning From Multiple Domains Via Normalizing Flows", "abstract": "Given datasets from multiple domains, a key challenge is to efficiently exploit these data sources for modeling a target domain. Variants of this problem have been studied in many contexts, such as cross-domain translation and domain adaptation. We propose AlignFlow, a generative modeling framework that models each domain via a normalizing flow. The use of normalizing flows allows for a) flexibility in specifying learning objectives via adversarial training, maximum likelihood estimation, or a hybrid of the two methods; and b) learning and exact inference of a shared representation in the latent space of the generative model. 
We derive a uniform set of conditions under which AlignFlow is marginally-consistent for the different learning objectives. Furthermore, we show that AlignFlow guarantees exact cycle consistency in mapping datapoints from a source domain to target and back to the source domain. Empirically, AlignFlow outperforms relevant baselines on image-to-image translation and unsupervised domain adaptation and can be used to simultaneously interpolate across the various domains using the learned representation."}, "keywords": ["multi-model bijective network"], "citation_intent": "method"} {"citing_id": "2304.03210v1", "cited_id": "1205.4174", "section_title": "Optimal Experimental Design For Biology Network Recovery", "citation": "Hauser and B\u00fchlmann #REFR similarly propose a utility function based on the number of oriented edges of a skeleton graph.", "text_before_citation": ["Another way to choose the optimal intervention is to choose the intervention that leads to the maximal number of oriented edges. Ness et al.", "#OTHEREFR use optimal experimental design to recover protein signaling networks #OTHEREFR .", "They use a utility function based on the expected information gain of an intervention given the observational MEC and other interventions in the batch. This algorithm, however, has factorial dependence on batch size. Ghassami et al.", "#OTHEREFR use the expected number of oriented edges of an essential graph as the utility function.", "The essential graph of the ground truth network is first estimated using a constraint based method like the PC algorithm."], "text_after_citation": [], "citing_paper_content": {"title": "Causal Discovery And Optimal Experimental Design For Genome-Scale Biological Network Recovery", "abstract": "Causal discovery of genome-scale networks is important for identifying pathways from genes to observable traits-e.g. differences in cell function, disease, drug resistance and others. 
Causal learners based on graphical models rely on interventional samples to orient edges in the network. However, these models have not been shown to scale up to the size of the genome, which is on the order of 10^3-10^4 genes. We introduce a new learner, SP-GIES, that jointly learns from interventional and observational datasets and achieves almost 4x speedup against an existing learner for 1,000 node networks. SP-GIES achieves an AUC-PR score of 0.91 on 1,000 node networks, and scales up to 2,000 node networks; this is 4x larger than existing works. We also show how SP-GIES improves downstream optimal experimental design strategies for selecting interventional experiments to perform on the system. This is an important step forward in realizing causal discovery at scale via autonomous experimental design."}, "cited_paper_content": {"title": "Two Optimal Strategies For Active Learning Of Causal Models From Interventional Data", "abstract": "From observational data alone, a causal DAG is only identifiable up to Markov equivalence. Interventional data generally improves identifiability; however, the gain of an intervention strongly depends on the intervention target, that is, the intervened variables. We present active learning (that is, optimal experimental design) strategies calculating optimal interventions for two different learning goals. The first one is a greedy approach using single-vertex interventions that maximizes the number of edges that can be oriented after each intervention. The second one yields in polynomial time a minimum set of targets of arbitrary size that guarantees full identifiability. This second approach proves a conjecture of Eberhardt (2008) indicating the number of unbounded intervention targets which is sufficient and in the worst case necessary for full identifiability. 
In a simulation study, we compare our two active learning approaches to random interventions and an existing approach, and analyze the influence of estimation errors on the overall performance of active learning."], "citing_paper_content": ..., "keywords": ["skeleton graph", "oriented edges"], "citation_intent": "background"} {"citing_id": "2305.02706v1", "cited_id": "1410.4258", "section_title": "Ii. Fap Channel Model In Diffusive Molecular Communication Systems", "citation": "The value of D is determined by the temperature, the fluid viscosity, and the molecule's Stokes radius, see #REFR .", "text_before_citation": ["Readers can refer to #OTHEREFR or #OTHEREFR for more details on diffusion channel modeling.", "The PDE approach treats the distribution of particles as a continuous function/data and uses PDE to capture its evolution.", "Namely, if we denote the concentration function by c(r, t; r_0), the diffusion channel can be modeled by a diffusion-advection equation:", "\u2202_t c(r, t; r_0) + v(r, t) \u00b7 \u2207c(r, t; r_0) = D\u2207^2 c(r, t; r_0), (2)", "where r_0 is the point where the diffusion starts, v is the velocity field of the fluid medium which is assumed to be incompressible, \u2207 and \u2207^2 are the gradient and the Laplacian operators, respectively, and D is the diffusion coefficient."], "text_after_citation": ["On the other hand, if we treat each particle individually, we can use the SDE model to capture its random trajectory X_t.", "A suitable SDE model for the trajectory is the It\u00f4 diffusion process.", "Let us recap briefly that an It\u00f4 diffusion in Euclidean space R^d is a stochastic process X_t satisfying an SDE of the form", "EQUATION", "where B_t is a d-dimensional standard Brownian motion."], "citing_paper_content": {"title": "On Vertically-Drifted First Arrival Position Distribution In Diffusion Channels", "abstract": "Recent studies show that stable distributions are successful in modeling heavy-tailed or impulsive noise. 
Investigation of the stability of a probability distribution can be greatly facilitated if the corresponding characteristic function (CF) has a closed-form expression. We explore a new family of distributions called the Vertically-Drifted First Arrival Position (VDFAP) distribution, which can be viewed as a generalization of symmetric alpha-stable (S\u03b1S) distribution with stability parameter \u03b1 = 1. In addition, VDFAP distribution has a clear physical interpretation when we consider first-hitting problems of particles following Brownian motion with a driving drift. Inspired by the Fourier relation between the probability density function and CF of Student's t-distribution, we extract an integral representation for the VDFAP probability density function. Then, we exploit the Hankel transform to derive a closed-form expression for the CF of VDFAP. From the CF, we discover that VDFAP possesses some interesting stability properties, which are in a weaker form than S\u03b1S. This calls for a generalization of the theory on alpha-stable distributions."}, "cited_paper_content": {"title": "A Comprehensive Survey Of Recent Advancements In Molecular Communication", "abstract": "With much advancement in the field of nanotechnology, bioengineering, and synthetic biology over the past decade, microscale and nanoscale devices are becoming a reality. Yet the problem of engineering a reliable communication system between tiny devices is still an open problem. At the same time, despite the prevalence of radio communication, there are still areas where traditional electromagnetic waves find it difficult or expensive to reach. Points of interest in industry, cities, and medical applications often lie in embedded and entrenched areas, accessible only by ventricles at scales too small for conventional radio waves and microwaves, or they are located in such a way that directional high frequency systems are ineffective. 
Inspired by nature, one solution to these problems is molecular communication (MC), where chemical signals are used to transfer information. Although biologists have studied MC for decades, it has only been researched for roughly 10 years from a communication engineering lens. A significant number of papers have been published to date, but owing to the need for interdisciplinary work, many of the results are preliminary. In this survey, the recent advancements in the field of MC engineering are highlighted. First, the biological, chemical, and physical processes used by an MC system are discussed. This includes different components of the MC transmitter and receiver, as well as the propagation and transport mechanisms. Then, a comprehensive survey of some of the recent works on MC through a communication engineering lens is provided. The survey ends with a technology readiness analysis of MC and future research directions."}, "keywords": ["fluid viscosity", "molecule's Stokes radius"], "citation_intent": "background"} {"citing_id": "2304.03394v1", "cited_id": "1906.08237", "section_title": "Transformer-Based Models", "citation": "XLNet #REFR was based on an auto-regressive model, which predicts future behavior based on the past, and used Transformer-XL.", "text_before_citation": ["Fine-tuning means to further train the pre-trained BERT model using our data.", "Section 5.1 includes the details of our BERT model and its hyperparameters.", "There have been numerous models extending BERT. 
In our experiments, we used RoBERTa and XLNet.", "As main differences from BERT, RoBERTa (Robustly optimized BERT approach) #OTHEREFR removed the NSP and replaced the static masking (in MLM) of BERT with dynamic masking.", "In summary, RoBERTa has been shown to be more robust than BERT, it modified parts of BERT, and it was trained using more data."], "text_after_citation": ["XLNet also introduced permutation language modeling, where all tokens (not only masked tokens) are predicted in random order, rather than sequentially."], "citing_paper_content": {"title": "Deep Learning For Opinion Mining And Topic Classification Of Course Reviews", "abstract": "Student opinions for a course are important to educators and administrators, regardless of the type of the course or the institution. Reading and manually analyzing open-ended feedback becomes infeasible for massive volumes of comments at institution level or online forums. In this paper, we collected and pre-processed a large number of course reviews publicly available online. We applied machine learning techniques with the goal to gain insight into student sentiments and topics. Specifically, we utilized current Natural Language Processing (NLP) techniques, such as word embeddings and deep neural networks, and state-of-the-art BERT (Bidirectional Encoder Representations from Transformers), RoBERTa (Robustly optimized BERT approach) and XLNet (Generalized Autoregression Pre-training). We performed extensive experimentation to compare these techniques versus traditional approaches. This comparative study demonstrates how to apply modern machine learning approaches for sentiment polarity extraction and topic-based classification utilizing course feedback. For sentiment polarity, the top model was RoBERTa with 95.5% accuracy and 84.7% F1-macro, while for topic classification, an SVM (Support Vector Machine) was the top classifier with 79.8% accuracy and 80.6% F1-macro. 
We also provided an in-depth exploration of the effect of certain hyperparameters on the model performance and discussed our observations. These findings can be used by institutions and course providers as a guide for analyzing their own course feedback using NLP models towards self-evaluation and improvement."}, "cited_paper_content": {"title": "Xlnet: Generalized Autoregressive Pretraining For Language Understanding", "abstract": "With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suffers from a pretrain-finetune discrepancy. In light of these pros and cons, we propose XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. 
Empirically, under comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a large margin, including question answering, natural language inference, sentiment analysis, and document ranking."}, "keywords": ["auto-regressive model", "XLNet"], "citation_intent": "method"} {"citing_id": "2303.00902v1", "cited_id": "1707.00086", "section_title": "Case 1: Controversial Topics", "citation": "We have a similar finding in Macron (2017) and Yellow Vests (2019), where the missing tweets are against Macron, such as #MacronDemission, which means \"Macron Resign\", and #MacronLeaks, a coordinated disinformation campaign #REFR , or supportive of his opponent Marine Le Pen.", "text_before_citation": ["Topics: To answer RQ3, we performed a qualitative topic analysis on the missing tweets to better understand the content the studies may not be able to capture.", "To do that, we compared the content in the missing tweets with the recollected datasets using word shift graphs (See Section 3).", "For brevity, we only present the results with 15 datasets where we observe clear differences and only the top 20 n-grams.", "We observe that most of the missing tweets contain slogans against the topic.", "For instance, missing tweets in Biden (2020), Hillary (2016), and in Pro-Hillary (2016) appear to be sourced from users in favor of Donald Trump as the common hashtags include #MAGA, #Trump2020, #NeverHillary, and #CrookedHillary."], "text_after_citation": ["This suggests that data persistence may affect studies analyzing such counter-groups.", "We observe that the missing tweets in the datasets related to Trump politics such as Kavanaugh (2018) and Paris Agreement (2018) also contain tweets that are supportive of the decisions, such as #ConfirmKavanaughNow and #AmericaFirst.", "The missing tweets in the conspiracy theory-related datasets are supportive of the theories (e.g., #GreatAwakening, #Pedogate) while the tweets in the recollected datasets contain the 2-gram 
\"conspiracy-theory\", suggesting that the remaining tweets are mostly critical of the theories.", "Interestingly, the recollected Gun Control dataset is more likely to contain the hashtag #GunViolence, which is a more extreme version of the hashtag #GunControl.", "We also find that the missing tweets in the Netflix dataset contain the 2-grams \"Full-access\", \"Netflix-premium\", and \"watch-netflix."], "citing_paper_content": {"title": "The Impact Of Data Persistence Bias On Social Media Studies", "abstract": "Social media studies often collect data retrospectively to analyze public opinion. Social media data may decay over time and such decay may prevent the collection of the complete dataset. As a result, the collected dataset may differ from the complete dataset and the study may suffer from data persistence bias. Past research suggests that the datasets collected retrospectively are largely representative of the original dataset in terms of textual content. However, no study analyzed the impact of data persistence bias on social media studies such as those focusing on controversial topics. In this study, we analyze the data persistence and the bias it introduces on the datasets of three types: controversial topics, trending topics, and framing of issues. We report which topics are more likely to suffer from data persistence among these datasets. We quantify the data persistence bias using the change in political orientation, the presence of potentially harmful content and topics as measures. We found that controversial datasets are more likely to suffer from data persistence and they lean towards the political left upon recollection. The turnout of the data that contain potentially harmful content is significantly lower on non-controversial datasets. Overall, we found that the topics promoted by right-aligned users are more likely to suffer from data persistence. Account suspensions are the primary factor contributing to data removals, if not the only one. 
Our results emphasize the importance of accounting for the data persistence bias by collecting the data in real time when the dataset employed is vulnerable to data persistence bias."}, "cited_paper_content": {"title": "Disinformation And Social Bot Operations In The Run Up To The 2017 French Presidential Election", "abstract": "Recent accounts from researchers, journalists, as well as federal investigators, reached a unanimous conclusion: social media are systematically exploited to manipulate and alter public opinion. Some disinformation campaigns have been coordinated by means of bots, social media accounts controlled by computer scripts that try to disguise themselves as legitimate human users. In this study, we describe one such operation that occurred in the run up to the 2017 French presidential election. We collected a massive Twitter dataset of nearly 17 million posts, posted between 27 April and 7 May 2017 (Election Day). We then set to study the MacronLeaks disinformation campaign: By leveraging a mix of machine learning and cognitive behavioral modeling techniques, we separated humans from bots, and then studied the activities of the two groups independently, as well as their interplay. We provide a characterization of both the bots and the users who engaged with them, and oppose it to those users who didn\u2019t. Prior interests of disinformation adopters pinpoint to the reasons of scarce success of this campaign: the users who engaged with MacronLeaks are mostly foreigners with pre-existing interest in alt-right topics and alternative news media, rather than French users with diverse political views. 
Concluding, anomalous account usage patterns suggest the possible existence of a black market for reusable political disinformation bots."}, "keywords": ["coordinated disinformation campaign"], "citation_intent": "result"} {"citing_id": "2303.08233v2", "cited_id": "1911.02116", "section_title": "Sub-Task 1", "citation": "The baseline model, XLM-RoBERTa-base (XLM-R-base) #REFR , was trained and fine-tuned by minimizing the log-likelihood loss.", "text_before_citation": ["The starter kit for sub-task 1 can be found in the NL4Opt repository."], "text_after_citation": ["As part of the pilot study, we reported the baseline model's performance on the test set when evaluated on the source domain, target domain, and entire test set for all entity types (i.e., constraint direction, limit, etc.).", "Based on this preliminary analysis, the objective name was the most difficult to identify potentially due to its ambiguity.", "We expect the greatest improvements would arise from methods that are capable of accurately recognizing the objective names and their spans.", "Evaluation: This baseline achieved an F1 score of 0.906 on the test split."], "citing_paper_content": {"title": "Nl4Opt Competition: Formulating Optimization Problems Based On Their Natural Language Descriptions", "abstract": "The Natural Language for Optimization (NL4Opt) Competition was created to investigate methods of extracting the meaning and formulation of an optimization problem based on its text description. Specifically, the goal of the competition is to increase the accessibility and usability of optimization solvers by allowing non-experts to interface with them using natural language. We separate this challenging goal into two sub-tasks: (1) recognize and label the semantic entities that correspond to the components of the optimization problem; (2) generate a meaning representation (i.e. a logical form) of the problem from its detected problem entities. 
The first task aims to reduce ambiguity by detecting and tagging the entities of the optimization problems. The second task creates an intermediate representation of the linear programming (LP) problem that is converted into a format that can be used by commercial solvers. In this report, we present the LP word problem dataset and shared tasks for the NeurIPS 2022 competition. Furthermore, we investigate and compare the performance of the ChatGPT large language model against the winning solutions. Through this competition, we hope to bring interest towards the development of novel machine learning applications and datasets for optimization modeling."}, "cited_paper_content": {"title": "Unsupervised Cross-Lingual Representation Learning At Scale", "abstract": "This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +13.8% average accuracy on XNLI, +12.3% average F1 score on MLQA, and +2.1% average F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 11.8% in XNLI accuracy for Swahili and 9.2% for Urdu over the previous XLM model. We also present a detailed empirical evaluation of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. 
We will make XLM-R code, data, and models publicly available."}, "keywords": ["log-likelihood loss", "baseline model"], "citation_intent": "method"} {"citing_id": "2303.08999v1", "cited_id": "1802.04799", "section_title": "B. Constraint-Based Optimization Of Deployments On Gpu", "citation": "To the best of our knowledge, this is the first to consider these constraints for deployments of DNN models on GPUs, compared with previous arts, e.g., #REFR .", "text_before_citation": ["EQUATION", "These constraints are usually ignored by designers, which wastes lots of optimization workloads.", "For examples, for T \u2208 [T r , T sm ], these T values are legal while on-board resources are not fully utilized and the system parallelism can be further improved.", "With the above constraints, the feasible domain of the block sizes is shrunken significantly.", "The visualization view of the constraints are shown in Fig. 7 ."], "text_after_citation": ["With the target of minimizing the inference latency, we tend to build a regression model with respect to candidate values of nx, ny, and nz.", "The key challenge is that the clear form is the objective function is unknown because of the invisible execution process of GPU and CUDA programming Fig. 7 The visualized solution space.", "The solution points below the dotted points are legal configurations.", "model.", "Bayesian optimization is adopted in this paper as the searching algorithm to search the optimal configuration of the blocks with Gaussian process (GP) model utilized as the surrogate model #OTHEREFR ."], "citing_paper_content": {"title": "A High-Performance Accelerator For Super-Resolution Processing On Embedded Gpu", "abstract": "Recent years have witnessed impressive progress in super-resolution (SR) processing. However, its real-time inference requirement sets a challenge not only for the model design but also for the on-chip implementation. 
In this paper, we implement a full-stack SR acceleration framework on embedded GPU devices. The special dictionary learning algorithm used in SR models was analyzed in detail and accelerated via a novel dictionary selective strategy. Besides, the hardware programming architecture together with the model structure is analyzed to guide the optimal design of computation kernels to minimize the inference latency under the resource constraints. With these novel techniques, the communication and computation bottlenecks in the deep dictionary learning-based SR models are tackled perfectly. The experiments on the edge embedded NVIDIA NX and 2080Ti show that our method outperforms the state-of-the-art NVIDIA TensorRT significantly, and can achieve real-time performance."}, "cited_paper_content": {"title": "Tvm: An Automated End-To-End Optimizing Compiler For Deep Learning", "abstract": "There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms -- such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs) -- requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that are competitive with state-of-the-art, hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPUs. 
We also demonstrate TVM's ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies."}, "keywords": ["GPUs", "DNN models"], "citation_intent": "result"} {"citing_id": "2303.00938v1", "cited_id": "1903.02958", "section_title": "Related Work", "citation": "Re-Lie #REFR performs normalizing flow in R 3 and transform a Euclidean vector into an element of SO(3).", "text_before_citation": ["However, non of the grasping methods can generalize to a large number of objects under raw vision input.", "In comparison, our goal-conditioned grasp execution method is the first vision-based pipeline to achieve universal generalization by leveraging a teacher-student distillation trick, object curriculum learning, and state canonicalization.", "Normalizing Flow Normalizing flow is a powerful technique for modeling highly complex distributions, suiting our needs for grasp proposal generation.", "Building normalizing flow in the Euclidean space has been well studied #OTHEREFR .", "However, it is much harder to build normalizing flow in a non-Euclidean space such as SO(3)."], "text_after_citation": ["However, this practice introduces discontinuity to the flow, because it is equivalent to using the axis angle representation.", "Therefore, our grasp proposal module decouples rotation from translation and joint angles, and model these distributions separately with IPDF #OTHEREFR and normalizing flow."], "citing_paper_content": {"title": "Unidexgrasp: Universal Robotic Dexterous Grasping Via Learning Diverse Proposal Generation And Goal-Conditioned Policy", "abstract": "Figure 1. UniDexGrasp via grasp proposal generation and goal-conditioned execution. 
Left (grasp proposals): each figure shows for an object we generate diverse and high-quality grasp poses that vary greatly in rotation, translation and articulation states; right (grasp execution): given a grasp goal pose, our highly generalizable goal-conditioned grasping policy can grasp the object in the way specified by the goal, as shown in the green and blue trajectories and their corresponding goals."}, "cited_paper_content": {"title": "Reparameterizing Distributions On Lie Groups", "abstract": "Reparameterizable densities are an important way to learn probability distributions in a deep learning setting. For many distributions it is possible to create low-variance gradient estimators by utilizing a `reparameterization trick'. Due to the absence of a general reparameterization trick, much research has recently been devoted to extend the number of reparameterizable distributional families. Unfortunately, this research has primarily focused on distributions defined in Euclidean space, ruling out the usage of one of the most influential class of spaces with non-trivial topologies: Lie groups. In this work we define a general framework to create reparameterizable densities on arbitrary Lie groups, and provide a detailed practitioners guide to further the ease of usage. We demonstrate how to create complex and multimodal distributions on the well known oriented group of 3D rotations, SO(3), using normalizing flows. 
Our experiments on applying such distributions in a Bayesian setting for pose estimation on objects with discrete and continuous symmetries, showcase their necessity in achieving realistic uncertainty estimates."}, "keywords": ["normalizing flow"], "citation_intent": "background"} {"citing_id": "2304.08352v1", "cited_id": "1806.03822", "section_title": "Results Discussion", "citation": "This result is still significantly less than the state-of-the-art results across many other NLP tasks though (e.g., for the SQuAD dataset #REFR ), hinting that there is still room for significant improvement here.", "text_before_citation": ["As expected, due to the generative nature of the majority of the implicit examples, the extractive NPR approach performed significantly better on explicit examples (F1 score of 0.150 for implicit examples versus 0.625 for explicit examples).", "In order to gain an insight into the contribution of individual features, we report the mean results across all configurations for when this feature is set and when it is unset.", "The mean F1-scores for different options across all configurations in which they are enabled were 0.44 with word vectors, 0.39 without word vectors, 0.42 with NNAN candidate generation, 0.41 with noun chunk candidate generation, and 0.45, 0.42 and 0.38 for a negative sampling factor of 1, 2 and 4 respectively.", "Thus, on average, across all configurations the best performing ones used word vectors, NNAN candidate generation and downsampled negative to positive samples to a 1:1 ratio.", "The NPR approach clearly performs well for explicit examples, with the top configuration achieving an F1-Score of 0.625."], "text_after_citation": ["The fact that the variations of the NPR approach that we evaluated on had little impact on the results achieved suggests that a more distinct strategy may be required, rather than a variation of the SOTA.", "Further, the contrast in performance on explicit versus implicit examples, illustrates the 
importance of considering non-span or grounded generative examples in a realistic evaluation of the MIDR task.", "Whilst, it is clear that the extractive NPR approach will achieve a zero score for this class of examples in the Exact Match metric, the top F1-Score of 0.15, is also significantly lower than for explicit examples.", "Also, the top performing configurations for the grouping All Identifiers differ from those for the grouping All Descriptions showing the importance of considering examples without a description in MIDR datasets."], "citing_paper_content": {"title": "What Makes A Good Dataset For Symbol Description Reading?", "abstract": "The usage of mathematical formulas as concise representations of a document's key ideas is common practice. Correctly interpreting these formulas, by identifying mathematical symbols and extracting their descriptions, is an important task in document understanding. This paper makes the following contributions to the mathematical identifier description reading (MIDR) task: (i) introduces the Math Formula Question Answering Dataset (MFQuAD) with 7508 annotated identifier occurrences; (ii) describes novel variations of the noun phrase ranking approach for the MIDR task; (iii) reports experimental results for the SOTA noun phrase ranking approach and our novel variations of the approach, providing problem insights and a performance baseline; (iv) provides a position on the features that make an effective dataset for the MIDR task."}, "cited_paper_content": {"title": "Know What You Don'T Know: Unanswerable Questions For Squad", "abstract": "Extractive reading comprehension systems can often locate the correct answer to a question in a context document, but they also tend to make unreliable guesses on questions for which the correct answer is not stated in the context. Existing datasets either focus exclusively on answerable questions, or use automatically generated unanswerable questions that are easy to identify. 
To address these weaknesses, we present SQuAD 2.0, the latest version of the Stanford Question Answering Dataset (SQuAD). SQuAD 2.0 combines existing SQuAD data with over 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD 2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering. SQuAD 2.0 is a challenging natural language understanding task for existing models: a strong neural system that gets 86% F1 on SQuAD 1.1 achieves only 66% F1 on SQuAD 2.0."}, "keywords": ["NLP tasks"], "citation_intent": "result"} {"citing_id": "2304.11465v1", "cited_id": "1512.03012", "section_title": "V. Evaluation", "citation": "The results show that Pred-NBV is able to outperform the baselines significantly using large scale models from the Shapenet #REFR dataset and with real world 3D LiDAR data.", "text_before_citation": ["In this section we present results on the evaluation of the Pred-NBV pipeline.", "We start with a qualitative example followed by a discussion of the performance of the individual modules against respective baseline methods."], "text_after_citation": ["Figure 2 show the reconstructed point cloud of a C17 airplane the path followed by a UAV and in AirSim #OTHEREFR .", "We create candidate poses on three concentric rings at different heights around the center of the partially observed point cloud.", "The candidate poses change as more of the object is visible.", "As shown in Figure 6 , Pred-NBV is able to observe more points than the NBV planner without prediction in the same time budget."], "citing_paper_content": {"title": "Pred-Nbv: Prediction-Guided Next-Best-View Planning For 3D Object Reconstruction", "abstract": "Prediction-based active perception has shown the potential to improve the navigation efficiency and safety of the robot by anticipating the uncertainty in the unknown environment. 
The existing works for 3D shape prediction make an implicit assumption about the partial observations and therefore cannot be used for real-world planning and do not consider the control effort for next-best-view planning. We present Pred-NBV, a realistic object shape reconstruction method consisting of PoinTr-C, an enhanced 3D prediction model trained on the ShapeNet dataset, and an information and control effort-based next-best-view method to address these issues. Pred-NBV shows an improvement of 25.46% in object coverage over the traditional methods in the AirSim simulator, and performs better shape completion than PoinTr, the stateof-the-art shape completion model, even on real data obtained from a Velodyne 3D LiDAR mounted on DJI M600 Pro."}, "cited_paper_content": {"title": "Shapenet: An Information-Rich 3D Model Repository", "abstract": "We present ShapeNet: a richly-annotated, large-scale repository of shapes represented by 3D CAD models of objects. ShapeNet contains 3D models from a multitude of semantic categories and organizes them under the WordNet taxonomy. It is a collection of datasets providing many semantic annotations for each 3D model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, keywords, as well as other planned annotations. Annotations are made available through a public web-based interface to enable data visualization of object attributes, promote data-driven geometric analysis, and provide a large-scale quantitative benchmark for research in computer graphics and vision. At the time of this technical report, ShapeNet has indexed more than 3,000,000 models, 220,000 models out of which are classified into 3,135 categories (WordNet synsets). 
In this report we describe the ShapeNet effort as a whole, provide details for all currently available datasets, and summarize future plans."}, "keywords": ["LiDAR data", "real world 3D"], "citation_intent": "result"} {"citing_id": "2304.07435v2", "cited_id": "2001.02613", "section_title": "Related Work", "citation": "Patil et al.'s #REFR work on depth prediction and completion demonstrates that recurrent modules enforce consistency and improve accuracy even without special loss functions.", "text_before_citation": ["But it does have the advantage of not relying on camera poses or external structures. Lai et al. #OTHEREFR and Cao et al.", "#OTHEREFR pair a warping loss with a convolutional recurrent module which allows the network to learn temporal affinities more effectively.", "In fact, the use of recurrent structures is another way of enforcing temporal consistency in video tasks #OTHEREFR . Zhang et al.", "#OTHEREFR propagate spatio-temporal depth information across frames using a novel convolutional LSTM module which is trained using an adversarial loss.", "While this loss does not have any explicit temporal constraint like the warping loss, it is nonetheless shown to improve depth consistency."], "text_after_citation": ["Convolutional LSTMs, however, can be difficult to train due to their large memory requirements.", "Depth Fusion: Fusion methods #OTHEREFR achieve scene reconstruction by blending weighted signed distance field (SDF) volumes for each frame.", "The use of SDF volumes makes them scale poorly to large scenes and high resolutions. Keller et al. #OTHEREFR and Lefloch et al. 
#OTHEREFR propose point clouds as an alternative to SDFs.", "Traditional fusion methods, however, are not directly applicable to depth reconstruction as their focus is geometric reconstruction over multiple frames.", "Completeness, in the sense of reconstructing each visible point in a frame, is not a requirement."], "citing_paper_content": {"title": "Temporally Consistent Online Depth Estimation Using Point-Based Fusion", "abstract": "Depth estimation is an important step in many computer vision problems such as 3D reconstruction, novel view synthesis, and computational photography. Most existing work focuses on depth estimation from single frames. When applied to videos, the result lacks temporal consistency, showing flickering and swimming artifacts. In this paper we aim to estimate temporally consistent depth maps of video streams in an online setting. This is a difficult problem as future frames are not available and the method must choose between enforcing consistency and correcting errors from previous estimations. The presence of dynamic objects further complicates the problem. We propose to address these challenges by using a global point cloud that is dynamically updated each frame, along with a learned fusion approach in image space. Our approach encourages consistency while simultaneously allowing updates to handle errors and dynamic objects. Qualitative and quantitative results show that our method achieves state-of-the-art quality for consistent video depth estimation."}, "cited_paper_content": {"title": "Don'T Forget The Past: Recurrent Depth Estimation From Monocular Video", "abstract": "Autonomous cars need continuously updated depth information. Thus far, the depth is mostly estimated independently for a single frame at a time, even if the method starts from video input. Our method produces a time series of depth maps, which makes it an ideal candidate for online learning approaches. 
In particular, we put three different types of depth estimation (supervised depth prediction, self-supervised depth prediction, and self-supervised depth completion) into a common framework. We integrate the corresponding networks with a convolutional LSTM such that the spatiotemporal structures of depth across frames can be exploited to yield a more accurate depth estimation. Our method is flexible. It can be applied to monocular videos only or be combined with different types of sparse depth patterns. We carefully study the architecture of the recurrent network and its training strategy. We are first to successfully exploit recurrent networks for real-time self-supervised monocular depth estimation and completion. Extensive experiments show that our recurrent method outperforms its image-based counterpart consistently and significantly in both self-supervised scenarios. It also outperforms previous depth estimation methods of the three popular groups."}, "keywords": ["depth prediction"], "citation_intent": "background"} {"citing_id": "2303.08757v1", "cited_id": "1505.04597", "section_title": "Approach 7: 4D Mj-Net", "citation": "The rest of the model's structure follows the classic U-Net architecture #REFR with a series of 2D-Conv and Transpose layer blocks plus skip connection.", "text_before_citation": ["The last layer of each block diminishes by a factor equal to the stride value in the time dimension.", "The stride values are 2, 3, and 5, respectively, for each block.", "The output of the 4D-Conv layers is a tensor where the temporal dimension has been squeezed and reduced; information are extrapolated from the temporal dimension.", "Thus the output resulting from the 4D-Conv layers contains only three dimensions (X \u00d7 Y \u00d7 Z) plus the channel dimension.", "3D-Conv layers are implemented to reduce the depth dimension Z and produce a 2D vector (X \u00d7 Y ) plus the channel dimension."], "text_after_citation": ["All 2D-Conv Transpose layers utilize a 
kernel with shape (2 \u00d7 2).", "Every max pooling layer, 3D-Conv, and 2D-Conv layers use the same parameters described in Sec. 4.2.", "Two MonteCarlo dropout layers #OTHEREFR are added at the end of the 4D and 2D Convolution blocks. The rate was set to 50%.", "These layers were added to reduce uncertainties in the final predictions.", "The last convolution layer has a kernel of (1 \u00d7 1) and a Softmax activation function to produce a probability score for every class."], "citing_paper_content": {"title": "Exploiting 4D Ct Perfusion For Segmenting Infarcted Areas In Patients With Suspected Acute Ischemic Stroke", "abstract": "Precise and fast prediction methods for ischemic areas (core and penumbra) in acute ischemic stroke (AIS) patients are of significant clinical interest: they play an essential role in improving diagnosis and treatment planning. Computed Tomography (CT) scan is one of the primary modalities for early assessment in patients with suspected AIS. CT Perfusion (CTP) is often used as a primary assessment to determine stroke location, severity, and volume of ischemic lesions. Current automatic segmentation methods for CTP mostly use already processed 3D color maps conventionally used for visual assessment by radiologists as input. Alternatively, the raw CTP data is used on a slice-by-slice basis as 2D+time input, where the spatial information over the volume is ignored. In this paper, we investigate different methods to utilize the entire 4D CTP as input to fully exploit the spatio-temporal information. This leads us to propose a novel 4D convolution layer. Our comprehensive experiments on a local dataset comprised of 152 patients divided into three groups show that our proposed models generate more precise results than other methods explored. 
A Dice Coefficient of 0.70 and 0.45 is achieved for All authors are with the BioMedical Data analysis group (https://www.uis.no/en/ bmdlab)"}, "cited_paper_content": {"title": "U-Net: Convolutional Networks For Biomedical Image Segmentation", "abstract": "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. 
The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ."}, "keywords": ["2D-Conv"], "citation_intent": "method"} {"citing_id": "2304.01498v1", "cited_id": "1807.04364", "section_title": "Introduction", "citation": "The trilateral weighted sparse coding (TWSC) #REFR accomplishes real image denoising by employing image priors.", "text_before_citation": ["As one of the most significant research areas of low-level visual tasks, image denoising aims to restore clean images from noisy ones.", "During the past decades, many researchers have presented a number of denoising methods.", "Before the wide application of deep neural networks (DNNs), filtering techniques and sparse learning are widely used denoising methods.", "For instance, in the NLM #OTHEREFR , the weighted average of all pixels within the search window in an image is applied to achieve noise removal.", "The BM3D #OTHEREFR improves the sparse representation by collaborative alteration."], "text_after_citation": ["The weighted nuclear norm minimization (WNNM) #OTHEREFR and the multi-channel WNNM (MCWNNM) for color images #OTHEREFR employ the low rank approach and prior knowledge to enhance denoising performance.", "These denoising methods can obtain favorable denoising performance, however most of them have to contain a complex and timeconsuming optimization algorithm.", "Meanwhile, many manually adjusted parameters are also usually required for these models to perform well, this may lead to uncertainty of their denoising performance.", "Therefore these models can hardly be applied in practical denoising scenes.", "From the early successful deep neural networks (DNNs) based image denoising model DnCNN #OTHEREFR to the present, the DNNs based denoising models have received much attention due to their superior denoising effect."], "citing_paper_content": {"title": "Dcanet: Dual Convolutional Neural Network With Attention For 
Image Blind Denoising", "abstract": "Noise removal of images is an essential preprocessing procedure for many computer vision tasks. Currently, many denoising models based on deep neural networks can perform well in removing the noise with known distributions (i.e. the additive Gaussian white noise). However eliminating real noise is still a very challenging task, since real-world noise often does not simply follow one single type of distribution, and the noise may spatially vary. In this paper, we present a new dual convolutional neural network (CNN) with attention for image blind denoising, named as the DCANet. To the best of our knowledge, the proposed DCANet is the first work that integrates both the dual CNN and attention mechanism for image denoising. The DCANet is composed of a noise estimation network, a spatial and channel attention module (SCAM), and a CNN with a dual structure. The noise estimation network is utilized to estimate the spatial distribution and the noise level in an image. The noisy image and its estimated noise are combined as the input of the SCAM, and a dual CNN contains two different branches is designed to learn the complementary features to obtain the denoised image. The experimental results have verified that the proposed DCANet can suppress both synthetic and real noise effectively."}, "cited_paper_content": {"title": "A Trilateral Weighted Sparse Coding Scheme For Real-World Image Denoising", "abstract": "Most of existing image denoising methods assume the corrupted noise to be additive white Gaussian noise (AWGN). However, the realistic noise in real-world noisy images is much more complex than AWGN, and is hard to be modelled by simple analytical distributions. As a result, many state-of-the-art denoising methods in literature become much less effective when applied to real-world noisy images captured by CCD or CMOS cameras. 
In this paper, we develop a trilateral weighted sparse coding (TWSC) scheme for robust real-world image denoising. Specifically, we introduce three weight matrices into the data and regularisation terms of the sparse coding framework to characterise the statistics of realistic noise and image priors. TWSC can be reformulated as a linear equality-constrained problem and can be solved by the alternating direction method of multipliers. The existence and uniqueness of the solution and convergence of the proposed algorithm are analysed. Extensive experiments demonstrate that the proposed TWSC scheme outperforms state-of-the-art denoising methods on removing realistic noise."}, "keywords": ["image priors", "sparse"], "citation_intent": "background"} {"citing_id": "2304.03098v1", "cited_id": "1904.13264", "section_title": "Results", "citation": "Among the four word embeddings models we tried, the best performing is FastText, confirming the results of #REFR .", "text_before_citation": ["The results of the Spearman's \u03c1 correlation in the STS benchmark of our SFBoW are reported in the last three rows of Table 2 .", "The reported values belong to the FSBoW configurations that achieved a best score, among the variants we considered for the experiments, in at least one task; if interested in the complete experimental results reporting all the SFBoW configurations, please refer to Appendix A."], "text_after_citation": ["Concerning the choice of the universe matrix, instead, the best scores are achieved either with Identity matrix or with PCA rotation matrix, highlighting how the features described by the word embeddings provide a better description of the semantic content of sentences.", "About the choice of the universe matrix, clustering provided poor results, so Table 2 reports only the scores from Identity matrix and PCA.", "Density based clustering turned out to give meaningless results, for this reason its analysis is omitted.", "k-Means clustering, instead, gave 
more promising results, but still f Universe matrix is the identity matrix. g Universe matrix is the PCA projection matrix. h Universe matrix is built from the English vocabulary.", "i Universe matrix is built from the top 100 000 most frequent words."], "citing_paper_content": {"title": "Static Fuzzy Bag-Of-Words: A Lightweight Sentence Embedding Algorithm", "abstract": "The introduction of embedding techniques has pushed forward significantly the Natural Language Processing field. Many of the proposed solutions have been presented for word-level encoding; anyhow, in the last years, new mechanism to treat information at an higher level of aggregation, like at sentence-and document-level, have emerged. With this work we address specifically the sentence embeddings problem, presenting the Static Fuzzy Bag-of-Word model. Our model is a refinement of the Fuzzy Bag-of-Words approach, providing sentence embeddings with a predefined dimension. SFBoW provides competitive performances in Semantic Textual Similarity benchmarks, while requiring low computational resources."}, "cited_paper_content": {"title": "Don'T Settle For Average, Go For The Max: Fuzzy Sets And Max-Pooled Word Vectors", "abstract": "Recent literature suggests that averaged word vectors followed by simple post-processing outperform many deep learning methods on semantic textual similarity tasks. Furthermore, when averaged word vectors are trained supervised on large corpora of paraphrases, they achieve state-of-the-art results on standard STS benchmarks. Inspired by these insights, we push the limits of word embeddings even further. We propose a novel fuzzy bag-of-words (FBoW) representation for text that contains all the words in the vocabulary simultaneously but with different degrees of membership, which are derived from similarities between word vectors. 
We show that max-pooled word vectors are only a special case of fuzzy BoW and should be compared via fuzzy Jaccard index rather than cosine similarity. Finally, we propose DynaMax, a completely unsupervised and non-parametric similarity measure that dynamically extracts and max-pools good features depending on the sentence pair. This method is both efficient and easy to implement, yet outperforms current baselines on STS tasks by a large margin and is even competitive with supervised word vectors trained to directly optimise cosine similarity."}, "keywords": ["four word embeddings"], "citation_intent": "result"} {"citing_id": "2304.04670v1", "cited_id": "2004.01099", "section_title": "Requirements Quality Literature", "citation": "In addition to the dependency of these NLP-powered tools on the availability and reliability of training data, this puts the NLP4RE research domain on the forefront of the open science challenge #REFR .", "text_before_citation": ["One popular approach to this is the proposal of quality factors.", "Requirements quality publications often formulate one or more quality factors-e.g., the use of coordination ambiguity leading to divergent interpretations #OTHEREFR annotate instances of that quality factor in a data set, and finally present an implementation (i.e., an algorithm or full-fledged tool) to detect these instances automatically.", "These artifacts-both data sets and implementations-represent essential contributions facilitating empirical research and technology transfer.", "While the (annotated) data sets are the main driver for developing new and improving existing implementations for quality factor detection, implementations are the tools to be deployed in industry for actual integration and improvement of the software engineering process.", "The NLP4RE research domain, which applies natural language processing (NLP) techniques to RE #OTHEREFR and constitutes a large part of the contributions to the requirements quality literature 
[1] , is particularly focused on said delivery and improvement of tools."], "text_after_citation": ["The NLP4RE community is therefore particularly aware of its dependency on the availability of artifacts #OTHEREFR .", "However, recent systematic studies revealed that a significant amount of these artifacts are not available 2 anymore or have never been #OTHEREFR 1, #OTHEREFR .", "Table 1 reports the availability status of 57 data sets (D) and 36 implementations (I) extracted from the 57 primary studies of our previously-published literature review on requirements quality factors #OTHEREFR ."], "citing_paper_content": {"title": "Let'S Stop Building At The Feet Of Giants: Recovering Unavailable Requirements Quality Artifacts", "abstract": "Requirements quality literature abounds with publications presenting artifacts, such as data sets and tools. However, recent systematic studies show that more than 80% of these artifacts have become unavailable or were never made public, limiting reproducibility and reusability. In this work, we report on an attempt to recover those artifacts. To that end, we requested corresponding authors of unavailable artifacts to recover and disclose them according to open science principles. Our results, based on 19 answers from 35 authors (54% response rate), include an assessment of the availability of requirements quality artifacts and a breakdown of authors' reasons for their continued unavailability. Overall, we improved the availability of seven data sets and seven implementations."}, "cited_paper_content": {"title": "Natural Language Processing (Nlp) For Requirements Engineering: A Systematic Mapping Study", "abstract": "Natural language processing supported requirements engineering is an area of research and development that seeks to apply NLP techniques, tools and resources to a variety of requirements documents or artifacts to support a range of linguistic analysis tasks performed at various RE phases. 
Such tasks include detecting language issues, identifying key domain concepts and establishing traceability links between requirements. This article surveys the landscape of NLP4RE research to understand the state of the art and identify open problems. The systematic mapping study approach is used to conduct this survey, which identified 404 relevant primary studies and reviewed them according to five research questions, cutting across five aspects of NLP4RE research, concerning the state of the literature, the state of empirical research, the research focus, the state of the practice, and the NLP technologies used. Results: 1) NLP4RE is an active and thriving research area in RE that has amassed a large number of publications and attracted widespread attention from diverse communities; 2) most NLP4RE studies are solution proposals having only been evaluated using a laboratory experiment or an example application; 3) most studies have focused on the analysis phase, with detection as their central linguistic analysis task and requirements specification as their commonly processed document type; 4) 130 new tools have been proposed to support a range of linguistic analysis tasks, but there is little evidence of adoption in the long term, although some industrial applications have been published; 5) 140 NLP techniques, 66 NLP tools and 25 NLP resources are extracted from the selected studies."}, "keywords": ["NLP4RE research domain"], "citation_intent": "background"} {"citing_id": "2303.01847v1", "cited_id": "cs/0007035", "section_title": "Mapped", "citation": "Thus, the overall performance of the English mapping is 99.89%, which compares favorably with more complex mapping strategies like #REFR .", "text_before_citation": ["12 https://github.com/goodmami/wn/ issues/179", "We evaluate the performance of our algorithm using the values above, and obtain almost perfect performance results:", "precision = tp tp + f p = 0.9996 (1) recall = tp tp + f n = 0.9983 (2) f 1 = 2 * 
precision * recall / (precision + recall) = 0.9989 (3)"], "text_after_citation": ["Comparing the lost English synsets between the two types of synset identifiers (offsets vs.", "ILIs), we found that 143 were lost using both types, while 62 were only lost with offsets (always due to satellite adjectives becoming standard adjectives), and 89 were only lost with CILI 1.0.", "The respective additions of these losses yield the total loss reported for English in table 1 (205 with offsets vs. 232 with the ILI)."], "citing_paper_content": {"title": "Mapping Wordnets On The Fly With Permanent Sense Keys", "abstract": "Most of the major databases on the semantic web have links to Princeton WordNet (PWN) synonym set (synset) identifiers, which differ for each PWN release, and are thus incompatible between versions. On the other hand, both PWN and the more recent Open English Wordnet (OEWN) provide permanent word sense identifiers (the sense keys), which can solve this interoperability problem. We present an algorithm that runs in linear time, to automatically derive a synset mapping between any pair of Wordnet versions that use PWN sense keys. This allows to update old WordNet links, and seamlessly interoperate with newer English Wordnet versions for which no prior mapping exists. By applying the proposed algorithm on the fly, at load time, we combine the Open Multilingual Wordnet (OMW 1.4, which uses old PWN 3.0 identifiers) with OEWN Edition 2021, and obtain almost perfect precision and recall. We compare the results of our approach using respectively synset offsets, versus the Collaborative InterLingual Index (CILI version 1.0) as synset identifiers, and find that the synset offsets perform better than CILI 1.0 in all cases, except a few ties."}, "cited_paper_content": {"title": "Mapping Wordnets Using Structural Information", "abstract": "We present a robust approach for linking already existing lexical/semantic hierarchies.
We used a constraint satisfaction algorithm (relaxation labeling) to select - among a set of candidates - the node in a target taxonomy that best matches each node in a source taxonomy. In particular, we use it to map the nominal part of WordNet 1.5 onto WordNet 1.6, with a very high precision and a very low remaining ambiguity."}, "keywords": ["English mapping"], "citation_intent": "result"} {"citing_id": "2304.05989v1", "cited_id": "1602.00134", "section_title": "Datasets", "citation": "To obtain human skeletal data in the scene we compared the output of the Convolutional Pose Machine (CPM) #REFR model with the kinect skeletal data.", "text_before_citation": ["Table 4 summarizes the characteristics of the datasets employed in this study.", "Moreover, the affordance labels present in the LOAD dataset are indicated in Figure 8 , whereas the labels for both the CAD-120 and Watch-n-Patch datasets are: 'can-contain', 'containable', 'can-support', 'supportable', 'drinkable', and 'holdable'.", "supportable can-support sittable holdable pullable can-cover rollable pushable carriable kickable coverable can-contain containable drinkable bouncable We used 80% of the CAD-120 dataset to determine the method's parameters and train the graph2vec network, resulting in 18,072 AGraphlets, and then evaluated the proposed approach on the remaining 20% unseen videos (24 videos), which comprise 5,682 AGraphlets.", "Experiments on the Watch-n-Patch and LOAD datasets were conducted on 24 and 15 handpicked videos, or 6,900 and 1,212 AGraphlets respectively, where the predictions of the object detector, after visual inspection, were sufficient for capturing interacting objects.", "For defining the parameters of the algorithm, we exploited hand-picked videos from the training set for the CAD-120 dataset, and for the LOAD and Watch-n-Patch datasets we hand-picked videos different from the ones used for testing the proposed method."], "text_after_citation": ["Through an empirical study
we found that the CPM predictions are more accurate, especially when human-joint occlusion occurs or the human agent stands on their side.", "Object locations and depth information are provided from the predicted objects' masks.", "We employ the state-of-the-art Mask R-CNN framework #OTHEREFR ) trained on the COCO dataset #OTHEREFR , since the training data distribution is similar to the data distribution present in the datasets we use in this work, and it is a generic dataset with more object classes than those included in the target task.", "The box enclosing the object's mask corresponds to the object's bounding box.", "This implementation is based on Mask R-CNN predictions, however the proposed method is not object-specific and any class-agnostic proposal method can be used."], "citing_paper_content": {"title": "Object-Agnostic Affordance Categorization Via Unsupervised Learning Of Graph Embeddings", "abstract": "Acquiring knowledge about object interactions and affordances can facilitate scene understanding and human-robot collaboration tasks. As humans tend to use objects in many different ways depending on the scene and the objects' availability, learning object affordances in everyday-life scenarios is a challenging task, particularly in the presence of an open set of interactions and objects. We address the problem of affordance categorization for class-agnostic objects with an open set of interactions; we achieve this by learning similarities between object interactions in an unsupervised way and thus inducing clusters of object affordances. A novel depth-informed qualitative spatial representation is proposed for the construction of Activity Graphs (AGs), which abstract from the continuous representation of spatio-temporal interactions in RGB-D videos. These AGs are clustered to obtain groups of objects with similar affordances. 
Our experiments in a real-world scenario demonstrate that our method learns to create object affordance clusters with a high V-measure even in cluttered scenes. The proposed approach handles object occlusions by effectively capturing possible interactions, without imposing any object or scene constraints."}, "cited_paper_content": {"title": "Convolutional Pose Machines", "abstract": "Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure.
We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets."}, "keywords": ["kinect skeletal data"], "citation_intent": "method"} {"citing_id": "2303.04895v1", "cited_id": "1502.02298", "section_title": "Belief Revision", "citation": "It is substantially similar to that of the general result given in #REFR (see Theorem 1) except that the result here relates to formulas and not to theories.", "text_before_citation": ["A revision operator \u2022 satisfies the AGM postulates if and only if there exists a FA that maps each formula \u03d5 to a binary relation \u03d5 such that for every formula \u03c8", "Mod(\u03d5 \u2022 \u03c8) = Min(Mod(\u03c8), \u03d5 )", "where", "Min(Mod(\u03c8), \u03d5 ) = {M \u2208 Mod(\u03c8) | \u2200M \u2032 \u2208 Mod(\u03c8), M \u2032 \u227a \u03d5 M}.", "The proof of Proposition 16 is given in Appendix."], "text_after_citation": ["This then required redefining a particular FA in the proof.", "The FA used in #OTHEREFR to prove this representation result was an adaptation of the FA proposed in the original paper #OTHEREFR , but the latter was not adaptable within the framework of modal logic 6 ."], "citing_paper_content": {"title": "Morpho-Logic From A Topos Perspective: Application To Symbolic Ai", "abstract": "Modal logics have proved useful for many reasoning tasks in symbolic artificial intelligence (AI), such as belief revision, spatial reasoning, among others. On the other hand, mathematical morphology (MM) is a theory for non-linear analysis of structures, that was widely developed and applied in image analysis. Its mathematical bases rely on algebra, complete lattices, topology. Strong links have been established between MM and mathematical logics, mostly modal logics. In this paper, we propose to further develop and generalize this link between mathematical morphology and modal logic from a topos perspective, i.e. 
categorial structures generalizing space, and connecting logics, sets and topology. Furthermore, we rely on the internal language and logic of topos. We define structuring elements, dilations and erosions as morphisms. Then we introduce the notion of structuring neighborhoods, and show that the dilations and erosions based on them lead to a constructive modal logic, for which a sound and complete proof system is proposed. We then show that the modal logic thus defined (called morpho-logic here), is well adapted to define concrete and efficient operators for revision, merging, and abduction of new knowledge, or even spatial reasoning."}, "cited_paper_content": {"title": "Belief Revision, Minimal Change And Relaxation: A General Framework Based On Satisfaction Systems, And Applications To Description Logics", "abstract": "Belief revision of knowledge bases represented by a set of sentences in a given logic has been extensively studied but for specific logics, mainly propositional, and also recently Horn and description logics. Here, we propose to generalize this operation from a model-theoretic point of view, by defining revision in the abstract model theory of satisfaction systems. In this framework, we generalize to any satisfaction system the characterization of the AGM postulates given by Katsuno and Mendelzon for propositional logic in terms of minimal change among interpretations. In this generalization, the constraint on syntax independence is partially relaxed. Moreover, we study how to define revision, satisfying these weakened AGM postulates, from relaxation notions that have been first introduced in description logics to define dissimilarity measures between concepts, and the consequence of which is to relax the set of models of the old belief until it becomes consistent with the new pieces of knowledge. We show how the proposed general framework can be instantiated in different logics such as propositional, first-order, description and Horn logics. 
In particular for description logics, we introduce several concrete relaxation operators tailored for the description logic ALC and its fragments EL and ELU , discuss their properties and provide some illustrative examples."}, "keywords": ["Theorem", "general result"], "citation_intent": "result"} {"citing_id": "2305.00986v1", "cited_id": "1505.04597", "section_title": "Unet With Dense Net", "citation": "The algorithm used to achieve this was UNet #REFR ; it uses Double Convolutional layers to identify and extract features from the input image and uses skip connections to reuse these features in a related layer.", "text_before_citation": ["In this case, the pattern of interest is the rot present in an image; while identifying rot is a subjective matter, it is entirely possible to map this as an input feature to a model and have its representations capture it.", "This model would be able to capture image segments on related images.", "To achieve this, the model typically uses a combination of Double Convolutional Neural Networks with a structure called skip-connections, which skip some of the connections in a neural network and feed the output of one layer as input to the other layers.", "Skip-connections greatly reduce the complexity of loss surfaces, making it easier for optimizers to reduce loss while ensuring that feature representations are reused #OTHEREFR .", "The images for a sample image and prediction are shown below (areas in yellow are rotten areas of the meat as identified by the model)."], "text_after_citation": ["The idea is that each feature set captured in a layer is captured in a layer connected by a skip connection and passed to the next layer to compute the representation segment.", "Since this task outputs a set of image patterns, the ideal outcome would be identifying the quality of the outputs in terms of the intersection and the overlap resulting from the predictions and the image masks.", "The loss functions capable of representing this effort are
Dice loss and Jaccard loss which broadly look at the ratio of the intersection to the union, so concretely both would have a measure of how well the model can segment the patterns of interest from a given input image.", "The extracted image segmented predictions were passed as input features to a DenseNet model and predictions were output based on the segments captured.", "The segment of interest in this case is the rot present in the image and this was one-hot encoded when it was passed to the DenseNet model."], "citing_paper_content": {"title": "Meat Freshness Prediction", "abstract": "In most retail stores, the number of days since initial processing is used as a proxy for estimating the freshness of perishable foods or freshness is assessed manually by an employee. While the former method can lead to wastage, as some fresh foods might get disposed after a fixed number of days, the latter can be time-consuming, expensive and impractical at scale. This project aims to propose a Machine Learning (ML) based approach that evaluates freshness of food based on live data. For the current scope, it only considers meat as a the subject of analysis and attempts to classify pieces of meat as fresh, half-fresh or spoiled. Finally the model achieved an accuracy of above 90% and relatively high performance in terms of the cost of misclassification. It is expected that the technology will contribute to the optimization of the client's business operation, reducing the risk of selling defective or rotten products that can entail serious monetary, non-monetary and health-based consequences while also achieving higher corporate value as a sustainable company by reducing food wastage through timely sales and disposal. 1. Business Problem Assessing the freshness of perishable food is a significant operational challenge for retailers, as it is time-consuming, and can affect their business performance as well as reputation if a wrong judgment is made. 
In most retail stores, the number of days since initial processing is used as a proxy for freshness. Regardless of actual freshness, products are judged to be fresh if fewer days have passed and stale if more days have passed. Products that have passed many days since their initial processing are discounted, and if they are still not purchased, they are ultimately disposed"}, "cited_paper_content": {"title": "U-Net: Convolutional Networks For Biomedical Image Segmentation", "abstract": "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ."}, "keywords": ["algorithm", "Double Convolutional layers"], "citation_intent": "method"} {"citing_id": "2303.11117v2", "cited_id": "1811.00405", "section_title": "Emotion Recognition In Conversation", "citation": "By distinguishing specific speakers, DialogueRNN #REFR modeled emotions dynamically based on the current speaker, contextual content, and emotional state. 
Zhong et al.", "text_before_citation": ["Distinct from traditional emotion recognition which treats emotion as a static state, ERC takes full consideration of emotion to be dynamic and flow between speaker interactions. Hazarika et al.", "#OTHEREFR proposed a LSTM-based model to enable current utterance to capture contextual information in historical conversations.", "CMN #OTHEREFR employed a skip attention mechanism to merge contextual information in a historical conversation. Jiao et al.", "#OTHEREFR proposed a hierarchical GRU to address the difficulty of capturing longdistance contextual information effectively."], "text_after_citation": ["#OTHEREFR proposed Knowledge-Enriched Transformer, which dynamically exploited external commonsense knowledge through hierarchical self-attention and context-aware graph attention.", "By building directed graphical structures over the input utterance sequences with speaker information, DialogueGCN #OTHEREFR applied graph convolution network to construct inter-and intra-dependencies among distant utterances.", "COSMIC #OTHEREFR combined different commonsense knowledge and learned the interaction between the interlocutors in the dialogue.", "DialogXL #OTHEREFR modified the memory block in XLNet #OTHEREFR to store longer historical contexts and conversation-aware self-attention to handle multi-party structures. Wang et al.", "#OTHEREFR proposed a relational graph attention network to encode the tree structure for sentiment prediction."], "citing_paper_content": {"title": "Research Paper . Emotionic: Emotional Inertia And Contagion-Driven Dependency Modelling For Emotion Recognition In Conversation", "abstract": "Emotion Recognition in Conversation (ERC) has attracted growing attention in recent years as a result of the advancement and implementation of human-computer interface technologies. 
However, previous approaches to modeling global and local context dependencies lost the diversity of dependency information and do not take the context dependency into account at the classification level. In this paper, we propose a novel approach to dependency modeling driven by Emotional Inertia and Contagion (EmotionIC) for conversational emotion recognition at the feature extraction and classification levels. At the feature extraction level, our designed Identity Masked Multi-head Attention (IM-MHA) captures the identity-based long-distant context in the dialogue to contain the diverse influence of different participants and construct the global emotional atmosphere, while the devised Dialogue-based Gate Recurrent Unit (DialogGRU) that aggregates the emotional tendencies of dyadic dialogue is applied to refine the contextual features with interand intra-speaker dependencies. At the classification level, by introducing skip connections in Conditional Random Field (CRF), we elaborate the Skip-chain CRF (SkipCRF) to capture the high-order dependencies within and between speakers, and to emulate the emotional flow of distant participants. Experimental results show that our method can significantly outperform the state-of-the-art models on four benchmark datasets. The ablation studies confirm that our modules can effectively model emotional inertia and contagion."}, "cited_paper_content": {"title": "Dialoguernn: An Attentive Rnn For Emotion Detection In Conversations", "abstract": "Emotion detection in conversations is a necessary step for a number of applications, including opinion mining over chat history, social media threads, debates, argumentation mining, understanding consumer feedback in live conversations, and so on. Currently systems do not treat the parties in the conversation individually by adapting to the speaker of each utterance. 
In this paper, we describe a new method based on recurrent neural networks that keeps track of the individual party states throughout the conversation and uses this information for emotion classification. Our model outperforms the state-of-the-art by a significant margin on two different datasets."}, "keywords": ["DialogueRNN modeled emotions"], "citation_intent": "background"} {"citing_id": "2303.09826v1", "cited_id": "1809.00219", "section_title": "Comparisons With State-Of-The-Art Methods", "citation": "And with the help of powerful SR backbone (RRDB-Net #REFR with 16.70M parameters), their results are quite remarkable.", "text_before_citation": ["We also conduct experiments on BasicVSR #OTHEREFR and PDM #OTHEREFR as two representative methods for noneblind VSR method and blind SISR with implicit degradation modeling.", "Apart from AnimeSR #OTHEREFR , for which we report the result in the original paper, all the others are general SR methods for open-domain images or videos.", "For fair comparisons, we fine-tune their officially released models on animation dataset AVC-Train. Quantitative comparison. 
As shown in Tab.", "1, we evaluate all the methods on AVC-ReaLQ #OTHEREFR for quantitative comparisons.", "Among them, Real-ESRGAN #OTHEREFR and BSR-GAN #OTHEREFR expand the former explicit degradation models by introducing high-order degradations and random shuffling respectively, which greatly improve the synthetic ability."], "text_after_citation": ["Nonetheless, ignoring the intrinsic characteristics of animation data limits their performance when applied to animation domain.", "Although specialized for animation videos, AnimeSR #OTHEREFR only utilizes a small number of real data (three human-annotated animation videos), which hinders the performance of VSR model in real scenarios.", "Different from them, our VQD-SR considers the intrinsic characteristics of animation videos and leverages the enormous degradation priors contained in rich-content real animation videos.", "VQD-SR also adopts the HR enhancement strategy for more effective SR supervision.", "Due to these advantages, VQD-SR achieves a result of 0.4096 and significantly outperforms the SOTA animation VSR model #OTHEREFR by 0.0264 in MANIQA on AVC-RealLQ #OTHEREFR ."], "citing_paper_content": {"title": "Learning Data-Driven Vector-Quantized Degradation Model For Animation Video Super-Resolution", "abstract": "Existing real-world video super-resolution (VSR) methods focus on designing a general degradation pipeline for open-domain videos while ignoring data intrinsic characteristics which strongly limit their performance when applying to some specific domains (e.g. animation videos). In this paper, we thoroughly explore the characteristics of animation videos and leverage the rich priors in real-world animation data for a more practical animation VSR model. 
In particular, we propose a multi-scale Vector-Quantized Degradation model for animation video Super-Resolution (VQD-SR) to decompose the local details from global structures and transfer the degradation priors in real-world animation videos to a learned vector-quantized codebook for degradation modeling. A rich-content Real Animation Low-quality (RAL) video dataset is collected for extracting the priors. We further propose a data enhancement strategy for high-resolution (HR) training videos based on our observation that existing HR videos are mostly collected from the Web which contains conspicuous compression artifacts. The proposed strategy is valid to lift the upper bound of animation VSR performance, regardless of the specific VSR model. Experimental results demonstrate the superiority of the proposed VQD-SR over state-of-the-art methods, through extensive quantitative and qualitative evaluations of the latest animation video super-resolution benchmark."}, "cited_paper_content": {"title": "Esrgan: Enhanced Super-Resolution Generative Adversarial Networks", "abstract": "The Super-Resolution Generative Adversarial Network (SRGAN) is a seminal work that is capable of generating realistic textures during single image super-resolution. However, the hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular, we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic network building unit. Moreover, we borrow the idea from relativistic GAN to let the discriminator predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by using the features before activation, which could provide stronger supervision for brightness consistency and texture recovery. 
Benefiting from these improvements, the proposed ESRGAN achieves consistently better visual quality with more realistic and natural textures than SRGAN and won the first place in the PIRM2018-SR Challenge. The code is available at this https URL ."}, "keywords": ["powerful SR backbone"], "citation_intent": "result"} {"citing_id": "2303.16099v1", "cited_id": "1810.11654", "section_title": "Review Of Brain Glioma Segmentation Methods", "citation": "To avoid over-fitting problems in 3D voxel-level segmentation on limited training datasets, Myronenko #REFR proposed a 3D CNN with an additional variational autoencoder to regularise the decoder by reconstructing the input image.", "text_before_citation": ["#OTHEREFR ensembled three 2D CNNs on three orthogonal 2D patches.", "To fully make use of 3D contextual information, recent works applied 3D convolutional kernels on original volume data. Kamnitsas et al.", "#OTHEREFR proposed two pathway 3D CNN followed with dense CRF called DeepMedic for brain tumor segmentation.", "Authors of #OTHEREFR further extended the work by using model ensembling #OTHEREFR .", "The proposed system EMMA ensembled models from FCN, U-Net and DeepMedic for processing 3D patches."], "text_after_citation": ["The architecture built in #OTHEREFR is further developed in various recent works. Su et al.", "#OTHEREFR extends the architecture built in #OTHEREFR into two sub-networks to fuse the information learned from different modalities. 
Jiang et al.", "#OTHEREFR proposed two-stage networks where each stage adopts a similar network in #OTHEREFR .", "The first stage network generates a coarse result and the second stage network refines the segmentation result.", "The final result in #OTHEREFR reaches state-of-the-art by ensemble 12 model instances, which requires huge computational resources."], "citing_paper_content": {"title": "Medical Image Analysis Using Deep Relational Learning", "abstract": "School of Informatics Master of Philosophy Medical Image Analysis using Deep Relational Learning by Zhihua LIU Benefited from deep learning techniques, remarkable progress has been made within the medical image analysis area in recent years. However, it is very challenging to fully utilize the relational information (the relationship between tissues or organs or images) within the deep neural network architecture. Thus in this thesis, we propose two novel solutions to this problem called implicit and explicit deep relational learning. We generalize these two paradigms of deep relational learning into different solutions and evaluate them on various medical image analysis tasks. Automated segmentation of brain glioma in 3D magnetic resonance imaging plays an active role in glioma diagnosis, progression monitoring and surgery planning. In this work, we propose a novel Context-Aware Network that effectively models implicit relation information between features to perform accurate 3D glioma segmentation. We evaluate our proposed method on publicly accessible brain tumor segmentation datasets BRATS2017 and BRATS2018 against several state-of-the-art approaches using different segmentation metrics. The experimental results show that the proposed algorithm has better or competitive performance, compared to the standard approaches. 
Subsequently, we propose a new hierarchical homography estimation network to achieve accurate medical image mosaicing by learning the explicit spatial relationship between adjacent frames. We use the UCL Fetoscopy Placenta dataset to conduct experiments and our hierarchical homography estimation network outperforms the other state-of-the-art mosaicing methods while generating robust and meaningful mosaicing results on unseen frames. First of all, I want to thank my father Mr. Yuhua Du, my mother Mrs. Zengxia Xiao and my fiancee Miss. Xinyu Wang. Thank them for their support to me and the family. I cannot repay the support from my family, and it is also my biggest motivation to continue scientific research. Secondly, I want to thank my first supervisor, Prof. Huiyu Zhou. Prof. Huiyu Zhou has a rigorous attitude towards science. He also dedicates himself to his work. He is both a good teacher and a good friend. I am grateful to him for his continuous criticism and teaching, which have benefited me for life. At the same time, I would also like to thank my second supervisor, Prof. Yudong Zhang, for his professional guidance on my academic work. Also, I would like to thank all colleagues of Biomedical Image Processing Lab (BIPL), especially Mr."}, "cited_paper_content": {"title": "3D Mri Brain Tumor Segmentation Using Autoencoder Regularization", "abstract": "Automated segmentation of brain tumors from 3D magnetic resonance images (MRIs) is necessary for the diagnosis, monitoring, and treatment planning of the disease. Manual delineation practices require anatomical knowledge, are expensive, time consuming and can be inaccurate due to human error. Here, we describe a semantic segmentation network for tumor subregion segmentation from 3D MRIs based on encoder-decoder architecture. 
Due to a limited training dataset size, a variational auto-encoder branch is added to reconstruct the input image itself in order to regularize the shared decoder and impose additional constraints on its layers. The current approach won 1st place in the BraTS 2018 challenge."}, "keywords": ["3D CNN", "additional variational autoencoder"], "citation_intent": "method"} {"citing_id": "2303.02902v1", "cited_id": "1909.01513", "section_title": "Reeb Graph And Persistence Diagram", "citation": "To ensure that RG f is the Reeb graph of a Morse function, we first eliminate degenerate critical nodes in RG f by breaking them into nondegenerate critical nodes #REFR .", "text_before_citation": ["ExDg 0 (RG f ) encodes the range of f and ExDg 1 (RG f ) captures the 1-cycles or loops in RG f .", "To compute the persistence diagrams of RG f , we require RG f to be the Reeb graph of a Morse function. If f is a Morse function, i.e.", "all its critical points are non-degenerate and are at different levels, then the critical nodes of RG f have distinct values of f and belong to one of the following four types: (i) a minimum (with down-degree = 0, up-degree = 1), (ii) a maximum (with up-degree = 0, down-degree = 1), (iii) a down-fork (with down-degree = 2, up-degree = 1) and (iv) an up-fork (with up-degree = 2, down-degree = 1).", "A regular node has up-degree = 1 and down-degree = 1.", "A down-fork (similarly, up-fork) node is called an essential down-fork node when it contributes to a loop (cycle) of the Reeb graph. 
Otherwise it is called an ordinary down-fork node."], "text_after_citation": ["After eliminating degenerate critical nodes, we ensure that the critical nodes of f are at different levels.", "If two critical nodes are at the same level, then the value of one of the nodes is increased/decreased by a small value .", "After removing degenerate critical nodes and ensuring that critical nodes are at different levels, RG f becomes the Reeb graph of a Morse function.", "The points in Dg 0 (f ) are computed by pairing ordinary down-forks with minima, ordinary up-forks with maxima and the global minimum with global maximum.", "Let u be an ordinary down-fork of RG f ."], "citing_paper_content": {"title": "A Topological Distance Measure Between Multi-Fields For Classification And Analysis Of Shapes And Data", "abstract": "Distance measures play an important role in shape classification and data analysis problems. Topological distances based on Reeb graphs and persistence diagrams have been employed to obtain effective algorithms in shape matching and scalar data analysis. In the current paper, we propose an improved distance measure between two multi-fields by computing a multidimensional Reeb graph (MDRG) each of which captures the topology of a multi-field through a hierarchy of Reeb graphs in different dimensions. A hierarchy of persistence diagrams is then constructed by computing a persistence diagram corresponding to each Reeb graph of the MDRG. Based on this representation, we propose a novel distance measure between two MDRGs by extending the bottleneck distance between two Reeb graphs. We show that the proposed measure satisfies the pseudo-metric and stability properties. We examine the effectiveness of the proposed multi-field topology-based measure on two different applications: (1) shape classification and (2) detection of topological features in a time-varying multi-field data. 
In the shape classification problem, the performance of the proposed measure is compared with the well-known topology-based measures in shape matching. In the second application, we consider a time-varying volumetric multi-field data from the field of computational chemistry where the goal is to detect the site of stable bond formation between Pt and CO molecules. We demonstrate the ability of the proposed distance in classifying each of the sites as occurring before and after the bond stabilization."}, "cited_paper_content": {"title": "Propagate And Pair: A Single-Pass Approach To Critical Point Pairing In Reeb Graphs", "abstract": "With the popularization of Topological Data Analysis, the Reeb graph has found new applications as a summarization technique in the analysis and visualization of large and complex data, whose usefulness extends beyond just the graph itself. Pairing critical points enables forming topological fingerprints, known as persistence diagrams, that provides insights into the structure and noise in data. Although the body of work addressing the efficient calculation of Reeb graphs is large, the literature on pairing is limited. In this paper, we discuss two algorithmic approaches for pairing critical points in Reeb graphs, first a multipass approach, followed by a new single-pass algorithm, called Propagate and Pair."}, "keywords": ["Reeb graph"], "citation_intent": "method"} {"citing_id": "2303.16251v2", "cited_id": "1711.00165", "section_title": "I. 
Introduction", "citation": "Similar results are then achieved in #REFR for deep fully-connected networks as all hidden layer widths go to infinity.", "text_before_citation": ["#OTHEREFR gives a constructive method, but only for target and activation functions in L 1 .", "In #OTHEREFR and #OTHEREFR , they propose constructive methods for a class of target functions with unit step and ReLU activations respectively.", "In #OTHEREFR , functions are approximated using trigonometric polynomial ridge functions, which can then be shown in expectation to be equivalent to randomly initialized ReLU activations.", "There are several interesting and important results in the literature having to do with neural networks with random (typically Gaussian) parameter initialization, sometimes as a consequence of using randomly initialized gradient descent for training the network.", "A classic result by #OTHEREFR shows that the output of a single hidden-layer network with Gaussian randomly initialized parameters goes to a Gaussian Process as the width goes to infinity."], "text_after_citation": ["Also for deep fully-connected networks, in #OTHEREFR the authors define the Neural Tangent Kernel and propose that its limit, as the hidden layer widths go to infinity, can be used to study the timestep evolution and dynamics of the parameters, and the corresponding network output function, in gradient descent.", "In #OTHEREFR , the authors show that single hidden-layer networks cannot achieve the same rates of increase in a measure of curvature produced by the network output, as deep networks can, with parameters Gaussian randomly initialized and bounded activation functions.", "In #OTHEREFR , the authors show that single hidden-layer networks of a sufficient width can use Gaussian randomly initialized gradient descent on values of a target function, and achieve guaranteed generalization to the entire function.", "And in #OTHEREFR , the authors show that deep fully-connected networks, where 
each hidden layer width meets a sufficient size, can be trained with Gaussian randomly initialized gradient descent and be guaranteed to reach the global minimum at a linear rate.", "Our primary contributions in this paper are: developing a novel method for bridging convex combinations of activation functions (eg."], "citing_paper_content": {"title": "Function Approximation With Randomly Initialized Neural Networks For Approximate Model Reference Adaptive Control", "abstract": "Classical results in neural network approximation theory show how arbitrary continuous functions can be approximated by networks with a single hidden layer, under mild assumptions on the activation function. However, the classical theory does not give a constructive means to generate the network parameters that achieve a desired accuracy. Recent results have demonstrated that for specialized activation functions, such as ReLUs, high accuracy can be achieved via linear combinations of randomly initialized activations. These recent works utilize specialized integral representations of target functions that depend on the specific activation functions used. This paper defines mollified integral representations, which provide a means to form integral representations of target functions using activations for which no direct integral representation is currently known. The new construction enables approximation guarantees for randomly initialized networks using any activation for which there exists an established base approximation which may not be constructive. We extend the results to the supremum norm and show how this enables application to an extended, approximate version of (linear) model reference adaptive control."}, "cited_paper_content": {"title": "Deep Neural Networks As Gaussian Processes", "abstract": "A deep fully-connected neural network with an i.i.d. prior over its parameters is equivalent to a Gaussian process (GP) in the limit of infinite network width. 
This correspondence enables exact Bayesian inference for neural networks on regression tasks by means of straightforward matrix computations. For single hidden-layer networks, the covariance function of this GP has long been known. Recently, kernel functions for multi-layer random neural networks have been developed, but only outside of a Bayesian framework. As such, previous work has not identified the correspondence between using these kernels as the covariance function for a GP and performing fully Bayesian prediction with a deep neural network. In this work, we derive this correspondence and develop a computationally efficient pipeline to compute the covariance functions. We then use the resulting GP to perform Bayesian inference for deep neural networks on MNIST and CIFAR-10. We find that the GP-based predictions are competitive and can outperform neural networks trained with stochastic gradient descent. We observe that the trained neural network accuracy approaches that of the corresponding GP-based computation with increasing layer width, and that the GP uncertainty is strongly correlated with prediction error. We connect our observations to the recent development of signal propagation in random neural networks."}, "keywords": ["deep fully-connected networks"], "citation_intent": "result"} {"citing_id": "2303.14711v1", "cited_id": "1805.03278", "section_title": "Results", "citation": "Per pixel and per instance evaluation for the detection of small hyperreflective specks and foci in comparison to pixel-wise results for larger conventional hyperreflective foci reported by Schlegl et al. 
#REFR for data acquired on a Cirrus/Spectralis OCT scanner.", "text_before_citation": ["However, on their figures, they appear to be much larger, corresponding to conventional HRF.", "In addition, they also include features in different retinal layers.", "Due to the difference in sizes and scale, the performance of their algorithm is not directly comparable to our results, yet their results show the performance of a different algorithm on a similar task.", "A more detailed comparison of our algorithm and the ground truth yields the following observations (Fig. 2) .", "Labeled features with slightly higher intensity values than the background are difficult to detect and the reader agreement on these is not high. Tab. 1."], "text_after_citation": ["This also applies to boundary regions of the features because of the smooth transition to background intensities and a missing consensus of where to delineate feature boundaries.", "Another situation of disagreement can occur on much higher intensity values on the ELM compared to the rest of the ELM.", "They are usually detected by the algorithm, but usually not annotated by the human experts.", "In addition, for features that are close to IS/OS and OPL, it is sometimes unclear, if they are detached from these layers.", "It occurs that only either the algorithm or the readers include or exclude them, which has a high impact on the scores as the regions are comparably large."], "citing_paper_content": {"title": "Unsupervised Detection Of Small Hyperreflective Features In Ultrahigh Resolution Optical Coherence Tomography", "abstract": "Recent advances in optical coherence tomography such as the development of high speed ultrahigh resolution scanners and corresponding signal processing techniques may reveal new potential biomarkers in retinal diseases. Newly visible features are, for example, small hyperreflective specks in age-related macular degeneration. 
Identifying these new markers is crucial to investigate potential association with disease progression and treatment outcomes. Therefore, it is necessary to reliably detect these features in 3D volumetric scans. Because manual labeling of entire volumes is infeasible a need for automatic detection arises. Labeled datasets are often not publicly available and there are usually large variations in scan protocols and scanner types. Thus, this work focuses on an unsupervised approach that is based on local peak-detection and random walker segmentation to detect small features on each B-scan of the volume."}, "cited_paper_content": {"title": "Fully Automated Segmentation Of Hyperreflective Foci In Optical Coherence Tomography Images", "abstract": "The automatic detection of disease related entities in retinal imaging data is relevant for disease- and treatment monitoring. It enables the quantitative assessment of large amounts of data and the corresponding study of disease characteristics. The presence of hyperreflective foci (HRF) is related to disease progression in various retinal diseases. Manual identification of HRF in spectral-domain optical coherence tomography (SD-OCT) scans is error-prone and tedious. We present a fully automated machine learning approach for segmenting HRF in SD-OCT scans. Evaluation on annotated OCT images of the retina demonstrates that a residual U-Net allows to segment HRF with high accuracy. 
As our dataset comprised data from different retinal diseases including age-related macular degeneration, diabetic macular edema and retinal vein occlusion, the algorithm can safely be applied in all of them though different pathophysiological origins are known."}, "keywords": ["Cirrus/Spectralis OCT scanner"], "citation_intent": "result"} {"citing_id": "2304.00257v1", "cited_id": "1711.07971", "section_title": "Three-Dimensional Convolutional Neural Network (Cnn) Architecture", "citation": "Our approach is adopted from #REFR , whereby a 2D \u00d7 kernel can be inflated to a 3D \u00d7 \u00d7 kernel that spreads across frames.", "text_before_citation": ["Resnet18 was used as the backbone of our CNN model.", "It serves as our feature extractor which compresses the image representation into a vector with dimensionality of 512 as shown in Fig. 5 .", "The base architecture, which was designed for image classification was extended to a three-dimensional (3D) model by \"inflating\" the kernels #OTHEREFR ."], "text_after_citation": ["The weights from the pretrained 2D Resnet 18 model are used to initialize the kernels, whereby each of the planes in the \u00d7 \u00d7 kernel are initialized with the pre-trained \u00d7 weights, rescaled by 1/ .", "This initialization setup produces the same results as the 2D pre-trained model run on a single static frame, repeated in the time domain.", "In our setup, the first convolutional kernel in each Resnet layer is initialized with a 3 \u00d7 3 \u00d7 3 kernel.", "The rest of the kernels are simply extended to their 3D counterpart that perform the same operation, by initializing them as a 1 \u00d7 3 \u00d7 3 kernel.", "This approach does not increase the parameters significantly, thus alleviating overfitting. The CNN architecture is visualized in Fig. 
2 ."], "citing_paper_content": {"title": "Radifusion: A Multi-Radiomics Deep Learning Based Breast Cancer Risk Prediction Model Using Sequential Mammographic Images With Image Attention And Bilateral Asymmetry Refinement", "abstract": "Breast cancer is a significant public health concern and early detection is critical for triaging high risk patients. Sequential screening mammograms can provide important spatiotemporal information about changes in breast tissue over time. In this study, we propose a deep learning architecture called RADIFUSION that utilizes sequential mammograms and incorporates a linear image attention mechanism, radiomic features, a new gating mechanism to combine different mammographic views, and bilateral asymmetry-based finetuning for breast cancer risk assessment. We evaluate our model on a screening dataset called Cohort of Screen-Aged Women (CSAW) dataset. Based on results obtained on the independent testing set consisting of 1,749 women, our approach achieved superior performance compared to other state-of-the-art models with area under the receiver operating characteristic curves (AUCs) of 0.905, 0.872 and 0.866 in the three respective metrics of 1-year AUC, 2-year AUC and > 2-year AUC. Our study highlights the importance of incorporating various deep learning mechanisms, such as image attention, radiomic features, gating mechanism, and bilateral asymmetry-based fine-tuning, to improve the accuracy of breast cancer risk assessment. We also demonstrate that our model's performance was enhanced by leveraging spatiotemporal information from sequential mammograms. Our findings suggest that RADIFUSION can provide clinicians with a powerful tool for breast cancer risk assessment."}, "cited_paper_content": {"title": "Non-Local Neural Networks", "abstract": "Both convolutional and recurrent operations are building blocks that process one local neighborhood at a time. 
In this paper, we present non-local operations as a generic family of building blocks for capturing long-range dependencies. Inspired by the classical non-local means method in computer vision, our non-local operation computes the response at a position as a weighted sum of the features at all positions. This building block can be plugged into many computer vision architectures. On the task of video classification, even without any bells and whistles, our non-local models can compete or outperform current competition winners on both Kinetics and Charades datasets. In static image recognition, our non-local models improve object detection/segmentation and pose estimation on the COCO suite of tasks. Code is available at this https URL ."}, "keywords": ["2D \u00d7 kernel"], "citation_intent": "method"} {"citing_id": "2304.03424v1", "cited_id": "1908.09048", "section_title": "Introduction", "citation": "Other works such as Griffon #REFR used machine learning models to predict the minor slowdown in runtimes for a limited number of job templates.", "text_before_citation": ["In production systems, jobs are often scheduled or pipelined with strong data dependencies (jobs using other jobs' output data as inputs) #OTHEREFR .", "Stability and predictability of job runtimes are important factors that affect the fundamental design and architecture of data processing pipelines.", "Unfortunately, they are often neglected by operators due to the difficulties of assessment even though job slowdowns are inevitable #OTHEREFR .", "Even with massive amounts of telemetry data, cloud providers still default to a manual triage process due to the difficulty of capturing the compounding factors that impact job runtime and its stability, which is not scalable and error-prone.", "Although prior works #OTHEREFR have empirically characterized runtime variation, they do not propose methods to predict the variation nor the likelihood of a new run being an outlier compared to the average or median 
runtimes."], "text_after_citation": ["They are unable to predict significant slowdowns that appear as outliers.", "As ML models are notoriously bad at handling outliers especially with a low existence, prior time-series based approaches #OTHEREFR are not applicable.", "In this paper, we aim to address this gap for production data analytics systems by developing a novel and systematic approach for modeling, predicting, and explaining the job runtime variation, allowing for finer-grained differentiation in characteristics.", "For our study, we comprehensively examine the runtime variation for millions of production SCOPE #OTHEREFR jobs on Cosmos #OTHEREFR , an exabyte-scale analytics platform at Microsoft that supports a broad spectrum of Microsoft products #OTHEREFR .", "Our key contribution is a framework for systematically analyzing, predicting and explaining runtime variation that includes:"], "citing_paper_content": {"title": "Runtime Variation In Big Data Analytics", "abstract": "The dynamic nature of resource allocation and runtime conditions on Cloud can result in high variability in a job's runtime across multiple iterations, leading to a poor experience. Identifying the sources of such variation and being able to predict and adjust for them is crucial to cloud service providers to design reliable data processing pipelines, provision and allocate resources, adjust pricing services, meet SLOs and debug performance hazards. In this paper, we analyze the runtime variation of millions of production SCOPE jobs on Cosmos, an exabyte-scale internal analytics platform at Microsoft. We propose an innovative 2-step approach to predict job runtime distribution by characterizing typical distribution shapes combined with a classification model with an average accuracy of >96%, out-performing traditional regression models and better capturing long tails. 
We examine factors such as job plan characteristics and inputs, resource allocation, physical cluster heterogeneity and utilization, and scheduling policies. To the best of our knowledge, this is the first study on predicting categories of runtime distributions for enterprise analytics workloads at scale. Furthermore, we examine how our methods can be used to analyze what-if scenarios, focusing on the impact of resource allocation, scheduling, and physical cluster provisioning decisions on a job's runtime consistency and predictability. CCS Concepts: \u2022 Computer systems organization \u2192 Cloud computing; \u2022 Computing methodologies \u2192 Causal reasoning and diagnostics; \u2022 Information systems \u2192 Data analytics."}, "cited_paper_content": {"title": "Griffon: Reasoning About Job Anomalies With Unlabeled Data In Cloud-Based Platforms", "abstract": "Microsoft's internal big data analytics platform is comprised of hundreds of thousands of machines, serving over half a million jobs daily, from thousands of users. The majority of these jobs are recurring and are crucial for the company's operation. Although administrators spend significant effort tuning system performance, some jobs inevitably experience slowdowns, i.e., their execution time degrades over previous runs. Currently, the investigation of such slowdowns is a labor-intensive and error-prone process, which costs Microsoft significant human and machine resources, and negatively impacts several lines of businesses. In this work, we present Griffon, a system we built and have deployed in production last year to automatically discover the root cause of job slowdowns. Existing solutions either rely on labeled data (i.e., resolved incidents with labeled reasons for job slowdowns), which is in most cases non-existent or non-trivial to acquire, or on time-series analysis of individual metrics that do not target specific jobs holistically. 
In contrast, in Griffon we cast the problem to a corresponding regression one that predicts the runtime of a job, and show how the relative contributions of the features used to train our interpretable model can be exploited to rank the potential causes of job slowdowns. Evaluated over historical incidents, we show that Griffon discovers slowdown causes that are consistent with the ones validated by domain-expert engineers, in a fraction of the time required by them."}, "keywords": ["runtimes", "machine learning models"], "citation_intent": "background"} {"citing_id": "2303.07985v1", "cited_id": "1908.05268", "section_title": "Conclusion", "citation": "We contrast this with the work of Kiefer & Neuen #REFR [Theorem 6.4], who showed that k-WL identifies graphs of treewidth k, though they did not control for rounds.", "text_before_citation": ["We showed that the (3k + 4)-WL identifies graphs of treewidth k in O(log n) rounds, improving upon the work of Grohe & Verbitsky #OTHEREFR , who established the analogous result for (4k + 3)-WL.", "As a corollary, we obtained that graphs of treewidth k are identified by FO + C formulas with (3k + 5)-variables and quantifier depth O(log n)."], "text_after_citation": ["Naturally, it would be of interest to close the gap between our upper bound of (3k + 4) on the Weisfeiler-Leman dimension required to achieve O(log n) rounds and the best known upper bound of k on the Weisfeiler-Leman dimension (without controlling for rounds) of graphs of treewidth k.", "One approach would be to improve the result on #OTHEREFR , to provide a tree decomposition of width \u2264 3k + 2 and height O(log n) for graphs of treewidth k.", "It would also be of interest to examine special families of graphs with bounded treewidth that possess additional structure, which Weisfeiler-Leman can exploit to achieve O(log n) rounds with only k + O(1) pebbles.", "Kiefer & Neuen #OTHEREFR [Theorem 6.1] also established a lower bound of \u2308k/2\u2309 \u2212 2 on 
the Weisfeiler-Leman dimension (again, without controlling for rounds) of graphs of treewidth k.", "It would be of interest to strengthen their lower bound on the Weisfeiler-Leman dimension when restricting to O(log n) rounds."], "citing_paper_content": {"title": "Logarithmic Weisfeiler-Leman And Treewidth *", "abstract": "In this paper, we show that the (3k + 4)-dimensional Weisfeiler-Leman algorithm can identify graphs of treewidth k in O(log n) rounds. This improves the result of Grohe & Verbitsky (ICALP 2006), who previously established the analogous result for (4k + 3)-dimensional Weisfeiler-Leman. In light of the equivalence between Weisfeiler-Leman and the logic FO + C (Cai, F\u00fcrer, & Immerman, Combinatorica 1992), we obtain an improvement in the descriptive complexity for graphs of treewidth k. Precisely, if G is a graph of treewidth k, then there exists a (3k + 5)-variable formula \u03d5 in FO + C with quantifier depth O(log n) that identifies G up to isomorphism."}, "cited_paper_content": {"title": "The Power Of The Weisfeiler-Leman Algorithm To Decompose Graphs", "abstract": "The Weisfeiler-Leman procedure is a widely-used approach for graph isomorphism testing that works by iteratively computing an isomorphism-invariant coloring of vertex tuples. Meanwhile, a fundamental tool in structural graph theory, which is often exploited in approaches to tackle the graph isomorphism problem, is the decomposition into 2- and 3-connected components. ::: We prove that the 2-dimensional Weisfeiler-Leman algorithm implicitly computes the decomposition of a graph into its 3-connected components. Thus, the dimension of the algorithm needed to distinguish two given graphs is at most the dimension required to distinguish the corresponding decompositions into 3-connected components (assuming it is at least 2). 
::: This result implies that for k >= 2, the k-dimensional algorithm distinguishes k-separators, i.e., k-tuples of vertices that separate the graph, from other vertex k-tuples. As a byproduct, we also obtain insights about the connectivity of constituent graphs of association schemes. ::: In an application of the results, we show the new upper bound of k on the Weisfeiler-Leman dimension of graphs of treewidth at most k. Using a construction by Cai, Furer, and Immerman, we also provide a new lower bound that is asymptotically tight up to a factor of 2."}, "keywords": ["treewidth k"], "citation_intent": "result"} {"citing_id": "2304.03754v1", "cited_id": "1810.04805", "section_title": "Video Descriptions", "citation": "For question and answer features, we pre-trained BERT #REFR on our generated training set and extracted QA features adhering to the NExT-QA setting.", "text_before_citation": ["9 OpenAI text-davinci-003 API https://openai.com/api/ #OTHEREFR Our empirical findings indicate that adjusting the max len and top k parameters in the prompt for GPT-3 yields favorable results.", "As such, we utilize the default max len and top k settings and use prompts in the format of \"what is the intention of {Cap}? 
Provide {top k} answers within vide examples of few-shot prompting by randomly sampling 5 QAs from NExT-QA and transferring the question to the declared sentence.", "For the distillation experiments (Section 3.4), we distill from GPT-3 to T5-large 11 model.", "Video QA training In all of our experiments, we followed the NExT-QA #OTHEREFR video preprocessing method, where we uniformly sampled eight segments of 16 consecutive frames.", "For visual features, we used Resnet101 #OTHEREFR pre-trained on ImageNet #OTHEREFR and inflated 3D ResNeXt-101 #OTHEREFR pre-trained on Kinetics #OTHEREFR as our feature extractors."], "text_after_citation": ["To adapt an open-ended QA model for multiple-choice QA, we concatenated each candidate answer with the question and optimized the model with Hinge Loss, following the NExT-QA implementation.", "For video QA training, we employ the default NeXT-QA implementation #OTHEREFR with the exception of setting the patience in the ReduceLROnPlateau to 2 instead of 5, and the maximum number of epochs to 25 instead of 50, as we observed a faster convergence during training.", "We conduct the training on a single NVIDIA TITAN RTX GPU, and each experiment takes around 18 to 24 hours at most."], "citing_paper_content": {"title": "Language Models Are Causal Knowledge Extractors For Zero-Shot Video Question Answering", "abstract": "Causal Video Question Answering (CVidQA) queries not only association or temporal relations but also causal relations in a video. Existing question synthesis methods pretrained question generation (QG) systems on reading comprehension datasets with text descriptions as inputs. However, QG models only learn to ask association questions (e.g., \"what is someone doing...\") and result in inferior performance due to the poor transfer of association knowledge to CVidQA, which focuses on causal questions like \"why is someone doing ...\". 
Observing this, we proposed to exploit causal knowledge to generate question-answer pairs, and proposed a novel framework, Causal Knowledge Extraction from Language Models (CaKE-LM), leveraging causal commonsense knowledge from language models to tackle CVidQA. To extract knowledge from LMs, CaKE-LM generates causal questions containing two events with one triggering another (e.g., \"score a goal\" triggers \"soccer player kicking ball\") by prompting LM with the action (soccer player kicking ball) to retrieve the intention (to score a goal). CaKE-LM significantly outperforms conventional methods by 4% to 6% of zero-shot CVidQA accuracy on NExT-QA and Causal-VidQA datasets. We also conduct comprehensive analyses and provide key findings for future research."}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. ::: BERT is conceptually simple and empirically powerful. 
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["pre-trained BERT"], "citation_intent": "method"} {"citing_id": "2303.12132v1", "cited_id": "1511.06349", "section_title": "Generative Deep Neural Language Models", "citation": "Autoencoding models are trained similarly, except that the masked word can be located anywhere in the randomly sampled suite of words #REFR .", "text_before_citation": ["From the point of view of a Transformer-based LLM, generating new text is equivalent to the generation of a translation, except that there is not necessarily an embedding space representation.", "Instead, it's the continuation of an initial word sequence, where the word to be generated can be located either at the end of the original sentence or in the middle of it.", "These two cases correspond to the two main approaches to training Generative Deep Neural Language Models.", "Autoregressive models are trained by a random sampling suite of words in the source dataset (\"utterances\"), masking a word and training the model to predict the masked word accurately #OTHEREFR .", "Autoregressive models are best thought of as autocomplete -they are trained to complete sentences in a way that is representative of their training dataset."], "text_after_citation": ["Autoencoding models are best thought of as search or autocorrect suggestions -they are trained to rewrite whole sentences in a way that is representative of their training dataset.", "Just as Google search suggestions, they can suggest words at the end of a query to refine it, but they can also add one at the beginning or even rewrite a word (for instance, to correct a typo).", "While Autoencoding 
models can be used to generate text based on utterances, their strength is rather in applications that require understanding the utterance as a whole.", "While the autoencoding models are generally considered to be more powerful than autoregressive ones, their generative capabilities are not necessarily optimal for the size of the model, training dataset, or the computational resources spent training the model.", "After all, the training mode relevant to the generation represents only a fraction of the training time of autoregressive models."], "citing_paper_content": {"title": "Fundamentals Of Generative Large Language Models And Perspectives In Cyber-Defense", "abstract": "Generative Language Models gained significant attention in late 2022 / early 2023, notably with the introduction of models refined to act consistently with users' expectations of interactions with AI (conversational models). Arguably the focal point of public attention has been such a refinement of the GPT3 model-the ChatGPT and its subsequent integration with auxiliary capabilities, including search as part of Microsoft Bing. Despite extensive prior research invested in their development, their performance and applicability to a range of daily tasks remained unclear and niche. However, their wider utilization without a requirement for technical expertise, made in large part possible through conversational fine-tuning, revealed the extent of their true capabilities in a real-world environment. This has garnered both public excitement for their potential applications and concerns about their capabilities and potential malicious uses. 
This review aims to provide a brief overview of the history, state of the art, and implications of Generative Language Models in terms of their principles, abilities, limitations, and future prospects-especially in the context of cyber-defense, with a focus on the Swiss operational environment."}, "cited_paper_content": {"title": "Generating Sentences From A Continuous Space", "abstract": "The standard recurrent neural network language model (RNNLM) generates sentences one word at a time and does not work from an explicit global sentence representation. In this work, we introduce and study an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences. This factorization allows it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Samples from the prior over these sentence representations remarkably produce diverse and well-formed sentences through simple deterministic decoding. By examining paths through this latent space, we are able to generate coherent novel sentences that interpolate between known sentences. We present techniques for solving the difficult learning problem presented by this model, demonstrate its effectiveness in imputing missing words, explore many interesting properties of the model's latent sentence space, and present negative results on the use of the model in language modeling."}, "keywords": ["Autoencoding models"], "citation_intent": "method"} {"citing_id": "2303.04393v2", "cited_id": "2002.07953", "section_title": "C. Moving-Threshold Estimation", "citation": "Because the entropy of unknown samples is usually larger than known ones', we could set a pre-defined threshold \u03c1 and recognize those samples whose entropy is larger than \u03c1 as unknown. 
Following #REFR , \u03c1 is set to be ln(K)/2.", "text_before_citation": ["The key challenge of open set domain adaptation is how to separate common samples from private samples in the target domain.", "Assuming the model has more confidence (lower entropy) in shared-class samples than unknown samples, one mainstream method in OSDA is to set a confidence threshold on the entropy of the output of the classifier."], "text_after_citation": ["However, the threshold leads the model to be sensitive to hyperparameter tuning that may influence the robustness of the model in real world scenarios.", "We tackle this problem by proposing the movingthreshold estimation method.", "One important drawback of the previous methods is that they ignore the semantic knowledge within unknown samples.", "In fact, there are various kinds of unknown images in an open world that could become unknown target samples.", "As a result, the relationships between unknown classes and known classes can vary greatly."], "citing_paper_content": {"title": "Imbalanced Open Set Domain Adaptation Via Moving-Threshold Estimation And Gradual Alignment", "abstract": "Multimedia applications are often associated with cross-domain knowledge transfer, where Unsupervised Domain Adaptation (UDA) can be used to reduce the domain shifts. Open Set Domain Adaptation (OSDA) aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain under the assumption that the target domain contains unknown classes. Existing OSDA methods consistently lay stress on the covariate shift, ignoring the potential label shift problem. The performance of OSDA methods degrades drastically under intradomain class imbalance and inter-domain label shift. However, little attention has been paid to this issue in the community. In this paper, the Imbalanced Open Set Domain Adaptation (IOSDA) is explored where the covariate shift, label shift and category mismatch exist simultaneously. 
To alleviate the negative effects raised by label shift in OSDA, we propose Open-set Moving-threshold Estimation and Gradual Alignment (OMEGA), a novel architecture that improves existing OSDA methods on class-imbalanced data. Specifically, a novel unknown-aware target clustering scheme is proposed to form tight clusters in the target domain to reduce the negative effects of label shift and intra-domain class imbalance. Furthermore, moving-threshold estimation is designed to generate specific thresholds for each target sample rather than using one for all. Extensive experiments on IOSDA, OSDA and OPDA benchmarks demonstrate that our method could significantly outperform existing state-of-the-arts. Code and data are available at https://github.com/mendicant04/OMEGA."}, "cited_paper_content": {"title": "Universal Domain Adaptation Through Self Supervision", "abstract": "Unsupervised domain adaptation methods traditionally assume that all source categories are present in the target domain. In practice, little may be known about the category overlap between the two domains. While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori. We propose a more universally applicable domain adaptation approach that can handle arbitrary category shift, called Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE). DANCE combines two novel ideas: First, as we cannot fully rely on source categories to learn features discriminative for the target, we propose a novel neighborhood clustering technique to learn the structure of the target domain in a self-supervised way. Second, we use entropy-based feature alignment and rejection to align target features with the source, or reject them as unknown categories based on their entropy. 
We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings."}, "keywords": ["unknown samples", "whose entropy"], "citation_intent": "method"} {"citing_id": "2303.14259v1", "cited_id": "1806.00680", "section_title": "Introduction", "citation": "Recent research has shown how to leverage modern networks to achieve high throughput and low latency with an RPC-based system #REFR .", "text_before_citation": ["Ordered key-value stores expand the set of supported applications by providing an efficient SCAN operation to retrieve the key-value pairs whose keys are within a specified range.", "For example, distributed file systems can use SCAN to map ranges of logical file offsets to the nodes storing the data #OTHEREFR .", "It is also used to query graph stores #OTHEREFR 50] and the popular Redis [14] offers sorted sets.", "We describe Honeycomb, a system that provides hardware acceleration for an in-memory ordered key-value store.", "There is a large body of research on improving the performance of ordered key-value stores, e.g., #OTHEREFR ."], "text_after_citation": ["Other research has explored using one-sided RDMA reads to bypass the server CPU for GET and SCAN operations #OTHEREFR .", "Since RDMA NICs only provide simple one-sided reads of contiguous memory regions, these systems require at least two RDMA reads per operation when supporting variable-sized keys or values, and they require client-side caching to avoid additional RDMAs when traversing the tree data structures stored by the servers.", "Honeycomb accelerates an ordered key-value store using an FPGA-based SmartNIC #OTHEREFR attached to a host CPU.", "These SmartNICs are widely deployed in data centers #OTHEREFR .", "They enable effective CPU offload by avoiding the functionality limitations of RDMA and the performance problems of SmartNICs based on general-purpose, low-power cores #OTHEREFR ."], "citing_paper_content": {"title": 
"Honeycomb: Ordered Key-Value Store Acceleration On An Fpga-Based Smartnic", "abstract": "In-memory ordered key-value stores are an important building block in modern distributed applications. We present Honeycomb, a hybrid software-hardware system for accelerating read-dominated workloads on ordered key-value stores that provides linearizability for all operations including scans. Honeycomb stores a B-Tree in host memory, and executes SCAN and GET on an FPGA-based SmartNIC, and PUT, UPDATE and DELETE on the CPU. This approach enables large stores and simplifies the FPGA implementation but raises the challenge of data access and synchronization across the slow PCIe bus. We describe how Honeycomb overcomes this challenge with careful data structure design, caching, request parallelism with out-of-order request execution, wait-free read operations, and batching synchronization between the CPU and the FPGA. For read-heavy YCSB workloads, Honeycomb improves the throughput of a state-of-the-art ordered key-value store by at least 1.8\u00d7. For scan-heavy workloads inspired by cloud storage, Honeycomb improves throughput by more than 2\u00d7. The cost-performance, which is more important for large-scale deployments, is improved by at least 1.5\u00d7 on these workloads. * This work was done when affiliated with Microsoft \u2020 This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible."}, "cited_paper_content": {"title": "Datacenter Rpcs Can Be General And Fast", "abstract": "It is commonly believed that datacenter networking software must sacrifice generality to attain high performance. The popularity of specialized distributed systems designed specifically for niche technologies such as RDMA, lossless networks, FPGAs, and programmable switches testifies to this belief. In this paper, we show that such specialization is not necessary. 
eRPC is a new general-purpose remote procedure call (RPC) library that offers performance comparable to specialized systems, while running on commodity CPUs in traditional datacenter networks based on either lossy Ethernet or lossless fabrics. eRPC performs well in three key metrics: message rate for small messages; bandwidth for large messages; and scalability to a large number of nodes and CPU cores. It handles packet loss, congestion, and background request execution. In microbenchmarks, one CPU core can handle up to 10 million small RPCs per second, or send large messages at 75 Gbps. We port a production-grade implementation of Raft state machine replication to eRPC without modifying the core Raft source code. We achieve 5.5 \u00b5s of replication latency on lossy Ethernet, which is faster than or comparable to specialized replication systems that use programmable switches, FPGAs, or RDMA."}, "keywords": ["high throughput", "RPC-based system"], "citation_intent": "background"} {"citing_id": "2305.02981v1", "cited_id": "1812.04948", "section_title": "Dataset Preparation", "citation": "We take the original human face FFHQ dataset #REFR and create a set of image masks based on our matting network.", "text_before_citation": ["We utilize Fast Multi-Level Foreground Estimation #OTHEREFR to extract foregrounds.", "These foregrounds are blended (1) with new backgrounds, and the resulting compositions are used as inputs for StyleMatte.", "Additional background images were obtained from the BG-20k dataset #OTHEREFR ."], "text_after_citation": ["First, we filter the dataset to contain images with only one person using an instance segmentation neural network #OTHEREFR .", "The filtration step is obligatory because some visual artifacts arise when skipping it.", "We obtain the results of StyleMatte for the remaining 90% images as additional \u03b1channels.", "We perform an additional filtration step to get Method FID FFHQ AFHQ v2 PSeg #OTHEREFR 62 rid of some masks that are 
visually inconsistent with the contours of the portrait.", "To do so, we extract segmentation masks s with a basic semantic segmentation network #OTHEREFR pretrained on ResNet50 #OTHEREFR and choose only those that are aligned with the matte."], "citing_paper_content": {"title": "Adversarially-Guided Portrait Matting", "abstract": "We present a method for generating alpha mattes using a limited data source. We pretrain a novel transformerbased model (StyleMatte) on portrait datasets. We utilize this model to provide image-mask pairs for the StyleGAN3based network (StyleMatteGAN). This network is trained unsupervisedly and generates previously unseen imagemask training pairs that are fed back to StyleMatte. We demonstrate that the performance of the matte pulling network improves during this cycle and obtains top results on the used datasets. Furthermore, StyleMatte-GAN provides high-resolution, privacy-preserving portraits with alpha mattes, making it suitable for various image composition tasks. Our code is available at https://github.com/chroneus/stylematte."}, "cited_paper_content": {"title": "A Style-Based Generator Architecture For Generative Adversarial Networks", "abstract": "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. 
Finally, we introduce a new, highly varied and high-quality dataset of human faces."}, "keywords": ["image masks", "original human face"], "citation_intent": "method"} {"citing_id": "2304.01950v1", "cited_id": "1703.05175", "section_title": "Because Of The Challenges Of Limitation In Network Bandwidth", "citation": "Inspired by prototypical networks #REFR , which adopts a single prototype to represent each class by calculating the mean of the class's embedding space.", "text_before_citation": ["FedPer #OTHEREFR proposes a strategy of adding a personalized layer to the base layer and suggests updating only the base layer during the federated training process.", "Afterwards, clients can update their personalized layer based on their own local data.", "Additionally, #OTHEREFR explores a benchmark for non-IID settings, they divide non-IID settings into five cases, such as label distribution skew, feature distribution skew, quantity skew, etc.", "Further, as #OTHEREFR mentioned, some existing studies #OTHEREFR - #OTHEREFR , #OTHEREFR cover only one non-IID case, which do not give sufficient evaluations to this challenge.", "Therefore, to avoid the influence of biased global models and to evaluate non-IID cases as comprehensively as possible, we focus on personalized FL by optimizing the local objective of each local client under the label and feature distribution skewness."], "text_after_citation": ["This prototype can serve as an important information carrier to boost the performance of various learning domains, and has been successfully applied in meta-learning #OTHEREFR , multi-task learning #OTHEREFR , and transfer learning #OTHEREFR .", "There have been some existing works #OTHEREFR , #OTHEREFR - #OTHEREFR introducing the concept of prototypes into FL.", "FedProto #OTHEREFR proposes to reduce communication overhead by only exchanging prototypes between clients and the server, instead of exchanging gradients or model parameters.", "FedPCL #OTHEREFR proposes to use 
multiple pre-trained models to extract the features separately, and then they use a projection network to fuse these extracted features in a personalized way while keeping the shared representation compact for efficient communication.", "These works adopt a single prototype to represent each class and argue that directly averaging the representations from heterogeneous data across clients can effectively capture the embedding representations of each class."], "citing_paper_content": {"title": "Mp-Fedcl: Multi-Prototype Federated Contrastive Learning For Edge Intelligence", "abstract": "Federated learning-assisted edge intelligence enables privacy protection in modern intelligent services. However, not Independent and Identically Distributed (non-IID) distribution among edge clients can impair the local model performance. The existing single prototype-based strategy represents a sample by using the mean of the feature space. However, feature spaces are usually not clustered, and a single prototype may not represent a sample well. Motivated by this, this paper proposes a multiprototype federated contrastive learning approach (MP-FedCL) which demonstrates the effectiveness of using a multi-prototype strategy over a single-prototype under non-IID settings, including both label and feature skewness. Specifically, a multi-prototype computation strategy based on k-means is first proposed to capture different embedding representations for each class space, using multiple prototypes (k centroids) to represent a class in the embedding space. In each global round, the computed multiple prototypes and their respective model parameters are sent to the edge server for aggregation into a global prototype pool, which is then sent back to all clients to guide their local training. 
Finally, local training for each client minimizes their own supervised learning tasks and learns from shared prototypes in the global prototype pool through supervised contrastive learning, which encourages them to learn knowledge related to their own class from others and reduces the absorption of unrelated knowledge in each global iteration. Experimental results on MNIST, Digit-5, Office-10, and DomainNet show that our method outperforms multiple baselines, with an average test accuracy improvement of about 4.6% and 10.4% under feature and label non-IID distributions, respectively."}, "cited_paper_content": {"title": "Prototypical Networks For Few-Shot Learning", "abstract": "A recent approach to few-shot classification called matching networks has demonstrated the benefits of coupling metric learning with a training procedure that mimics test. This approach relies on a complicated fine-tuning procedure and an attention scheme that forms a distribution over all points in the support set, scaling poorly with its size. We propose a more streamlined approach, prototypical networks, that learns a metric space in which few-shot classification can be performed by computing Euclidean distances to prototype representations of each class, rather than individual points. Our method is competitive with state-of-the-art one-shot classification approaches while being much simpler and more scalable with the size of the support set. We empirically demonstrate the performance of our approach on the Omniglot and mini-ImageNet datasets. 
We further demonstrate that a similar idea can be used for zero-shot learning, where each class is described by a set of attributes, and achieve state-of-the-art results on the Caltech UCSD bird dataset."}, "keywords": ["single prototype", "prototypical networks"], "citation_intent": "background"} {"citing_id": "2304.09026v1", "cited_id": "1910.04032", "section_title": "Fog Data Processing", "citation": "Personally identifiable data can be anonymized using on-device resources before it is further processed by a third party #REFR .", "text_before_citation": ["Fog computing is an abstraction around the combined compute and storage resources at the edge, in the core network, and in the cloud #OTHEREFR .", "Fog computing is a promising paradigm for emerging domains such as the IoT: By extending cloud resources close to the edge of the network, where data is generated, fog applications can benefit from low-latency, high-bandwidth, privacy-preserving data processing infrastructure #OTHEREFR .", "For example, IoT sensor data can be processed by compute resources at a local radio gateway and sent to actuators at the edge without incurring a significant network delay #OTHEREFR .", "Similarly, data from multiple sensors can be aggregated and filtered at the edge in order to limit the network strain of sending all data to the cloud."], "text_after_citation": ["The downside of geo-distributed, heterogeneous fog infrastructure is the complexity of its management for applications, i.e., building approaches for distributing data, deploying services, and managing fault-tolerance #OTHEREFR .", "Researchers have proposed abstractions in the form of fog data processing platforms that manage this complexity for applications.", "For example, NebulaStream #OTHEREFR is an end-to-end IoT data management system.", "A key novelty of NebulaStream is its ability to autonomously deploy different data processing approaches such as stream and complex event processing in a geo-distributed 
environment.", "SoFA #OTHEREFR uses Apache Spark to combine all available fog resources for stream processing operator deployment."], "citing_paper_content": {"title": "Towards A Benchmark For Fog Data Processing", "abstract": "Fog data processing systems provide key abstractions to manage data and event processing in the geo-distributed and heterogeneous fog environment. The lack of standardized benchmarks for such systems, however, hinders their development and deployment, as different approaches cannot be compared quantitatively. Existing cloud data benchmarks are inadequate for fog computing, as their focus on workload specification ignores the tight integration of application and infrastructure inherent in fog computing. In this paper, we outline an approach to a fog-native data processing benchmark that combines workload specifications with infrastructure specifications. This holistic approach allows researchers and engineers to quantify how a software approach performs for a given workload on given infrastructure. Further, by basing our benchmark in a realistic IoT sensor network scenario, we can combine paradigms such as low-latency event processing, machine learning inference, and offline data analytics, and analyze the performance impact of their interplay in a fog data processing system."}, "cited_paper_content": {"title": "Fog Computing As Privacy Enabler", "abstract": "Despite broad discussions on privacy challenges arising from fog computing, the authors argue that privacy and security requirements might actually drive the adoption of fog computing. They present four patterns of fog computing fostering data privacy and the security of business secrets. Their practical application is illuminated on the basis of three case studies."}, "keywords": ["identifiable data"], "citation_intent": "method"} {"citing_id": "2303.13371v1", "cited_id": "1909.05506", "section_title": "C. 
Properties Of Rcr And Rar", "citation": "Cross-Modal Adaptive Message Passing (CAMP) #REFR explores a region-word affinity matrix via inner product and transfers cross-modality contents to improve the region and word representations, which are then aggregated as the holistic image and text features to compute the final similarity.", "text_before_citation": ["To demonstrate their great applicability, we apply these two regulators to many existing methods based on cross-modal interaction:", "Stacked Cross Attention (SCAN) #OTHEREFR first computes all region-word similarities and aligns each region/word with its corresponding words/regions.", "The final similarity is obtained by averaging all region/word-based cosine distances.", "Bidirectional Focal Attention (BFAN) #OTHEREFR extends the generic attention by reassigning more fine-grained attention weight for each region-word pair and calculates the matching result by summing up region-based and word-based scores.", "Position Focused Attention (PFAN) #OTHEREFR enhances region features by introducing extra position information to promote region-word correspondences and integrates all region/wordattended cosine similarities as the prediction."], "text_after_citation": ["Similarity Graph Reasoning and Attention Filtration (SGRAF) #OTHEREFR adopts cosine similarities multiplied with a fixed temperature as region-word attention weights, followed by the complex graph and attention modules to map hierarchical similarity features into a matching score. 
Fig.", "4 illustrates how we plug the RCR or RAR into the above matching approaches.", "Specifically, cross-modal attention utilizes the cosine metric or inner product as region-word affinity weights, and outputs each region/word along with its related words/regions.", "With these paired features, the RCR first constructs the alignment vectors and then learns the corresponding weight vectors and temperature factors via Eq.", "(7)- #OTHEREFR , which in turn refine the region-word feature distances and optimize the cross-modal interaction via Eq. (9)- #OTHEREFR ."], "citing_paper_content": {"title": "Plug-And-Play Regulators For Image-Text Matching", "abstract": "Exploiting fine-grained correspondence and visualsemantic alignments has shown great potential in image-text matching. Generally, recent approaches first employ a crossmodal attention unit to capture latent region-word interactions, and then integrate all the alignments to obtain the final similarity. However, most of them adopt one-time forward association or aggregation strategies with complex architectures or additional information, while ignoring the regulation ability of network feedback. In this paper, we develop two simple but quite effective regulators which efficiently encode the message output to automatically contextualize and aggregate cross-modal representations. Specifically, we propose (i) a Recurrent Correspondence Regulator (RCR) which facilitates the cross-modal attention unit progressively with adaptive attention factors to capture more flexible correspondence, and (ii) a Recurrent Aggregation Regulator (RAR) which adjusts the aggregation weights repeatedly to increasingly emphasize important alignments and dilute unimportant ones. Besides, it is interesting that RCR and RAR are \"plug-and-play\": both of them can be incorporated into many frameworks based on cross-modal interaction to obtain significant benefits, and their cooperation achieves further improvements. 
Extensive experiments on MSCOCO and Flickr30K datasets validate that they can bring an impressive and consistent R@1 gain on multiple models, confirming the general effectiveness and generalization ability of the proposed methods."}, "cited_paper_content": {"title": "Camp: Cross-Modal Adaptive Message Passing For Text-Image Retrieval", "abstract": "Text-image cross-modal retrieval is a challenging task in the field of language and vision. Most previous approaches independently embed images and sentences into a joint embedding space and compare their similarities. However, previous approaches rarely explore the interactions between images and sentences before calculating similarities in the joint space. Intuitively, when matching between images and sentences, human beings would alternatively attend to regions in images and words in sentences, and select the most salient information considering the interaction between both modalities. In this paper, we propose Cross-modal Adaptive Message Passing (CAMP), which adaptively controls the information flow for message passing across modalities. Our approach not only takes comprehensive and fine-grained cross-modal interactions into account, but also properly handles negative pairs and irrelevant information with an adaptive gating scheme. Moreover, instead of conventional joint embedding approaches for text-image matching, we infer the matching score based on the fused features, and propose a hardest negative binary cross-entropy loss for training. 
Results on COCO and Flickr30k significantly surpass state-of-the-art methods, demonstrating the effectiveness of our approach."}, "keywords": ["cross-modality contents", "Cross-Modal Adaptive Message"], "citation_intent": "method"} {"citing_id": "2303.00923v1", "cited_id": "1602.01585", "section_title": "B Baseline Systems", "citation": "They design a regression model using BERT-based features extracted from review texts, star rating, and product type information from Amazon product review dataset #REFR .", "text_before_citation": ["These features are fed into conventional classifiers such as SVM, Random Forest, and gradient boosting to identify helpful reviews.", "\u2022 TextCNN #OTHEREFR employs a text-based CNN model #OTHEREFR to automatically capture the character-level, word-level, and topic-level features for helpfulness prediction.", "\u2022 MTNL (Fan et al., 2018) utilizes end-to-end multi-task neural learning (MTNL) architecture for classifying helpful reviews.", "They take the help of an auxiliary task, such as rating regression, to boost the performance of the original task, which is review helpfulness identification.", "\u2022 BERTHelp #OTHEREFR develop their helpfulness prediction model using pre-trained BERT #OTHEREFR ."], "text_after_citation": [], "citing_paper_content": {"title": "On The Role Of Reviewer Expertise In Temporal Review Helpfulness Prediction", "abstract": "Helpful reviews have been essential for the success of e-commerce services, as they help customers make quick purchase decisions and benefit the merchants in their sales. While many reviews are informative, others provide little value and may contain spam, excessive appraisal, or unexpected biases. With the large volume of reviews and their uneven quality, the problem of detecting helpful reviews has drawn much attention lately.
Existing methods for identifying helpful reviews primarily focus on review text and ignore the two key factors of (1) who post the reviews and (2) when the reviews are posted. Moreover, the helpfulness votes suffer from scarcity for less popular products and recently submitted (a.k.a., cold-start) reviews. To address these challenges, we introduce a dataset and develop a model that integrates the reviewer's expertise, derived from the past review history of the reviewers, and the temporal dynamics of the reviews to automatically assess review helpfulness. We conduct experiments on our dataset to demonstrate the effectiveness of incorporating these factors and report improved results compared to several well-established baselines."}, "cited_paper_content": {"title": "Ups And Downs: Modeling The Visual Evolution Of Fashion Trends With One-Class Collaborative Filtering", "abstract": "Building a successful recommender system depends on understanding both the dimensions of people's preferences as well as their dynamics. In certain domains, such as fashion, modeling such preferences can be incredibly difficult, due to the need to simultaneously model the visual appearance of products as well as their evolution over time. The subtle semantics and non-linear dynamics of fashion evolution raise unique challenges especially considering the sparsity and large scale of the underlying datasets. In this paper we build novel models for the One-Class Collaborative Filtering setting, where our goal is to estimate users' fashion-aware personalized ranking functions based on their past feedback. To uncover the complex and evolving visual factors that people consider when evaluating products, our method combines high-level visual features extracted from a deep convolutional neural network, users' past feedback, as well as evolving trends within the community.
Experimentally we evaluate our method on two large real-world datasets from Amazon.com, where we show it to outperform state-of-the-art personalized ranking measures, and also use it to visualize the high-level fashion trends across the 11-year span of our dataset."}, "keywords": ["review texts", "dataset"], "citation_intent": "method"} {"citing_id": "2303.13332v1", "cited_id": "1811.02840", "section_title": "Introduction", "citation": "Tellez et al., #REFR showed in a benchmark study that VAE compression of medical tissue images to a latent space of 128 (>5000:1 compression ratio) retained the most details of the original whole slide image compared to 4 other encoders.", "text_before_citation": ["Compression and scaling have also been found to adversely affect tissue segmentation up to ratios of 50:1 #OTHEREFR .", "In contrast to discrete cosine transformation models, neural networks have been proven to retain high efficiency and fidelity in the lossy compression of image data #OTHEREFR .", "While neural networks seek to store image data in latent space representations, not every network does this at equivalent efficiency or accuracy #OTHEREFR .", "Variational autoencoders (VAEs) retain higher image quality and lower noise ratios at extreme compression ratios #OTHEREFR ."], "text_after_citation": ["In the current study, we develop a VAE to compress and index images in latent space for fast complex search of whole slide H&E cancer images."], "citing_paper_content": {"title": "Clinically Relevant Latent Space Embedding Of Cancer Histopathology Slides Through Variational Autoencoder Based Image Compression", "abstract": "In this paper, we introduce a Variational Autoencoder (VAE) based training approach that can compress and decompress cancer pathology slides at a compression ratio of 1:512, which is better than the previously reported state of the art (SOTA) in the literature, while still maintaining
accuracy in clinical validation tasks. The compression approach was tested on more common computer vision datasets such as CIFAR10, and we explore which image characteristics enable this compression ratio on cancer imaging data but not generic images. We generate and visualize embeddings from the compressed latent space and demonstrate how they are useful for clinical interpretation of data, and how in the future such latent embeddings can be used to accelerate search of clinical imaging data."}, "cited_paper_content": {"title": "Neural Image Compression For Gigapixel Histopathology Image Analysis", "abstract": "We propose Neural Image Compression (NIC), a two-step method to build convolutional neural networks for gigapixel image analysis solely using weak image-level labels. First, gigapixel images are compressed using a neural network trained in an unsupervised fashion, retaining high-level information while suppressing pixel-level noise. Second, a convolutional neural network (CNN) is trained on these compressed image representations to predict image-level labels, avoiding the need for fine-grained manual annotations. We compared several encoding strategies, namely reconstruction error minimization, contrastive training and adversarial feature learning, and evaluated NIC on a synthetic task and two public histopathology datasets. We found that NIC can exploit visual cues associated with image-level labels successfully, integrating both global and local visual information. 
Furthermore, we visualized the regions of the input gigapixel images where the CNN attended to, and confirmed that they overlapped with annotations from human experts."}, "keywords": ["medical tissue images"], "citation_intent": "result"} {"citing_id": "2303.03770v3", "cited_id": "1602.07868", "section_title": "Experimental Setup", "citation": "For experiments on PACS and VisDA-C, we also apply WeightNorm #REFR on the classifier.", "text_before_citation": ["Following #OTHEREFR , we use a subset of it that contains 126 classes from 4 domains (Real, Sketch, Clipart, Painting) and we refer to it as DomainNet-126.", "We evaluate 7 domain shifts built from the 4 domains and we report the top-1 accuracy under each domain shift as well as their average (Avg.). Implementation details.", "We use standard classification architectures comprising a feature extractor followed by a classifier.", "Following SHOT #OTHEREFR , we add an extra 256-dimensional fully-connected+BatchNorm bottleneck after the encoder output."], "text_after_citation": ["For source training, we initialise the ResNet backbone with ImageNet-1K #OTHEREFR pre-trained weights available in the Pytorch model zoo.", "We train the source model with the standard cross-entropy loss and with label-smoothing like in #OTHEREFR .", "For the adaptation phase, the target model is initialised with the source model's parameters.", "For more details, the code is available at https://github.com/MattiaLitrico/Guiding-Pseudo-labels-with-Uncertainty-Estimation-for-Sourcefree-Unsupervised-Domain-Adaptation."], "citing_paper_content": {"title": "Guiding Pseudo-Labels With Uncertainty Estimation For Source-Free Unsupervised Domain Adaptation", "abstract": "Standard Unsupervised Domain Adaptation (UDA) methods assume the availability of both source and target data during the adaptation.
In this work, we investigate the Source-free Unsupervised Domain Adaptation (SF-UDA), a specific case of UDA where a model is adapted to a target domain without access to source data. We propose a novel approach for the SF-UDA setting based on a loss reweighting strategy that brings robustness against the noise that inevitably affects the pseudo-labels. The classification loss is reweighted based on the reliability of the pseudo-labels that is measured by estimating their uncertainty. Guided by such reweighting strategy, the pseudo-labels are progressively refined by aggregating knowledge from neighbouring samples. Furthermore, a self-supervised contrastive framework is leveraged as a target space regulariser to enhance such knowledge aggregation. A novel negative pairs exclusion strategy is proposed to identify and exclude negative pairs made of samples sharing the same class, even in presence of some noise in the pseudo-labels. Our method outperforms previous methods on three major benchmarks by a large margin. We set the new SF-UDA state-of-the-art on VisDA-C and DomainNet with a performance gain of +1.8% on both benchmarks and on PACS with +12.3% in the single-source setting and +6.6% in multi-target adaptation. Additional analyses demonstrate that the proposed approach is robust to the noise, which results in significantly more accurate pseudo-labels compared to state-of-the-art approaches."}, "cited_paper_content": {"title": "Weight Normalization: A Simple Reparameterization To Accelerate Training Of Deep Neural Networks", "abstract": "We present weight normalization: a reparameterization of the weight vectors in a neural network that decouples the length of those weight vectors from their direction. By reparameterizing the weights in this way we improve the conditioning of the optimization problem and we speed up convergence of stochastic gradient descent. 
Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. We demonstrate the usefulness of our method on applications in supervised image recognition, generative modelling, and deep reinforcement learning."}, "keywords": ["classifier", "WeightNorm"], "citation_intent": "method"} {"citing_id": "2303.09660v1", "cited_id": "1810.03292", "section_title": "Literature Review", "citation": "Gradientbased methods may generate the same results using an untrained model as those from a trained model, indicating that the method demonstrates only general model characteristics #REFR .", "text_before_citation": ["However, as inferred from the algorithm strategy, the computation of integrated gradients requires repeated calculation over multiple iterations. 
To reduce computational cost, #OTHEREFR", "(2021) proposed a special class of CNNs to compute the result of integrated gradients with only one forward-backward pass.", "As a result, the saliency map itself can be used as an effective tool; for example, it can be integrated into regular training as priors #OTHEREFR .", "Each of the discussed methods has its own strengths and weaknesses.", "For example, the perturbation-based method may underestimate the importance of features because other features already saturate the output #OTHEREFR ."], "text_after_citation": ["Although CAM methods can generate a saliency map by passing through fewer non-linear layers and potentially suffer from fewer issues, they may predict inaccurate object locations due to the coarse resolution of the saliency map.", "In this paper, we explore the characteristics of different \"feature attribution-based\" model explanation approaches and use two GeoAI-ready natural feature datasets in an image classification task to compare model-learned features with human-understandable features.", "In addition to interpreting a model's reasoning process, we compare the strengths and weaknesses of these popular approaches and discuss ways to improve them within the growing field of explainable GeoAI."], "citing_paper_content": {"title": "Explainable Geoai: Can Saliency Maps Help Interpret Artificial Intelligence'S Learning Process? An Empirical Study On Natural Feature Detection", "abstract": "Improving the interpretability of geospatial artificial intelligence (GeoAI) models has become critically important to open the \"black box\" of complex AI models, such as deep learning. This paper compares popular saliency map generation techniques and their strengths and weaknesses in interpreting GeoAI and deep learning models' reasoning behaviors, particularly when applied to geospatial analysis and image processing tasks. 
We surveyed two broad classes of model explanation methods: perturbation-based and gradient-based methods. The former identifies important image areas, which help machines make predictions by modifying a localized area of the input image. The latter evaluates the contribution of every single pixel of the input image to the model's prediction results through gradient backpropagation. In this study, three algorithms (the occlusion method, the integrated gradients method, and the class activation map method) are examined for a natural feature detection task using deep learning. The algorithms' strengths and weaknesses are discussed, and the consistency between model-learned and human-understandable concepts for object recognition is also compared. The experiments used two GeoAI-ready datasets to demonstrate the generalizability of the research findings."}, "cited_paper_content": {"title": "Sanity Checks For Saliency Maps", "abstract": "Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Several saliency methods have been proposed, often guided by visual appeal on image data. In this work, we propose an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. We find that reliance, solely, on visual assessment can be misleading. Through extensive experiments we show that some existing saliency methods are independent both of the model and of the data generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either data or model, such as, finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, and debugging the model. We interpret our findings through an analogy with edge detection in images, a technique that requires neither training data nor model.
Theory in the case of a linear model and a single-layer convolutional neural network supports our experimental findings."}, "keywords": ["Gradientbased methods", "trained model"], "citation_intent": "method"} {"citing_id": "2304.11118v1", "cited_id": "1711.05101", "section_title": "Experiments", "citation": "We use AdamW optimizer #REFR with a learning rate of 1e \u2212 4, batch size of 256, without weight decay.", "text_before_citation": ["During training, we use \u03bb vlb = 1.0, and define t to vary between [1, T ], where T = 1000 corresponds to a pure Gaussian distribution.", "At inference, we start from pure Gaussian noise, and we use DDIM sampling [47] with 50 steps.", "We report the results after retraining AvatarPoser, and report the same results as in #OTHEREFR for methods with a star (*).", "We set the variance \u03a3 \u03b8 of the reverse noise to zero.", "This configuration turns the model into a deterministic mapping from Gaussian noise to motions, allowing it to do much fewer denoising steps without degrading the quality of synthesized motions."], "text_after_citation": ["Our model has 22M parameters and is trained for 1.5 days on four NVIDIA Quadro RTX 8000. More implementation details are in the Supplementary Material (Sec. A.1).", "Our approach has no limitations concerning the length of the generated sequences.", "We can synthesize motions of arbitrary length by applying BoDiffusion in an autoregressive manner using a sliding window over the input data.", "We refer the reader to the Supplementary Material for more explanation of our inference-time protocol (see Sec. A.2)."], "citing_paper_content": {"title": "Bodiffusion: Diffusing Sparse Observations For Full-Body Human Motion Synthesis", "abstract": "Mixed reality applications require tracking the user's full-body motion to enable an immersive experience.
However, typical head-mounted devices can only track head and hand movements, leading to a limited reconstruction of full-body motion due to variability in lower body configurations. We propose BoDiffusion, a generative diffusion model for motion synthesis to tackle this under-constrained reconstruction problem. We present a time and space conditioning scheme that allows BoDiffusion to leverage sparse tracking inputs while generating smooth and realistic full-body motion sequences. To the best of our knowledge, this is the first approach that uses the reverse diffusion process to model full-body tracking as a conditional sequence generation task. We conduct experiments on the large-scale motion-capture dataset AMASS and show that our approach outperforms the state-of-the-art approaches by a significant margin in terms of full-body motion realism and joint reconstruction error."}, "cited_paper_content": {"title": "Fixing Weight Decay Regularization In Adam", "abstract": "We note that common implementations of adaptive gradient algorithms, such as Adam, limit the potential benefit of weight decay regularization, because the weights do not decay multiplicatively (as would be expected for standard weight decay) but by an additive constant factor. We propose a simple way to resolve this issue by decoupling weight decay and the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam, and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). We also demonstrate that longer optimization runs require smaller weight decay values for optimal results and introduce a normalized variant of weight decay to reduce this dependence.
Finally, we propose a version of Adam with warm restarts (AdamWR) that has strong anytime performance while achieving state-of-the-art results on CIFAR-10 and ImageNet32x32. Our source code is available at https://github.com/loshchil/AdamW-and-SGDW"}, "keywords": ["weight decay", "AdamW optimizer"], "citation_intent": "method"} {"citing_id": "2303.16521v1", "cited_id": "1702.08720", "section_title": "Datasets And Metrics", "citation": "However, entropy maximization occasionally also reaches the state with all points in the same cluster (this is consistent with previous literature, e.g., #REFR ).", "text_before_citation": ["Our method significantly outperforms others on all datasets and metrics.", "Observe that the unregularized model exhibits total collapse in all experiments, placing all points in the same cluster and consequently achieving a cluster performance no better than random guessing.", "In many of our experiments, the performance of SS is not much better.", "By making a slight change to the author's original method, we could actually significantly improve its results (see appendix), but it was still unreliable and less accurate than our method.", "The other two existing partition support methods do a reasonable job of avoiding partition collapse."], "text_after_citation": ["The Sinkhorn-Knopp method is more reliable, but by far the most uniform cluster sizes are produced by our method.", "Note that we do not employ our assignment algorithm at inference time, instead we just assign each point to the cluster with the nearest centroid.", "This shows that our cluster centroids are well-distributed around the data manifold, each capturing a sizeable subset of the data even when the explicit support is removed.", "Together, these figures show that (a) some form of partition support is necessary to learn anything meaningful, (b) our method of combination assignment is better at avoiding partition Table 1 : Effect, on cluster size and clustering performance, of 
our method compared to two existing methods of preventing partition collapse.", "\"CA\" refers to our method of combination assignment, \"SK\" refers to Sinkhorn-Knopp regularization, as proposed by #OTHEREFR and #OTHEREFR , \"Ent\" refers to entropy maximization, as used by #OTHEREFR , \"SS\" is the sum of squares minimization proposed in #OTHEREFR and others, and \"No-Reg\" is the model without any partition support component."], "citing_paper_content": {"title": "Hard Regularization To Prevent Deep Clustering Collapse Without Data Augmentation", "abstract": "Online deep clustering refers to the joint use of a feature extraction network and a clustering model to assign cluster labels to each new data point or batch as it is processed. While faster and more versatile than offline methods, online clustering can easily reach the collapsed solution where the encoder maps all inputs to the same point and all are put into a single cluster. Successful existing models have employed various techniques to avoid this problem, most of which require data augmentation or which aim to make the average soft assignment across the dataset the same for each cluster. We propose a method that does not require data augmentation, and that, differently from existing methods, regularizes the hard assignments. Using a Bayesian framework, we derive an intuitive optimization objective that can be straightforwardly included in the training of the encoder network. Tested on four image datasets, it consistently avoids collapse more robustly than other methods and leads to more accurate clustering. 
We also conduct further experiments and analyses justifying our choice to regularize the hard cluster assignments."}, "cited_paper_content": {"title": "Learning Discrete Representations Via Information Maximizing Self Augmented Training", "abstract": "Learning discrete representations of data is a central machine learning task because of the compactness of the representations and ease of interpretation. The task includes clustering and hash learning as special cases. Deep neural networks are promising to be used because they can model the non-linearity of data and scale to large datasets. However, their model complexity is huge, and therefore, we need to carefully regularize the networks in order to learn useful representations that exhibit intended invariance for applications of interest. To this end, we propose a method called Information Maximizing Self-Augmented Training (IMSAT). In IMSAT, we use data augmentation to impose the invariance on discrete representations. More specifically, we encourage the predicted representations of augmented data points to be close to those of the original data points in an end-to-end fashion. At the same time, we maximize the information-theoretic dependency between data and their predicted discrete representations.
Extensive experiments on benchmark datasets show that IMSAT produces state-of-the-art results for both clustering and unsupervised hash learning."}, "keywords": ["cluster", "entropy maximization"], "citation_intent": "result"} {"citing_id": "2303.06545v1", "cited_id": "1409.0473", "section_title": "Positive Moment Estimation", "citation": "We use an attentionbased Recurrent Neural Network #REFR as the language generator \u03a6(\u2022), which utilizes these features to reconstruct nouns and verbs of one query.", "text_before_citation": ["Researchers #OTHEREFR have observed that neural networks will fit correct labels (informative labels) much faster than wrong labels (uninformative labels).", "Thus, from a perspective of algorithm optimization, networks will predict unobserved positive moments more readily than negative ones when the number of positives is constrained #OTHEREFR .", "Semantic reconstruction: To better uncover potential positive moments, we build a lightweight semantic-reconstruction model that reconstructs semantic information of queries from proposal features.", "Based on a single positive label, a sequential video embedding within its interval is first extracted from the video embedding V.", "Then, a random weighted averaging operation is applied to augment N s fixed-dimensional features {f v i } Ns i=1 ."], "text_after_citation": ["In the optimization phase, a standard captioning loss is used to maximize the normalized log-likelihood of the correct words,", "EQUATION", "where x l denotes a noun/verb of one query and T s is total time step of the recurrent model.", "Intuitively, positive moments can better reconstruct the semantic information of a query.", "In the estimation phase, we leverage this recurrent model to generate words for both multi-scale proposals and the labeled moment, and then compare semantic similarities between them."], "citing_paper_content": {"title": "Towards Diverse Temporal Grounding Under Single Positive Labels",
"abstract": "Temporal grounding aims to retrieve moments of the described event within an untrimmed video by a language query. Typically, existing methods assume annotations are precise and unique, yet one query may describe multiple moments in many cases. Hence, simply taking it as a one-vs-one mapping task and striving to match single-label annotations will inevitably introduce false negatives during optimization. In this study, we reformulate this task as a one-vs-many optimization problem under the condition of single positive labels. The unlabeled moments are considered unobserved rather than negative, and we explore mining potential positive moments to assist in multiple moment retrieval. In this setting, we propose a novel Diverse Temporal Grounding framework, termed DTG-SPL, which mainly consists of a positive moment estimation (PME) module and a diverse moment regression (DMR) module. PME leverages semantic reconstruction information and an expected positive regularization to uncover potential positive moments in an online fashion. Under the supervision of these pseudo positives, DMR is able to localize diverse moments in parallel that meet different users. The entire framework allows for end-to-end optimization as well as fast inference. Extensive experiments on Charades-STA and ActivityNet Captions show that our method achieves superior performance in terms of both single-label and multi-label metrics."}, "cited_paper_content": {"title": "Neural Machine Translation By Jointly Learning To Align And Translate", "abstract": "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance.
The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition."}, "keywords": ["attentionbased Recurrent Neural"], "citation_intent": "method"} {"citing_id": "2303.12086v1", "cited_id": "2002.11450", "section_title": "Settings", "citation": "We have implemented the LL simulator for IEEE 802.11p and LTE C-V2X in Python in our previous work where more simulator details can be found in #REFR .", "text_before_citation": ["For the link simulation, it's necessary to build the entire transmission and receiving operations."], "text_after_citation": ["The simulation pipeline is the same as Section III.B in #OTHEREFR . In Fig. 3 , the simulation processing procedure is shown.
Here we give short introductions about each processing step.", "1) Control channel processing: In this step, a new SCI message is created which includes the MCS value, resource indication value, group destination identity, etc.", "The created SCI is a binary message which is encoded by using a convolutional encoder followed by rate matching, interleaving, and a 16-bit Cyclic Redundancy Check (CRC) attached to the encoded message.", "Once we have the binary codes, the next step is to process PSCCH scrambling.", "There are 240 PSCCH-generated symbols and are cyclically shifted with a random value chosen from the set [0, 3, 6, 9] to reduce the effect of interference."], "citing_paper_content": {"title": "Effect Of Variable Physical Numerologies On Link-Level Performance Of 5G Nr V2X", "abstract": "With technology and societal development, the 5th generation wireless communication (5G) contributes significantly to different societies like industries or academies. Vehicle-to-Everything (V2X) communication technology has been one of the leading services for 5G which has been applied in vehicles. It's used to exchange their status information with other traffic and traffic participants to increase traffic safety and efficiency. Cellular-V2X (C-V2X) is one of the emerging technologies to enable V2X communications. The first Long-Term Evolution (LTE) based C-V2X was released on the 3rd Generation Partnership Project (3GPP) standard. 3GPP is working towards the development of New Radio (NR) systems that it's called 5G NR V2X. One single numerology in LTE cannot satisfy most performance requirements because of the variety of deployment options and scenarios. For this reason, in order to meet the diverse requirements, the 5G NR Physical Layer (PHY) is designed to provide a highly flexible framework. Scalable Orthogonal Frequency-Division Multiplexing (OFDM) numerologies make flexibility possible.
The term numerology refers to the PHY waveform parametrization and allows different Subcarrier Spacings (SCSs), symbol counts, and slot durations. This paper implements the Link-Level (LL) simulations of LTE C-V2X communication and 5G NR V2X communication, where the simulation results are used to compare similarities and differences between LTE and 5G NR. We examine the effect of variable PHY numerologies of 5G NR on the LL performance of V2X. The simulation results show that the performance of 5G NR is improved by using variable numerologies."}, "cited_paper_content": {"title": "Link Level Performance Comparison Of C-V2X And Its-G5 For Vehicular Channel Models", "abstract": "V2X communication plays a significant role in increasing traffic safety and efficiency by enabling vehicles to exchange their status information with other vehicles and traffic entities in their proximity. In this regard, two technologies have emerged as the main contenders for enabling V2X communications, which have stringent requirements in terms of latency and reliability due to their apparent safety criticality. The first one is the DSRC standard (referred to as ITS-G5 in Europe), which has been well researched for 20 years and has attained enough technical maturity for current deployment. The second one is the relatively new C-V2X standard that is, nevertheless, based on the 3GPP standard family that has successful deployments in almost every corner of the globe. In this work, we compare the link-level performance of the PHY protocols of both technologies for different vehicular fading channel models. To this end, we construct and simulate the PHY pipelines and show the performance results by means of BLER versus SNR graphs.
Our investigations show that C-V2X performs better than ITS-G5 for almost all the considered channel models due to better channel coding and estimation schemes."}, "keywords": ["LTE C-V2X"], "citation_intent": "method"} {"citing_id": "2303.08714v1", "cited_id": "1505.04597", "section_title": "Hf-Guided Ca", "citation": "In the original U-net architecture, the encoder features are directly concatenated with the features obtained by the decoder #REFR .", "text_before_citation": [], "text_after_citation": ["This fusion enables the network to integrate the higher- and lower-layer features effectively but lacks the ability to extract high-frequency features.", "To tackle this issue, we introduce a High-Frequency feature guided Cross-Attention mechanism (HF-guided CA) to recover fine-grained high-frequency details.", "The flow of the HF-guided CA is illustrated in Fig. 5.", "We utilize the pre-trained CNN prediction by extracting the Ĥ_i, V_i, and D_i coefficients at the i-th level of the DWT.", "By adding these extracted coefficients with a linear projection, we obtain the feature map Q with aggregated high-frequency information:"], "citing_paper_content": {"title": "Resdiff: Combining Cnn And Diffusion Model For Image Super-Resolution", "abstract": "Adapting the Diffusion Probabilistic Model (DPM) for direct image super-resolution is wasteful, given that a simple Convolutional Neural Network (CNN) can recover the main low-frequency content. Therefore, we present ResDiff, a novel Diffusion Probabilistic Model based on a Residual structure for Single Image Super-Resolution (SISR). ResDiff utilizes a combination of a CNN, which restores primary low-frequency components, and a DPM, which predicts the residual between the ground-truth image and the CNN-predicted image.
In contrast to common diffusion-based methods that directly use LR images to guide the noise towards the HR space, ResDiff utilizes the CNN's initial prediction to direct the noise towards the residual space between the HR space and the CNN-predicted space, which not only accelerates the generation process but also acquires superior sample quality. Additionally, a frequency-domain-based loss function for the CNN is introduced to facilitate its restoration, and a frequency-domain guided diffusion is designed for the DPM to predict high-frequency details. The extensive experiments on multiple benchmark datasets demonstrate that ResDiff outperforms previous diffusion-based methods in terms of shorter model convergence time, superior generation quality, and more diverse samples."}, "cited_paper_content": {"title": "U-Net: Convolutional Networks For Biomedical Image Segmentation", "abstract": "There is broad consensus that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC), we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU.
The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ."}, "keywords": ["encoder features", "original U-net architecture"], "citation_intent": "method"} {"citing_id": "2304.02886v1", "cited_id": "1904.03323", "section_title": "C. Comparison With Other Works", "citation": "It is difficult to compare the results, since these works do not use the same evaluation dataset and English works can benefit from specialized models such as ClinicalBERT #REFR .", "text_before_citation": ["Table IV shows the model with the highest F1-score of this paper alongside the results of previous work on ICD-10 code association."], "text_after_citation": ["For the French baseline, we implemented and trained the model proposed in #OTHEREFR on the ICD-10-HNFC dataset. The result is shown alongside our proposal.", "Our model clearly outperforms the classification method used in #OTHEREFR .", "On the same validation dataset, with class reduction (1564 labels), the F1-score goes from 0.35 obtained with the model proposed in #OTHEREFR to 0.55 with our proposal, i.e. an improvement of 57%.", "With the raw codes (6161 labels), the F1-score goes from 0.27 to 0.45, i.e. an improvement of 66.6%.", "The difference in scores with the results of PLM-ICD can be explained by the use of a context-specific (medical) Transformer, which has a vocabulary more adapted to the content of the documents."], "citing_paper_content": {"title": "Automatic Icd-10 Code Association: A Challenging Task On French Clinical Texts", "abstract": "Automatically associating ICD codes with electronic health data is a well-known NLP task in medical research. NLP has evolved significantly in recent years with the emergence of pre-trained language models based on the Transformer architecture, mainly in the English language. This paper adapts these models to automatically associate ICD codes.
Several neural network architectures have been experimented with to address the challenges of dealing with a large set of both input tokens and labels to be guessed. In this paper, we propose a model that combines the latest advances in NLP and multi-label classification for ICD-10 code association. Fair experiments on a clinical dataset in the French language show that our approach increases the F1-score metric by more than 55% compared to state-of-the-art results."}, "cited_paper_content": {"title": "Publicly Available Clinical Bert Embeddings", "abstract": "Contextual word embedding models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018) have dramatically improved performance for many natural language processing (NLP) tasks in recent months. However, these models have been minimally explored on specialty corpora, such as clinical text; moreover, in the clinical domain, no publicly-available pre-trained BERT models yet exist. In this work, we address this need by exploring and releasing BERT models for clinical text: one for generic clinical text and another for discharge summaries specifically. We demonstrate that using a domain-specific model yields performance improvements on three common clinical NLP tasks as compared to nonspecific embeddings. These domain-specific models are not as performant on two clinical de-identification tasks, and we argue that this is a natural consequence of the differences between de-identified source text and synthetically non-de-identified task text."}, "keywords": ["ClinicalBERT"], "citation_intent": "result"} {"citing_id": "2303.01991v1", "cited_id": "1801.00868", "section_title": "A. 
Panoptic Segmentation", "citation": "Panoptic segmentation #REFR is a task that aims to accurately segment both object instances and stuff regions in an image, providing a comprehensive understanding of the scene.", "text_before_citation": [], "text_after_citation": ["General approaches to this task include the use of a two-stage pipeline #OTHEREFR - #OTHEREFR , where instance segmentation and semantic segmentation are performed independently and then fused together.", "An alternative approach is the use of a single network #OTHEREFR - #OTHEREFR that is able to predict both instance-level and semantic-level segmentation maps simultaneously.", "The Panoptic FCN methodology #OTHEREFR is a single-stage approach that has been shown to achieve state-of-the-art performance using an elegant dynamic-convolutions-based approach, which has the property that each detected object has a learned compressed representation, i.e. kernel, that can be used for tracking."], "citing_paper_content": {"title": "Unified Perception: Efficient Depth-Aware Video Panoptic Segmentation With Minimal Annotation Costs", "abstract": "Depth-aware video panoptic segmentation is a promising approach to camera-based scene understanding. However, the current state-of-the-art methods require costly video annotations and use a complex training pipeline compared to their image-based equivalents. In this paper, we present a new approach titled Unified Perception that achieves state-of-the-art performance without requiring video-based training. Our method employs a simple two-stage cascaded tracking algorithm that (re)uses object embeddings computed in an image-based network. Experimental results on the Cityscapes-DVPS dataset demonstrate that our method achieves an overall DVPQ of 57.1, surpassing state-of-the-art methods.
Furthermore, we show that our tracking strategies are effective for long-term object association on KITTI-STEP, achieving an STQ of 59.1, which exceeds the performance of state-of-the-art methods that employ the same backbone network."}, "cited_paper_content": {"title": "Panoptic Segmentation", "abstract": "We propose and study a task we name panoptic segmentation (PS). Panoptic segmentation unifies the typically distinct tasks of semantic segmentation (assign a class label to each pixel) and instance segmentation (detect and segment each object instance). The proposed task requires generating a coherent scene segmentation that is rich and complete, an important step toward real-world vision systems. While early work in computer vision addressed related image/scene parsing tasks, these are not currently popular, possibly due to a lack of appropriate metrics or associated recognition challenges. To address this, we propose a novel panoptic quality (PQ) metric that captures performance for all classes (stuff and things) in an interpretable and unified manner. Using the proposed metric, we perform a rigorous study of both human and machine performance for PS on three existing datasets, revealing interesting insights about the task. The aim of our work is to revive the interest of the community in a more unified view of image segmentation.
For more analysis and up-to-date results, please check the arXiv version of the paper: https://arxiv.org/abs/1801.00868"}, "keywords": ["Panoptic segmentation"], "citation_intent": "background"} {"citing_id": "2304.05635v1", "cited_id": "1902.09843", "section_title": "Methodology", "citation": "Inspired by #REFR , we clip and make w ∈ [0, 1], ∀w ∈ W_i for regularization via σ(w) = max(0, min(1, w)).", "text_before_citation": ["Adaptive Head Aggregation.", "Following existing pFL paradigms, we globally share the representation part of the segmentation network (encoder F_e and the SCR module) and personalize the task head (decoder F_d).", "To element-wisely aggregate two models without introducing multiple aggregation weight matrices, we adopt an adaptive-learning-based approach similar to residual learning for updating the weight matrices. The adaptive head aggregation is formulated as", "EQUATION", "where W_i is a learnable weight matrix."], "text_after_citation": ["At the beginning, every element in W_i is initialized to 1, and then iteratively gets updated via", "EQUATION", "In the update process of W_i, Eqs. 3 and 4 are respectively utilized to alternately update θ_i^t and W_i.", "Upon convergence, the initialization parameter θ_i^t for F_d in each federation round is obtained, after which local models are trained using Eq. (1)."], "citing_paper_content": {"title": "Unifying And Personalizing Weakly-Supervised Federated Medical Image Segmentation Via Adaptive Representation And Aggregation", "abstract": "Federated learning (FL) enables multiple sites to collaboratively train powerful deep models without compromising data privacy and security. The statistical heterogeneity (e.g., non-IID data and domain shifts) is a primary obstacle in FL, impairing the generalization performance of the global model.
Weakly supervised segmentation, which uses sparsely-grained (i.e., point-, bounding box-, scribble-, or block-wise) supervision, is attracting increasing attention due to its great potential for reducing annotation costs. However, there may exist label heterogeneity, i.e., different annotation forms across sites. In this paper, we propose a novel personalized FL framework for medical image segmentation, named FedICRA, which uniformly leverages heterogeneous weak supervision via adaptIve Contrastive Representation and Aggregation. Concretely, to facilitate personalized modeling and to avoid confusion, a channel-selection-based site contrastive representation module is employed to adaptively cluster intra-site embeddings and separate inter-site ones. To effectively integrate the common knowledge from the global model with the unique knowledge from each local model, an adaptive aggregation module is applied for updating and initializing local models at the element level. Additionally, a weakly supervised objective function that leverages a multiscale tree energy loss and a gated CRF loss is employed to generate more precise pseudo-labels and further boost the segmentation performance. Through extensive experiments on two distinct medical image segmentation tasks of different modalities, the proposed FedICRA demonstrates overwhelming performance over other state-of-the-art personalized FL methods. Its performance even approaches that of fully supervised training on centralized data. Our code and data are available at https://github.com/llmir/FedICRA."}, "cited_paper_content": {"title": "Adaptive Gradient Methods With Dynamic Bound Of Learning Rate", "abstract": "Adaptive optimization methods such as AdaGrad, RMSprop and Adam have been proposed to achieve a rapid training process with an element-wise scaling term on learning rates.
Though prevailing, they are observed to generalize poorly compared with SGD or even fail to converge due to unstable and extreme learning rates. Recent work has put forward some algorithms such as AMSGrad to tackle this issue but they failed to achieve considerable improvement over existing methods. In our paper, we demonstrate that extreme learning rates can lead to poor performance. We provide new variants of Adam and AMSGrad, called AdaBound and AMSBound respectively, which employ dynamic bounds on learning rates to achieve a gradual and smooth transition from adaptive methods to SGD and give a theoretical proof of convergence. We further conduct experiments on various popular tasks and models, which is often insufficient in previous work. Experimental results show that new variants can eliminate the generalization gap between adaptive methods and SGD and maintain higher learning speed early in training at the same time. Moreover, they can bring significant improvement over their prototypes, especially on complex deep networks. 
The implementation of the algorithm can be found at this https URL."}, "keywords": ["regularization"], "citation_intent": "method"} {"citing_id": "2303.08599v1", "cited_id": "2002.10118", "section_title": "Results", "citation": "In addition, according to a recent theorem #REFR that capturing uncertainty information and correcting overconfidence can be achieved by making only the last layer of a model Bayesian in binary classification, we can assume that adding a GP layer is Bayesian enough so that GPF-BERT can achieve better calibration.", "text_before_citation": ["\"↑\" means higher is better and \"↓\" means lower is better.", "All the models are trained on one dataset and tested on the other dataset.", "ble, GPF-BERT has a significant decrease (at least 8 times) in inference time.", "While not completely free, GPF-BERT only adds negligible computational cost, but it greatly improves the calibration, which facilitates adaptation to other models.", "We believe that GP maintains a distribution over functions rather than model parameters, which enables GPF-BERT to improve uncertainty calibration for dialog response retrieval models."], "citing_paper_content": {"title": "Efficient Uncertainty Estimation With Gaussian Process For Reliable Dialog Response Retrieval", "abstract": "Deep neural networks have achieved remarkable performance in retrieval-based dialogue systems, but they are shown to be ill-calibrated. Though basic calibration methods like Monte Carlo Dropout and Ensemble can calibrate well, these methods are time-consuming in the training or inference stages. To tackle these challenges, we propose an efficient uncertainty calibration framework, GPF-BERT, for BERT-based conversational search, which employs a Gaussian Process layer and the focal loss on top of the BERT architecture to achieve a high-quality neural ranker. Extensive experiments are conducted to verify the effectiveness of our method.
In comparison with basic calibration methods, GPF-BERT achieves the lowest empirical calibration error (ECE) on three in-domain datasets and the distributional-shift tasks, while yielding the highest R_10@1 and MAP performance in most cases. In terms of time consumption, our GPF-BERT has an 8× speedup."}, "cited_paper_content": {"title": "Being Bayesian, Even Just A Bit, Fixes Overconfidence In Relu Networks", "abstract": "The point estimates of ReLU classification networks, arguably the most widely used neural network architecture, have been shown to yield arbitrarily high confidence far away from the training data. This architecture, in conjunction with a maximum a posteriori estimation scheme, is thus neither calibrated nor robust. Approximate Bayesian inference has been empirically demonstrated to improve predictive uncertainty in neural networks, although the theoretical analysis of such Bayesian approximations is limited. We theoretically analyze approximate Gaussian posterior distributions on the weights of ReLU networks and show that they fix the overconfidence problem. Furthermore, we show that even a simplistic, thus cheap, Bayesian approximation also fixes these issues. This indicates that a sufficient condition for a calibrated uncertainty on a ReLU network is ``to be a bit Bayesian''. These theoretical results validate the usage of last-layer Bayesian approximation and motivate a range of fidelity-cost trade-offs.
We further validate these findings empirically via various standard experiments using common deep ReLU networks and Laplace approximations."}, "keywords": ["GPF-BERT", "uncertainty information"], "citation_intent": "background"} {"citing_id": "2303.08403v1", "cited_id": "2002.05709", "section_title": "Contrastive Self-Supervised Learning", "citation": "SimCLR #REFR utilizes augmented images as positives while treating the other images in the same batch as negatives.", "text_before_citation": ["The fundamental idea of contrastive learning is to minimize the distance between similar (i.e., positive) instances while maximizing the distance among dissimilar (i.e., negative) instances #OTHEREFR ."], "text_after_citation": ["Maintaining a similar contrastive concept, MoCo #OTHEREFR exploits a momentum encoder and proposes a dynamic dictionary with a queue to handle negative samples efficiently from both the performance and memory perspectives. The InfoNCE loss #OTHEREFR is often used in contrastive learning.", "Minimizing this loss increases the mutual information between positive pairs so that the model can extract the consistent features between the original and augmented samples."], "citing_paper_content": {"title": "Dualfair: Fair Representation Learning At Both Group And Individual Levels Via Contrastive Self-Supervision", "abstract": "Algorithmic fairness has become an important machine learning problem, especially for mission-critical Web applications. This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations. Unlike existing models that target a single type of fairness, our model jointly optimizes for two fairness criteria, group fairness and counterfactual fairness, and hence makes fairer predictions at both the group and individual levels.
Our model uses contrastive loss to generate embeddings that are indistinguishable for each protected group, while forcing the embeddings of counterfactual pairs to be similar. It then uses a self-knowledge distillation method to maintain the quality of representation for the downstream tasks. Extensive analysis over multiple datasets confirms the model's validity and further shows the synergy of jointly addressing two fairness criteria, suggesting the model's potential value in fair intelligent Web applications."}, "cited_paper_content": {"title": "A Simple Framework For Contrastive Learning Of Visual Representations", "abstract": "This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50.
When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100× fewer labels."}, "keywords": ["negatives", "augmented images"], "citation_intent": "background"} {"citing_id": "2304.12849v1", "cited_id": "1804.02771", "section_title": "D.2 Sparse Depth Label", "citation": "Our sparsification strategy is also distinct from previous works that uniformly remove pixels through the entire depth range #REFR since we sparsify the label by restricting the observable depth range during the training.", "text_before_citation": ["In many depth estimation datasets, ground truth (GT) depth labels are sparsely annotated due to the hardware limitations of depth sensors, such as LiDAR, Radar, Structured-Light, and Time-of-Flight #OTHEREFR [You et al., 2019b].", "To better utilize this sparse information, approaches to incorporate sparse GT as additional input have been proposed #OTHEREFR .", "This task, also known as depth completion, differs from MDE because an additional sparse depth map is used as an input to supplement RGB information.", "In our proposed MDE setup, we also sparsify the depth map, but it is only used as a label (not input) in the training process."], "text_after_citation": ["When compared to previously studied sparse setups, the proposed setup is far more challenging because the distribution of GT labels significantly differs between the training and test phases."], "citing_paper_content": {"title": "Depth-Relative Self Attention For Monocular Depth Estimation", "abstract": "Monocular depth estimation is very challenging because clues to the exact depth are incomplete in a single RGB image. To overcome this limitation, deep neural networks rely on various visual hints such as size, shade, and texture extracted from RGB information. However, we observe that if such hints are overly exploited, the network can be biased toward RGB information without considering the comprehensive view.
We propose a novel depth estimation model named RElative Depth Transformer (RED-T) that uses relative depth as guidance in self-attention. Specifically, the model assigns high attention weights to pixels of close depth and low attention weights to pixels of distant depth. As a result, the features of similar depth become more similar to each other and thus less prone to misused visual hints. We show that the proposed model achieves competitive results in monocular depth estimation benchmarks and is less biased toward RGB information. In addition, we propose a novel monocular depth estimation benchmark that limits the observable depth range during training in order to evaluate the robustness of the model for unseen depths."}, "cited_paper_content": {"title": "Estimating Depth From Rgb And Sparse Sensing", "abstract": "We present a deep model that can accurately produce dense depth maps given an RGB image with known depth at a very sparse set of pixels. The model works simultaneously for both indoor/outdoor scenes and produces state-of-the-art dense depth maps at nearly real-time speeds on both the NYUv2 and KITTI datasets. We surpass the state-of-the-art for monocular depth estimation even with depth values for only 1 out of every ~10,000 image pixels, and we outperform other sparse-to-dense depth methods at all sparsity levels. With depth values for 1/256 of the image pixels, we achieve a mean error of less than 1% of actual depth on indoor scenes, comparable to the performance of consumer-grade depth sensor hardware. Our experiments demonstrate that it would indeed be possible to efficiently transform sparse depth measurements obtained using e.g. lower-power depth sensors or SLAM systems into high-quality dense depth maps."}, "keywords": ["observable depth range"], "citation_intent": "method"} {"citing_id": "2304.07060v1", "cited_id": "1812.04948", "section_title": "C. 
Analysis", "citation": "Our DDPM of choice is trained on the FFHQ dataset #REFR , which contains 70,000 unlabeled high-quality images.", "text_before_citation": ["C.1 Unique Subject Counts.", "In Fig. 3, we plot the number of unique subjects that can be sampled as we increase the sample size.", "The blue curve shows that the number of unique samples that can be generated by a DDPM of our choice does not saturate when we sample 200,000 samples.", "At 200,000 samples, the unique subjects number about 60,000.", "And by extrapolating the curve, we estimate the number might reach 80,000 with more samples."], "text_after_citation": ["The orange line shows the number of unique samples that are sufficiently different from the subjects in the CASIA-WebFace dataset.", "The green line shows the number of unique samples left after filtering images that contain sunglasses.", "The flat region is due to the filtering stage reducing the total candidates.", "The plot shows that a DDPM trained on the FFHQ dataset can generate a large number of unique and new samples that are different from the CASIA-WebFace dataset.", "However, with more samples, eventually there is a limit to the number of unique samples that can be generated."], "citing_paper_content": {"title": "Dcface: Synthetic Face Generation With Dual Condition Diffusion Model", "abstract": "Generating synthetic datasets for training face recognition models is challenging because dataset generation entails more than creating high-fidelity images. It involves generating multiple images of the same subjects under different factors (e.g., variations in pose, illumination, expression, aging and occlusion) which follow the real image conditional distribution. Previous works have studied the generation of synthetic datasets using GANs or 3D models. In this work, we approach the problem from the aspect of combining subject appearance (ID) and external factor (style) conditions.
These two conditions provide a direct way to control the inter-class and intra-class variations. To this end, we propose a Dual Condition Face Generator (DCFace) based on a diffusion model. Our novel Patch-wise style extractor and Time-step-dependent ID loss enable DCFace to consistently produce face images of the same subject under different styles with precise control. Face recognition models trained on synthetic images from the proposed DCFace provide higher verification accuracies compared to previous works by 6.11% on average in 4 out of 5 test datasets: LFW, CFP-FP, CPLFW, AgeDB and CALFW. Code Link"}, "cited_paper_content": {"title": "A Style-Based Generator Architecture For Generative Adversarial Networks", "abstract": "We propose an alternative generator architecture for generative adversarial networks, borrowing from the style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture.
Finally, we introduce a new, highly varied and high-quality dataset of human faces."}, "keywords": ["high-quality images"], "citation_intent": "method"} {"citing_id": "2304.11062v1", "cited_id": "1706.03762", "section_title": "Introduction", "citation": "The Transformer model #REFR has been widely adopted and used in various research areas and industrial applications.", "text_before_citation": [], "text_after_citation": ["The most important issue of the model is the quadratic complexity of the attention operation, which makes large models increasingly difficult to apply to longer inputs."], "citing_paper_content": {"title": "Scaling Transformer To 1M Tokens And Beyond With Rmt", "abstract": "Figure 1: Recurrent Memory Transformer retains information across up to 2×10^6 tokens. By augmenting a pre-trained BERT model with recurrent memory (Bulatov et al., 2022), we enabled it to store task-specific information across 7 segments of 512 tokens each. During inference, the model effectively utilized memory for up to 4,096 segments with a total length of 2,048,000 tokens, significantly exceeding the largest input size reported for transformer models (64K tokens for CoLT5 (Ainslie et al., 2023), and 32K tokens for GPT-4 (OpenAI, 2023)). This augmentation maintains the base model's memory size at 3.6 GB in our experiments."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["Transformer model"], "citation_intent": "method"} {"citing_id": "2303.02162v1", "cited_id": "1610.02415", "section_title": "A.4.1 Discussion On Comparison Among Generation-Based Methods", "citation": "However, there is a phenomenon that is observed in other VAE-based generative approaches #REFR : while the latent vector is searched under a guidance to maximize desired properties, its decoded instance does not always have the properties or are not even valid.", "text_before_citation": ["Both MCTS and BP-VAE hardly generate any qualified TCRs (very low q%: 0.01\u00b10.03% for MCTS and 0.04\u00b10.09% for BP-VAEin P McPAS ).", "MCTS explores potentially the entire sequence space, including both valid TCRs and non-TCRs.", "Although it uses R to guide the search, with substantially more calls to calculate R than other methods, due to the fact that valid TCRs may only occupy an extreme small portion of the entire sequence space, it is extremely challenging for MCTS to find qualified TCRs.", "In P VDJDB , MCTS even cannot find any qualified TCR for all the peptides within 1,000 rollouts, and thus has zero values at s v (C q ) and s r (C q ).", "An estimation using TCRdb over all possible length-15 sequence space results at most This vector could correspond to a binding TCR and then is decoded to a TCR sequence."], "text_after_citation": ["This phenomenon also appears in BP-VAE: the decoded 
TCRs are not qualified most of the time.", "This might be due to the propagation or magnification of the errors from the predictor in the latent space, or the exploration in the latent space ends at a region far away from that of valid TCRs.", "However, theoretical justification behind VAE-based generative approaches is beyond the scope of this paper."], "citing_paper_content": {"title": "T-Cell Receptor Optimization With Reinforcement Learning And Mutation Policies For Precision Immunotherapy", "abstract": "T cells monitor the health status of cells by identifying foreign peptides displayed on their surface. T-cell receptors (TCRs), which are protein complexes found on the surface of T cells, are able to bind to these peptides. This process is known as TCR recognition and constitutes a key step for immune response. Optimizing TCR sequences for TCR recognition represents a fundamental step towards the development of personalized treatments to trigger immune responses killing cancerous or virus-infected cells. In this paper, we formulated the search for these optimized TCRs as a reinforcement learning (RL) problem, and presented a framework TCRPPO with a mutation policy using proximal policy optimization. TCRPPO mutates TCRs into effective ones that can recognize given peptides. TCRPPO leverages a reward function that combines the likelihoods of mutated sequences being valid TCRs measured by a new scoring function based on deep autoencoders, with the probabilities of mutated sequences recognizing peptides from a peptide-TCR interaction predictor. We compared TCRPPO with multiple baseline methods and demonstrated that TCRPPO significantly outperforms all the baseline methods to generate positive binding and valid TCRs. 
These results demonstrate the potential of TCRPPO for both precision immunotherapy and peptide-recognizing TCR motif discovery."}, "cited_paper_content": {"title": "Automatic Chemical Design Using A Data-Driven Continuous Representation Of Molecules", "abstract": "We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This generative model allows efficient search and optimization through open-ended spaces of chemical compounds. We train deep neural networks on hundreds of thousands of existing chemical structures to construct two coupled functions: an encoder and a decoder. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to the discrete representation from this latent space. Continuous representations allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. 
We demonstrate our method in the design of drug-like molecules as well as organic light-emitting diodes."}, "keywords": ["VAE-based generative approaches"], "citation_intent": "background"} {"citing_id": "2303.15414v1", "cited_id": "1703.07402", "section_title": "Inference Details", "citation": "After matching, like DeepSORT #REFR , we need to handle the born and death of tracklets.", "text_before_citation": ["Due to the continuous relaxation, the output of the QP layer may not be binary.", "To get a valid assignment, we use the greedy rounding strategy to generate the final permutation matrix from the predicted matching score map, i.e., we match the detection with the tracklet with the maximum score."], "text_after_citation": ["We keep the matching between detection and tracklet only if it satisfies all the following constraints: 1) The appearance similarity between detection and tracklet is above the threshold \u03c3.", "2) The detection is not far away from the tracklet.", "We set a threshold \u03ba as the Mahalanobis distance between the predicted distribution of the tracklet bounding box by the motion model and the detection bounding box in pixel coordinates, called the motion gate.", "3) The detection bounding box overlaps with the position of tracklet predicted by the motion model. The constraints above can be written as", "EQUATION"], "citing_paper_content": {"title": "Learnable Graph Matching: A Practical Paradigm For Data Association", "abstract": "Data association is at the core of many computer vision tasks, e.g., multiple object tracking, image matching, and point cloud registration. Existing methods usually solve the data association problem by network flow optimization, bipartite matching, or end-to-end learning directly. 
Despite their popularity, we find some defects of the current solutions: they mostly ignore the intra-view context information; besides, they either train deep association models in an end-to-end way and hardly utilize the advantage of optimization-based assignment methods, or only use an off-the-shelf neural network to extract features. In this paper, we propose a general learnable graph matching method to address these issues. Especially, we model the intra-view relationships as an undirected graph. Then data association turns into a general graph matching problem between graphs. Furthermore, to make optimization end-to-end differentiable, we relax the original graph matching problem into continuous quadratic programming and then incorporate training into a deep graph neural network with KKT conditions and implicit function theorem. In MOT task, our method achieves state-of-the-art performance on several MOT datasets. For image matching, our method outperforms state-of-the-art methods with half training data and iterations on a popular indoor dataset, ScanNet. Code will be available at https://github.com/jiaweihe1996/GMTracker."}, "cited_paper_content": {"title": "Simple Online And Realtime Tracking With A Deep Association Metric", "abstract": "Simple Online and Realtime Tracking (SORT) is a pragmatic approach to multiple object tracking with a focus on simple, effective algorithms. In this paper, we integrate appearance information to improve the performance of SORT. Due to this extension we are able to track objects through longer periods of occlusions, effectively reducing the number of identity switches. In spirit of the original framework we place much of the computational complexity into an offline pre-training stage where we learn a deep association metric on a large-scale person re-identification dataset. During online application, we establish measurement-to-track associations using nearest neighbor queries in visual appearance space. 
Experimental evaluation shows that our extensions reduce the number of identity switches by 45%, achieving overall competitive performance at high frame rates."}, "keywords": ["tracklets"], "citation_intent": "method"} {"citing_id": "2303.18187v1", "cited_id": "1908.08655", "section_title": "Benchmark Classification Results", "citation": "We remark that our comparison does not include the spiking predictive coding model reported in #REFR , even though a slightly better generalization error was reported on MNIST.", "text_before_citation": ["In Table 1 , we present our simulation results for our recurrent spiking model trained with event-driven forwardforward learning.", "Notice that, like many bio-physical spiking networks, even though we do not quite match the performance of backprop-based feedforward networks (essentially a purely rate-coded system), our generalization error comes surprisingly close.", "This is particularly promising given that the predicted context layer is a layer of spiking neurons itself.", "Furthermore, our ED-FF model comes the closest to matching the BP-FNN rate-coded baseline compared to the other SNN credit assignment algorithms.", "The BFA SNN comes in a close second place, illustrating that feedback synapses are still quite a powerful mechanism even though our goal was to demonstrate effective learning without feedback."], "text_after_citation": ["This model was not included given that is unsupervised and required post-fitting a log-linear classifier to rate code approximations of its top-most spike-train representations (and our focus in this study was on spiking models that jointly learned a spiking classifier with the internal representations of the sensory input).", "In Figure 2 , we examine the clusters that emerge within the latent space induced by a recurrent spiking network trained with ED-FF.", "To compute the latent vectors/codes, we form an approximate rate code from each data point's resultant top-layer (layer s 3 t ) spike 
trains as follows:", "EQUATION", "with \u03b3_c = 1."], "citing_paper_content": {"title": "Learning Spiking Neural Systems With The Event-Driven Forward-Forward Process", "abstract": "We develop a novel credit assignment algorithm for information processing with spiking neurons without requiring feedback synapses. Specifically, we propose an event-driven generalization of the forward-forward and the predictive forward-forward learning processes for a spiking neural system that iteratively processes sensory input over a stimulus window. As a result, the recurrent circuit computes the membrane potential of each neuron in each layer as a function of local bottom-up, top-down, and lateral signals, facilitating a dynamic, layer-wise parallel form of neural computation. Unlike spiking neural coding, which relies on feedback synapses to adjust neural electrical activity, our model operates purely online and forward in time, offering a promising way to learn distributed representations of sensory data patterns with temporal spike signals. Notably, our experimental results on several pattern datasets demonstrate that the event-driven forward-forward (ED-FF) framework works well for training a dynamic recurrent spiking system capable of both classification and reconstruction."}, "cited_paper_content": {"title": "Spiking Neural Predictive Coding For Continual Learning From Data Streams", "abstract": "For energy-efficient computation in specialized neuromorphic hardware, we present the Spiking Neural Coding Network, an instantiation of a family of artificial neural models strongly motivated by the theory of predictive coding. The model, in essence, works by operating in a never-ending process of \"guess-and-check\", where neurons predict the activity values of one another and then immediately adjust their own activities to make better future predictions. 
The interactive, iterative nature of our neural system fits well into the continuous time formulation of data sensory stream prediction and, as we show, the model's structure yields a simple, local synaptic update rule, which could be used to complement or replace online spike-timing dependent plasticity rules. In this article, we experiment with an instantiation of our model that consists of leaky integrate-and-fire units. However, the general framework within which our model is situated can naturally incorporate more complex, formal neurons such as the Hodgkin-Huxley model. Our experimental results in pattern recognition demonstrate the potential of the proposed model when binary spike trains are the primary paradigm for inter-neuron communication. Notably, our model is competitive in terms of classification performance, capable of conducting online semi-supervised learning, and more computationally economical and biologically-plausible than popular artificial neural networks."}, "keywords": ["spiking predictive coding"], "citation_intent": "result"} {"citing_id": "2303.17966v1", "cited_id": "1609.02907", "section_title": "Introduction", "citation": "GCNs obtain convolution-like operations by aggregating information about the neighbors of a node using methods from spectral graph theory and using nonlinear activation functions to obtain low-dimensional features of graph nodes #REFR .", "text_before_citation": ["Data in these domains are different from traditional Euclidean data in that it has an asymmetric and irregular structure.", "And describing these data by graphs can well express the structural characteristics of these data and show powerful representation.", "Meanwhile, graph analysis methods in machine learning can make good use of such non-Euclidean structured data and be used for tasks such as node classification, link prediction, and clustering.", "Graph neural networks (GNNs), a widely used graph analysis method, process graph-structured data by 
performing deep learning operations on the graph domain and have made remarkable progress due to their excellent performance.", "While in Graph Neural Networks (GNNs), Convolutional Neural Networks (CNNs) are extended to graph-structured data by introducing Graph Convolutional Networks (GCNs)."], "text_after_citation": ["Then, GCNs requires the entire graph to be loaded into memory for convolution during training, which is inefficient for large graph transformation work patterns.", "GraphSAGE #OTHEREFR is an inductive graph convolution method that uses sampling of some of the nodes for learning and aggregation functions to learn feature information aggregated from the neighborhoods of the nodes.", "It is GraphSAGE learning by sampling that makes the convolution operation does not need to load the complete graph, which is a significant improvement over GCNs in the processing of large graphs.", "Many different improved versions have been proposed by researchers since then #OTHEREFR , and all have achieved promising results.", "GCNs are mainly built on a semi-supervised paradigm #OTHEREFR , where the Laplacian matrix of the graph is utilized for convolutional operations, and message aggregation is constrained by the adjacency matrix of the graph."], "citing_paper_content": {"title": "Hd-Gcn:A Hybrid Diffusion Graph Convolutional Network", "abstract": "In tasks involving graph-structured data, Graph Convolutional Networks (GCNs) have brought promising results by introducing convolution into Graph Neural Networks (GNNs). However, the information diffusion performance of GCNs and its variant models is limited by the adjacency matrix, which can lower their performance. Therefore, we introduce a new framework for graph convolutional networks called the Hybrid Diffusion Graph Convolutional Network (HD-GCN) to address the limitations of information diffusion caused by the adjacency matrix. 
In the HD-GCN framework, we initially utilize diffusion maps to facilitate the diffusion of information among nodes that are adjacent to each other in the feature space. This allows for the diffusion of information between similar points that may not have an adjacent relationship. Next, we utilize graph convolution to further propagate information among adjacent nodes after the diffusion maps, thereby enabling the spread of information among similar nodes that are adjacent in the graph. Finally, we employ the diffusion distances obtained through the use of diffusion maps to regularize and constrain the predicted labels of training nodes. This regularization method is then applied to the HD-GCN training, resulting in a smoother classification surface. The model proposed in this paper effectively overcomes the limitations of information diffusion imposed only by the adjacency matrix. HD-GCN utilizes hybrid diffusion by combining information diffusion between neighborhood nodes in the feature space and adjacent nodes in the adjacency matrix. This method allows for more comprehensive information propagation among nodes, resulting in improved model performance. We evaluated the performance of HD-GCN on three well-known citation network datasets and the results showed that the proposed framework is more effective than several graph-based semi-supervised learning methods."}, "cited_paper_content": {"title": "Semi-Supervised Classification With Graph Convolutional Networks", "abstract": "We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. 
In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin."}, "keywords": ["graph nodes"], "citation_intent": "method"} {"citing_id": "2304.01919v1", "cited_id": "1907.13568", "section_title": "Introduction", "citation": "Most visualization software aims for the accurate reproduction of data using simple, abstract 2-dimensional graphical marks: the rectangles (\u25fb) of the bar chart; the arcs (\u2aa6) of the pie; and the mix of circles (\u25ef), rectangles (\u25fb), and occasional stars ( ) in scatter plots #REFR .", "text_before_citation": [], "text_after_citation": ["In many situations, these representations are insufficient for a creative design vision. Graphical 'embellishments' require additional styling and design work.", "A designer might choose to stylize a visualization by replacing simple abstract marks with alternative representations: candles instead of bars, slices of fruit instead of arcs in a pie chart, and flowers instead of circles in a scatter plot.", "Beyond the simple replacement of marks, visualization styling can use alternative artistic forms and media (a sketch, a painting, a photograph, clay, etc.).", "Our particular focus is on stylized forms that represent a significant stylistic change to how the mark is represented (e.g., stacked coffee cups), but are still recognizable in their original idiomatic forms (e.g., a bar chart) #OTHEREFR .", "Due to the uniqueness of many of these representations, for a designer to achieve their vision, they need additional tools, skills, and procedures."], "citing_paper_content": {"title": "Viz2Viz: Prompt-Driven Stylized Visualization Generation Using A Diffusion Model", "abstract": "Fig. 1: viz2viz generated samples (original data inset). 
From left to right: bird eye view of green forests with many trees, blue ocean with ships, grey city with many buildings, orange deserts; realistic stacks of red coca-cola coke cans, brown tea cups, glass wine bottles, starbucks paper coffee cups; realistic pink tulips; Ukiyo-e style side view of red wooden roller coaster, Ukiyo-e style blue sea waves and surfers. Note that in some examples in this paper, we select prompts to demonstrate the capabilities of viz2viz rather than for their aesthetic or design qualities."}, "cited_paper_content": {"title": "Critical Reflections On Visualization Authoring Systems", "abstract": "An emerging generation of visualization authoring systems support expressive information visualization without textual programming. As they vary in their visualization models, system architectures, and user interfaces, it is challenging to directly compare these systems using traditional evaluative methods. Recognizing the value of contextualizing our decisions in the broader design space, we present critical reflections on three systems we developed \u2014Lyra, Data Illustrator, and Charticulator. This paper surfaces knowledge that would have been daunting within the constituent papers of these three systems. We compare and contrast their (previously unmentioned) limitations and trade-offs between expressivity and learnability. 
We also reflect on common assumptions that we made during the development of our systems, thereby informing future research directions in visualization authoring systems."}, "keywords": ["visualization software"], "citation_intent": "method"} {"citing_id": "2303.05800v1", "cited_id": "1409.1556", "section_title": "Introduction:", "citation": "One way to circumvent these difficulties is to embed pooling layers along the CLs #REFR .", "text_before_citation": ["In addition, given a deep architecture and the ratios between the depths of consecutive CLs, SRs increase as a function of the first CL depth #OTHEREFR .", "The deep learning strategy resulted in several practical difficulties, including the following.", "First, although the depth increases along the deep architecture, the input size of the layers remains fixed.", "The second difficulty is that the last CL output size, depth \u00d7 layer input size, becomes very large, serving as the first FC layer input, which consists of a large number of tunable parameters.", "These computational complexities overload even powerful GPUs, limited by the accelerated utilization of a large number of filters and sizes of the FC layers."], "text_after_citation": ["Each pooling reduces the output dimension of a CL by combining a cluster of outputs, e.g., 2 \u00d7 2, at one, and such operations along the deep architecture reduce the CL dimension by a factor 4 .", "The most popular pooling operators are max-pooling (MP) #OTHEREFR , which implements the maximal value of each cluster, and average pooling (AP) #OTHEREFR , which implements the average value of each cluster; however, more types of pooling operators exist #OTHEREFR .", "The core question in this work is whether SRs can be enhanced depending on the location of the pooling operators along the CLs of a given deep architecture.", "For instance, VGG16 consists of 13 CLs, three FC layers, and five (2 \u00d7 2) MP operators located along the CLs 2 (Fig. 
1A)."], "citing_paper_content": {"title": "Enhancing The Success Rates By Performing Pooling Decisions Adjacent To The Output Layer", "abstract": "Learning classification tasks of (2 \u00d7 2) inputs typically consist of \u2264 (2 \u00d7 2) max-pooling (MP) operators along the entire feedforward deep architecture. Here we show, using the CIFAR-10 database, that pooling decisions adjacent to the last convolutional layer significantly enhance accuracy success rates (SRs). In particular, average SRs of the advanced-VGG with layers (A-VGGm) architectures are 0.936, 0.940, 0.954, 0.955, and 0.955 for m=6, 8, 14, 13, and 16, respectively. The results indicate A-VGG8's SR is superior to VGG16's, and that the SRs of A-VGG13 and A-VGG16 are equal, and comparable to that of Wide-ResNet16. In addition, replacing the three fully connected (FC) layers with one FC layer, A-VGG6 and A-VGG14, or with several linear activation FC layers, yielded similar SRs. These significantly enhanced SRs stem from training the most influential input-output routes, in comparison to the inferior routes selected following multiple MP decisions along the deep architecture. In addition, SRs are sensitive to the order of the non-commutative MP and average pooling operators adjacent to the output layer, varying the number and location of training routes. The results call for the reexamination of previously proposed deep architectures and their SRs by utilizing the proposed pooling strategy adjacent to the output layer."}, "cited_paper_content": {"title": "Very Deep Convolutional Networks For Large-Scale Image Recognition", "abstract": "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. 
Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision."}, "keywords": ["layers"], "citation_intent": "background"} {"citing_id": "2303.13371v1", "cited_id": "1803.08024", "section_title": "C. Ablation Studies", "citation": "The Baseline employs the T2I attention from SCAN #REFR and averages all the cosine similarities as the final score. 1) Correspondence regulator. Eq.", "text_before_citation": ["In this section, we first report the configurations of our proposed regulators, as well as the initialization and optimization of the attention factors.", "Then, we delve into the RAR and RCR to display how the aggregation weights and cross-attention distributions are progressively refined. 
Finally, we also explore alternative strategies and architectures.", "All comparisons are implemented based on SCAN #OTHEREFR unless otherwise noted.", "Residual mechanism of the regulators.", "In TABLE III, we carry out critical analyses of the influence of residual architectures."], "text_after_citation": ["#OTHEREFR indicates that the current adaptive weight vector e at the last step.", "Here, we remove these two variables to construct a no-residual version of the RCR.", "Compared with the residual structure, the RCR without residual design results in an obvious R@1 drop in TABLE III, indicating that RCR is inclined to predict offsets against the current state to adjust previous regulation dynamically.", "To be specific, 1-step RCR without residual fashion produces better results than Baseline.", "This is because in the beginning, each word shares the same initialization of a weight vector e (0) =1 d and temperature \u03bb (0) =10, and the RCR barely infers the absolute value of these attention factors in the next step."], "citing_paper_content": {"title": "Plug-And-Play Regulators For Image-Text Matching", "abstract": "Exploiting fine-grained correspondence and visualsemantic alignments has shown great potential in image-text matching. Generally, recent approaches first employ a crossmodal attention unit to capture latent region-word interactions, and then integrate all the alignments to obtain the final similarity. However, most of them adopt one-time forward association or aggregation strategies with complex architectures or additional information, while ignoring the regulation ability of network feedback. In this paper, we develop two simple but quite effective regulators which efficiently encode the message output to automatically contextualize and aggregate cross-modal representations. 
Specifically, we propose (i) a Recurrent Correspondence Regulator (RCR) which facilitates the cross-modal attention unit progressively with adaptive attention factors to capture more flexible correspondence, and (ii) a Recurrent Aggregation Regulator (RAR) which adjusts the aggregation weights repeatedly to increasingly emphasize important alignments and dilute unimportant ones. Besides, it is interesting that RCR and RAR are \"plug-and-play\": both of them can be incorporated into many frameworks based on cross-modal interaction to obtain significant benefits, and their cooperation achieves further improvements. Extensive experiments on MSCOCO and Flickr30K datasets validate that they can bring an impressive and consistent R@1 gain on multiple models, confirming the general effectiveness and generalization ability of the proposed methods."}, "cited_paper_content": {"title": "Stacked Cross Attention For Image-Text Matching", "abstract": "In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MS-COCO and Flickr30K datasets. 
On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN."}, "keywords": ["T2I attention"], "citation_intent": "method"} {"citing_id": "2303.04906v1", "cited_id": "1912.04977", "section_title": "Introduction", "citation": "Federated Learning (FL) is a Machine Learning (ML) technique that has gained tremendous popularity in the last years #REFR .", "text_before_citation": [], "text_after_citation": ["Its core idea is to orchestrate the training of a global ML model without ever exchanging the data owned by each party or requiring it to be gathered in one common computational infrastructure.", "The popularity of FL caused the development of a plethora of FL frameworks, e.g., Intel \u00ae OpenFL #OTHEREFR , Flower #OTHEREFR , TensorFlow Federated [1] , and HPE Swarm Learning #OTHEREFR to cite a few.", "This software only supports one ML model type: Deep Neural Networks (DNNs).", "While DNNs have shown unprecedented results across a wide range of applications, from image recognition #OTHEREFR to natural language processing #OTHEREFR , from drug discovery #OTHEREFR to fraud detection #OTHEREFR , they are not the best ML model for every use case.", "First, DNNs require massive amounts of data, and collecting and labelling enough high-quality samples is often prohibitive."], "citing_paper_content": {"title": "Model-Agnostic Federated Learning", "abstract": "Since its debut in 2016, Federated Learning (FL) has been tied to the inner workings of Deep Neural Networks (DNNs). On the one hand, this allowed its development and widespread use as DNNs proliferated. 
On the other hand, it neglected all those scenarios in which using DNNs is not possible or advantageous. The fact that most current FL frameworks only allow training DNNs reinforces this problem. To address the lack of FL solutions for non-DNN-based use cases, we propose MAFL (Model-Agnostic Federated Learning). MAFL marries a model-agnostic FL algorithm, AdaBoost.F, with an open industry-grade FL framework: Intel \u00ae OpenFL. MAFL is the first FL system not tied to any specific type of machine learning model, allowing exploration of FL scenarios beyond DNNs and trees. We test MAFL from multiple points of view, assessing its correctness, flexibility and scaling properties up to 64 nodes. We optimised the base software achieving a 5.5x speedup on a standard FL scenario. MAFL is compatible with x86-64, ARM-v8, Power and RISC-V."}, "cited_paper_content": {"title": "Advances And Open Problems In Federated Learning", "abstract": "Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges."}, "keywords": ["Federated Learning"], "citation_intent": "method"} {"citing_id": "2304.12180v1", "cited_id": "1703.03864", "section_title": "L( + )", "citation": "This estimator can be massively parallelized (we denote the number of such iid estimators by N ) #REFR ; however, it is not online and might incur large latency between gradient updates when T is large.", "text_before_citation": ["With the score function gradient estimator trick #OTHEREFR , an unbiased estimator of (2) is given by 1 \u03c3 2 L([\u03b8 + ] \u00d7T ) .", "This estimator is zeroth-order since it only requires the loss function evaluation but not an explicit computation of its gradient, which allows for its effective use in cases when the gradients are noninformative (due to chaos) or not directly computable (black-box/discontinuous loss).", "To reduce the variance of this estimator, antithetic sampling is used and we call this finite-difference-style estimator FullES:", "EQUATION", "Here the term Full highlights that this estimator can only produce a gradient estimate by averaging the antithetic particles' losses (see Equation (1)) after a full unroll of T steps."], "text_after_citation": ["Figure 2: (a) Illustration of step-unlocked Online ES workers working independently at different truncation windows.", "Here a central server sends \u03b8 (whose gradient to be
estimated) to each worker and receives the estimates over partial unrolls from each.", "The averaged gradient is then used in a first-order optimization algorithm (like Adam)."], "citing_paper_content": {"title": "Noise-Reuse In Online Evolution Strategies", "abstract": "Online evolution strategies have become an attractive alternative to automatic differentiation (AD) due to their ability to handle chaotic and black-box loss functions, while also allowing more frequent gradient updates than vanilla Evolution Strategies (ES). In this work, we propose a general class of unbiased online evolution strategies. We analytically and empirically characterize the variance of this class of gradient estimators and identify the one with the least variance, which we term Noise-Reuse Evolution Strategies (NRES). Experimentally, we show that NRES results in faster convergence than existing AD and ES methods in terms of wall-clock speed and total number of unroll steps across a variety of applications, including learning dynamical systems, meta-training learned optimizers, and reinforcement learning."}, "cited_paper_content": {"title": "Evolution Strategies As A Scalable Alternative To Reinforcement Learning", "abstract": "We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. 
In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation."}, "keywords": ["gradient updates"], "citation_intent": "background"} {"citing_id": "2303.02762v1", "cited_id": "1910.00350", "section_title": "C. Tool-Chain Workflow", "citation": "HAL #REFR was used to perform fundamental operations on the netlist such as graph traversal and storing and retrieving data on design elements.", "text_before_citation": ["An RTL description is written for all the identified modules.", "They are instantiated in the top-level module to form a complete netlist.", "The obtained high-level RTL is then verified using JasperGold for equivalence checking with the source HDL to confirm functional validity.", "However, to ensure equivalence in sequential circuits, we reset all flip-flops in the source HDL that do not have an initialization.", "The techniques described in this paper were implemented using Python."], "text_after_citation": ["All the benchmark designs were first synthesized using Artix 7-series and Zynq 7-series FPGAs using Xilinx Vivado.", "We set the flatten hierarchy option to full, while we kept the other default synthesis and optimization settings.", "We delete all the original net IDs that may indicate the signal name in the original model and generate unique random IDs for all nets. We use these netlists for our reverse engineering experiments.
Yosys #OTHEREFR was used for the QBF SAT problem.", "Permutation-independent Boolean matching was performed using testnpn #OTHEREFR command in abc #OTHEREFR .", "All experiments were done on an AMD Ryzen 7 processor with 16GB RAM."], "citing_paper_content": {"title": "Reverse Engineering Word-Level Models From Look-Up Table Netlists", "abstract": "Reverse engineering of FPGA designs from bitstreams to RTL models aids in understanding the high level functionality of the design and for validating and reconstructing legacy designs. Fast carry-chains are commonly used in synthesis of operators in FPGA designs. We propose a method to detect word-level structures by analyzing these carry-chains in LUT (Look-Up Table) level netlists. We also present methods to adapt existing techniques to identify combinational operations and sequential modules in ASIC netlists to LUT netlists. All developed and adapted techniques are consolidated into an integrated tool-chain to aid in reverse engineering of word-level designs from LUT-level netlists. When evaluated on a set of real-world designs, the tool-chain infers 34% to 100% of the elements in the netlist to be part of a known word-level operation or a known sequential module."}, "cited_paper_content": {"title": "Highway To Hal: Open-Sourcing The First Extendable Gate-Level Netlist Reverse Engineering Framework", "abstract": "Since hardware oftentimes serves as the root of trust in our modern interconnected world, malicious hardware manipulations constitute a ubiquitous threat in the context of the Internet of Things (IoT). Hardware reverse engineering is a prevalent technique to detect such manipulations. Over the last years, an active research community has significantly advanced the field of hardware reverse engineering. Notably, many open research questions regarding the extraction of functionally correct netlists from Field Programmable Gate Arrays (FPGAs) or Application Specific Integrated Circuits (ASICs) have been tackled. 
In order to facilitate further analysis of recovered netlists, a software framework is required, serving as the foundation for specialized algorithms. Currently, no such framework is publicly available. Therefore, we provide the first open-source gate-library agnostic framework for gate-level netlist analysis. In this positional paper, we demonstrate the workflow of our modular framework HAL on the basis of two case studies and provide profound insights on its technical foundations."}, "keywords": ["netlist"], "citation_intent": "method"} {"citing_id": "2303.14337v1", "cited_id": "1911.04118", "section_title": "Question-Driven Claim Extraction With Validation", "citation": "To this end, we employ an answer sentence selection model #REFR that validates each of the extracted contexts (from Section 4.3.1) separately against the strategic question.", "text_before_citation": ["Following #OTHEREFR , we adopt a Question Answering (QA) formulation to identify claims relevant to a given strategic question.", "Specifically, we design a QA pipeline, utilizing a transformer-based RoBERTa-large encoder model #OTHEREFR variant 6 that has been trained on SQuAD 2.0 #OTHEREFR and Natural Questions #OTHEREFR .", "The pipeline takes as input the news corpus split into snippets along with the strategic question, and outputs short answer extractions to these questions.", "The identified short answers are then expanded, by including the 3-sentence window around it to provide additional context.", "However, there is still a risk of false positives #OTHEREFR being identified as candidate answers with high confidence, thus necessitating the validation #OTHEREFR of extracted answers."], "text_after_citation": ["We concatenate the question and extracted context as input to a binary classification model 7 , with an underlying RoBERTa-large backbone, that is trained on Natural Questions #OTHEREFR and WikiQA #OTHEREFR .", "The output of the model is a validation score, between 0 (incorrect 
answer selection) and 1 (correct answer selection), used to select the top-5 relevant contexts for summarization. Fig.", "3: Figure showing an example for how multimodal information (in the form of images) supports and provides additional context to the claims presented in SmartBook.", "In this example, the presence of anti-aircraft weapons (as seen in the image) in Ukraine provides background for the discussion in NATO on whether to impose a no-fly zone.", "Situation reports often rely on a variety of sources, including text, images, videos, and audio recordings, to provide a holistic view of events."], "citing_paper_content": {"title": "Smartbook: Ai-Assisted Situation Report Generation", "abstract": "Emerging events, such as the COVID pandemic and the Ukraine Crisis, require a time-sensitive comprehensive understanding of the situation to allow for appropriate decision-making and effective action response. Automated generation of situation reports can significantly reduce the time, effort, and cost for domain experts when preparing their official human-curated reports. However, AI research toward this goal has been very limited, and no successful trials have yet been conducted to automate such report generation. Pre-existing natural language processing methods, large language model based text generation, and information retrieval techniques are insufficient to identify, locate, and summarize important information, and lack detailed, structured, and strategic awareness. We propose SmartBook, a novel task formulation targeting situation report generation, which consumes large volumes of news data to produce a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence. We realize SmartBook for the Ukraine-Russia crisis by automatically generating intelligence analysis reports to assist expert analysts. 
The machine-generated reports are structured in the form of timelines, with each timeline organized by major events (or chapters), corresponding strategic questions (or sections) and their grounded summaries (or section content). Our proposed framework automatically detects realtime event-related strategic questions, which are more directed than manually-crafted analyst questions, which tend to be too complex, hard"}, "cited_paper_content": {"title": "Tanda: Transfer And Adapt Pre-Trained Transformer Models For Answer Sentence Selection", "abstract": "We propose TANDA, an effective technique for fine-tuning pre-trained Transformer models for natural language tasks. Specifically, we first transfer a pre-trained model into a model for a general task by fine-tuning it with a large and high-quality dataset. We then perform a second fine-tuning step to adapt the transferred model to the target domain. We demonstrate the benefits of our approach for answer sentence selection, which is a well-known inference task in Question Answering. We built a large scale dataset to enable the transfer step, exploiting the Natural Questions dataset. Our approach establishes the state of the art on two well-known benchmarks, WikiQA and TREC-QA, achieving MAP scores of 92% and 94.3%, respectively, which largely outperform the previous highest scores of 83.4% and 87.5%, obtained in very recent work. We empirically show that TANDA generates more stable and robust models reducing the effort required for selecting optimal hyper-parameters. Additionally, we show that the transfer step of TANDA makes the adaptation step more robust to noise. This enables a more effective use of noisy datasets for fine-tuning. 
Finally, we also confirm the positive impact of TANDA in an industrial setting, using domain specific datasets subject to different types of noise."}, "keywords": ["answer sentence selection"], "citation_intent": "method"} {"citing_id": "2303.14470v1", "cited_id": "2003.03488", "section_title": "B.1. Implementation Details", "citation": "We follow the two-step scheme (as detailed in \u00a7 4) and the training settings in #REFR .", "text_before_citation": ["Implementation of ImageNet training."], "text_after_citation": ["Specifically, for each step, the model is trained for 640k training iterations with batch size 512.", "We adopt the Adam optimizer #OTHEREFR and set the initial learning rate to 10 \u22123 .", "Weight decay rates in the first and second steps are 10 \u22125 and 0, respectively.", "For experiments on ImageNet, models are trained with 8 V100 GPUs.", "We follow the training settings and data augmentation strategies in #OTHEREFR ."], "citing_paper_content": {"title": "Compacting Binary Neural Networks By Sparse Kernel Selection", "abstract": "Binary Neural Network (BNN) represents convolution weights with 1-bit values, which enhances the efficiency of storage and computation. This paper is motivated by a previously revealed phenomenon that the binary kernels in successful BNNs are nearly power-law distributed: their values are mostly clustered into a small number of codewords. This phenomenon encourages us to compact typical BNNs and obtain further close performance through learning nonrepetitive kernels within a binary kernel subspace. Specifically, we regard the binarization process as kernel grouping in terms of a binary codebook, and our task lies in learning to select a smaller subset of codewords from the full codebook. 
We then leverage the Gumbel-Sinkhorn technique to approximate the codeword selection process, and develop the Permutation Straight-Through Estimator (PSTE) that is able to not only optimize the selection process end-to-end but also maintain the non-repetitive occupancy of selected codewords. Experiments verify that our method reduces both the model size and bit-wise computational costs, and achieves accuracy improvements compared with state-of-the-art BNNs under comparable budgets."}, "cited_paper_content": {"title": "Reactnet: Towards Precise Binary Neural Network With Generalized Activation Functions", "abstract": "In this paper, we propose several ideas for enhancing a binary network to close its accuracy gap from real-valued networks without incurring any additional computational cost. We first construct a baseline network by modifying and binarizing a compact real-valued network with parameter-free shortcuts, bypassing all the intermediate convolutional layers including the downsampling layers. This baseline network strikes a good trade-off between accuracy and efficiency, achieving superior performance than most of existing binary networks at approximately half of the computational cost. Through extensive experiments and analysis, we observed that the performance of binary networks is sensitive to activation distribution variations. Based on this important observation, we propose to generalize the traditional Sign and PReLU functions, denoted as RSign and RPReLU for the respective generalized functions, to enable explicit learning of the distribution reshape and shift at near-zero extra cost. Lastly, we adopt a distributional loss to further enforce the binary network to learn similar output distributions as those of a real-valued network. We show that after incorporating all these ideas, the proposed ReActNet outperforms all the state-of-the-arts by a large margin.
Specifically, it outperforms Real-to-Binary Net and MeliusNet29 by 4.0% and 3.6% respectively for the top-1 accuracy and also reduces the gap to its real-valued counterpart to within 3.0% top-1 accuracy on ImageNet dataset."}, "keywords": ["training settings"], "citation_intent": "method"} {"citing_id": "2303.00411v2", "cited_id": "1312.5185", "section_title": "Introduction", "citation": "In Section 7 we include a setting for abstract wave equations, which was considered in #REFR only for the splitting scheme.", "text_before_citation": ["For other schemes the contractivity of R usually follows by a functional calculus argument (see Proposition 2.4 below).", "In the above, one usually takes Y to be a suitable intermediate space between X and D(A).", "In the special and important case that Y = D(A) one can take \u03b1 = #OTHEREFR 2 for all of the aforementioned schemes.", "More general convergence rates can be found in Table 1 and Maxwell equations are included in the main text (see Subsections 3.3, 6.4, and 6.5).", "Our results improve several results from the literature to more general schemes and general rates \u03b1."], "text_after_citation": ["We prove similar higher order convergence rates for more general schemes, and in particular recover #OTHEREFR as a special case.", "To make the above results applicable to implementable numerical schemes for SPDEs, one would additionally need a space discretisation.", "Since the main novelty of our work lies in the treatment of temporal discretisations, we will only consider the latter.", "A detailed understanding of the global Lipschitz setting is a quintessential step towards the treatment of locally Lipschitz nonlinearities, which occur more frequently in practice.", "Our result should be seen as a first step, and we plan to continue our work on uniform strong errors in a locally Lipschitz setting in the near future."], "citing_paper_content": {"title": "Pathwise Uniform Convergence Of Time Discretisation Schemes For 
Spdes", "abstract": "In this paper we prove convergence rates for time discretisation schemes for semilinear stochastic evolution equations with additive or multiplicative Gaussian noise, where the leading operator A is the generator of a strongly continuous semigroup S on a Hilbert space X, and the focus is on non-parabolic problems. The main results are optimal bounds for the uniform strong error $E^{\\infty}_k := \\big( \\mathbb{E} \\sup_{j \\in \\{0,\\dots,N_k\\}} \\| U(t_j) - U_j \\|^p \\big)^{1/p}$, where $p \\in [2, \\infty)$, $U$ is the mild solution, $U_j$ is obtained from a time discretisation scheme, $k$ is the step size, and $N_k = T/k$. The usual schemes such as splitting/exponential Euler, implicit Euler, and Crank-Nicolson, etc. are included as special cases. Under conditions on the nonlinearity and the noise we show \u2022 $E^{\\infty}_k \\lesssim k \\log(T/k)$ (linear equation, additive noise, general S); \u2022 $E^{\\infty}_k \\lesssim \\sqrt{k} \\log(T/k)$ (nonlinear equation, multiplicative noise, contractive S); \u2022 $E^{\\infty}_k \\lesssim k \\log(T/k)$ (nonlinear wave equation, multiplicative noise). The logarithmic factor can be removed if the splitting scheme is used with a (quasi)-contractive S. The obtained bounds coincide with the optimal bounds for SDEs. Most of the existing literature is concerned with bounds for the simpler pointwise strong error $E_k := \\sup_{j \\in \\{0,\\dots,N_k\\}} \\big( \\mathbb{E} \\| U(t_j) - U_j \\|^p \\big)^{1/p}$. Applications to Maxwell equations, Schr\u00f6dinger equations, and wave equations are included. For these equations our results improve and reprove several existing results with a unified method."}, "cited_paper_content": {"title": "An Exponential Integrator Scheme For Time Discretization Of Nonlinear Stochastic Wave Equation", "abstract": "This work is devoted to convergence analysis of an exponential integrator scheme for semi-discretization in time of nonlinear stochastic wave equation. A unified framework is first set forth, which covers important cases of additive and multiplicative noises.
Within this framework, the proposed scheme is shown to converge uniformly in the strong $L^p$-sense with precise convergence rates given. The abstract results are then applied to several concrete examples. Further, weak convergence rates of the scheme are examined for the case of additive noise. To analyze the weak error for the nonlinear case, techniques based on the Malliavin calculus were usually exploited in the literature. Under certain appropriate assumptions on the nonlinearity, this paper provides a weak error analysis, which does not rely on the Malliavin calculus. The rates of weak convergence can, as expected, be improved in comparison with the strong rates. Both strong and weak convergence results obtained here show that the proposed method achieves higher convergence rates than the implicit Euler and Crank-Nicolson time discretizations. Numerical results are finally reported to confirm our theoretical findings."}, "keywords": ["abstract wave equations"], "citation_intent": "background"} {"citing_id": "2304.02599v1", "cited_id": "1911.02212", "section_title": "Lower Bound For Inverse Trace Estimation", "citation": "Note that this result is very similar to that of #REFR , except that we work with the inverse trace rather than the minimum eigenvalue.", "text_before_citation": [", w n , the matrix W has the Wishart(d \u2212 n) distribution.", "Proposition 22 ([BHSW20, Lemma 3.5]).", "For any matrices $Y_1 \\in \\mathbb{R}^{n \\times n}$, $Y_2 \\in \\mathbb{R}^{(d-n) \\times n}$, and any symmetric matrix $W \\in \\mathbb{R}^{(d-n) \\times (d-n)}$, it holds that", "$\\lambda_{\\min}\\begin{pmatrix} Y_1 Y_1^\\top & Y_1 Y_2^\\top \\\\\\\\ Y_2 Y_1^\\top & Y_2 Y_2^\\top + W \\end{pmatrix} \\le \\lambda_{\\min}(W) .$", "We are now ready to prove Theorem 18."], "text_after_citation": ["Proof.", "[Proof of Theorem 18] Let \u03b4 > 0 be chosen later.", "We first argue that tr must not be too large.", "Applying Proposition 24, we conclude that there is a universal constant C > 0 such that tr(W \u22121 ) \u2264 C d 2 with probability at least 1/2.
Hence,", "$\\mathbb{P}\\big[ \\widetilde{\\mathrm{tr}} \\le C C d^2 \\big] \\ge \\mathbb{P}\\big[ \\mathrm{tr}(W^{-1}) \\le C d^2 \\text{ and } \\widetilde{\\mathrm{tr}} \\le C \\, \\mathrm{tr}(W^{-1}) \\big] \\ge \\mathbb{P}\\{ \\mathrm{tr}(W^{-1}) \\le C d^2 \\} - \\mathbb{P}\\{ \\widetilde{\\mathrm{tr}} > C \\, \\mathrm{tr}(W^{-1}) \\} \\ge \\frac{1}{2} - \\delta .$"], "citing_paper_content": {"title": "Query Lower Bounds For Log-Concave Sampling", "abstract": "Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one. In this work, we establish the following query lower bounds: (1) sampling from strongly log-concave and log-smooth distributions in dimension d \u2265 2 requires \u2126(log \u03ba) queries, which is sharp in any constant dimension, and (2) sampling from Gaussians in dimension d (hence also from general log-concave and log-smooth distributions in dimension d) requires \u2126(min(\u221a \u03ba log d, d)) queries, which is nearly sharp for the class of Gaussians. Here \u03ba denotes the condition number of the target distribution. Our proofs rely upon (1) a multiscale construction inspired by work on the Kakeya conjecture in harmonic analysis, and (2) a novel reduction that demonstrates that block Krylov algorithms are optimal for this problem, as well as connections to lower bound techniques based on Wishart matrices developed in the matrix-vector query literature."}, "cited_paper_content": {"title": "The Gradient Complexity Of Linear Regression", "abstract": "We investigate the computational complexity of several basic linear algebra primitives, including largest eigenvector computation and linear regression, in the computational model that allows access to the data via a matrix-vector product oracle. We show that for polynomial accuracy, $\\Theta(d)$ calls to the oracle are necessary and sufficient even for a randomized algorithm. Our lower bound is based on a reduction to estimating the least eigenvalue of a random Wishart matrix.
This simple distribution enables a concise proof, leveraging a few key properties of the random Wishart ensemble."}, "keywords": ["minimum eigenvalue"], "citation_intent": "result"} {"citing_id": "2303.16574v1", "cited_id": "1901.05555", "section_title": "Comparisons With Others", "citation": "The Traj++ EWTA+reweighting #REFR performs best on the average ADE/FDE, but its performance gains on tailed samples are relatively little.", "text_before_citation": ["For long-tailed classification methods #OTHEREFR , we construct a classification head after the encoder of Traj++ EWTA to use it to classify the trajectories according to the discretization of Kalman filter errors, same as Makansi et al.", "#OTHEREFR , and the classification loss is trained along with the prediction loss.", "Table 1 summarizes our experimental results on ETH-UCY using a best-of-20 evaluation #OTHEREFR .", "We can see that our method stably outperforms all comparing methods on all the top 1% \u2212 5% long-tail samples.", "Specifically, our framework outperforms the second best method: Traj++ EWTA+contrastive #OTHEREFR by 9.5% on ADE and 8.5% on FDE on the top 1% hardest samples, and maintains the average ADE and FDE nearly stable."], "text_after_citation": ["The Traj++ EWTA+resampling #OTHEREFR gets more gains on the most tailed samples, but its average ADE/FDE become much worse.", "Unlike simply doing resampling or loss reweighting, hypernetwork can decouple head samples and tail samples in the parameter space of decoder, therefore achieves better performances.", "Quantitative comparisons on Traj++ EWTA on Nuscenes.", "Comparison results with the previous best long-tail prediction method #OTHEREFR on Nuscenes are in Table 2.", "We find out that the resampling operation in the original Traj++ EWTA does not work well with FEND, probably because of causing overfit on hypernetwork."], "citing_paper_content": {"title": "Fend: A Future Enhanced Distribution-Aware Contrastive Learning Framework For 
Long-Tail Trajectory Prediction", "abstract": "Predicting the future trajectories of the traffic agents is a gordian technique in autonomous driving. However, trajectory prediction suffers from data imbalance in the prevalent datasets, and the tailed data is often more complicated and safety-critical. In this paper, we focus on dealing with the long-tail phenomenon in trajectory prediction. Previous methods dealing with long-tail data did not take into account the variety of motion patterns in the tailed data. In this paper, we put forward a future enhanced contrastive learning framework to recognize tail trajectory patterns and form a feature space with separate pattern clusters. Furthermore, a distribution aware hyper predictor is brought up to better utilize the shaped feature space. Our method is a model-agnostic framework and can be plugged into many well-known baselines. Experimental results show that our framework outperforms the state-of-the-art long-tail prediction method on tailed samples by 9.5% on ADE and 8.5% on FDE, while maintaining or slightly improving the averaged performance. Our method also surpasses many long-tail techniques on trajectory prediction task."}, "cited_paper_content": {"title": "Class-Balanced Loss Based On Effective Number Of Samples", "abstract": "With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point.
The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\\beta^{n})/(1-\\beta)$, where $n$ is the number of samples and $\\beta \\in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets."}, "keywords": ["tailed samples"], "citation_intent": "background"} {"citing_id": "2303.15244v1", "cited_id": "1206.5538", "section_title": "Introduction", "citation": "Therefore, recent methods rely on the so-called manifold hypothesis #REFR , stating that even complex and high-dimensional datasets are contained in a low-dimensional manifold.", "text_before_citation": ["Manifold learning.", "The treatment of high-dimensional data is often computationally costly and numerically unstable.", "Therefore, in many applications, it is important to find a low-dimensional representation of high-dimensional datasets.", "Classical methods, like the principal component analysis (PCA) #OTHEREFR , assume that the data is contained in a low-dimensional subspace.", "However, for complex datasets this assumption appears to be too restrictive, particularly when working with image datasets."], "text_after_citation": ["Based on this hypothesis, in recent years, many successful approaches have been based on generative models, able to represent high dimensional data in R n by a generator D : R d \u2192 R n with d n: these include generative adversarial networks (GANs) #OTHEREFR , variational autoencoders (VAEs) #OTHEREFR , injective flows #OTHEREFR and score-based diffusion models #OTHEREFR .", "For a survey on 
older approaches to manifold learning, the reader is referred to #OTHEREFR and to the references therein.", "Learning manifolds with multiple charts.", "Under the assumption that D is injective, the set of generated points {D(z) : z \u2208 R^d} forms a manifold that approximates the training set.", "However, this requires that the data manifold admits a global parameterization."], "citing_paper_content": {"title": "Manifold Learning By Mixture Models Of Vaes For Inverse Problems", "abstract": "Representing a manifold of very high-dimensional data with generative models has been shown to be computationally efficient in practice. However, this requires that the data manifold admits a global parameterization. In order to represent manifolds of arbitrary topology, we propose to learn a mixture model of variational autoencoders. Here, every encoder-decoder pair represents one chart of a manifold. We propose a loss function for maximum likelihood estimation of the model weights and choose an architecture that provides us the analytical expression of the charts and of their inverses. Once the manifold is learned, we use it for solving inverse problems by minimizing a data fidelity term restricted to the learned manifold. To solve the arising minimization problem we propose a Riemannian gradient descent algorithm on the learned manifold. We demonstrate the performance of our method for low-dimensional toy examples as well as for deblurring and electrical impedance tomography on certain image manifolds."}, "cited_paper_content": {"title": "Representation Learning: A Review And New Perspectives", "abstract": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data.
Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning."}, "keywords": ["low-dimensional manifold", "high-dimensional datasets"], "citation_intent": "method"} {"citing_id": "2303.08514v1", "cited_id": "1804.01962", "section_title": "Novel Neural Networks-Based Approaches", "citation": "This paper combines traditional Gabor waveletbased iris coding with a DNN driven by post-mortem iris data for feature extraction, reducing the recognition error rate by one-third compared to the baseline method #REFR .", "text_before_citation": ["The two respective feature maps are combined into a single feature map reflecting the differences between domains, which is then fed into a subsequent convolutional layer to extract high-level semantic information further.", "The EER values of this research method on Q-FIRE and CASIA are 0.15% and 0.31%, respectively.", "The iris recognition technique proposed by #OTHEREFR uses a local circular Gabor filter for initial feature extraction before input to the CNN to retain all directional information.", "This design solves the problem of traditional Gabor wavelet transform insensitivity to circular orientation versus the difficulty of neural networks extracting directional features on the circular structure of the iris.", "#OTHEREFR proposes an iris recognition method explicitly designed for 
post-mortem samples, thus enabling the application of iris biometrics in forensic science."], "text_after_citation": ["In addition to filters, attention mechanisms and feature histogram methods are also applied to optimize the feature extraction process of IR.", "#OTHEREFR proposes a spatial attention feature fusion module to fuse features at different levels.", "Spatial attention #OTHEREFR can encode the importance of different positions in the feature map.", "The fact that iris features are local features suggests that iris features have different spatial significance in different local regions; therefore, spatial attention feature fusion is well suited for iris recognition.", "The spatial attention feature fusion module in the dual spatial attention network proposed in this study can learn the weights of each location and effectively fuse features at different levels."], "citing_paper_content": {"title": "Deep Learning For Iris Recognition: A Review", "abstract": "Iris recognition is a secure biometric technology known for its stability and privacy. With no two irises being identical and little change throughout a person's lifetime, iris recognition is considered more reliable and less susceptible to external factors than other biometric recognition methods. Unlike traditional machine learning-based iris recognition methods, deep learning technology does not rely on feature engineering and boasts excellent performance. This paper collects 120 relevant papers to summarize the development of iris recognition based on deep learning. We first introduce the background of iris recognition and the motivation and contribution of this survey. Then, we present the common datasets widely used in iris recognition. After that, we summarize the key tasks involved in the process of iris recognition based on deep learning technology, including identification, segmentation, presentation attack detection, and localization.
Finally, we discuss the challenges and potential development of iris recognition. This review provides a comprehensive view of the research on iris recognition based on deep learning."}, "cited_paper_content": {"title": "Iris Recognition After Death", "abstract": "This paper presents a comprehensive study of post-mortem human iris recognition carried out for 1200 near-infrared and 1787 visible-light samples collected from 37 deceased individuals kept in mortuary conditions. We used four independent iris recognition methods (three commercial and one academic) to analyze genuine and impostor comparison scores and check the dynamics of iris quality decay over a period of up to 814 h after death. This study shows that post-mortem iris recognition may be close-to-perfect approximately 5\u20137 h after death and occasionally is still viable even 21 days after death. These conclusions contradict the statements present in the past literature that the iris is unusable as a biometric shortly after death, and show that the dynamics of post-mortem changes to the iris that are important for biometric identification are more moderate than previously hypothesized. This paper contains a thorough medical commentary that helps to understand which post-mortem metamorphoses of the eye may impact the performance of automatic iris recognition. An important finding is that false-match probability is higher when live iris images are compared with post-mortem samples than when only live samples are used in comparisons. This paper conforms to reproducible research and the database used in this study is made publicly available to facilitate research on post-mortem iris recognition.
To the best of our knowledge, this paper offers the most comprehensive evaluation of post-mortem iris recognition and the largest database of post-mortem iris images."}, "keywords": ["post-mortem iris data"], "citation_intent": "method"} {"citing_id": "2304.12479v1", "cited_id": "0706.3639", "section_title": "Defining Agi: A General Perspective", "citation": "For instance, an AGI agent shall be capable of understanding, learning, and carrying out any intellectual work that a human person is capable of #REFR .", "text_before_citation": ["AGI usually refers to machine intelligence that possesses human-like cognitive abilities."], "text_after_citation": ["In contrast, to narrow/limited AI, which is created to excel in specific tasks or domains, AGI systems mimic humans' general-purpose problem-solving abilities #OTHEREFR .", "AGI differs from limited AI in several significant ways, demonstrating the general objective of reaching humanlike intelligence in AI systems.", "The ability of AGI systems to function autonomously, making judgments and conducting actions without the need for ongoing human supervision, is one of these features.", "Thanks to this degree of autonomy, AGI may work well in complicated, dynamic situations, enabling it to adjust to unforeseen conditions #OTHEREFR .", "AGI also has the capacity for general-purpose learning, allowing it to reason and learn across various areas."], "citing_paper_content": {"title": "Artificial General Intelligence (Agi) For Education", "abstract": "Artificial general intelligence (AGI) has gained global recognition as a future technology due to the emergence of breakthrough large language models and chatbots such as GPT-4 and ChatGPT, respectively. AGI aims to replicate human intelligence through computer systems, which is one of the critical technologies having the potential to revolutionize the field of education. 
Conventional AI models, typically designed for a limited range of tasks, demand significant amounts of domain-specific data for training and may not always consider intricate interpersonal dynamics in education. AGI, driven by the recent large pre-trained models, represents a significant leap in the capability of machines to perform tasks that require human-level intelligence, such as reasoning, problem-solving, decision-making, and even understanding human emotions and social interactions. This work reviews AGI's key concepts, capabilities, scope, and potential within future education, including setting educational goals, designing pedagogy and curriculum, and performing assessments. We also provide rich discussions of various ethical issues in education faced by AGI and how AGI will affect human educators. The development of AGI necessitates interdisciplinary collaborations between educators and AI engineers to advance research and application efforts."}, "cited_paper_content": {"title": "A Collection Of Definitions Of Intelligence", "abstract": "This chapter is a survey of a large number of informal definitions of \u201cintelligence\u201d that the authors have collected over the years. Naturally, compiling a complete list would be impossible as many definitions of intelligence are buried deep inside articles and books.
Nevertheless, the 70-odd definitions presented here are, to the authors' knowledge, the largest and most well-referenced collection there is."}, "keywords": ["AGI agent", "intellectual work"], "citation_intent": "background"} {"citing_id": "2304.10848v1", "cited_id": "1812.11061", "section_title": "A.2 Pseudo-Linear Time In The Regime With Positive Drift", "citation": "We note that there is a non-vanishing gap between our upper and lower bound in the theorem above when = \u03a9 #REFR .", "text_before_citation": ["Recall that this estimate was conditional on a fixed value of 0 .", "Since 0 follows a binomial distribution with parameters and #OTHEREFR 2 , we have 0 \u2265 2 \u2212 3/4 with probability 1\u2212 (1), and in this case,", "[ | 0 \u2265 2 \u2212 3/4 ] \u2265 (ln( 2 \u2212 3/4 \u22121) \u2212ln( ) \u22121) = (1\u2212 (1)) ln( ),", "where the last estimate exploits our assumption = ( ). Just from the contribution of this case, we", "obtain [ ] \u2265 (1 \u2212 (1)) [ | 0 \u2265 2 \u2212 3/4 ] = (1 \u2212 (1)) ln( ).
\u25a1"], "text_after_citation": ["The reason, most likely, is the argument used in the lower bound proof that the Metropolis algorithm cannot be faster than randomized local search, which ignores any negative effect of accepting inferior solutions.", "For our purposes, the theorem above is sufficient, since for all but very large values of (where the gap is negligible) the runtime of the Metropolis algorithm is dominated by the second part of the optimization process starting from a solution with ( ) = .", "The reason why we could not prove a tighter bound for all values of is that the existing multiplicative drift theorems for lower bounds, e.g., Theorem 2.2 in #OTHEREFR or Theorem 3.7 in #OTHEREFR , either are not applicable to our process or necessarily lead to a constant-factor gap to the upper bound obtained from multiplicative drift.", "Applying the variable drift theorem from #OTHEREFR to the process = min{ | \u2264 } appears to be a promising way to overcome these difficulties, but since we do not need such a precise bound, we do not follow this route any further."], "citing_paper_content": {"title": "How Well Does The Metropolis Algorithm Cope With Local Optima? *", "abstract": "The Metropolis algorithm (MA) is a classic stochastic local search heuristic. It avoids getting stuck in local optima by occasionally accepting inferior solutions. To better and in a rigorous manner understand this ability, we conduct a mathematical runtime analysis of the MA on the CLIFF benchmark. Apart from one local optimum, cliff functions are monotonically increasing towards the global optimum. Consequently, to optimize a cliff function, the MA only once needs to accept an inferior solution. Despite seemingly being an ideal benchmark for the MA to profit from its main working principle, our mathematical runtime analysis shows that this hope does not come true. 
Even with the optimal temperature (the only parameter of the MA), the MA optimizes most cliff functions less efficiently than simple elitist evolutionary algorithms (EAs), which can only leave the local optimum by generating a superior solution possibly far away. This result suggests that our understanding of why the MA is often very successful in practice is not yet complete. Our work also suggests equipping the MA with global mutation operators, an idea supported by our preliminary experiments."}, "cited_paper_content": {"title": "A Tight Runtime Analysis For The (\u039c + \u039b) Ea", "abstract": "Despite significant progress in the theory of evolutionary algorithms, the theoretical understanding of true population-based evolutionary algorithms remains challenging and only a few rigorous results exist. Already for the most basic problem, the determination of the asymptotic runtime of the (\u03bc + \u03bb) evolutionary algorithm on the simple OneMax benchmark function, only the special cases \u03bc = 1 and \u03bb = 1 have been solved.
In this work, we analyze this long-standing problem and show the asymptotically tight result that the runtime T, the number of iterations until the optimum is found, satisfies [EQUATION] where log+ x := max{1, log x} for all x > 0."}, "keywords": ["theorem", "non-vanishing gap"], "citation_intent": "background"} {"citing_id": "2303.12800v1", "cited_id": "1905.13430", "section_title": "Introduction", "citation": "In addition, most existing approaches are not applicable when the IoT devices are behind a NAT (network address translation) enabled router, as many of the features get altered in the NAT process #REFR .", "text_before_citation": ["This can help organizations effectively manage the security issues associated with IoT devices and determine whether the behavior/activity of the connected IoT devices is normal.", "Existing approaches & their limitations: Prior research suggested ways of identifying IoT devices by analyzing the network communication (Meidan et al., 2017a,b; #OTHEREFR .", "Since the methods proposed are based on machine learning, feature engineering (i.e., feature extraction, selection, and tuning) is required.", "This necessitates manual input from subject matter experts, which is both expensive and prone to errors.", "Existing approaches are timeconsuming, requiring multiple sessions to identify known and unknown (also referred to as unauthorized in this paper) IoT devices, and tend to have complex architecture, since they use multistage models."], "text_after_citation": ["Proposed solution: Our approach mitigates these limitations.", "Just a single session is needed to identify known and unknown IoT devices; in addition, it is free from the burden of feature engineering and the errors that are accompanied with feature engineering, and it has a simple architecture.", "The proposed approach also applies to IoT devices behind a NAT-enabled router, as the NAT process does not alter the payload of the communication.", "The proposed approach enables 
us to identify known and unknown IoT devices in the network in various scenarios.", "Organizations' use of DHCP and the ease with which MAC addresses can be spoofed have made it difficult to identify IoT devices using traditional approaches #OTHEREFR ."], "citing_paper_content": {"title": "Iot Device Identification Based On Network Communication Analysis Using Deep Learning", "abstract": "Attack vectors for adversaries have increased in organizations because of the growing use of less secure IoT devices. The risk of attacks on an organization's network has also increased due to the bring your own device (BYOD) policy which permits employees to bring IoT devices onto the premises and attach them to the organization's network. To tackle this threat and protect their networks, organizations generally implement security policies in which only white-listed IoT devices are allowed on the organization's network. To monitor compliance with such policies, it has become essential to distinguish IoT devices permitted within an organization's network from non-whitelisted (unknown) IoT devices. In this research, deep learning is applied to network communication for the automated identification of IoT devices permitted on the network. In contrast to existing methods, the proposed approach does not require complex feature engineering of the network communication, because the 'communication behavior' of IoT devices is represented as small images which are generated from the device's network communication payload. The proposed approach is applicable to any IoT device, regardless of the protocol used for communication. As our approach relies on the network communication payload, it is also applicable to IoT devices behind a network address translation (NAT) enabled router.
In this study, we trained various classifiers on a publicly accessible dataset to identify IoT devices in different scenarios, including the identification of known and unknown IoT devices, achieving over 99% overall average detection accuracy."}, "cited_paper_content": {"title": "Privacy-Preserving Detection Of Iot Devices Connected Behind A Nat In A Smart Home Setup", "abstract": "Today, telecommunication service providers (telcos) are exposed to cyber-attacks executed by compromised IoT devices connected to their customers' networks. Such attacks might have severe effects not only on the target of attacks but also on the telcos themselves. To mitigate those risks we propose a machine learning based method that can detect devices of specific vulnerable IoT models connected behind a domestic NAT, thereby identifying home networks that pose a risk to the telco's infrastructure and availability of services. As part of the effort to preserve the domestic customers' privacy, our method relies on NetFlow data solely, refraining from inspecting the payload. To promote future research in this domain we share our novel dataset, collected in our lab from numerous and various commercial IoT devices."}, "keywords": ["IoT devices"], "citation_intent": "background"} {"citing_id": "2304.08870v1", "cited_id": "1505.04597", "section_title": "Related Works", "citation": "Typically, a UNet #REFR is used to learn to produce the denoising signal. 
Conditioning e.g.", "text_before_citation": ["Diffusion Model.", "Recently, diffusion models #OTHEREFR have shown superior image quality and text-guided capability.", "In training, the diffusion model gradually adds noise to the image until it becomes random noise; this process is known as forward diffusion.", "The diffused random noise acts as latent variables and is denoised progressively to generate an image in image sampling; this process is known as reverse diffusion."], "text_after_citation": ["text can be applied using classifier #OTHEREFR or classifier-free approaches #OTHEREFR .", "Most methods #OTHEREFR use the latter to find the direction between the conditional and unconditional in the latent space, which is applied at sampling time to guide the model towards the conditioning direction.", "However, denoising every image pixel can be computationally expensive; therefore, LDM #OTHEREFR proposed a two-stage process.", "It first trained a variational autoencoder (VAE) #OTHEREFR to encode the image into lower-dimensional latent variables, and the diffusion model learned to produce the VAE latent variables.", "Diffusion models could provide image editing by performing text-guided diffusion on regions defined by segmentation mask #OTHEREFR . Dreambooth #OTHEREFR shows that they could encode a person's face into a text token and use a diffusion model to generate the person in a different scene."], "citing_paper_content": {"title": "Upgpt: Universal Diffusion Model For Person Image Generation, Editing And Pose Transfer", "abstract": "Existing person image generative models can do either image generation or pose transfer but not both. We propose a unified diffusion model, UPGPT, to provide a universal solution to perform all the person image tasks: generative, pose transfer, and editing.
With fine-grained multimodality and disentanglement capabilities, our approach offers fine-grained control over the generation and the editing process of images using a combination of pose, text, and image, all without needing a semantic segmentation mask which can be challenging to obtain or edit. We also pioneer the parameterized body SMPL model in pose-guided person image generation to demonstrate a new capability: simultaneous pose and camera view interpolation while maintaining a person's appearance. Results on the benchmark DeepFashion dataset show that UPGPT is the new state-of-the-art while simultaneously pioneering new capabilities of editing and pose transfer in human image generation."}, "cited_paper_content": {"title": "U-Net: Convolutional Networks For Biomedical Image Segmentation", "abstract": "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU.
The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ."}, "keywords": ["Conditioning", "denoising signal"], "citation_intent": "method"} {"citing_id": "2303.02334v1", "cited_id": "1308.2140", "section_title": "Related Work", "citation": "Network science has introduced various types of centrality measures to determine the relative importance of nodes in a network under respective circumstances #REFR .", "text_before_citation": ["Some results on the MPC of nonlinear systems using model reduction have been reported in the literature. For example, Wiese et al. #OTHEREFR presented an MPC method for gas turbines.", "They specifically developed a lower-order internal model from a physics-based higher-order model using rigorous timescale separation arguments that can be extended to various gas turbine systems.", "Zhang and Liu #OTHEREFR considered the problem of the economic model predictive control of wastewater treatment plants based on model reduction.", "Their reduction is based on a technique called reduced-order trajectory segment linearization.", "The authors showed via numerical simulations that, while the proposed methods lead to improved computational efficiency, they do not involve reduced control performance."], "text_after_citation": ["Given this context, the interplay between control and centrality has been actively investigated. Liu et al.", "#OTHEREFR introduced the concept of control centrality to quantify the ability of a single node to control the entire network.", "Inspired by the relationship between control centrality and the hierarchical structure in networks, the authors designed efficient attack strategies against the controllability of malicious networks. 
Fitch et al.", "#OTHEREFR showed that the tracking performance of any leader set within a multiagent system can be quantified by a novel centrality measure called joint centrality.", "For both single and multiple leaders, the authors have analytically proven the effectiveness of the centrality measure for leader selection."], "citing_paper_content": {"title": "Reduced-Order Model Predictive Control Of A Fish Schooling Model", "abstract": "We study the problem of model predictive control (MPC) for the fish schooling model proposed by Gautrais et al. (Annales Zoologici Fennici, 2008). The high nonlinearity of the model attributed to its attraction/alignment/repulsion law suggests the need to use MPC for controlling the fish schooling's motion. However, for large schools, the hybrid nature of the law can make it numerically demanding to perform finite-horizon optimizations in MPC. Therefore, this paper proposes reducing the fish schooling model for numerically efficient MPC; the reduction is based on using the weighted average of the directions of individual fish in the school. We analytically show how using the normalized eigenvector centrality of the alignment-interaction network can yield a better reduction by comparing reduction errors. We confirm this finding on the weight and numerical efficiency of the MPC with the reduced-order model by numerical simulations. The proposed reduction allows us to control a school with up to 500 individuals. Further, we confirm that reduction with the normalized eigenvector centrality allows us to improve the control accuracy by a factor of five when compared to that using constant weights."}, "cited_paper_content": {"title": "Axioms For Centrality", "abstract": "Given a social network, which of its nodes are more central? This question has been asked many times in sociology, psychology and computer science, and a whole plethora of centrality measures (a.k.a.
centrality indices, or rankings) were proposed to account for the importance of the nodes of a network. In this paper, we try to provide a mathematically sound survey of the most important classic centrality measures known from the literature and propose an axiomatic approach to establish whether they are actually doing what they have been designed for. Our axioms suggest some simple, basic properties that a centrality measure should exhibit. Surprisingly, only a new simple measure based on distances, harmonic centrality, turns out to satisfy all axioms; essentially, harmonic centrality is a correction to Bavelas's classic closeness centrality designed to take unreachable nodes into account in a natural way. As a sanity check, we examine in turn each measure under the lens of information retrieval, leveraging state-of-the-art knowledge in the discipline to measure the effectiveness of the various indices in locating web pages that are relevant to a query. While there are some examples of such comparisons in the literature, here for the first time we take into consideration centrality measures based on distances, such as closeness, in an information-retrieval setting. The results closely match the data we gathered using our axiomatic approach. Our results suggest that centrality measures based on distances, which have been neglected in information retrieval in favour of spectral centrality measures in the last years, are actually of very high quality; moreover, harmonic centrality pops up as an excellent general-purpose centrality index for arbitrary directed graphs."}, "keywords": ["Network science", "centrality measures"], "citation_intent": "background"} {"citing_id": "2303.15684v1", "cited_id": "2001.06426", "section_title": "VI.
Implications And Challenges", "citation": "Complementing the work of Nayebi #REFR , who reported that 87.8% of survey respondents agreed that images provide additional information compared to text, we empirically show how images are used and how essential they are to understand a question.", "text_before_citation": ["Results from RQ1 reveal six types of image contents (i.e., User interface, Source code, Error code, Diagram, Results, and Configuration) and four types of image sources (i.e., Desktop Application, Web Browser, Mobile Application, and User-created).", "Furthermore, we observe that images are essential and provide complementary information compared to their associated text.", "As shown in Table 2 , we find that 68% of images are essential for the questions.", "For example, in this question #OTHEREFR , the developer faced a problem when using Bootstrap input groups and applying the negative margin.", "To seek answers, the developer posted an image showing the unexpected user interface where the bottom highlight of the input was hidden to support the problem description."], "text_after_citation": ["For developers, our results provide concrete recommendations on the types of images that are commonly shared in specific situations.", "Stack Overflow in general intends to be against the usage of images as raised in this post #OTHEREFR , hence our study would be used to complement the evidence-based guidelines.", "For example, developers are commonly required to take a screenshot of their desktop (47%) to present a user interface (58%).", "Furthermore, users tend to provide images when asking about undesired outputs in their user interface.", "Meanwhile, in the aforementioned developer post, the developer expressed one of the major concerns of \"links to images fail\"."], "citing_paper_content": {"title": "Understanding The Role Of Images On Stack Overflow", "abstract": "Images are increasingly being shared by software developers in diverse channels including
question-and-answer forums like Stack Overflow. Although prior work has pointed out that these images are meaningful and provide complementary information compared to their associated text, how images are used to support questions is empirically unknown. To address this knowledge gap, in this paper we specifically conduct an empirical study to investigate (I) the characteristics of images, (II) the extent to which images are used in different question types, and (III) the role of images on receiving answers. Our results first show that user interface is the most common image content and undesired output is the most frequent purpose for sharing images. Moreover, these images essentially facilitate the understanding of 68% of sampled questions. Second, we find that discrepancy questions are relatively more frequent compared to those without images, but there are no significant differences observed in description length in all types of questions. Third, the quantitative results statistically validate that questions with images are more likely to receive accepted answers, but do not speed up the time to receive answers. Our work demonstrates the crucial role that images play by approaching the topic from a new angle and lays the foundation for future opportunities to use images to assist in tasks like generating questions and identifying question-relatedness."}, "cited_paper_content": {"title": "Eye Of The Mind: Image Processing For Social Coding", "abstract": "Developers are increasingly sharing images in social coding environments alongside the growth in visual interactions within social networks. The analysis of the ratio between the textual and visual content of Mozilla's change requests and in Q/As of StackOverflow programming revealed a steady increase in sharing images over the past five years. Developers' shared images are meaningful and are providing complementary information compared to their associated text.
Often, the shared images are essential in understanding the change requests, questions, or the responses submitted. Relying on these observations, we delve into the potential of automatic completion of textual software artifacts with visual content."}, "keywords": ["images"], "citation_intent": "background"} {"citing_id": "2305.00869v1", "cited_id": "1809.01812", "section_title": "Representation Learning For Spatialmultiomniglot", "citation": "In line with the finding of #REFR , increasing the number of K not only helps MDRE to reach the ground truth MI, but also the quality of representations improves from 86.7% to 100% test classification accuracy.", "text_before_citation": ["Figure 7b illustrates that MDRE's encoder learns representations that achieve \u223c100% Omniglot character classification for both d = n 2 = 4, 9.", "On the other hand, the performances of the single ratio estimator and TRE (using the same exact dimension-wise mixing to construct auxiliary distributions) both degrade as the complexity of the task increases, with TRE only reaching up to 91% and 85% for d = 4 and d = 9, respectively.", "All models were trained with the same encoder architecture to ensure fair comparison.", "We further studied the effect of changing K in the d = 4 setup.", "For K = 1, we aggregate all the dimensionwise mixed samples into 1 class, whereas for K = 3, we separate them into their respective classes (corresponding to the number of dimensions mixed). We illustrate this effect in Figure 7c ."], "text_after_citation": [], "citing_paper_content": {"title": "Estimating The Density Ratio Between Distributions With High Discrepancy Using Multinomial Logistic Regression", "abstract": "Functions of the ratio of the densities p/q are widely used in machine learning to quantify the discrepancy between the two distributions p and q. For high-dimensional distributions, binary classification-based density ratio estimators have shown great promise. 
However, when densities are well separated, estimating the density ratio with a binary classifier is challenging. In this work, we show that the state-of-the-art density ratio estimators perform poorly on well separated cases and demonstrate that this is due to distribution shifts between training and evaluation time. We present an alternative method that leverages multi-class classification for density ratio estimation and does not suffer from distribution shift issues. The method uses a set of auxiliary densities {m_k}_{k=1}^K and trains a multi-class logistic regression to classify the samples from p, q and {m_k}_{k=1}^K into K + 2 classes. We show that if these auxiliary densities are constructed such that they overlap with p and q, then a multi-class logistic regression allows for estimating log p/q on the domain of any of the K + 2 distributions and resolves the distribution shift problems of the current state-of-the-art methods. We compare our method to state-of-the-art density ratio estimators on both synthetic and real datasets and demonstrate its superior performance on the tasks of density ratio estimation, mutual information estimation, and representation learning. Code: https://www.blackswhan.com/mdre/"}, "cited_paper_content": {"title": "Noise Contrastive Estimation And Negative Sampling For Conditional Models: Consistency And Statistical Efficiency", "abstract": "Noise Contrastive Estimation (NCE) is a powerful parameter estimation method for log-linear models, which avoids calculation of the partition function or its derivatives at each training step, a computationally demanding step in many cases. It is closely related to negative sampling methods, now widely used in NLP. This paper considers NCE-based estimation of conditional models.
Conditional models are frequently encountered in practice; however there has not been a rigorous theoretical analysis of NCE in this setting, and we will argue there are subtle but important questions when generalizing NCE to the conditional case. In particular, we analyze two variants of NCE for conditional models: one based on a classification objective, the other based on a ranking objective. We show that the ranking-based variant of NCE gives consistent parameter estimates under weaker assumptions than the classification-based method; we analyze the statistical efficiency of the ranking-based and classification-based variants of NCE; finally we describe experiments on synthetic data and language modeling showing the effectiveness and trade-offs of both methods."}, "keywords": ["100% test classification"], "citation_intent": "result"} {"citing_id": "2303.14727v1", "cited_id": "1711.10275", "section_title": "Setting", "citation": "Through these results, we demonstrate that our approach can produce segmentation results that are comparable to the fully supervised baseline #REFR with only 0.02% annotation.", "text_before_citation": ["\u2020 means disabling graph propagation and relation network during inference, but note that they are still used in training. 
mIoU as shown in Table 4 .", "This experiment also demonstrates that our method can still achieve decent performance even though the annotator ignores several objects by mistake in \"One Thing One Click\" scheme.", "We further investigate the performance drop with a more challenging \"Four Things One Click\" scheme.", "However, the model cannot converge well in the very first iteration due to the insufficient label and the self-training fails in this case.", "Qualitative Results on ScanNet-v2 Then, we show prediction results on ScanNet-v2 in Figures 8."], "text_after_citation": ["See the error maps shown in (d) and (f) for better visualizations.", "Ablation Studies To further study the effectiveness of selftraining, graph propagation and relation network, we conduct ablation studies on these three modules on ScanNet-v2 validation set as shown in Table 5 with single view evaluation.", "\"3D U-Net\" indicates that the labels are propagated only based on the confidence score of the 3D U-Net itself, i.e., the unary term in Equation 2.", "This ablation is designed to manifest the effectiveness of self-training.", "The \"3D U-Net\" column in Table 5 manifests that the performance is consistently improved with self-training strategy even without pairwise energy term in Equation 2 and super-voxel partition."], "citing_paper_content": {"title": "One Thing One Click++: Self-Training For Weakly Supervised 3D Scene Understanding", "abstract": "3D scene understanding, e.g., point cloud semantic and instance segmentation, often requires large-scale annotated training data, but clearly, point-wise labels are too tedious to prepare. While some recent methods propose to train a 3D network with small percentages of point labels, we take the approach to an extreme and propose \"One Thing One Click,\" meaning that the annotator only needs to label one point per object. 
To leverage these extremely sparse labels in network training, we design a novel self-training approach, in which we iteratively conduct the training and label propagation, facilitated by a graph propagation module. Also, we adopt a relation network to generate the per-category prototype to enhance the pseudo label quality and guide the iterative training. Besides, our model can be compatible to 3D instance segmentation equipped with a point-clustering strategy. Experimental results on both ScanNet-v2 and S3DIS show that our self-training approach, with extremely-sparse annotations, outperforms all existing weakly supervised methods for 3D semantic and instance segmentation by a large margin, and our results are also comparable to those of the fully supervised counterparts. Codes and models are available at https://github.com/liuzhengzhe/One-Thing-One-Click."}, "cited_paper_content": {"title": "3D Semantic Segmentation With Submanifold Sparse Convolutional Networks", "abstract": "Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SSCNs), on two tasks involving semantic segmentation of 3D point clouds. 
In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition."}, "keywords": ["fully supervised baseline"], "citation_intent": "result"} {"citing_id": "2304.08733v1", "cited_id": "1908.07086", "section_title": "A View From Human Difficulty", "citation": "We calculated average time consumption using the CIFAR-H dataset #REFR , since CIFAR-N lacks annotator time information.", "text_before_citation": ["We conduct a similar analysis as Section 4.2 to seek opportunities for human-machine complementary teaming from the human difficulty perspective instead.", "To quantify the human difficulty levels we use time spent labeling an image, and an entropy-based measure of human agreement.", "Time Spent: Assuming the i-th human annotator spent t_i(x) time (in seconds) in annotating sample x, we adopt the average time spent on sample x as a measure to indicate the difficulty level of the given task x. Mathematically, t(x) := (1/k) \u2211_{i\u2208[k]} t_i(x).", "A large t(x) means the sample x is relatively hard for human annotators since it requires humans to spend a long time on annotation."], "text_after_citation": ["Human agreement (entropy): Given k human annotators for a sample x, we calculate the entropy on x as:", "Entropy(x) = \u2212\u2211_{i\u2208[K]} p_{H,i}(x) \u2022 log(p_{H,i}(x)), where p_{H,i}(x) := (1/k) \u2211_{j\u2208[k]} 1(f_{ML,j}(x) = i).", "The human agreement is a metric to evaluate human consensus on a task given multiple annotations.", "A higher agreement level implies less ambiguity and easier judgment.", "We calculate inverse entropy using CIFAR-H because it provides 50 labels per image, enough to calculate agreement."], "citing_paper_content": {"title": "Do Humans And Machines Have The Same Eyes? Human-Machine Perceptual Differences On Image Classification", "abstract": "Trained computer vision models are assumed to solve vision tasks by imitating human behavior learned from training labels.
Most efforts in recent vision research focus on measuring the model task performance using standardized benchmarks. Limited work has been done to understand the perceptual difference between humans and machines. To fill this gap, our study first quantifies and analyzes the statistical distributions of mistakes from the two sources. We then explore human vs. machine expertise after ranking tasks by difficulty levels. Even when humans and machines have similar overall accuracies, the distribution of answers may vary. Leveraging the perceptual difference between humans and machines, we empirically demonstrate a post-hoc human-machine collaboration that outperforms humans or machines alone."}, "cited_paper_content": {"title": "Human Uncertainty Makes Classification More Robust", "abstract": "The classification performance of deep neural networks has begun to asymptote at near-perfect levels. However, their ability to generalize outside the training set and their robustness to adversarial attacks have not. In this paper, we make progress on this problem by training with full label distributions that reflect human perceptual uncertainty. We first present a new benchmark dataset which we call CIFAR10H, containing a full distribution of human labels for each image of the CIFAR10 test set. 
We then show that, while contemporary classifiers fail to exhibit human-like uncertainty on their own, explicit training on our dataset closes this gap, supports improved generalization to increasingly out-of-training-distribution test datasets, and confers robustness to adversarial attacks."}, "keywords": ["CIFAR-H dataset"], "citation_intent": "method"} {"citing_id": "2304.02554v1", "cited_id": "1904.09675", "section_title": "Automatic Evaluation Metrics", "citation": "BERTScore #REFR assesses the similarity between two texts at the token level by measuring the soft overlap using contextual embeddings from BERT.", "text_before_citation": ["We select several evaluation metrics that are commonly used in summarization:", "ROUGE #OTHEREFR , which is the dominant automatic evaluation metric in summarization, is widely used by researchers.", "The most commonly used ROUGE measures are ROUGE-1, ROUGE-2, and ROUGE-L, which evaluate the similarity between two texts based on the overlap of unigrams, bigrams, and the longest common sequence."], "text_after_citation": ["Similarly, MoverScore #OTHEREFR uses n-gram embeddings that are pooled from BERT to compute the semantic distance between two texts at the n-gram level.", "BARTScore #OTHEREFR 1 views evaluation as a natural language generation task and considers that when the quality of the generated text is higher, BART is more likely to generate it from the source text or the reference, or to generate the reference from it.", "BARTScore can be flexibly applied to evaluate text from various perspectives.", "FactCC 2 and DAE 3 are two factuality metrics based on classification.", "When evaluating a summary, we use NLTK 4 to split it into individual sentences and classify each one as factually correct or not."], "citing_paper_content": {"title": "Human-Like Summarization Evaluation With Chatgpt", "abstract": "Evaluating text summarization is a challenging problem, and existing evaluation metrics are far from satisfactory. 
In this study, we explored ChatGPT's ability to perform human-like summarization evaluation using four human evaluation methods on five datasets. We found that ChatGPT was able to complete annotations relatively smoothly using Likert scale scoring, pairwise comparison, Pyramid, and binary factuality evaluation. Additionally, it outperformed commonly used automatic evaluation metrics on some datasets. Furthermore, we discussed the impact of different prompts, compared its performance with that of human evaluation, and analyzed the generated explanations and invalid responses."}, "cited_paper_content": {"title": "Bertscore: Evaluating Text Generation With Bert", "abstract": "We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics.
Finally, we use an adversarial paraphrase detection task and show that BERTScore is more robust to challenging examples compared to existing metrics."}, "keywords": ["BERTScore", "contextual embeddings"], "citation_intent": "method"} {"citing_id": "2304.02089v1", "cited_id": "1811.10155", "section_title": "Dataset", "citation": "Following the same paradigm used in #REFR , we construct queries for each purchased item with category information.", "text_before_citation": ["Besides, we selected three categories with different sizes: Clothing, Shoes & Jewelry, Toys & Games, Electronics.", "These datasets both contain several categories so that users may have different interests.", "Following the strategy in References #OTHEREFR , we extracted the users' product purchasing behaviors based on their reviews, i.e., the products they reviewed are the ones they purchased.", "Our model uses the previously purchased products in a neighboring window size to model the short-term user interests.", "We further filtered the dataset to make sure each user has at least 20 purchased products (i.e., 20 reviews)."], "text_after_citation": ["This strategy is based on the finding that directed product search is users' search for a producer's name, a brand, or a set of terms describing product category.", "We partitioned each of the four datasets into three sets: training, validation and testing sets.", "We first extracted the userproduct pairs from users' reviews, and then extracted the queries for these products, getting triplets.", "For each dataset, the last purchasing transaction of each user is held for the testing set, the second last for the validation set, and the rest for the training set.", "Moreover, we hid the reviews of the validation and testing sets in the training phase to simulate the real-world scenarios."], "citing_paper_content": {"title": "Hierarchically Fusing Long And Short-Term User Interests For Click-Through Rate Prediction In Product Search", "abstract": 
"Estimating Click-Through Rate (CTR) is a vital yet challenging task in personalized product search. However, existing CTR methods still struggle in the product search settings due to the following three challenges including how to more effectively extract users' short-term interests with respect to multiple aspects, how to extract and fuse users' long-term interest with short-term interests, how to address the entangling characteristic of long and short-term interests. To resolve these challenges, in this paper, we propose a new approach named Hierarchical Interests Fusing Network (HIFN), which consists of four basic modules namely Short-term Interests Extractor (SIE), Long-term Interests Extractor (LIE), Interests Fusion Module (IFM) and Interests Disentanglement Module (IDM). Specifically, SIE is proposed to extract user's short-term interests by integrating three fundamental interests encoders within it namely query-dependent, target-dependent and causal-dependent interest encoder, respectively, followed by delivering the resultant representation to the module LIE, where it can effectively capture user long-term interests by devising an attention mechanism with respect to the short-term interests from SIE module. In IFM, the achieved long and short-term interests are further fused in an adaptive manner, followed by concatenating it with original raw context features for the final prediction result. Last but not least, considering the entangling characteristic of long and short-term interests, IDM further devises a self-supervised framework to disentangle long and short-term interests.
Extensive offline and online evaluations on a real-world e-commerce platform demonstrate the superiority of HIFN over state-of-the-art methods."}, "cited_paper_content": {"title": "Attentive Long Short-Term Preference Modeling For Personalized Product Search", "abstract": "E-commerce users may expect different products even for the same query, due to their diverse personal preferences. It is well-known that there are two types of preferences: long-term ones and short-term ones. The former refers to user' inherent purchasing bias and evolves slowly. By contrast, the latter reflects users' purchasing inclination in a relatively short period. They both affect users' current purchasing intentions. However, few research efforts have been dedicated to jointly model them for the personalized product search. To this end, we propose a novel Attentive Long Short-Term Preference model, dubbed as ALSTP, for personalized product search. Our model adopts the neural networks approach to learn and integrate the long- and short-term user preferences with the current query for the personalized product search. In particular, two attention networks are designed to distinguish which factors in the short-term as well as long-term user preferences are more relevant to the current query. This unique design enables our model to capture users' current search intentions more accurately. Our work is the first to apply attention mechanisms to integrate both long- and short-term user preferences with the given query for the personalized search. 
Extensive experiments over four Amazon product datasets show that our model significantly outperforms several state-of-the-art product search methods in terms of different evaluation metrics."}, "keywords": ["category information"], "citation_intent": "method"} {"citing_id": "2303.12936v1", "cited_id": "1706.03762", "section_title": "Comparing Bert And Distilbert", "citation": "DistilBERT, as a transformerbased model, is better in capturing long-term dependencies in an input sequence #REFR .", "text_before_citation": ["Recently, it was shown that ELMo and BERT make no significant difference in semantic analysis #OTHEREFR .", "Here it is observed that although they are close-by in the null context, DistilBERT is more robust than ELMo in the cross-context in text classification.", "The findings of this study are in line with prior work.", "The fairly comparable scores of ELMo and the traditional baselines in the null context supports the observation of #OTHEREFR that is, when it comes to contextual embeddings, there is only a small improvement in learning semantics over traditional ML methods.", "DistilBERT is on par with or exceeding ELMo on a binary text classification task #OTHEREFR ."], "text_after_citation": ["DistilBERT is lighter than ELMo and has a shorter training time #OTHEREFR .", "Here it should be noted that the experimental settings of the previous work and In this study, ELMo and DistilBERT are compared on their fine-tuning performance on two binary text classification tasks.", "The main focus was to see how much can these models be benefited in a practical way without any modification to the pretraining outputs.", "But the models were actually pretrained on entirely different corpora (ELMo on One Billion Words Benchmark #OTHEREFR , DistilBERT on English Wikipedia and Toronto BookCorpus #OTHEREFR ).", "If the models were also pretrained from scratch on the same corpus, it would be ensured that they utilize the same knowledge to learn the context."], 
"citing_paper_content": {"title": "", "abstract": "I am grateful to my family for their unconditional love and patience. I am grateful to Arzucan \u00d6zg\u00fcr, for being such an inspiring figure by her selfless devotion to research the most righteous way with the passion to contribute to the community. I am grateful to Ali H\u00fcrriyetoglu, for being such a role model, who could somehow always find a way to turn the mist of research questions into a structured path to create practical solutions by combining creativity and technique. I cannot thank enough my dear friends who put up with my whims throughout this journey. I thank fellows from TabiLAB for inspiring me with their brilliance, invaluable insights and recommendations. I thank Ko\u00e7 University EMW research team for their generosity in sharing the data which was created with blood, sweat and tears. I feel lucky that I got to meet fellows in EMW project engineering team who invested their precious time and energy to support me in this study from the very beginning. Lastly, I owe the deepest gratitude to our professors and staff members in our department who taught us how to form such a great community and made it feel like the dearest home from the day one."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["long-term dependencies", "input sequence"], "citation_intent": "background"} {"citing_id": "2304.08072v1", "cited_id": "1805.08841", "section_title": "Introduction", "citation": "It is worth noting that in the literature #REFR , it was emphasized that some unreliable results may be generated during the application of the CycleGAN, resulting in misdiagnosis of the disease. In order to solve this problem, Hiasa et al.", "text_before_citation": ["However, in most cases, it is very difficult to fully obtain the paired medical images, so the monitoring scheme has certain limitations.", "Consequently, unsupervised CycleGAN has a broader application prospect in the field of multimodal medical imaging generation.", "The model can find the mapping relationship between the source image and the target image, which can effectively eliminate the limitation of a pair of corresponding image pairs in the training process.", "In addition, the CycleGAN also uses cycle consistency loss to preserve key attributes between the input image and the generated image. For example, based on the CycleGAN, Wolterink et al.", "realized the direct synthesis of brain CT images from brain MR images without paired data #OTHEREFR ."], "text_after_citation": ["improved the CycleGAN network structure by adding gradient consistency loss during the training, then the accuracy of identifying boundaries is raised #OTHEREFR . 
While Lilian et al.", "improved the loss function to solve the existing problem that the gradient disappearance caused by the non-coincidence of the true sample and the generated sample is difficult to train #OTHEREFR . However, there still exist problems in the CycleGAN, such as long training time, slow convergence speed and easy to ignore spatial location information.", "In recent years, generative adversarial networks (GANs) #OTHEREFR have been proved to be a promising medical imaging generation method #OTHEREFR .", "GANs have shown excellent performance in the field of medical images, including medical imaging reconstruction #OTHEREFR , medical imaging classification #OTHEREFR , medical imaging detection #OTHEREFR , medical imaging segmentation #OTHEREFR and medical imaging denoising #OTHEREFR .", "Multimodal medical imaging generation obtains the mapping function from the source image to the target image by learning."], "citing_paper_content": {"title": "Two-Stage Mr Image Segmentation Method For Brain Tumors Based On Attention Mechanism", "abstract": "Multimodal magnetic resonance imaging (MRI) can reveal different patterns of human tissue and is crucial for clinical diagnosis. However, limited by cost, noise and manual labeling, obtaining diverse and reliable multimodal MR images remains a challenge. For the same lesion, different MRI manifestations have great differences in background information, coarse positioning and fine structure. In order to obtain better generation and segmentation performance, a coordination-spatial attention generation adversarial network (CASP-GAN) based on the cycle-consistent generative adversarial network (CycleGAN) is proposed. The performance of the generator is optimized by introducing the Coordinate Attention (CA) module and the Spatial Attention (SA) module.
The two modules can make full use of the captured location information, accurately locating the interested region, and enhancing the generator model network structure. The ability to extract the structure information and the detailed information of the original medical image can help generate the desired image with higher quality. There exist some problems in the original CycleGAN that the training time is long, the parameter amount is too large, and it is difficult to converge. In response to this problem, we introduce the Coordinate Attention (CA) module to replace the Res Block to reduce the number of parameters, and cooperate with the spatial information extraction network above to strengthen the information extraction ability. On the basis of CASP-GAN, an attentional generative cross-modality segmentation (AGCMS) method is further proposed. This method inputs the modalities generated by CASP-GAN and the real modalities into the segmentation network for brain tumor segmentation. Experimental results show that CASP-GAN outperforms CycleGAN and some state-of-the-art methods in PSNR, SSIM and RMSE in most tasks. In addition, the Dice and Hausdorff95 obtained by AGCMS segmentation are higher than the values corresponding to a single modality, and are close to the values obtained by multiple real modalities, indicating that the method can achieve similar effects as multi-modalities. In summary, the method proposed in this paper can be used as an effective method for clinical diagnosis of brain tumors and has broad application prospects."}, "cited_paper_content": {"title": "Distribution Matching Losses Can Hallucinate Features In Medical Image Translation", "abstract": "This paper discusses how distribution matching losses, such as those used in CycleGAN, when used to synthesize medical images can lead to mis-diagnosis of medical conditions.
It seems appealing to use these new image synthesis methods for translating images from a source to a target domain because they can produce high quality images and some even do not require paired data. However, the basis of how these image translation models work is through matching the translation output to the distribution of the target domain. This can cause an issue when the data provided in the target domain has an over or under representation of some classes (e.g. healthy or sick). When the output of an algorithm is a transformed image there are uncertainties whether all known and unknown class labels have been preserved or changed. Therefore, we recommend that these translated images should not be used for direct interpretation (e.g. by doctors) because they may lead to misdiagnosis of patients based on hallucinated image features by an algorithm that matches a distribution. However there are many recent papers that seem as though this is the goal."}, "keywords": ["CycleGAN", "misdiagnosis"], "citation_intent": "background"} {"citing_id": "2303.14646v1", "cited_id": "1901.00596", "section_title": "B. Topics Regarding Emerging Ml Techniques", "citation": "In response, graph neural networks (GNNs), the deep learning models that are capable of tackling graph-related tasks, were proposed #REFR .", "text_before_citation": ["This is particularly problematic when a grid corresponds to a large region in the real world.", "Instead, we can use graph models for ride-hailing planning #OTHEREFR , #OTHEREFR . 
Road networks can be modeled as graphs.", "In a road graph, nodes are geographical locations, and edges are travel paths.", "Additional information, such as the number of riders and drivers, and the travel costs on paths, can be used as representations of the nodes and edges.", "Such graphs are represented in a non-Euclidean space which is difficult for knowledge extraction using traditional ML techniques."], "text_after_citation": ["We propose to use GNN to solve the ride-hailing planning problem in an end-to-end manner.", "Specifically, we can model road networks via graphs, design a GNN-based model to extract features from the graphs, and then map them to planning decisions.", "There are existing works that have employed GNNs in the transportation area, e.g., to calculate traffic predictions #OTHEREFR , #OTHEREFR and demand predictions #OTHEREFR , #OTHEREFR .", "Only a limited number of works however have used GNN in ride-hailing planning, e.g., #OTHEREFR , #OTHEREFR .", "The algorithm in #OTHEREFR can only match drivers and riders that are located on the same road segment, which restricts many other drivers and riders from being matched, and thus could lead to poor performance of their proposed method in terms of measures such as the total waiting times of riders."], "citing_paper_content": {"title": "A Survey Of Machine Learning-Based Ride-Hailing Planning", "abstract": "Ride-hailing is a sustainable transportation paradigm where riders access door-to-door traveling services through a mobile phone application, which has attracted a colossal amount of usage. There are two major planning tasks in a ride-hailing system: (1) matching, i.e., assigning available vehicles to pick up the riders, and (2) repositioning, i.e., proactively relocating vehicles to certain locations to balance the supply and demand of ride-hailing services. Recently, many studies of ride-hailing planning that leverage machine learning techniques have emerged. 
In this article, we present a comprehensive overview on latest developments of machine learning-based ride-hailing planning. To offer a clear and structured review, we introduce a taxonomy into which we carefully fit the different categories of related works according to the types of their planning tasks and solution schemes, which include collective matching, distributed matching, collective repositioning, distributed repositioning, and joint matching and repositioning. We further shed light on many real-world datasets and simulators that are indispensable for empirical studies on machine learning-based ride-hailing planning strategies. At last, we propose several promising research directions for this rapidly growing research and practical field."}, "cited_paper_content": {"title": "A Comprehensive Survey On Graph Neural Networks", "abstract": "Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications, where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on the existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this article, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art GNNs into four categories, namely, recurrent GNNs, convolutional GNNs, graph autoencoders, and spatial-temporal GNNs. We further discuss the applications of GNNs across various domains and summarize the open-source codes, benchmark data sets, and model evaluation of GNNs. 
Finally, we propose potential research directions in this rapidly growing field."}, "keywords": ["graph-related tasks", "graph neural networks"], "citation_intent": "background"} {"citing_id": "2303.08544v1", "cited_id": "1404.5859", "section_title": "Many-To-One-Stable Matching Solution", "citation": "The order of preferences is given by the strictly ranked rate utilities of the two sides #REFR .", "text_before_citation": ["In this section, we propose assigning/matching the countermeasures to the attacks by a framework that considers stability as the solution concept instead of optimality. The applied framework involves a two-sided matching game.", "A Stable Matching Problem (SMP) is produced by a distributed process that matches together preference relations of the two sides that are of the same size."], "text_after_citation": ["SM solutions have been broadly used in wireless networks for problem-solving #OTHEREFR .", "In our problem, however, the number of detected attacks might be different from the number of countermeasures (i.e., different set sizes), which means we need to seek a many-to-one generalization of SMP called the HR problem #OTHEREFR ."], "citing_paper_content": {"title": "Joint Security-Vs-Qos Game Theoretical Optimization For Intrusion Response Mechanisms For Future Network Systems", "abstract": "Network connectivity exposes the network infrastructure and assets to vulnerabilities that attackers can exploit. Protecting network assets against attacks requires the application of security countermeasures. Nevertheless, employing countermeasures incurs costs, such as monetary costs, along with time and energy to prepare and deploy the countermeasures. Thus, an Intrusion Response System (IRS) shall consider security and QoS costs when dynamically selecting the countermeasures to address the detected attacks. This has motivated us to formulate a joint Security-vs-QoS optimization problem to select the best countermeasures in an IRS. 
The problem is then transformed into a matching game-theoretical model. Considering the monetary costs and attack coverage constraints, we first derive the theoretical upper bound for the problem and later propose stable matching-based solutions to address the trade-off. The performance of the proposed solution, considering different settings, is validated over a series of simulations."}, "cited_paper_content": {"title": "Distributed Channel Assignment In Cognitive Radio Networks: Stable Matching And Walrasian Equilibrium", "abstract": "We consider a set of secondary transmitter-receiver pairs in a cognitive radio setting. Based on channel sensing and access performances, we consider the problem of assigning channels orthogonally to secondary users through distributed coordination and cooperation algorithms. Two economic models are applied for this purpose: matching markets and competitive markets. In the matching market model, secondary users and channels build two agent sets. We implement a stable matching algorithm in which each secondary user, based on his achievable rate, proposes to the coordinator to be matched with desirable channels. The coordinator accepts or rejects the proposals based on the channel preferences which depend on interference from the secondary user. The coordination algorithm is of low complexity and can adapt to network dynamics. In the competitive market model, channels are associated with prices and secondary users are endowed with monetary budget. Each secondary user, based on his utility function and current channel prices, demands a set of channels. A Walrasian equilibrium maximizes the sum utility and equates the channel demand to their supply. We prove the existence of Walrasian equilibrium and propose a cooperative mechanism to reach it. 
The performance and complexity of the proposed solutions are illustrated by numerical simulations."}, "keywords": ["preferences", "strictly ranked rate"], "citation_intent": "background"} {"citing_id": "2304.11618v1", "cited_id": "1812.06410", "section_title": "C. Rq1: Main Results", "citation": "Besides, MANS performs particularly well in Hit@1 and MRR, which are sensitive to high-rank results #REFR .", "text_before_citation": ["We could observe that existing NS methods have poor performance and they are even worse than the normal NS.", "According to our previous analysis, these NS methods are designed for general KGE models and are unsuitable for the multi-modal scenario where modal information is carefully considered.", "They could not align different embeddings of each entity and get bad performance in MMKGE. The outperformance of MANS.", "MANS could achieve better link prediction results compared with baselines.", "For example, MANS-A achieves much better Hit@1 on FB15K compared with baselines (from 0.318 to 0.353, a relative improvement of 9.9%)."], "text_after_citation": ["This means that MANS can largely improve the accurate discriminatory ability of the model by aligning structural and visual embeddings. Necessity and effectiveness of MANS-V.", "According to the previous section, MANS-V is designed to align different modal information.", "Though it does not perform better than baseline methods, MANS-V is the fundamental component of the other three settings of MANS.", "Besides, we could prove with such a result that both modal alignment and positive-negative discrimination are important for MMKGE, which could be achieved by MANS-V and normal NS respectively.", "MANS-T, MANS-H, and MANS-A could perform better because they combine the advantages of both. In summary, MANS-V is a necessary design for MMKGE.
Comparison of different MANS settings."], "citing_paper_content": {"title": "Modality-Aware Negative Sampling For Multi-Modal Knowledge Graph Embedding", "abstract": "Negative sampling (NS) is widely used in knowledge graph embedding (KGE), which aims to generate negative triples to make a positive-negative contrast during training. However, existing NS methods are unsuitable when multi-modal information is considered in KGE models. They are also inefficient due to their complex design. In this paper, we propose Modality-Aware Negative Sampling (MANS) for multi-modal knowledge graph embedding (MMKGE) to address the mentioned problems. MANS could align structural and visual embeddings for entities in KGs and learn meaningful embeddings to perform better in multi-modal KGE while keeping lightweight and efficient. Empirical results on two benchmarks demonstrate that MANS outperforms existing NS methods. Meanwhile, we make further explorations about MANS to confirm its effectiveness."}, "cited_paper_content": {"title": "Nscaching: Simple And Efficient Negative Sampling For Knowledge Graph Embedding", "abstract": "Knowledge graph (KG) embedding is a fundamental problem in data mining research with many real-world applications. It aims to encode the entities and relations in the graph into low dimensional vector space, which can be used for subsequent algorithms. Negative sampling, which samples negative triplets from non-observed ones in the training data, is an important step in KG embedding. Recently, generative adversarial network (GAN), has been introduced in negative sampling. By sampling negative triplets with large scores, these methods avoid the problem of vanishing gradient and thus obtain better performance. However, using GAN makes the original model more complex and harder to train, where reinforcement learning must be used. 
In this paper, motivated by the observation that negative triplets with large scores are important but rare, we propose to directly keep track of them with cache. However, how to sample from and update the cache are two important questions. We carefully design the solutions, which are not only efficient but also achieve good balance between exploration and exploitation. In this way, our method acts as a "distilled" version of previous GAN-based methods, which does not waste training time on additional parameters to fit the full distribution of negative triplets. The extensive experiments show that our method can gain significant improvement on various KG embedding models, and outperform the state-of-the-art negative sampling methods based on GAN."}, "keywords": ["high-rank results"], "citation_intent": "background"} {"citing_id": "2303.05683v1", "cited_id": "1109.2378", "section_title": "Introduction", "citation": "Hierarchical agglomerative clustering algorithms (e.g., #REFR ) allow for partitioning the datasets for which merely a pairwise distance function (e.g., a metric) is defined.", "text_before_citation": ["Cluster analysis (e.g., #OTHEREFR ) is a machine learning technique where we discover interesting or otherwise useful partitions of a given dataset in a purely unsupervised way."], "text_after_citation": ["Most importantly, the number of clusters is not set in advance - a whole hierarchy of nested partitions can be generated with ease and then depicted on a tree-like diagram called a dendrogram.", "Hierarchical agglomerative clustering evolves around one simple idea: in each step, we merge the pair of closest clusters.", "To measure the proximity between two point sets, the intracluster distance is defined as an extension of the point-pairwise distance called a linkage function.", "For instance, in the single linkage approach, the distance between a cluster pair is given by the distance between the closest pair of points, one from the first cluster, the
other from the second one.", "In complete linkage, on the other hand, we take the farthest-away pair."], "citing_paper_content": {"title": "Hierarchical Clustering With Owa-Based Linkages, The Lance-Williams Formula, And Dendrogram Inversions", "abstract": "Agglomerative hierarchical clustering based on Ordered Weighted Averaging (OWA) operators not only generalises the single, complete, and average linkages, but also includes intercluster distances based on a few nearest or farthest neighbours, trimmed and winsorised means of pairwise point similarities, amongst many others. We explore the relationships between the famous Lance-Williams update formula and the extended OWA-based linkages with weights generated via infinite coefficient sequences. Furthermore, we provide some conditions for the weight generators to guarantee the resulting dendrograms to be free from unaesthetic inversions."}, "cited_paper_content": {"title": "Modern Hierarchical, Agglomerative Clustering Algorithms", "abstract": "This paper presents algorithms for hierarchical, agglomerative clustering which perform most efficiently in the general-purpose setup that is given in modern standard software. Requirements are: (1) the input data is given by pairwise dissimilarities between data points, but extensions to vector data are also discussed (2) the output is a \"stepwise dendrogram\", a data structure which is shared by all implementations in current standard software. We present algorithms (old and new) which perform clustering in this setting efficiently, both in an asymptotic worst-case analysis and from a practical point of view. The main contributions of this paper are: (1) We present a new algorithm which is suitable for any distance update scheme and performs significantly better than the existing algorithms. (2) We prove the correctness of two algorithms by Rohlf and Murtagh, which is necessary in each case for different reasons. 
(3) We give well-founded recommendations for the best current algorithms for the various agglomerative clustering schemes."}, "keywords": ["Hierarchical agglomerative clustering"], "citation_intent": "background"} {"citing_id": "2303.01338v1", "cited_id": "1812.02843", "section_title": "C. Impact On Network Interpretation", "citation": "Although it has been demonstrated that adversarial patches are quite powerful at causing misclassification, these patches are highlighted using standard network interpretation methods, thereby disclosing the identity of the adversary #REFR .", "text_before_citation": [], "text_after_citation": ["One of the most well-known network interpretation algorithms, Grad-CAM #OTHEREFR , outperforms other state-of-the-art interpretation algorithms on a sanity check. The Grad-CAM visualization results for traditional adversarial patch vs.", "AdvRain are evaluated using an ImageNet pretrained VGG-19 classifier (adding low vs high frequency patterns).", "Unlike patch-based attacks that shift the model's focus from the object to the location of the patch, making them detectable, our attack causes the model to overlook some important features that help the model make the decision, as shown in Figure 11 ."], "citing_paper_content": {"title": "Advrain: Adversarial Raindrops To Attack Camera-Based Smart Vision Systems", "abstract": "Vision-based perception modules are increasingly deployed in many applications, especially autonomous vehicles and intelligent robots. These modules are being used to acquire information about the surroundings and identify obstacles. Hence, accurate detection and classification are essential to reach appropriate decisions and take appropriate and safe actions at all times. Current studies have demonstrated that "printed adversarial attacks", known as physical adversarial attacks, can successfully mislead perception models such as object detectors and image classifiers.
However, most of these physical attacks are based on noticeable and eye-catching patterns for generated perturbations making them identifiable/detectable by human eye or in test drives. In this paper, we propose a camera-based inconspicuous adversarial attack (AdvRain) capable of fooling camera-based perception systems over all objects of the same class. Unlike mask-based fake-weather attacks that require access to the underlying computing hardware or image memory, our attack is based on emulating the effects of a natural weather condition (i.e., Raindrops) that can be printed on a translucent sticker, which is externally placed over the lens of a camera. Note, such perturbations are still inconspicuous in real-world deployments and their presence goes unnoticed due to their association with a natural phenomenon (as also advocated in [1]). To accomplish this, we provide an iterative process based on performing a random search aiming to identify critical positions to make sure that the performed transformation is adversarial for a target classifier. Our transformation is based on blurring predefined parts of the captured image corresponding to the areas covered by the raindrop. We achieve a drop in average model accuracy of more than 45% and 40% on VGG19 for ImageNet and Resnet34 for Caltech-101, respectively, using only 20 raindrops. Index Terms-Adversarial machine learning, physical adversarial attack, Security, efficiency, perturbations, physical attacks. I. Introduction. The emergence of deep learning (DL) is creating disruptive transformations in a wide range of sectors especially autonomous driving [2]. For instance, leading manufacturers such as Google, Audi, BMW, and Tesla are striving to create autonomous vehicles (AVs) by combining this cutting-edge technology with low-cost cameras forming the vision-based perception modules.
AVs are being increasingly equipped with these modules to address high-pressure real-life scenarios, reach suitable decisions, and take appropriate and safe actions. In fact, their incorporation increased product demand and helped the market of autonomous vehicles grow. According to the Strategic Market Research (SMR), the market for autonomous vehicles will reach $196.97 billion by 2030 [3], growing at a CAGR of 25.7%."}, "cited_paper_content": {"title": "Towards Hiding Adversarial Examples From Network Interpretation", "abstract": "This work was performed under the following financial assistance award: 60NANB18D279 from U.S. Department of Commerce, National Institute of Standards and Technology, and also funding from SAP SE"}, "keywords": ["adversarial patches"], "citation_intent": "method"} {"citing_id": "2305.02911v1", "cited_id": "1610.02391", "section_title": "Related Work", "citation": "Such interpretations can help improve the transparency and trustworthiness of the models, as well as identify potential sources of bias or error #REFR .", "text_before_citation": ["Since the UPD annotation in this dataset is based on a single factor, it cannot be utilized for UPD detection with complex scenarios and UPD factor ranking analysis.", "Consequently, we construct our experimental data using the Place Pulse 2.0 dataset in this work.", "Interpretability of Deep Learning Models.", "Although deep learning techniques have shown significant potential to achieve high accuracy for various complex tasks, it is crucial to understand how the model reaches its conclusions.", "This has led to a growing interest in developing methods for interpreting deep learning models, with the goal of understanding the reasoning behind their predictions."], "text_after_citation": ["Visual explanations are a class of methods for interpreting deep learning models that use visualizations to highlight the features of the input data that are most important for the model's decision
#OTHEREFR .", "These methods offer a clear understanding of how the model processes the input data and which features are responsible for influencing the output.", "Visual explanations can also provide guidance to identify instances where the model may be making errors or exhibiting bias.", "There are various types of visual explanation for deep learning models, such as attention-based maps, activation maps, and occlusion-based methods #OTHEREFR .", "Attention maps #OTHEREFR highlight the regions of an image that are most important to the decision of the model by computing the gradients of the output with respect to the input."], "citing_paper_content": {"title": "Updexplainer: An Interpretable Transformer-Based Framework For Urban Physical Disorder Detection Using Street View Imagery", "abstract": "Urban Physical Disorder (UPD), such as old or abandoned buildings, broken sidewalks, litter, and graffiti, has a negative impact on residents' quality of life. They can also increase crime rates, cause social disorder, and pose a public health risk. Currently, there is a lack of efficient and reliable methods for detecting and understanding UPD. To bridge this gap, we propose UPDExplainer, an interpretable transformer-based framework for UPD detection. We first develop a UPD detection model based on the Swin Transformer architecture, which leverages readily accessible street view images to learn discriminative representations. In order to provide clear and comprehensible evidence and analysis, we subsequently introduce a UPD factor identification and ranking module that combines visual explanation maps with semantic segmentation maps. This novel integrated approach enables us to identify the exact objects within street view images that are responsible for physical disorders and gain insights into the underlying causes. 
Experimental results on the re-annotated Place Pulse 2.0 dataset demonstrate promising detection performance"}, "cited_paper_content": {"title": "Grad-Cam: Visual Explanations From Deep Networks Via Gradient-Based Localization", "abstract": "We propose a technique for producing \"visual explanations\" for decisions from a large class of CNN-based models, making them more transparent. Our approach - Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept, flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. Unlike previous approaches, GradCAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multimodal inputs (e.g. VQA) or reinforcement learning, without any architectural changes or re-training. We combine GradCAM with fine-grained visualizations to create a high-resolution class-discriminative visualization and apply it to off-the-shelf image classification, captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into their failure modes (showing that seemingly unreasonable predictions have reasonable explanations), (b) are robust to adversarial images, (c) outperform previous methods on weakly-supervised localization, (d) are more faithful to the underlying model and (e) help achieve generalization by identifying dataset bias. For captioning and VQA, our visualizations show that even non-attention based models can localize inputs. 
Finally, we conduct human studies to measure if GradCAM explanations help users establish trust in predictions from deep networks and show that GradCAM helps untrained users successfully discern a \"stronger\" deep network from a \"weaker\" one. Our code is available at this https URL. A demo and a video of the demo can be found at this http URL and this http URL."}, "keywords": ["interpretations"], "citation_intent": "background"} {"citing_id": "2303.02906v1", "cited_id": "1707.04993", "section_title": "I. Introduction", "citation": "As illustrated in MoCoGAN #REFR , the latent space of GANs can be decomposed into content subspace and motion subspace.", "text_before_citation": ["That results in more complex networks, larger model sizes, and training costs.", "Early works #OTHEREFR - #OTHEREFR attempt to generate temporally coherent videos from random noises with conv-based GAN #OTHEREFR models directly, leading to high computational costs and unsatisfying performance on large-resolution datasets.", "Video synthesis frameworks based on pre-trained image generators are proposed to pursue higher quality and larger resolution, including MoCoGAN-HD and StyleVideoGAN.", "They design motion generators to manipulate latent codes for synthesizing videos based on pre-trained image generators such as BigGAN #OTHEREFR and StyleGAN2 #OTHEREFR .", "StyleGAN-V #OTHEREFR proposes to generate videos by concatenating sequences of encoded motion codes to the constant input tensor of StyleGAN2."], "text_after_citation": ["Following prior works, we focus on studying modern video generation datasets, where frames in a video share the same contents but vary in motions, e.g., persons talking in FaceForensics 256 2 #OTHEREFR and cloud moving in SkyTimelapse 256 2 #OTHEREFR .", "Previous works #OTHEREFR , #OTHEREFR propose sequential generation methods to synthesize videos with image-based generators.", "Additional motion code generators or randomly sampled motion codes are needed for 
motion generation.", "They train generative models to fit the distribution of videos without unique methods of keeping contents consistent. The contents and motions are learned implicitly during training.", "We find that such methods result in inappropriate content editing when generating motions."], "citing_paper_content": {"title": "Motionvideogan: A Novel Video Generator Based On The Motion Space Learned From Image Pairs", "abstract": "Video generation has achieved rapid progress benefiting from high-quality renderings provided by powerful image generators. We regard the video synthesis task as generating a sequence of images sharing the same contents but varying in motions. However, most previous video synthesis frameworks based on pre-trained image generators treat content and motion generation separately, leading to unrealistic generated videos. Therefore, we design a novel framework to build the motion space, aiming to achieve content consistency and fast convergence for video generation. We present MotionVideoGAN, a novel video generator synthesizing videos based on the motion space learned by pre-trained image pair generators. Firstly, we propose an image pair generator named MotionStyleGAN to generate image pairs sharing the same contents and producing various motions. Then we manage to acquire motion codes to edit one image in the generated image pairs and keep the other unchanged. The motion codes help us edit images within the motion space since the edited image shares the same contents with the other unchanged one in image pairs. Finally, we introduce a latent code generator to produce latent code sequences using motion codes for video generation. Our approach achieves state-of-the-art performance on the most complex video dataset ever used for unconditional video generation evaluation, UCF101. 
The source code is available on https://github.com/bbzhu-jy16/MotionVideoGAN."}, "cited_paper_content": {"title": "Mocogan: Decomposing Motion And Content For Video Generation", "abstract": "Visual signals in a video can be divided into content and motion. While content specifies which objects are in the video, motion describes their dynamics. Based on this prior, we propose the Motion and Content decomposed Generative Adversarial Network (MoCoGAN) framework for video generation. The proposed framework generates a video by mapping a sequence of random vectors to a sequence of video frames. Each random vector consists of a content part and a motion part. While the content part is kept fixed, the motion part is realized as a stochastic process. To learn motion and content decomposition in an unsupervised manner, we introduce a novel adversarial learning scheme utilizing both image and video discriminators. Extensive experimental results on several challenging datasets with qualitative and quantitative comparison to the state-of-the-art approaches, verify effectiveness of the proposed framework. 
In addition, we show that MoCoGAN allows one to generate videos with same content but different motion as well as videos with different content and same motion."}, "keywords": ["GANs"], "citation_intent": "background"} {"citing_id": "2303.00202v1", "cited_id": "2001.07676", "section_title": "Pre-Processing", "citation": "The label token z is mapped into the predicted label y by a verbalizer #REFR to complete the downstream tasks.", "text_before_citation": ["The intuition of prompting is to convert the downstream tasks into a similar form as the pre-training stage.", "For pre-trained models whose pretraining objective is to predict the next token given previous tokens, e.g., GPT-3 #OTHEREFR and BLOOM #OTHEREFR , prompting aims to ask a model to predict the next token (i.e.", "\"correct\" or \"wrong\" in this task) given previous tokens (patch contents and demonstrations).", "To help pre-trained models understand task-specific information, prompting modifies the input data by adding a piece of text description, namely prompt templates.", "The prompt template is a textual string that has two slots: (1) an input slot [X] for original input data x and (2) an answer slot [Z] for the predicted answer/label token z."], "text_after_citation": ["The verbalizer, denoted as V , is a function that maps each predicted label token z to a class\u0177 in the target class set Y :", "V : Z \u2192 Y (1)", "where Z indicates the label token set.", "In the APCA task, the label token set Z includes two tokens, i.e., {\"correct\", \"wrong\"}, and the class set Y contains {-, +} for indicating correct (clean) and wrong (overfitting) patches, respectively.", "Note that the verbalizer is manually defined instead of learned from data."], "citing_paper_content": {"title": "Patchzero: Zero-Shot Automatic Patch Correctness Assessment", "abstract": "Automated Program Repair (APR) techniques have shown more and more promising results in fixing real-world bugs. 
Despite the effectiveness, APR techniques still face an overfitting problem: a generated patch can be incorrect although it passes all tests. It is time-consuming to manually evaluate the correctness of generated patches that can pass all tests. To address this problem, many approaches have been proposed to automatically assess the correctness of patches generated by APR techniques. However, existing approaches require a large set of manually labeled patches as the training data. To mitigate the issue, in this study, we propose PatchZero, the patch correctness assessment by adopting large pre-trained models. Specifically, for patches generated by a new or unseen APR tool, PatchZero does not need labeled patches of this new or unseen APR tool for training (i.e., zero-shot) but directly queries the large pre-trained model to get predictions on the correctness labels without training. In this way, PatchZero can reduce the manual labeling effort when building a model to automatically assess the correctness of generated patches of new APR tools. To provide knowledge regarding the automatic patch correctness assessment (APCA) task to the large pre-trained models, we also design an instance-wise demonstration formation strategy by using contrastive learning. Specifically, PatchZero selects semantically similar patches to help the large pre-trained model to give more accurate predictions on the unlabeled patches. Our experimental results showed that PatchZero can achieve an accuracy of 82.7% and an F1-score of 86.0% on average although no labeled patch of the new or unseen APR tool is available. 
In addition, our proposed technique outperformed the prior state-of-the-art by a large margin."}, "cited_paper_content": {"title": "Exploiting Cloze Questions For Few-Shot Text Classification And Natural Language Inference.", "abstract": "Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases which help the language model understand the given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, regular supervised training is performed on the resulting training set. On several tasks, we show that PET outperforms both supervised training and unsupervised approaches in low-resource settings by a large margin."}, "keywords": ["predicted label"], "citation_intent": "method"} {"citing_id": "2304.08781v1", "cited_id": "1807.06306", "section_title": "A.
Related Works", "citation": "In #REFR , the authors discussed power and time allocation in NOMA-assisted MEC and derived closed-form expressions for optimal MEC offloading policies.", "text_before_citation": ["In #OTHEREFR , the authors studied an IoT MEC network with multiple users and edge servers, where users randomly upload various tasks to the edge servers, and the servers utilize shared computation resources to process the uploaded tasks.", "To optimize the resource utilization, the authors proposed a heuristic resource scheduling policy.", "In #OTHEREFR , the authors defined the Age of Data (AoD) for IoT big data processing in MEC networks and proposed a Multi-armed Bandit (MAB) based online learning algorithm to minimize AoD.", "In #OTHEREFR , the authors considered a UAV-assisted MEC with NOMA and optimized the trajectory and computation offloading using a successive convex approximation.", "In #OTHEREFR , the authors leveraged federated learning (FL) in NOMA-based MEC and used graph theory to improve the communication efficiency of FL and accelerate model convergence."], "text_after_citation": ["In #OTHEREFR , the authors studied a data analysis scenario in MEC, where data is generated by energy harvesting technology-powered wireless devices and uploaded to the MEC server for centralized data processing.", "They proposed a Lyapunov-based algorithm to schedule resources while satisfying AoI constraints.", "However, despite achieving promising performance in various applications, these approaches have neglected to consider the impact of delay, which can be significant when there is a heavy load of requests from multiple users or when downlink resources are limited.", "This can result in a degradation of service quality for the users.", "Thus, there is a need to discuss \"when to serve\" these requests, taking into account the incurred delay."], "citing_paper_content": {"title": "Aoi-Delay Tradeoff In Mobile Edge Caching: A Mixed-Order Drift-Plus-Penalty
Algorithm", "abstract": "We consider a scheduling problem in a Mobile Edge Caching (MEC) network, where a base station (BS) uploads messages from multiple source nodes (SNs) and transmits them to mobile users (MUs) via downlinks, aiming to jointly optimize the average service Age of Information (AoI) and service delay over MUs. This problem is formulated as a difficult sequential decision making problem with discrete-valued and linearly-constrained design variables. To solve this problem, we first approximate its achievable region by characterizing its superset and subset. The superset is derived based on the rate stability theorem, while the subset is obtained using a novel stochastic policy. We also validate that this subset is substantially identical to the achievable region when the number of schedule resources is large. Additionally, we propose a sufficient condition to check the existence of the solution to the problem. Then, we propose the mixed-order drift-plus-penalty algorithm that uses a dynamic programming (DP) method to optimize the summation over a linear and quadratic Lyapunov drift and a penalty term, to handle the product term over different queue backlogs in the objective function. Finally, by associating the proposed algorithm with the stochastic policy, we demonstrate that it achieves an O(1/V) versus O(V) tradeoff for the average AoI and average delay."}, "cited_paper_content": {"title": "Joint Power And Time Allocation For Noma\u2013Mec Offloading", "abstract": "This correspondence considers non-orthogonal multiple access (NOMA) assisted mobile edge computing (MEC), where the power and time allocation is jointly optimized to reduce the energy consumption of computation offloading. 
Closed-form expressions for the optimal power and time allocation solutions are obtained and used to establish the conditions for determining whether the conventional orthogonal multiple access (OMA), pure NOMA or hybrid NOMA should be used for MEC offloading."}, "keywords": ["optimal MEC offloading", "NOMA-assisted MEC"], "citation_intent": "background"} {"citing_id": "2304.02098v1", "cited_id": "1506.02142", "section_title": "Basic Performance", "citation": "This improvement is in line with other work using MC Dropout as much as comparisons can be made across domains, including when it was first proposed and tested on a regression task #REFR .", "text_before_citation": ["Results are summarized in Table 1 .", "First, we can clearly see that adding Gaussian and Shot noise decimates performance, by a significant degree.", "As the model was not trained to be invariant to this type of input noise, this is expected.", "Second, our approach generally leads to a slight performance increase over baseline (with one exception on the VIPER dataset with noise), on both clean and noisy data.", "Given that acquiring a more reliable measure of uncertainty is the goal, rather than outright performance on the PQ metric, any performance improvement is a welcome addition."], "text_after_citation": ["Unfortunately, our approach is not able to compensate for the loss in performance caused by the addition of noise.", "Third, we see that while adding more samples does result in a slight performance increase, 5 samples is typically sufficient to generate a reasonable segmentation result, especially considering the computational burden of using more and more samples.", "Finally, we see that due to the network's tendency to generate many false positives the details around pruning and evaluation are critical.", "The per-pixel evaluation, using the same thresholds as the baseline but without the brute-force subsumption of objects, performs worse than baseline, indicating that the
brute-force approach is effective with respect to the PQ metric even if not useful from an uncertainty standpoint.", "Not applying any thresholds on the other hand harms performance on COCO and VIPER, but actually leads to a slight increase on KITTI-STEP."], "citing_paper_content": {"title": "Uncertainty Estimation In Deep Learning For Panoptic Segmentation", "abstract": "As deep learning-based computer vision algorithms continue to improve and advance the state of the art, their robustness to real-world data continues to lag their performance on datasets. This makes it difficult to bring an algorithm from the lab to the real world. Ensemble-based uncertainty estimation approaches such as Monte Carlo Dropout have been successfully used in many applications in an attempt to address this robustness issue. Unfortunately, it is not always clear if such ensemble-based approaches can be applied to a new problem domain. This is the case with panoptic segmentation, where the structure of the problem and architectures designed to solve it means that unlike image classification or even semantic segmentation, the typical solution of using a mean across samples cannot be directly applied. In this paper, we demonstrate how ensemble-based uncertainty estimation approaches such as Monte Carlo Dropout can be used in the panoptic segmentation domain with no changes to an existing network, providing both improved performance and more importantly a better measure of uncertainty for predictions made by the network. Results are demonstrated quantitatively and qualitatively on the COCO, KITTI-STEP and VIPER datasets."}, "cited_paper_content": {"title": "Dropout As A Bayesian Approximation: Representing Model Uncertainty In Deep Learning", "abstract": "Deep learning tools have gained tremendous attention in applied machine learning. However such tools for regression and classification do not capture model uncertainty.
In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example. We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning."}, "keywords": ["MC Dropout"], "citation_intent": "result"} {"citing_id": "2303.02814v1", "cited_id": "1109.2378", "section_title": "How To Identify The Most Vulnerable Neurons", "citation": "We address this question by hierarchically clustering #REFR the neurons and visualizing the hierarchy with an interactive dendrogram, as shown in the Neuron Cluster View (Figure 2 -e).", "text_before_citation": ["Each row of this weight matrix corresponds to the weight of one class.", "respectively; however, for the case where several classes have a non-zero probability, the view shows multiple bars.", "The second type of information is the two vulnerability maps: one highlights regions where the image perturbation decreases the benign class's probability the most (Figure 2-d3) , and the other highlights regions where the image perturbation increases the adversarial class's probability the most (Figure 2-d4 ).", "While we show the top-20% vulnerable 
pixels by default, this can be interactively changed with a slider placed in the view.", "Once we identify a group of similar neurons, by only examining one representative neuron from the group, we may understand the behaviors of the entire group, making the exploration more efficient."], "text_after_citation": ["As a dissimilarity measure used for clustering, we use the ℓ2 distance between two RFs to measure the corresponding neurons' dissimilarity.", "We want to note that the IoU between two RFs or the ℓ2 distance between the corresponding activation maps could be used instead; however, based on our comparisons included in our Supplementary Material, there is no obvious advantage of one measure against the others.", "When users select a neuron from the Neuron Vulnerability View, its location in the dendrogram is highlighted, as shown in the red line at the far right of Figure 2 -e.", "From this location, users can easily identify neurons with similar behavior as similar neurons share the same ancestors and are generally located close to each other in the dendrogram.", "Also, to see the RFs summarized across similar neurons, the user can click multiple nodes from the dendrogram, which selects the clicked nodes and their descendants."], "citing_paper_content": {"title": "Visual Analytics Of Neuron Vulnerability To Adversarial Attacks On Convolutional Neural Networks", "abstract": "Adversarial attacks on a convolutional neural network (CNN)--injecting human-imperceptible perturbations into an input image--could fool a high-performance CNN into making incorrect predictions. The success of adversarial attacks raises serious concerns about the robustness of CNNs, and prevents them from being used in safety-critical applications, such as medical diagnosis and autonomous driving.
Our work introduces a visual analytics approach to understanding adversarial attacks by answering two questions: (1) which neurons are more vulnerable to attacks and (2) which image features do these vulnerable neurons capture during the prediction? For the first question, we introduce multiple perturbation-based measures to break down the attacking magnitude into individual CNN neurons and rank the neurons by their vulnerability levels. For the second, we identify image features (e.g., cat ears) that highly stimulate a user-selected neuron to augment and validate the neuron's responsibility. Furthermore, we support an interactive exploration of a large number of neurons by aiding with hierarchical clustering based on the neurons' roles in the prediction. To this end, a visual analytics system is designed to incorporate visual reasoning for interpreting adversarial attacks. We validate the effectiveness of our system through multiple case studies as well as feedback from domain experts. CCS Concepts: \u2022 Human-centered computing \u2192 Visual analytics."}, "cited_paper_content": {"title": "Modern Hierarchical, Agglomerative Clustering Algorithms", "abstract": "This paper presents algorithms for hierarchical, agglomerative clustering which perform most efficiently in the general-purpose setup that is given in modern standard software. Requirements are: (1) the input data is given by pairwise dissimilarities between data points, but extensions to vector data are also discussed (2) the output is a \"stepwise dendrogram\", a data structure which is shared by all implementations in current standard software. We present algorithms (old and new) which perform clustering in this setting efficiently, both in an asymptotic worst-case analysis and from a practical point of view. The main contributions of this paper are: (1) We present a new algorithm which is suitable for any distance update scheme and performs significantly better than the existing algorithms. 
(2) We prove the correctness of two algorithms by Rohlf and Murtagh, which is necessary in each case for different reasons. (3) We give well-founded recommendations for the best current algorithms for the various agglomerative clustering schemes."}, "keywords": ["Neuron Cluster View", "interactive dendrogram"], "citation_intent": "method"} {"citing_id": "2304.05195v1", "cited_id": "1804.09893", "section_title": "Introduction", "citation": "For the purpose of privacy protection, we leverage Random Fourier Feature (RFF) #REFR to transform the extracted features to reduce the risk of privacy leakage.", "text_before_citation": ["To overcome these challenges, we propose HPN, a novel method to achieve pFedHPO.", "Instead of a context-free bandit model widely adopted in existing FedHPO works (e.g., FedEx #OTHEREFR ), we design a policy network that takes a client encoding as input and outputs a distribution over the original search space.", "Therefore, each client has its specific encoding to determine its hyperparameters personally.", "Meanwhile, this policy network's parameters are updated based on the observations collected from all the clients, which can be regarded as improving the sample efficiency via model sharing.", "In HPN, a client encoding is calculated based on the training sample of that client to reflect the similarity among clients."], "text_after_citation": ["Furthermore, we design a mechanism to conduct low-fidelity evaluations of hyperparameter configurations, which reduces variance for the signals used to update our policy and, in the meantime, alleviates the impact of model parameters' state.", "Finally, we conduct extensive experiments on FedHPO tasks of various domains, where the generalization error of the hyperparameter configurations searched by HPN is lower than that of related baselines.", "Our contributions are summarized as follows:", "\u2022 We are the first to systematically explore pFedHPO, discussing its setting, challenges, and critical 
problem-solving factors.", "\u2022 We propose a novel pFedHPO method HPN, which satisfies the sample efficiency and privacy preservation requirements."], "citing_paper_content": {"title": "Hpn: Personalized Federated Hyperparameter Optimization", "abstract": "Numerous research studies in the field of federated learning (FL) have attempted to use personalization to address the heterogeneity among clients, one of FL's most crucial and challenging problems. However, existing works predominantly focus on tailoring models. Yet, due to the heterogeneity of clients, they may each require different choices of hyperparameters, which have not been studied so far. We pinpoint two challenges of personalized federated hyperparameter optimization (pFedHPO): handling the exponentially increased search space and characterizing each client without compromising its data privacy. To overcome them, we propose learning a HyperParameter Network (HPN) fed with client encoding to decide personalized hyperparameters. The client encoding is calculated with a random projection-based procedure to protect each client's privacy. Besides, we design a novel mechanism to debias the low-fidelity function evaluation samples for learning HPN. We conduct extensive experiments on FL tasks from various domains, demonstrating the superiority of HPN."}, "cited_paper_content": {"title": "Random Fourier Features For Kernel Ridge Regression: Approximation Bounds And Statistical Guarantees", "abstract": "Random Fourier features is one of the most popular techniques for scaling up kernel methods, such as kernel ridge regression. However, despite impressive empirical results, the statistical properties of random Fourier features are still not well understood. In this paper we take steps toward filling this gap.
Specifically, we approach random Fourier features from a spectral matrix approximation point of view, give tight bounds on the number of Fourier features required to achieve a spectral approximation, and show how spectral matrix approximation bounds imply statistical guarantees for kernel ridge regression. Qualitatively, our results are twofold: on the one hand, we show that random Fourier feature approximation can provably speed up kernel ridge regression under reasonable assumptions. At the same time, we show that the method is suboptimal, and sampling from a modified distribution in Fourier space, given by the leverage function of the kernel, yields provably better performance. We study this optimal sampling distribution for the Gaussian kernel, achieving a nearly complete characterization for the case of low-dimensional bounded datasets. Based on this characterization, we propose an efficient sampling scheme with guarantees superior to random Fourier features in this regime."}, "keywords": ["privacy leakage", "Random Fourier Feature"], "citation_intent": "method"} {"citing_id": "2305.01482v1", "cited_id": "1706.10006", "section_title": "I. Introduction", "citation": "Following this idea, the Automated Audio Captioning (AAC) task appeared in 2017 #REFR and aims to create systems that generate a sentence written in natural language that describes an audio file.", "text_before_citation": ["In recent years, new machine learning systems have been significantly improved for text processing, generation, and understanding, leading to the use of natural language as a global interface between humans and machines.", "Free-form text can contain much more information than a predefined set of classes, which could improve the machine understanding of our world.", "In audio, most of the tasks are focused on classification and localization of sound events."], "text_after_citation": ["The audio can contain various sound events (human, natural, domestic, urban, music, effects...) 
of different lengths, recorded with different devices and in different scenes.", "The description can contain any kind of detail in the audio, with temporal or spatial relations between them (followed by, in the background...) or different characterizations (high-pitched, short, repetitive...).", "Since the descriptions are written by humans, we need to consider different words used to describe similar sounds (Birds are calling / chirping / singing / tweeting), different sentence structures (A door that needs to be oiled / A door with squeaky hinges), subjectivity (Man speaks in a foreign language), high-level descriptions (A vulgar man speaks / Unintelligible conversation), and vagueness (Someone speaks instead of A man gives a speech over a reverberating microphone).", "In AAC, most approaches use deep learning models trained with the standard Cross-Entropy (CE) loss.", "However, this loss tends to generate repetitive and generic content #OTHEREFR and does not take into account synonyms, various sentence structures or the semantic closeness."], "citing_paper_content": {"title": "Multitask Learning In Audio Captioning: A Sentence Embedding Regression Loss Acts As A Regularizer", "abstract": "In this work, we propose to study the performance of a model trained with a sentence embedding regression loss component for the Automated Audio Captioning task. This task aims to build systems that can describe audio content with a single sentence written in natural language. Most systems are trained with the standard Cross-Entropy loss, which does not take into account the semantic closeness of the sentence. We found that adding a sentence embedding loss term reduces overfitting, but also increased SPIDEr from 0.397 to 0.418 in our first setting on the AudioCaps corpus. When we increased the weight decay value, we found our model to be much closer to the current state-of-the-art methods, with a SPIDEr score up to 0.444 compared to a 0.475 score.
Moreover, this model uses eight times less trainable parameters. In this training setting, the sentence embedding loss has no more impact on the model performance."}, "cited_paper_content": {"title": "Automated Audio Captioning With Recurrent Neural Networks", "abstract": "We present the first approach to automated audio captioning. We employ an encoder-decoder scheme with an alignment model in between. The input to the encoder is a sequence of log mel-band energies calculated from an audio file, while the output is a sequence of words, i.e. a caption. The encoder is a multi-layered, bi-directional gated recurrent unit (GRU) and the decoder a multi-layered GRU with a classification layer connected to the last GRU of the decoder. The classification layer and the alignment model are fully connected layers with shared weights between timesteps. The proposed method is evaluated using data drawn from a commercial sound effects library, ProSound Effects. The resulting captions were rated through metrics utilized in machine translation and image captioning fields. 
Results from metrics show that the proposed method can predict words appearing in the original caption, but not always correctly ordered."}, "keywords": ["natural language", "audio file"], "citation_intent": "background"} {"citing_id": "2304.02064v1", "cited_id": "1206.4683", "section_title": "F.1 Amazon Review", "citation": "As presented in the main paper, the original dataset is pre-processed to 5000-dimension bag-of-words features following #REFR .", "text_before_citation": [], "text_after_citation": ["And the target shift data is created by randomly dropping 50% negative reviews.", "Model Structure Representation learner: [5000, 1000, 500, 100] MLP net using 0.7 dropout rate with Relu activation added after each hidden layer and finally output a 100-dimension feature representation.", "Predictor and duplicate predictor: [100, 2] linear transformation followed by a log softmax layer transforming the 100-dimension feature to 2-class log probabilities.", "Loss function: we choose the \"negative log-likelihood loss\" as the loss function.", "Computing Resources The experiments were run on a server with 6 CPUs and 1 GPU of 32GB memory."], "citing_paper_content": {"title": "Algorithm-Dependent Bounds For Representation Learning Of Multi-Source Domain Adaptation", "abstract": "We use information-theoretic tools to derive a novel analysis of Multi-source Domain Adaptation (MDA) from the representation learning perspective. Concretely, we study joint distribution alignment for supervised MDA with few target labels and unsupervised MDA with pseudo labels, where the latter is relatively hard and less commonly studied. We further provide algorithm-dependent generalization bounds for these two settings, where the generalization is characterized by the mutual information between the parameters and the data. Then we propose a novel deep MDA algorithm, implicitly addressing the target shift through joint alignment.
Finally, the mutual information bounds are extended to this algorithm providing a nonvacuous gradient-norm estimation. The proposed algorithm has comparable performance to the state-of-the-art on target-shifted MDA benchmark with improved memory efficiency. 1 We use the terminology of target shift in the rest of the paper to avoid confusion with the label shift assumption, where S(X|Y) = T(X|Y), S(Y) ≠ T(Y)."}, "cited_paper_content": {"title": "Marginalized Denoising Autoencoders For Domain Adaptation", "abstract": "Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters--in fact, they are computed in closed-form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB\u2122, significantly speeds up SDAs by two orders of magnitude.
Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks."}, "keywords": ["original dataset"], "citation_intent": "method"} {"citing_id": "2304.06907v1", "cited_id": "0908.0050", "section_title": "Literature Review", "citation": "Traditional sparse representation approaches can be considered as unsupervised methods that either ignore label information #REFR or learn prototypes for each label separately.", "text_before_citation": ["The prototype-based approaches cluster samples and then choose one or a few samples or their representatives in each cluster #OTHEREFR .", "Dimensionality-reduction-based approaches, such as product quantization #OTHEREFR and hashing #OTHEREFR , focus on encoding high-dimensional feature spaces densely to achieve speed-up in search-based methods as well as reducing the memory costs.", "Our proposed approach belongs to the third group of scalable methods, transform-based approaches #OTHEREFR , that treat image annotation as a multi-label problem.", "In these approaches, both visual and semantic modalities are incorporated into the learning procedure for transforming input data into another space with higher levels of discrimination.", "One of the successful techniques in this category is sparse representation whose objective is to represent each pattern just using the linear combination of a few numbers of prototypes."], "text_after_citation": ["In recent years, many researchers have focused on embedding label information into the prototype learning procedure, generally known as discriminative #OTHEREFR or coupled #OTHEREFR dictionary learning, extensively applied for multi-label classification problems #OTHEREFR .", "Discriminative sparse models have many applications in image classification, super-resolution #OTHEREFR , fault-diagnosis, etc.", "class-specific and shared discriminative dictionary learning (CASDDL) method #OTHEREFR aims to classify the
steel sheets based on the Fisher discrimination method.", "They strive to extract the discriminative features for each class separately (inter-class information), along with a shared sub-dictionary which is common between all the classes for extracting the intra-class information. Li et al.", "#OTHEREFR offered a weighted regularization approach to tackle the noisy images."], "citing_paper_content": {"title": "Toward Real-Time Image Annotation Using Marginalized Coupled Dictionary Learning", "abstract": "In most image retrieval systems, images include various high-level semantics, called tags or annotations. Virtually all the state-of-the-art image annotation methods that handle imbalanced labeling are search-based techniques which are time-consuming. In this paper, a novel coupled dictionary learning approach is proposed to learn a limited number of visual prototypes and their corresponding semantics simultaneously. This approach leads to a real-time image annotation procedure. Another contribution of this paper is that it utilizes a marginalized loss function instead of the squared loss function that is inappropriate for image annotation with imbalanced labels. We have employed a marginalized loss function in our method to leverage a simple and effective method of prototype updating. Meanwhile, we have introduced ℓ1 regularization on semantic prototypes to preserve the sparse and imbalanced nature of labels in learned semantic prototypes. Finally, comprehensive experimental results on various datasets demonstrate the efficiency of the proposed method. Authors contributed equally on this research."}, "cited_paper_content": {"title": "Online Learning For Matrix Factorization And Sparse Coding", "abstract": "Sparse coding--that is, modelling data vectors as sparse linear combinations of basis elements--is widely used in machine learning, neuroscience, signal processing, and statistics.
This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set in order to adapt it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large data sets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large data sets."}, "keywords": ["Traditional sparse representation"], "citation_intent": "background"} {"citing_id": "2304.03563v1", "cited_id": "1201.0490", "section_title": "B. 
Classification Models And Settings", "citation": "We use Scikit-learn #REFR , one of the popular and widely used tools to implement the techniques.", "text_before_citation": ["According to our comparative study, the relationship between question classes and their corresponding feature values might be complex.", "We thus choose five popular supervised machine learning techniques with different learning strategies.", "They are -i) Decision Tree #OTHEREFR (DT) ii) Random Forest (RF) #OTHEREFR , iii) Artificial Neural Network (ANN) #OTHEREFR , iv) K-Nearest Neighbors (KNN) #OTHEREFR , and v) Gaussian Naive Bayes (GNB) #OTHEREFR .", "In particular, we choose these machine learning algorithms because they are widely used in the relevant studies #OTHEREFR .", "We thus believe they can build reliable models to classify promoted and discoursed questions."], "text_after_citation": ["Parameter Tuning.", "Tuning parameters in classifiers is important because it changes the heuristics determining how they learn #OTHEREFR .", "For example, it controls the number of decision trees to use in RF or the number of clusters in KNN.", "Models trained with suboptimal parameter settings may underperform as parameter settings depend on the dataset #OTHEREFR .", "To select the best model configuration, we use GridSearchCV, the cross-validated grid search algorithm of Scikit-learn #OTHEREFR ."], "citing_paper_content": {"title": "Do Subjectivity And Objectivity Always Agree? A Case Study With Stack Overflow Questions", "abstract": "In Stack Overflow (SO), the quality of posts (i.e., questions and answers) is subjectively evaluated by users through a voting mechanism. The net votes (upvotes \u2212 downvotes) obtained by a post are often considered an approximation of its quality. However, about half of the questions that received working solutions got more downvotes than upvotes. Furthermore, about 18% of the accepted answers (i.e., verified solutions) also do not score the maximum votes. 
All these counter-intuitive findings cast doubt on the reliability of the evaluation mechanism employed at SO. Moreover, many users raise concerns against the evaluation, especially downvotes to their posts. Therefore, rigorous verification of the subjective evaluation is highly warranted to ensure a non-biased and reliable quality assessment mechanism. In this paper, we compare the subjective assessment of questions with their objective assessment using 2.5 million questions and ten text analysis metrics. According to our investigation, four objective metrics agree with the subjective evaluation, two do not agree, one either agrees or disagrees, and the remaining three neither agree nor disagree with the subjective evaluation. We then develop machine learning models to classify the promoted and discouraged questions. Our models outperform the state-of-the-art models with a maximum of about 76%-87% accuracy."}, "cited_paper_content": {"title": "Scikit-Learn: Machine Learning In Python", "abstract": "Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings.
Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net."}, "keywords": ["Scikit-learn"], "citation_intent": "method"} {"citing_id": "2304.00320v1", "cited_id": "1904.09080", "section_title": "Inference Stabilizer As Implicit Regularizer", "citation": "Similar results have been obtained by assuming the deep learning algorithms have been driven by an Ornstein-Uhlenbeck like process #REFR , while our work does not rely on such an assumption but is based entirely on our proposed Doubly Stochastic Models.", "text_before_citation": ["The regularization effects of unbiased random label noises should be", "EQUATION", "where \u2207_\u03b8 f(x, \u03b8) refers to the gradient of f over \u03b8 and the effects are controlled by the batch size B and the variance of label noises \u03c3^2 ."], "text_after_citation": [], "citing_paper_content": {"title": "Stochastic Gradient Descent With Random Label Noises: Doubly Stochastic Models And Inference Stabilizer", "abstract": "Random label noises (or observational noises) widely exist in practical machine learning settings. While previous studies primarily focus on the effects of label noises on the performance of learning, our work intends to investigate the implicit regularization effects of the label noises, under mini-batch sampling settings of stochastic gradient descent (SGD), with assumptions that label noises are unbiased. Specifically, we analyze the learning dynamics of SGD over the quadratic loss with unbiased label noises, where we model the dynamics of SGD as a stochastic differential equation (SDE) with two diffusion terms (namely a Doubly Stochastic Model). While the first diffusion term is caused by mini-batch sampling over the (label-noiseless) loss gradients as in many other works on SGD [1, 2], our model investigates the second noise term of SGD dynamics, which is caused by mini-batch sampling over the label noises, as an implicit regularizer.
Our theoretical analysis finds that such an implicit regularizer would favor some convergence points that could stabilize model outputs against perturbation of parameters (namely inference stability). Though similar phenomena have been investigated in [3], our work"}, "cited_paper_content": {"title": "Implicit Regularization For Deep Neural Networks Driven By An Ornstein-Uhlenbeck Like Process", "abstract": "We consider deep networks, trained via stochastic gradient descent to minimize L2 loss, with the training labels perturbed by independent noise at each iteration. We characterize the behavior of the training dynamics near any parameter vector that achieves zero training error, in terms of an implicit regularization term corresponding to the sum over the data points, of the squared L2 norm of the gradient of the model with respect to the parameter vector, evaluated at each data point. We then leverage this general characterization, which holds for networks of any connectivity, width, depth, and choice of activation function, to show that for 2-layer ReLU networks of arbitrary width and L2 loss, when trained on one-dimensional labeled data $(x_1,y_1),\\ldots,(x_n,y_n),$ the only stable solutions with zero training error correspond to functions that: 1) are linear over any set of three or more co-linear training points (i.e. the function has no extra \"kinks\"); and 2) change convexity the minimum number of times that is necessary to fit the training data. Additionally, for 2-layer networks of arbitrary width, with tanh or logistic activations, we show that when trained on a single $d$-dimensional point $(x,y)$ the only stable solutions correspond to networks where the activations of all hidden units at the datapoint, and all weights from the hidden units to the output, take at most two distinct values, or are zero.
In this sense, we show that when trained on \"simple\" data, models corresponding to stable parameters are also \"simple\"; in short, despite fitting in an over-parameterized regime where the vast majority of expressible functions are complicated and badly behaved, stable parameters reached by training with noise express nearly the \"simplest possible\" hypothesis consistent with the data. These results shed light on the mystery of why deep networks generalize so well in practice."}, "keywords": ["deep learning algorithms"], "citation_intent": "result"} {"citing_id": "2303.17963v1", "cited_id": "1401.5508", "section_title": "C. Optimal Control With Generic Basis Functions", "citation": "Instead of the actual basis functions, we use the reduced-rank GP approximation proposed in #REFR to systematically determine the basis functions \u03d5(x, u) and the parameter V of the prior.", "text_before_citation": ["In the following, we show that the proposed optimal control approach can yield good results even if no parametric model is known, which might be the case in practice."], "text_after_citation": ["We choose a GP with a squared exponential kernel and select the hyperparameters of the GP and the approximation based on the training data.", "Fig. 2. Optimal control with generic basis functions: The red area shows the output constraints, the gray area encompasses the 100 scenarios that were used to determine the input trajectory, the green line shows the mean prediction, and the blue line shows one realization of the output of the actual system when the input trajectory u^\u22c6_{0:H} is applied from time t = 0.", "
These parameters are given in Table II.", "Afterwards, K = 100 models are sampled using the PG sampler, and the resulting OCP is solved as in the previous example.", "As Figure 2 shows, the results are similar to the case with known basis functions."], "citing_paper_content": {"title": "Learning-Based Optimal Control With Performance Guarantees For Unknown Systems With Latent States", "abstract": "As control engineering methods are applied to increasingly complex systems, data-driven approaches for system identification appear as a promising alternative to physics-based modeling. While many of these approaches rely on the availability of state measurements, the states of a complex system are often not directly measurable. It may then be necessary to jointly estimate the dynamics and a latent state, making it considerably more challenging to design controllers with performance guarantees. This paper proposes a novel method for the computation of an optimal input trajectory for unknown nonlinear systems with latent states. Probabilistic performance guarantees are derived for the resulting input trajectory, and an approach to validate the performance of arbitrary control laws is presented. The effectiveness of the proposed method is demonstrated in a numerical simulation."}, "cited_paper_content": {"title": "Hilbert Space Methods For Reduced-Rank Gaussian Process Regression", "abstract": "This paper proposes a novel scheme for reduced-rank Gaussian process regression. The method is based on an approximate series expansion of the covariance function in terms of an eigenfunction expansion of the Laplace operator in a compact subset of $$\\mathbb {R}^d$$.
On this approximate eigenbasis, the eigenvalues of the covariance function can be expressed as simple functions of the spectral density of the Gaussian process, which allows the GP inference to be solved under a computational cost scaling as $$\\mathcal {O}(nm^2)$$ (initial) and $$\\mathcal {O}(m^3)$$ (hyperparameter learning) with m basis functions and n data points. Furthermore, the basis functions are independent of the parameters of the covariance function, which allows for very fast hyperparameter learning. The approach also allows for rigorous error analysis with Hilbert space theory, and we show that the approximation becomes exact when the size of the compact subset and the number of eigenfunctions go to infinity. We also show that the convergence rate of the truncation error is independent of the input dimensionality provided that the differentiability order of the covariance function increases appropriately, and for the squared exponential covariance function it is always bounded by $${\\sim }1/m$$ regardless of the input dimensionality. The expansion generalizes to Hilbert spaces with an inner product which is defined as an integral over a specified input density. The method is compared to previously proposed methods theoretically and through empirical tests with simulated and real data."}, "keywords": ["reduced-rank GP approximation"], "citation_intent": "method"} {"citing_id": "2303.00891v1", "cited_id": "1609.05158", "section_title": "B. Network Architecture", "citation": "In contrast, the decoder block incorporates a pixel shuffle layer #REFR between every two stages to increase the feature map's size by a factor of two.
Next, we explain each component in more detail.", "text_before_citation": ["In order to reconstruct the robot, our model uses an image of the robot to predict the 3D coordinates along its centerline.", "Subsequently, a weighted linear least squares algorithm is employed to derive a 3D curve that parametrizes the center of the robot.", "As shown in Fig. 2, we design a network with a shared encoder and three decoders that are composed of four stages each.", "These stages consist of a residual block #OTHEREFR with two convolutional layers that are connected through BatchNorm #OTHEREFR and Leaky ReLU activations #OTHEREFR .", "The encoder block uses a max-pooling layer between every two stages to decrease the feature map's size by a factor of two."], "text_after_citation": ["a) Encoder: To incorporate location information, we add the 2D image indices to I_RGB .", "This results in a 5-channel image, represented as I_in \u2208 R^{H\u00d7W\u00d75} .", "The encoder then extracts multi-scale features from the input image, which are subsequently passed through three decoders for further processing.", "b) Decoders: Given that the image includes background, not every pixel is relevant in determining the robot's shape.", "To address this, the importance decoder learns the significance of each pixel in shape reconstruction."], "citing_paper_content": {"title": "MoSS: Monocular Shape Sensing For Continuum Robots", "abstract": "Continuum robots are promising candidates for interactive tasks in various applications due to their unique shape, compliance, and miniaturization capability. Accurate and real-time shape sensing is essential for such tasks yet remains a challenge. Embedded shape sensing has high hardware complexity and cost, while vision-based methods require a stereo setup and struggle to achieve real-time performance. This paper proposes the first eye-to-hand monocular approach to continuum robot shape sensing.
Utilizing a deep encoder-decoder network, our method, MoSSNet, eliminates the computation cost of stereo matching and reduces requirements on sensing hardware. In particular, MoSSNet comprises an encoder and three parallel decoders to uncover spatial, length, and contour information from a single RGB image, and then obtains the 3D shape through curve fitting. A two-segment tendon-driven continuum robot is used for data collection and testing, demonstrating accurate (mean shape error of 0.91 mm, or 0.36% of robot length) and real-time (70 fps) shape sensing on real-world data. Additionally, the method is optimized end-to-end and does not require fiducial markers, manual segmentation, or camera calibration. Code and datasets will be made available at https://github.com/ContinuumRoboticsLab/MoSSNet. * indicates equal contribution."}, "cited_paper_content": {"title": "Real-Time Single Image And Video Super-Resolution Using An Efficient Sub-Pixel Convolutional Neural Network", "abstract": "Recently, several models based on deep neural networks have achieved great success in terms of both reconstruction accuracy and computational performance for single image super-resolution. In these methods, the low resolution (LR) input image is upscaled to the high resolution (HR) space using a single filter, commonly bicubic interpolation, before reconstruction. This means that the super-resolution (SR) operation is performed in HR space. We demonstrate that this is sub-optimal and adds computational complexity. In this paper, we present the first convolutional neural network (CNN) capable of real-time SR of 1080p videos on a single K2 GPU. To achieve this, we propose a novel CNN architecture where the feature maps are extracted in the LR space. In addition, we introduce an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output.
By doing so, we effectively replace the handcrafted bicubic filter in the SR pipeline with more complex upscaling filters specifically trained for each feature map, whilst also reducing the computational complexity of the overall SR operation. We evaluate the proposed approach using images and videos from publicly available datasets and show that it performs significantly better (+0.15dB on Images and +0.39dB on Videos) and is an order of magnitude faster than previous CNN-based methods."}, "keywords": ["detail", "pixel shuffle layer"], "citation_intent": "background"} {"citing_id": "2303.11162v1", "cited_id": "1703.10593", "section_title": "Competitors:", "citation": "CycleGAN #REFR utilises cycle-consistency loss with a GAN model for bidirectional image-to-image translation.", "text_before_citation": ["We compare our proposed framework with various state-of-the-art (SOTA) methods and two self-designed baselines.", "Among those, pix2pix #OTHEREFR uses a conditional generative model for sketch-to-photo translation.", "MUNIT #OTHEREFR aims to produce diverse outputs given one input sketch.", "It tries to decompose an image into a content and a style code followed by learning those codes simultaneously."], "text_after_citation": ["U-GAT-IT #OTHEREFR uses an attention module for image translation while focusing on the domain-discriminative parts.", "Moreover, employing a pre-trained StyleGAN #OTHEREFR , we compare with the baseline B-Sketch Mapper which is equivalent to the baseline sketch mapper described in Sec.
5.1.", "Following optimisation-based GAN inversion #OTHEREFR , we design B-Sketch Optimiser where we iteratively optimise the latent code using the input sketch as ground-truth with perceptual loss #OTHEREFR .", "For a fair comparison, we trained all competing methods in a supervised manner with sketch-photo pairs from ShoeV2, ChairV2, and Handbag datasets."], "citing_paper_content": {"title": "Picture That Sketch: Photorealistic Image Generation From Abstract Sketches", "abstract": "Figure 1. (a) Set of photos generated by the proposed method. (b) While existing methods can generate faithful photos from perfectly pixel-aligned edgemaps, they fall short drastically in case of highly deformed and sparse free-hand sketches. In contrast, our autoregressive sketch-to-photo generation model produces highly photorealistic outputs from highly abstract sketches."}, "cited_paper_content": {"title": "Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks", "abstract": "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \\rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \\rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \\approx X$ (and vice versa).
Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach."}, "keywords": ["GAN model"], "citation_intent": "method"} {"citing_id": "2303.07265v1", "cited_id": "1912.01703", "section_title": "A. Model Architecture", "citation": "Then the output of the dropout layer is fed to the output layer followed by ReLU activation and gives a vector encoding HEL's (DA, action) output pair. For implementation, we used the PyTorch library #REFR .", "text_before_citation": ["The HEL agent network includes two fully connected layers followed by a dropout layer (ratio=0.1)."], "text_after_citation": [], "citing_paper_content": {"title": "Multimodal Reinforcement Learning For Robots Collaborating With Humans", "abstract": "Robot assistants for older adults and people with disabilities need to interact with their users in collaborative tasks. The core component of these systems is an interaction manager whose job is to observe and assess the task, and infer the state of the human and their intent to choose the best course of action for the robot. Due to the sparseness of the data in this domain, the policy for such multi-modal systems is often crafted by hand; as the complexity of interactions grows this process is not scalable. In this paper, we propose a reinforcement learning (RL) approach to learn the robot policy. In contrast to the dialog systems, our agent is trained with a simulator developed by using human data and can deal with multiple modalities such as language and physical actions. We conducted a human study to evaluate the performance of the system in the interaction with a user.
Our designed system shows promising preliminary results when it is used by a real user."}, "cited_paper_content": {"title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "abstract": "Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks."}, "keywords": ["HEL's (DA, action", "PyTorch library"], "citation_intent": "method"} {"citing_id": "2304.02098v1", "cited_id": "1506.02142", "section_title": "Our Approach", "citation": "We use Monte Carlo Dropout for this, conducting multiple passes through the network with dropout enabled at inference time #REFR .", "text_before_citation": ["Our task is to aggregate multiple samples collected from the neural network in such a way that we obtain both an acceptable panoptic segmentation result and a corresponding pixel-level uncertainty estimate.", "This occurs in three main parts, detailed in Algorithm 4 with Fig.
1 providing a high-level overview of the process.", "The first is collecting and processing the samples from the network."], "text_after_citation": ["However, these samples must be at least minimally processed, with many of the detected proposals being nothing more than noise (labeled as \"background\") #OTHEREFR and requiring removal.", "Failing to do so introduces both needless computational overhead at later steps and a significant degree of noise to any generated segmentation.", "This part is straightforward, being an adaptation of the baseline approach #OTHEREFR and described in Algorithm 1.", "For each sample, we first examine each proposal, and verify that the associated class label obtained through the argmax and softmax operators on the logits is not \"background\".", "Then, applying the softmax and argmax operators to the masks yields a proposal map, thus linking each pixel to an instance ID and a classification."], "citing_paper_content": {"title": "Uncertainty Estimation In Deep Learning For Panoptic Segmentation", "abstract": "As deep learning-based computer vision algorithms continue to improve and advance the state of the art, their robustness to real-world data continues to lag their performance on datasets. This makes it difficult to bring an algorithm from the lab to the real world. Ensemble-based uncertainty estimation approaches such as Monte Carlo Dropout have been successfully used in many applications in an attempt to address this robustness issue. Unfortunately, it is not always clear if such ensemble-based approaches can be applied to a new problem domain. This is the case with panoptic segmentation, where the structure of the problem and architectures designed to solve it means that unlike image classification or even semantic segmentation, the typical solution of using a mean across samples cannot be directly applied.
In this paper, we demonstrate how ensemble-based uncertainty estimation approaches such as Monte Carlo Dropout can be used in the panoptic segmentation domain with no changes to an existing network, providing both improved performance and, more importantly, a better measure of uncertainty for predictions made by the network. Results are demonstrated quantitatively and qualitatively on the COCO, KITTI-STEP and VIPER datasets."}, "cited_paper_content": {"title": "Dropout As A Bayesian Approximation: Representing Model Uncertainty In Deep Learning", "abstract": "Deep learning tools have gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. In comparison, Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. In this paper we develop a new theoretical framework casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes. A direct result of this theory gives us tools to model uncertainty with dropout NNs -- extracting information from existing models that has been thrown away so far. This mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy. We perform an extensive study of the properties of dropout's uncertainty. Various network architectures and non-linearities are assessed on tasks of regression and classification, using MNIST as an example.
We show a considerable improvement in predictive log-likelihood and RMSE compared to existing state-of-the-art methods, and finish by using dropout's uncertainty in deep reinforcement learning."}, "keywords": ["Monte Carlo Dropout"], "citation_intent": "method"} {"citing_id": "2305.02869v1", "cited_id": "1511.05641", "section_title": "C Details For Fine-Tuning", "citation": "However, the SQuAD results of MSG significantly outperform the baselines in all cases (Table 3) #REFR , which can be converted into additional advantages in pre-training time.", "text_before_citation": ["We fine-tune for 5 epochs for small datasets including CoLA, MRPC, STS-B, and RTE (we exclude WNLI following most related work #OTHEREFR ), and 3 epochs for other tasks.", "For SQuAD, we fine-tune with a batch size of 12 and a learning rate of 3e-5 for 2 epochs.", "SQuAD metrics are very sensitive to sequence length, and most models work well only with a sequence length of more than 384.", "Thus, we continue pre-training after the whole schedule with a sequence length of 512 for 100k steps for all the methods compared.", "This yields a slight drop in speed-up ratios (120% to 100% on Bert-large; it disappears if pre-training with a large sequence length from scratch)."], "text_after_citation": [], "citing_paper_content": {"title": "2X Faster Language Model Pre-Training Via Masked Structural Growth", "abstract": "Acceleration of large language model pre-training is a critical issue in present NLP research. In this paper, we focus on speeding up pre-training by progressively growing from a small Transformer structure to a large one. There are two main research problems related to progressive growth: growth schedule and growth operator. For growth schedule, existing work has explored multi-stage expansion of depth and feedforward layers. However, the impact of each dimension on the schedule's efficiency is still an open question.
For growth operator, existing work relies on the initialization of new weights to inherit knowledge, and achieves only non-strict function preservation, limiting further optimization of training dynamics. To address these issues, we propose Masked Structural Growth (MSG), including growth schedules involving all possible dimensions and strictly function-preserving growth operators that are independent of the initialization of new weights. Experiments show that MSG is significantly faster than related work: we achieve a speed-up of 80% for Bert-base and 120% for Bert-large pre-training. Moreover, MSG is able to improve fine-tuning performance at the same time. 1 We will release our code for maximum reproducibility."}, "cited_paper_content": {"title": "Net2Net: Accelerating Learning Via Knowledge Transfer", "abstract": "We introduce techniques for rapidly transferring the information stored in one neural net into another neural net. The main purpose is to accelerate the training of a significantly larger neural net. During real-world workflows, one often trains very many different neural networks during the experimentation and design process. This is a wasteful process in which each new model is trained from scratch. Our Net2Net technique accelerates the experimentation process by instantaneously transferring the knowledge from a previous network to each new deeper or wider network. Our techniques are based on the concept of function-preserving transformations between neural network specifications. This differs from previous approaches to pre-training that altered the function represented by a neural net when adding layers to it. Using our knowledge transfer mechanism to add depth to Inception modules, we demonstrate a new state of the art accuracy rating on the ImageNet dataset.
Related Works", "citation": "In accordance with the classification in #REFR , the above-mentioned algorithms were designed based on general SSI.", "text_before_citation": ["This approach integrates an Lp-norm regularization factor of the filter weight directly into the objective functions of the AFAs to quicken the convergence process of non-zero elements of underlying systems (where p = 0, 1", ", namely obtaining L0- and L1-norms).", "Furthermore, several sparsity-aware LMS-type algorithms and corresponding improved variants have also been designed #OTHEREFR - #OTHEREFR .", "Likewise, the L1-norm NSAF (L1-NSAF) and reweighted L1-NSAF (L1-RNSAF) algorithms were presented by employing L1- and reweighted L1-norms, respectively, to address highly correlated signals #OTHEREFR .", "Similarly, through introducing an L0-norm penalty factor, the resulting L0-norm constraint NSAF (L0-NSAF) algorithm realized higher filtering accuracy and faster convergence #OTHEREFR , #OTHEREFR ."], "text_after_citation": ["However, for block-sparse systems, whose dominant coefficients are clustered into several groups and which are generally encountered in satellite-linked and MIMO communications, they may produce an evident decrease in learning performance.", "With regard to this, some block-sparsity-induced-type AFAs were developed #OTHEREFR - #OTHEREFR .", "It is indisputable that AFAs with a constant step-size generate conflicting requirements between convergence rate and filtering accuracy.", "To eliminate this limitation, based on NSAF, various variable step-size (VSS) variants have been established one after another #OTHEREFR - #OTHEREFR .", "The variable step-size matrix NSAF (VSSM-NSAF) was first designed by assuming the subband noise powers are the same as the a posteriori error, and it acquired excellent estimation accuracy and quicker convergence behavior in comparison with NSAF #OTHEREFR ."], "citing_paper_content": {"title": "", "abstract": "Limited by fixed step-size and sparsity penalty factor,
the conventional sparsity-aware normalized subband adaptive filtering (NSAF) type algorithms suffer from trade-off requirements of high filtering accuracy and quicker convergence behavior. To deal with this problem, this paper proposes variable step-size L0-norm constraint NSAF algorithms (VSS-L0-NSAFs) for sparse system identification. We first analyze the mean-square-deviation (MSD) statistical behavior of the L0-NSAF algorithm according to a novel recursion form and arrive at corresponding expressions for the cases where the background noise variance is available and unavailable, where correlation"}, "cited_paper_content": {"title": "Block-Sparsity-Induced Adaptive Filter For Multi-Clustering System Identification", "abstract": "In order to improve the performance of least mean square (LMS)-based adaptive filtering for identifying block-sparse systems, a new adaptive algorithm called block-sparse LMS (BS-LMS) is proposed in this paper. The basis of the proposed algorithm is to insert a penalty of block-sparsity, which is a mixed $l_{2, 0}$ norm of adaptive tap-weights with equal group partition sizes, into the cost function of the traditional LMS algorithm. To describe a block-sparse system response, we first propose a Markov-Gaussian model, which can generate a kind of system responses of arbitrary average sparsity and arbitrary average block length using given parameters. Then we present theoretical expressions of the steady-state misadjustment and transient convergence behavior of BS-LMS with an appropriate group partition size for white Gaussian input data. Based on the above results, we theoretically demonstrate that BS-LMS has much better convergence behavior than $l_0$-LMS with the same small level of misadjustment.
Finally, numerical experiments verify that all of the theoretical analysis agrees well with simulation results in a large range of parameters."}, "keywords": ["above-mentioned algorithms"], "citation_intent": "method"} {"citing_id": "2304.13672v1", "cited_id": "2002.02255", "section_title": "V. Discussion", "citation": "Therefore, we think the adaptation of CT\u2192MRI is more difficult than MRI\u2192CT, which is consistent with discussions in #REFR . We think it is probably because MRI Fig. 12. Visualization of the training process of FVP.", "text_before_citation": ["For the experiments on three datasets in Table I, Table II and Table IV with two adaptation directions, the proposed FVP yields the best Dice values in 5 out of all 6 cases, and the best ASD values in all 6 cases.", "We think that the experiments clearly demonstrate that our FVP performs generally better compared with other DA methods.", "In Table I , the number of CT samples (30) is larger than the number of MRI samples #OTHEREFR .", "Meanwhile, the Dice score by source-only in CT\u2192MRI (0.517) is much lower than that in MRI\u2192CT (0.647).", "In Table II , the Dice score by source-only in CT\u2192MRI (0.412) is much lower than that in MRI\u2192CT (0.714)."], "text_after_citation": ["The first and second rows are visualizations of the real and imaginary parts of the prompt in the frequency domain, respectively.", "Note that the size of the prompt is 16 \u00d7 16.", "The third row is the prompt in the spatial domain with a size of 256 \u00d7 256.", "provides more texture details in organs, compared with CT, which makes training the source segmentation model on CT result in a worse model for adaptation than training on MRI.", "FVP does not work well on Dice in MRI\u2192CT adaptation in Table II , probably because MRI\u2192CT is the least discriminating task among all four tasks in Tables I and II."], "citing_paper_content": {"title": "Fvp: Fourier Visual Prompting For Source-Free Unsupervised 
Domain Adaptation Of Medical Image Segmentation", "abstract": "Medical image segmentation methods normally perform poorly when there is a domain shift between training and testing data. Unsupervised Domain Adaptation (UDA) addresses the domain shift problem by training the model using both labeled data from the source domain and unlabeled data from the target domain. Source-Free UDA (SFUDA) was recently proposed for UDA without requiring the source data during the adaptation, due to data privacy or data transmission issues, which normally adapts the pre-trained deep model in the testing stage. However, in real clinical scenarios of medical image segmentation, the trained model is normally frozen in the testing stage. In this paper, we propose Fourier Visual Prompting (FVP) for SFUDA of medical image segmentation. Inspired by prompting learning in natural language processing, FVP steers the frozen pre-trained model to perform well in the target domain by adding a visual prompt to the input target data. In FVP, the visual prompt is parameterized using only a small amount of low-frequency learnable parameters in the input frequency space, and is learned by minimizing the segmentation loss between the predicted segmentation of the prompted target image and reliable pseudo segmentation label of the target image under the frozen model. To our knowledge, FVP is the first work to apply visual prompts to SFUDA for medical image segmentation. 
The proposed FVP is validated using three public datasets, and experiments demonstrate that FVP yields better segmentation results, compared with various existing methods."}, "cited_paper_content": {"title": "Unsupervised Bidirectional Cross-Modality Adaptation Via Deeply Synergistic Image And Feature Alignment For Medical Image Segmentation", "abstract": "Unsupervised domain adaptation has increasingly gained interest in medical image computing, aiming to tackle the performance degradation of deep neural networks when being deployed to unseen data with heterogeneous characteristics. In this work, we present a novel unsupervised domain adaptation framework, named as Synergistic Image and Feature Alignment (SIFA), to effectively adapt a segmentation network to an unlabeled target domain. Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives. In particular, we simultaneously transform the appearance of images across domains and enhance domain-invariance of the extracted features by leveraging adversarial learning in multiple aspects and with a deeply supervised mechanism. The feature encoder is shared between both adaptive perspectives to leverage their mutual benefits via end-to-end learning. We have extensively evaluated our method with cardiac substructure segmentation and abdominal multi-organ segmentation for bidirectional cross-modality adaptation between MRI and CT images. 
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images, and outperforms the state-of-the-art domain adaptation approaches by a large margin."}, "keywords": ["training process", "MRI\u2192CT"], "citation_intent": "result"} {"citing_id": "2303.07538v1", "cited_id": "1412.6980", "section_title": "Model Training Procedure", "citation": "We used the Adam optimizer #REFR with an L2 regularization penalty for performing stochastic gradient descent in all models.", "text_before_citation": ["HiSSNet and ProtoNet models were trained using episodic batches, where each batch contains a random subset of 12 sound classes from the taxonomy and 5 recordings for each class.", "To sufficiently balance training on both SED and SID classes, we created three different batch configurations: SED only, SED & SID, and SID only.", "The configuration for each batch was randomly selected during training, with a weight distribution of 60%/20%/20%, respectively.", "Each epoch contained 100 episodic batches, and the model was trained for 1000 epochs."], "text_after_citation": ["All HiSSNet models were trained on the full SEID dataset, while SOTA baseline models were trained on SED-specific or SID-specific subsets of the dataset.", "For the SED baselines, we implemented a dilated convolutional recurrent neural network (CRNN) #OTHEREFR and a non-hierarchical ProtoNet #OTHEREFR , and for the SID baselines we implemented a non-hierarchical ProtoNet #OTHEREFR .", "The SED baselines were trained on the data subset from ESC50, TUT, TAU, FSD50K and BBC, while the SID baselines were trained on the data subset from VCTK and LibriSpeech.", "The dilated CRNN was trained using standard batch processing with a batch size of 128."], "citing_paper_content": {"title": "Hissnet: Sound Event Detection And Speaker Identification Via Hierarchical Prototypical Networks For Low-Resource Headphones", "abstract": "Modern 
noise-cancelling headphones have significantly improved users' auditory experiences by removing unwanted background noise, but they can also block out sounds that matter to users. Machine learning (ML) models for sound event detection (SED) and speaker identification (SID) can enable headphones to selectively pass through important sounds; however, implementing these models for a user-centric experience presents several unique challenges. First, most people spend limited time customizing their headphones, so the sound detection should work reasonably well out of the box. Second, the models should be able to learn over time the specific sounds that are important to users based on their implicit and explicit interactions. Finally, such models should have a small memory footprint to run on low-power headphones with limited on-chip memory. In this paper, we propose addressing these challenges using HiSSNet (Hierarchical SED and SID Network). HiSSNet is an SEID (SED and SID) model that uses a hierarchical prototypical network to detect both general and specific sounds of interest and characterize both alarm-like and speech sounds. We show that HiSSNet outperforms an SEID model trained using non-hierarchical prototypical networks by 6.9-8.6%. When compared to state-of-the-art (SOTA) models trained specifically for SED or SID alone, HiSSNet achieves similar or better performance while reducing the memory footprint required to support multiple capabilities on-device."}, "cited_paper_content": {"title": "Adam: A Method For Stochastic Optimization", "abstract": "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. 
The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm."}, "keywords": ["stochastic gradient descent"], "citation_intent": "method"} {"citing_id": "2304.03367v1", "cited_id": "2001.09336", "section_title": "V. Simulations", "citation": "First, in a similar fashion to #REFR we report the coverage of the actual safe region and the overlap with the unsafe one.", "text_before_citation": ["In this section, we carry out simulations to quantify the performance of IGCI.", "We utilize a two-dimensional navigation task and a robotic arm environment and consider scenarios of perfect and noisy state observations, as well as suboptimal trajectories.", "To evaluate the quality of the inferred constraints, we utilize a number of metrics."], "text_after_citation": ["More specifically, we define the constrained and unconstrained regions as", "EQUATION", "respectively.", "The corresponding regions constructed using the estimated parameters c i , i = 1, . . .", ", N c are designated with \u00c3 and \u00c3 c , respectively."], "citing_paper_content": {"title": "Constraint Inference In Control Tasks From Expert Demonstrations Via Inverse Optimization", "abstract": "Inferring unknown constraints is a challenging and crucial problem in many robotics applications. 
When only expert demonstrations are available, it becomes essential to infer the unknown domain constraints to deploy additional agents effectively. In this work, we propose an approach to infer affine constraints in control tasks after observing expert demonstrations. We formulate the constraint inference problem as an inverse optimization problem, and we propose an alternating optimization scheme that infers the unknown constraints by minimizing a KKT residual objective. We demonstrate the effectiveness of our method in a number of simulations, and show that our method can infer less conservative constraints than a recent baseline method while maintaining comparable safety guarantees."}, "cited_paper_content": {"title": "Learning Constraints From Locally-Optimal Demonstrations Under Cost Function Uncertainty", "abstract": "We present an algorithm for learning parametric constraints from locally-optimal demonstrations, where the cost function being optimized is uncertain to the learner. Our method uses the Karush-Kuhn-Tucker (KKT) optimality conditions of the demonstrations within a mixed integer linear program (MILP) to learn constraints which are consistent with the local optimality of the demonstrations, by either using a known constraint parameterization or by incrementally growing a parameterization that is consistent with the demonstrations. We provide theoretical guarantees on the conservativeness of the recovered safe/unsafe sets and analyze the limits of constraint learnability when using locally-optimal demonstrations. 
We evaluate our method on high-dimensional constraints and systems by learning constraints for 7-DOF arm and quadrotor examples, show that it outperforms competing constraint-learning approaches, and can be effectively used to plan new constraint-satisfying trajectories in the environment."}, "keywords": ["actual safe region"], "citation_intent": "method"} {"citing_id": "2303.16322v1", "cited_id": "1809.04184", "section_title": "Reducing Training Time", "citation": "Table 3 required between 0.49 and 0.8 GPU days to be discovered by FMAS, which is negligible compared to the 2,590 GPU days required by DPC #REFR .", "text_before_citation": ["Similar to Table 3 , Table 4 reports results when using the Mo-bileNetV2 backbone and searching for 25 generations.", "In addition to FLOPs and parameters, we also report inference latency on the GAP8 for the original model, FCN-VGG16, and selected search results.", "Note that while FCN-VGG16 uses only GAP8-supported operations, making it a suitable baseline, it requires more than 8\u00d7 more RAM than the GAP8 has, and therefore cannot be deployed. Table 1 reports their hyperparameters.", "FMAS-F1 cuts the number of FLOPs by 43% with respect to DL3+, and network parameters by 7.9%, for a relative increase of 5.2% in MIoU error; it was discovered in 0.68 GPU days (generation 17).", "FMAS-F2 trades off only 2.5% of the MIoU error of DL3+ for reducing FLOPs by 10%, and network parameters by 20%, in 0.52 GPU days (generation 13)."], "text_after_citation": ["Although DPC outperforms the MIoU of FMAS-F2 by 6.1%, FMAS-F2 cuts FLOPs and parameters by 9 and 22% respectively in only 0.65 GPU days."], "citing_paper_content": {"title": "Fmas: Fast Multi-Objective Supernet Architecture Search For Semantic Segmentation", "abstract": "We present FMAS, a fast multi-objective neural architecture search framework for semantic segmentation. 
FMAS subsamples the structure and pre-trained parameters of DeepLabV3+, without finetuning, dramatically reducing training time during search. To further reduce candidate evaluation time, we use a subset of the validation dataset during the search. Only the final, Pareto non-dominated, candidates are ultimately fine-tuned using the complete training set. We evaluate FMAS by searching for models that effectively trade accuracy and computational cost on the PASCAL VOC 2012 dataset. FMAS finds competitive designs quickly, e.g., taking just 0.5 GPU days to discover a DeepLabV3+ variant that reduces FLOPs and parameters by 10% and 20% respectively, for less than 3% increased error. We also search on an edge device called GAP8 and use its latency as the metric. FMAS is capable of finding 2.2\u00d7 faster network with 7.61% MIoU loss."}, "cited_paper_content": {"title": "Searching For Efficient Multi-Scale Architectures For Dense Image Prediction", "abstract": "The design of neural network architectures is an important component for achieving state-of-the-art performance with machine learning systems across a broad array of tasks. Much work has endeavored to design and build architectures automatically through clever construction of a search space paired with simple learning algorithms. Recent progress has demonstrated that such meta-learning methods may exceed scalable human-invented architectures on image classification tasks. An open question is the degree to which such methods may generalize to new domains. In this work we explore the construction of meta-learning techniques for dense image prediction focused on the tasks of scene parsing, person-part segmentation, and semantic image segmentation. Constructing viable search spaces in this domain is challenging because of the multi-scale representation of visual information and the necessity to operate on high resolution imagery. 
Based on a survey of techniques in dense image prediction, we construct a recursive search space and demonstrate that even with efficient random search, we can identify architectures that outperform human-invented architectures and achieve state-of-the-art performance on three dense prediction tasks including 82.7\\% on Cityscapes (street scene parsing), 71.3\\% on PASCAL-Person-Part (person-part segmentation), and 87.9\\% on PASCAL VOC 2012 (semantic image segmentation). Additionally, the resulting architecture is more computationally efficient, requiring half the parameters and half the computational cost as previous state of the art systems."}, "keywords": ["0.8 GPU days"], "citation_intent": "result"} {"citing_id": "2305.02374v1", "cited_id": "1810.04805", "section_title": "Bert-Based Word Embedding", "citation": "In this article, we employ the BERT model #REFR as one of the most recent PLM approaches.", "text_before_citation": ["The objective of word embedding [7] is to map words to semantic vectors to be used in machine learning algorithms.", "It has been demonstrated to be a reliable method for extracting meaningful word representations based on their context #OTHEREFR .", "Diverse word embedding techniques, including Skip-gram #OTHEREFR and matrix factorization techniques such as GloVe #OTHEREFR , have been suggested to produce meaningful word representations for neural network models.", "Pre-trained language models (PLMs) typically employ unlabelled data to learn model parameters [3] ."], "text_after_citation": ["The general BERT architecture is shown in Figure 2.", "BERT uses a bi-directional transformer, in which representations are jointly conditioned on both the left and right context in all layers #OTHEREFR .", "This distinguishes BERT from Word2Vec and GloVe models, which produce an embedding in one direction, ignoring contextual differences."], "citing_paper_content": {"title": "A Novel Plagiarism Detection Approach Combining 
Bert-Based Word Embedding, Attention-Based Lstms And An Improved Differential Evolution Algorithm", "abstract": "Detecting plagiarism involves finding similar items in two different sources. In this article, we propose a novel method for detecting plagiarism that is based on attention mechanism-based long short-term memory (LSTM) and bidirectional encoder representations from transformers (BERT) word embedding, enhanced with an optimized differential evolution (DE) method for pre-training and a focal loss function for training. BERT can be included in a downstream task and fine-tuned as a task-specific structure, while the trained BERT model is capable of detecting various linguistic characteristics. Unbalanced classification is one of the primary issues with plagiarism detection. We suggest a focal loss-based training technique that carefully learns minority class instances to solve this. Another issue that we tackle is the training phase itself, which typically employs gradient-based methods like back-propagation for the learning process and thus suffers from some drawbacks, including sensitivity to initialization. To initiate the BP process, we suggest a novel DE algorithm that makes use of a clustering-based mutation operator. Here, a winning cluster is identified for the current DE population, and a fresh updating method is used to produce potential answers. We evaluate our proposed approach on three benchmark datasets (MSRP, SNLI, and SemEval2014) and demonstrate that it performs well when compared to both conventional and population-based methods."}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. 
Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. ::: BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["BERT model"], "citation_intent": "method"} {"citing_id": "2305.02056v1", "cited_id": "1806.02771", "section_title": "S", "citation": "We will now rewrite the formula from #REFR by adding a global disjunction over all values of these numbers g.", "text_before_citation": ["In particular, if for a, b \u2208 N we use Q a,b to denote the set of all numbers q of granularity b \u2208 N such that 0 \u2264 q \u2264 a, then", "\u2022 |X 1 | \u2264 |X 2 | (for some X 1 , X 2 \u2286 V", "g(t \u2264 t ) \u2208 Q N +1,\u03b3 .", "Moreover, the same also holds for t 2 (X\u0232 ) t 1 (X) since we can assume g(t 2 t 1 ) = t 1 (X), irrespective of t 2 (X\u0232 ) or the instantiation of Y .", "The weight comparison g(t 2 t 1 ) t 1 (X) may thus occur outside the quantification of Y ."], "text_after_citation": ["To avoid any confusion, we remark that this indeed increases the total length of the formula by a factor of O(N ), but the length of each individual subformula forming an atom of this disjunction remains upper-bounded by O(|\u03d5|).", "S\u2286comp(\u03d5,X) g:", "S\u222a{t 2 t 1 
}\u2192Q N +1,\u03b3 t\u2264t \u2208S t(X) \u2264 g(t \u2264 t ) \u2227 g(t \u2264 t ) \u2264 t (X) (14) \u2227 g(t 2 t 1 ) t 1 (X) \u2227 \u2200\u0232 \u03c7 t 2 t 1 S (X\u0232 ) \u2227 (t 2 (X\u0232 ) g(t 2 t 1 ) \u2228 \u03c7 t 2 t 1 S (X\u0232 ) .", "To truly transform #OTHEREFR into a disjunction of table formulas, for g : S \u222a{t 2 t 1 } \u2192 Q N +1,\u03b3 and t \u2208 \u03c4 1 , we introduce a set of thresholds that will apply to individual weight terms occurring in #OTHEREFR rather than weight comparisons.", "For this we will want to choose the strictest threshold for an individual weight term implied by a comparison involving it, which in the case of t 1 may include a threshold introduced for t 1 t 2 ."], "citing_paper_content": {"title": "Approximate Evaluation Of Quantitative Second Order Queries", "abstract": "Courcelle's theorem and its adaptations to cliquewidth have shaped the field of exact parameterized algorithms and are widely considered the archetype of algorithmic meta-theorems. In the past decade, there has been growing interest in developing parameterized approximation algorithms for problems which are not captured by Courcelle's theorem and, in particular, are considered not fixed-parameter tractable under the associated widths. We develop a generalization of Courcelle's theorem that yields efficient approximation schemes for any problem that can be captured by an expanded logic we call \u2200 CMSO, capable of making logical statements about the sizes of set variables via so-called weight comparisons. The logic controls weight comparisons via the quantifier-alternation depth of the involved variables, allowing full comparisons for zero-alternation variables and limited comparisons for one-alternation variables. 
We show that the developed framework threads the very needle of tractability: on one hand it can describe a broad range of approximable problems, while on the other hand we show that the restrictions of our logic cannot be relaxed under well-established complexity assumptions. The running time of our approximation scheme is polynomial in 1/\u03b5, allowing us to fully interpolate between faster approximate algorithms and slower exact algorithms. This provides a unified framework to explain the tractability landscape of graph problems parameterized by treewidth and cliquewidth, as well as classical non-graph problems such as Subset Sum and Knapsack."}, "cited_paper_content": {"title": "Structural Rounding: Approximation Algorithms For Graphs Near An Algorithmically Tractable Class", "abstract": "We develop a new framework for generalizing approximation algorithms from the structural graph algorithm literature so that they apply to graphs somewhat close to that class (a scenario we expect is common when working with real-world networks) while still guaranteeing approximation ratios. The idea is to $\\textit{edit}$ a given graph via vertex- or edge-deletions to put the graph into an algorithmically tractable class, apply known approximation algorithms for that class, and then $\\textit{lift}$ the solution to apply to the original graph. We give a general characterization of when an optimization problem is amenable to this approach, and show that it includes many well-studied graph problems, such as Independent Set, Vertex Cover, Feedback Vertex Set, Minimum Maximal Matching, Chromatic Number, ($\\ell$-)Dominating Set, Edge ($\\ell$-)Dominating Set, and Connected Dominating Set. 
To enable this framework, we develop new editing algorithms that find the approximately-fewest edits required to bring a given graph into one of several important graph classes (in some cases, also approximating the target parameter of the family). For bounded degeneracy, we obtain a bicriteria $(4,4)$-approximation which also extends to a smoother bicriteria trade-off. For bounded treewidth, we obtain a bicriteria $(O(\\log^{1.5} n), O(\\sqrt{\\log w}))$-approximation, and for bounded pathwidth, we obtain a bicriteria $(O(\\log^{1.5} n), O(\\sqrt{\\log w} \\cdot \\log n))$-approximation. For treedepth $2$ (also related to bounded expansion), we obtain a $4$-approximation. We also prove complementary hardness-of-approximation results assuming $\\mathrm{P} \\neq \\mathrm{NP}$: in particular, these problems are all log-factor inapproximable, except the last which is not approximable below some constant factor ($2$ assuming UGC)."}, "keywords": ["global disjunction"], "citation_intent": "background"} {"citing_id": "2304.14660v2", "cited_id": "1902.09063", "section_title": "C. Model Selection For Different Testing Modes", "citation": "However, for C8 of #REFR , ViT-B showed better performance than ViT-H (S 2 : 69.3% (B) vs. 64.8% (H)).", "text_before_citation": ["Since the everything mode is the bright and key function of SAM, we evaluated it using both two models (everything with ViT-B and ViT-H, called S 1B and S 1H ).", "For the prompt mode, we first conducted preexperiments on part of our whole dataset #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , as shown in Table III .", "Specifically, we tested two typical strategies (S 2 and S 5 ) using the DICE and HD metrics.", "The experiments showed that the two models had no absolute advantage in different datasets or objects.", "For example, in #OTHEREFR , ViT-H outperformed ViT-B on the DICE score for C2 (S 2 : 59.1% (B) vs. 
73.2% (H))."], "text_after_citation": ["For different structures in the same dataset, e.g., C2, C3, and C4 in #OTHEREFR , there was no winner between ViT-B and ViT-H.", "Even for the same structure in different datasets such as the A4 structures in two datasets #OTHEREFR , #OTHEREFR , the performance of these two models remained indistinguishable.", "Based on the experiments in Table III, our conclusion is that the larger ViT-H model does not show a significant advantage over the smaller ViT-B model in the MIS task under different prompt modes.", "In other words, we can assume that the two models should achieve similar average results over a large number of medical images.", "Thus, we chose ViT-B as the backbone to test our whole medical dataset to speed up the testing while ensuring that the result reflected SAM's segmentation ability, which is similar to #OTHEREFR ."], "citing_paper_content": {"title": "Segment Anything Model For Medical Images?", "abstract": "The Segment Anything Model (SAM) is the first foundation model for general image segmentation. It designed a novel promptable segmentation task, ensuring zero-shot image segmentation using the pre-trained model via two main modes including automatic everything and manual prompt (e.g., points and boxes). SAM has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging due to the complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-range object scales. Meanwhile, zero-shot and efficient MIS can well reduce the annotation time and boost the development of medical image analysis. Hence, SAM seems to be a potential tool and its performance on large medical datasets should be further validated. We collected and sorted 52 open-source datasets, and built a large medical segmentation dataset with 16 modalities, 68 objects, and 553K slices. 
We conducted a comprehensive analysis of different SAM testing strategies on the so-called COSMOS 553K dataset. Extensive experiments validate that SAM performs better with manual hints like points and boxes for object perception in medical images, leading to better performance in prompt mode compared to everything mode. Additionally, SAM shows remarkable performance in some specific objects and modalities, but is imperfect or even totally fails in other situations. Finally, we analyze the influence of different factors (e.g., the Fourier-based boundary complexity and size of the segmented objects) on SAM's segmentation performance. Extensive experiments validate that SAM's zeroshot segmentation capability is not sufficient to ensure its direct application to the MIS."}, "cited_paper_content": {"title": "A Large Annotated Medical Image Dataset For The Development And Evaluation Of Segmentation Algorithms", "abstract": "Semantic segmentation of medical images aims to associate a pixel with a label in a medical image without human initialization. The success of semantic segmentation algorithms is contingent on the availability of high-quality imaging data with corresponding labels provided by experts. We sought to create a large collection of annotated medical image datasets of various clinically relevant anatomies available under open source license to facilitate the development of semantic segmentation algorithms. Such a resource would allow: 1) objective assessment of general-purpose segmentation methods through comprehensive benchmarking and 2) open and free access to medical image data for any researcher interested in the problem domain. Through a multi-institutional effort, we generated a large, curated dataset representative of several highly variable segmentation tasks that was used in a crowd-sourced challenge - the Medical Segmentation Decathlon held during the 2018 Medical Image Computing and Computer Aided Interventions Conference in Granada, Spain. 
Here, we describe these ten labeled image datasets so that these data may be effectively reused by the research community."}, "keywords": ["ViT-B", "C8"], "citation_intent": "result"} {"citing_id": "2303.17334v1", "cited_id": "1801.05852", "section_title": "Introduction", "citation": "Current graph-based data mining tasks mainly model the relationships between nodes from the perspective of topology and attribute content #REFR , making nodes of the same class more closely embedded in the embedding space and dissimilar nodes further away.", "text_before_citation": ["With the help of network data processing platform, operators can mine user Call Detail Records (CDR) to detect fraudsters, thereby assisting mobile network operation decisions.", "Data analysis is the most crucial aspect of the whole process, and it is also full of challenges.", "Subscribers' communication behaviors naturally constitute graphs, and the use of graph mining techniques for data arXiv:2303.17334v1 [cs.", "LG] 29 Mar 2023 analysis has become an important trend.", "In recent years, graph neural network (GNN) #OTHEREFR , #OTHEREFR , #OTHEREFR has gradually become the mainstream technology for graph data mining."], "text_after_citation": ["A typical semi-supervised node classification task is performed as follows #OTHEREFR : given a large graph with a small scale of node labels, a classifier is trained on those labeled nodes and used to classify other nodes during the testing process.", "These related works include graph convolutional networks (GCN) #OTHEREFR and many of its variants proposed in recent years #OTHEREFR , which effectively utilize features in the spectral domain by using simplified first-order approximations.", "GraphSage #OTHEREFR and Graph Attention Network(GAT) #OTHEREFR utilize features in the spatial domain to better adapt to different graph topologies.", "GNNs have achieved remarkable performance in many application domains, such as text classification #OTHEREFR , image 
recognition #OTHEREFR , and recommender systems #OTHEREFR .", "GNN-based graph data anomaly detection has also made great progress #OTHEREFR ."], "citing_paper_content": {"title": "Gat-Cobo: Cost-Sensitive Graph Neural Network For Telecom Fraud Detection", "abstract": "Along with the rapid evolution of mobile communication technologies, such as 5G, there has been a drastic increase in telecom fraud, which significantly dissipates individual fortune and social wealth. In recent years, graph mining techniques are gradually becoming a mainstream solution for detecting telecom fraud. However, the graph imbalance problem, caused by the Pareto principle, brings severe challenges to graph data mining. This is a new and challenging problem, but little previous work has been noticed. In this paper, we propose a Graph ATtention network with COst-sensitive BOosting (GAT-COBO) for the graph imbalance problem. First, we design a GAT-based base classifier to learn the embeddings of all nodes in the graph. Then, we feed the embeddings into a well-designed cost-sensitive learner for imbalanced learning. Next, we update the weights according to the misclassification cost to make the model focus more on the minority class. Finally, we sum the node embeddings obtained by multiple cost-sensitive learners to obtain a comprehensive node representation, which is used for the downstream anomaly detection task. Extensive experiments on two real-world telecom fraud detection datasets demonstrate that our proposed method is effective for the graph imbalance problem, outperforming the state-of-the-art GNNs and GNN-based fraud detectors. In addition, our model is also helpful for solving the widespread over-smoothing problem in GNNs.
The GAT-COBO code and datasets are available at https://github.com/xxhu94/GAT-COBO."}, "cited_paper_content": {"title": "Network Representation Learning: A Survey", "abstract": "With the widespread use of information technologies, information networks are becoming increasingly popular to capture complex relationships across various disciplines, such as social networks, citation networks, telecommunication networks, and biological networks. Analyzing these networks sheds light on different aspects of social life such as the structure of societies, information diffusion, and communication patterns. In reality, however, the large scale of information networks often makes network analytic tasks computationally expensive or intractable. Network representation learning has been recently proposed as a new learning paradigm to embed network vertices into a low-dimensional vector space, by preserving network topology structure, vertex content, and other side information. This facilitates the original network to be easily handled in the new vector space for further analysis. In this survey, we perform a comprehensive review of the current literature on network representation learning in the data mining and machine learning field. We propose new taxonomies to categorize and summarize the state-of-the-art network representation learning techniques according to the underlying learning mechanisms, the network information intended to preserve, as well as the algorithmic designs and methodologies. We summarize evaluation protocols used for validating network representation learning including published benchmark datasets, evaluation methods, and open source algorithms. We also perform empirical studies to compare the performance of representative algorithms on common datasets, and analyze their computational complexity. 
Finally, we suggest promising research directions to facilitate future study."}, "keywords": ["Current graph-based data"], "citation_intent": "background"} {"citing_id": "2303.11019v1", "cited_id": "1909.10726", "section_title": "A. Datasets", "citation": "This is consistent with the work in #REFR and allows sampling of healthy tissue patches that can be used as meaningful negative examples.", "text_before_citation": ["2) PAIP2019 dataset: The PAIP 2019 dataset #OTHEREFR contains 50 WSIs of liver cancer from 50 patients who underwent resection for hepatocellular carcinoma (HCC) at the Seoul National University Hospital.", "The slides were stained with H&E and digitalised with an Aperio AT2 scanner at 20\u00d7 power and 0.5021\u00b5m/px resolution, resulting in image sizes between 35,855 \u00d7 39,407 and 64,768 \u00d7 47,009 pixels.", "Two types of annotation are provided: viable regions of cancer cells for continuous tumour areas, as well as whole cancer regions for the boundary between the non-tumorous hepatic lobules and the viable tumour (including peritumoral fibrosis, capsules, and inflammation).", "The initial annotations were provided by a pathologist with 11 years of experience in liver histopathology and reviewed by another expert pathologist.", "Additionally, we generated annotations for \"tissue area\" which indicates healthy tissue pixels by a threshold of (R, G, B) \u2264 (235, 210, 235)."], "text_after_citation": ["For the data pre-processing, we generated context (5\u00d7 magnification) and target patches (20\u00d7 magnification) consistent with the settings used in the BCSS dataset.", "We randomly selected 10 out of 50 WSIs as the validation set for our CV."], "citing_paper_content": {"title": "A Dual-Branch Self-Supervised Representation Learning Framework For Tumour Segmentation In Whole Slide Images", "abstract": "Supervised deep learning methods have achieved considerable success in medical image analysis, owing to the availability of
large-scale and well-annotated datasets. However, creating such datasets for whole slide images (WSIs) in histopathology is a challenging task due to their gigapixel size. In recent years, self-supervised learning (SSL) has emerged as an alternative solution to reduce the annotation overheads in WSIs, as it does not require labels for training. These SSL approaches, however, are not designed for handling multi-resolution WSIs, which limits their performance in learning discriminative image features. In this paper, we propose a Dual-branch SSL Framework for WSI tumour segmentation (DSF-WSI) that can effectively learn image features from multi-resolution WSIs. Our DSF-WSI connected two branches and jointly learnt low and high resolution WSIs in a self-supervised manner. Moreover, we introduced a novel Context-Target Fusion Module (CTFM) and a masked jigsaw pretext task to align the learnt multi-resolution features. Furthermore, we designed a Dense SimSiam Learning (DSL) strategy to maximise the similarity of different views of WSIs, enabling the learnt representations to be more efficient and discriminative. We evaluated our method using two public datasets on breast and liver cancer segmentation tasks. The experiment results demonstrated that our DSF-WSI can effectively extract robust and efficient representations, which we validated through subsequent fine-tuning and semi-supervised settings. Our proposed method achieved better accuracy than other state-of-the-art approaches. 
Code is available at https://github.com/Dylan-H-Wang/dsf-wsi."}, "cited_paper_content": {"title": "Multi-Scale Fully Convolutional Neural Networks For Histopathology Image Segmentation: From Nuclear Aberrations To The Global Tissue Architecture", "abstract": "Histopathologic diagnosis is dependent on simultaneous information from a broad range of scales, ranging from nuclear aberrations ($\\approx \\mathcal{O}(0.1 \\mu m)$) over cellular structures ($\\approx \\mathcal{O}(10\\mu m)$) to the global tissue architecture ($\\gtrapprox \\mathcal{O}(1 mm)$). Bearing in mind which information is employed by human pathologists, we introduce and examine different strategies for the integration of multiple and widely separate spatial scales into common U-Net-based architectures. Based on this, we present a family of new, end-to-end trainable, multi-scale multi-encoder fully-convolutional neural networks for human modus operandi-inspired computer vision in histopathology."}, "keywords": ["healthy tissue patches"], "citation_intent": "result"} {"citing_id": "2304.01016v2", "cited_id": "1603.09320", "section_title": "Related Work", "citation": "At runtime, a query is encoded into a latent space, and the k documents are retrieved using a nearest neighbor algorithm such as HNSW #REFR .", "text_before_citation": ["Transformer Based Language Models such as BERT provide contextual language representations built on the Transformer architecture (Vaswani et al., 2017) which can be specialized and adapted for specific tasks and domains.", "Using contextual word representations, it becomes relatively easy to excel at a broad range of natural language processing tasks such as Question Answering, Text Classification, and sentiment analysis.", "Bi-Encoders, commonly called dual-encoders or dense retrievers, decompose ranking by leveraging the inner product of query and document representations to produce a relevance score for query document pairs.", "While not as accurate as
cross-encoders #OTHEREFR , they are more efficient for inference and easier to deploy.", "Bi-encoder document representations are query invariant, allowing them to be pre-computed and loaded into an Approximate Nearest Neighbor (ANN) index such as FAISS #OTHEREFR ."], "text_after_citation": ["Since the entire document index has been pre-computed, the retrieval latency is limited to a single call of the document encoder.", "Bi-encoders commonly leverage LLMs such as BERT to retrieve short passages of text leading to the task descriptor of Dense Passage Retrievers (DPR) (Karpukhin et al., 2020).", "Driven by their efficiency in deployment and relevance performance, DPR-based models have rapidly become the building blocks for systems doing product search #OTHEREFR , open domain question answering (Karpukhin et al., 2020) and customer support #OTHEREFR .", "Efficient Inference studies methods and models which decrease the model execution cost while minimizing the losses to model performance.", "Knowledge Distillation #OTHEREFR is a training method where a model, called the student, learns to emulate a teacher model, which is commonly larger or better performing than the student."], "citing_paper_content": {"title": "Quick Dense Retrievers Consume Kale: Post Training Kullback-Leibler Alignment Of Embeddings For Asymmetrical Dual Encoders", "abstract": "In this paper, we consider the problem of improving the inference latency of language model-based dense retrieval systems by introducing structural compression and model size asymmetry between the context and query encoders. First, we investigate the impact of pre and post-training compression on the MSMARCO, Natural Questions, TriviaQA, SQUAD, and SCIFACT, finding that asymmetry in the dual-encoders in dense retrieval can lead to improved inference efficiency.
Knowing this, we introduce Kullback-Leibler Alignment of Embeddings (KALE), an efficient and accurate method for increasing the inference efficiency of dense retrieval methods by pruning and aligning the query encoder after training. Specifically, KALE extends traditional Knowledge Distillation after bi-encoder training, allowing for effective query encoder compression without full retraining or index generation. Using KALE and asymmetric training, we can generate models which exceed the performance of DistilBERT despite having 3x faster inference."}, "cited_paper_content": {"title": "Efficient And Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs", "abstract": "We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting of a hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data.
Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous open-source state-of-the-art vector-only approaches. Similarity of the algorithm to the skip list structure allows a straightforward balanced distributed implementation."}, "keywords": ["latent space", "nearest neighbor algorithm"], "citation_intent": "method"} {"citing_id": "2303.06350v1", "cited_id": "1506.03134", "section_title": "B. Network Structure", "citation": "There, inspired by the Pointer Network #REFR , we directly use the decoder's attention score \u03b1_j as the final output policy \u03c0_\u03b8 , from which the agent chooses the next node to move to.", "text_before_citation": ["In doing so, each encoder, conditioned on the previous one's output, allows each spatio-temporal node feature h^{STG}_j to be informed by the dependencies across all targets and their predictions, history beliefs, and features from all other nodes.", "4) Decoder: The decoder is used to yield the final policy based on the spatio-temporal node features.", "We first associate each node v_j with the shortest distance to the current node \u03c8_t , which is pre-solved by Dijkstra #OTHEREFR .
Then, a fully-connected layer maps the concatenated feature", "Concat(h^{STG}_j, dist(v_j, \u03c8_t)) to h^{STG}_j.", "From these node features, we extract the current node feature (features of the current agent's node) as query (i.e., h_q = h^{STG}_t ) and its neighboring nodes' features as key-value pairs h_{k,v} for the decoder unit."], "text_after_citation": ["This scheme relaxes the requirement of a fixed policy size, instead adapting the policy's dimension dynamically to the number of neighboring nodes.", "Together with spatial positional encoding, this endows our network with the ability to generalize to arbitrary graphs and topologies."], "citing_paper_content": {"title": "Spatio-Temporal Attention Network For Persistent Monitoring Of Multiple Mobile Targets", "abstract": "This work focuses on the persistent monitoring problem, where a set of targets moving based on an unknown model must be monitored by an autonomous mobile robot with a limited sensing range. To keep each target's position estimate as accurate as possible, the robot needs to adaptively plan its path to (re-)visit all the targets and update its belief from measurements collected along the way. In doing so, the main challenge is to strike a balance between exploitation, i.e., re-visiting previously-located targets, and exploration, i.e., finding new targets or re-acquiring lost ones. Encouraged by recent advances in deep reinforcement learning, we introduce an attention-based neural solution to the persistent monitoring problem, where the agent can learn the inter-dependencies between targets, i.e., their spatial and temporal correlations, conditioned on past measurements. This endows the agent with the ability to determine which target, time, and location to attend to across multiple scales, which we show also helps relax the usual limitations of a finite target set.
We experimentally demonstrate that our method outperforms other baselines in terms of number of target visits and average estimation error in complex environments. Finally, we implement and validate our model in a drone-based simulation experiment to monitor mobile ground targets in a high-fidelity simulator."}, "cited_paper_content": {"title": "Pointer Networks", "abstract": "We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existent approaches such as sequence-to-sequence [1] and Neural Turing Machines [2], because the number of target classes in each step of the output depends on the length of the input, which is variable. Problems such as sorting variable sized sequences, and various combinatorial optimization problems belong to this class. Our model solves the problem of variable size output dictionaries using a recently proposed mechanism of neural attention. It differs from the previous attention attempts in that, instead of using attention to blend hidden units of an encoder to a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems - finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem - using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on.
We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems."}, "keywords": ["Pointer Network"], "citation_intent": "method"} {"citing_id": "2305.01775v1", "cited_id": "1908.07964", "section_title": "E. Proof Of Proposition 3", "citation": "Consider the epigraph formulation in #REFR and define c_j = \u2212\u2211_{g=1}^{G} c_{A_g} \u03b1_{gj} as a shorthand.", "text_before_citation": [], "text_after_citation": ["As discussed in Section V-B, the largest sum of s^{co}_{ji} is attained if \u03bb^{co}_j = 0, in which case s^{co}_{ji} = c_j \u03be_j , \u2200i per (14c), i.e., the worst case, and the smallest sum of s^{co}_{ji} is attained if \u03bb^{co}_j = c_j , in which case s^{co}_{ji} = c_j \u03be_j , \u2200i as (14b)-(14d) are equal.", "As (14b) is always dominated by (14c) and (14d), write s^{co}_{ji} = c_j \u03be_j + mc_j (\u03be_j \u2212 \u03be_{ji}) where we set \u03bb^{co}_j = mc_j with m \u2208 [0, 1].", "Using this definition, the relevant part of (14a) can be written as mc_j \u03f5_j + N^{\u22121} \u2211_{i=1}^{N} (c_j \u03be_j \u2212 mc_j (\u03be_{ji} \u2212 \u03be_j)) = mc_j [\u03f5_j \u2212 N^{\u22121} \u2211_{i=1}^{N} (\u03be_{ji} \u2212 \u03be_j)] + c_j \u03be_j .", "If the term in rectangular parentheses is positive (case 1 in Proposition 3), m should be minimized, leading to m = 0 \u21d2 \u03bb^{co}_j = 0.", "If term (A) is negative (case 2 in Proposition 3), m should be maximized, leading to m = 1 \u21d2 \u03bb^{co}_j = c_j ."], "citing_paper_content": {"title": "Data Valuation From Data-Driven Optimization", "abstract": "With the ongoing investment in data collection and communication technology in power systems, data-driven optimization has been established as a powerful tool for system operators to handle stochastic system states caused by weather- and behavior-dependent resources. However, most methods are ignorant of data quality, which may differ based on measurement and underlying privacy-protection mechanisms.
This paper addresses this shortcoming by (i) proposing a practical data quality metric based on Wasserstein distance, (ii) leveraging a novel modification of distributionally robust optimization using information from multiple data sets with heterogeneous quality to valuate data, (iii) applying the proposed optimization framework to an optimal power flow problem, and (iv) showing a direct method to valuate data from the optimal solution. We conduct numerical experiments to analyze and illustrate the proposed model and publish the implementation open-source."}, "cited_paper_content": {"title": "Constrained Thompson Sampling For Real-Time Electricity Pricing With Grid Reliability Constraints", "abstract": "We consider the problem of an aggregator attempting to learn customers' load flexibility models while implementing a load shaping program by means of broadcasting daily dispatch signals. We adopt a multi-armed bandit formulation to account for the stochastic and unknown nature of customers' responses to dispatch signals. We propose a constrained Thompson sampling heuristic, Con-TS-RTP, that accounts for various possible aggregator objectives (e.g., to reduce demand at peak hours, integrate more intermittent renewable generation, track a desired daily load profile, etc) and takes into account the operational constraints of a distribution system to avoid potential grid failures as a result of uncertainty in the customers' response. 
We provide a discussion on the regret bounds for our algorithm as well as a discussion on the operational reliability of the distribution system's constraints being upheld throughout the learning process."}, "keywords": ["epigraph formulation"], "citation_intent": "background"} {"citing_id": "2303.12501v1", "cited_id": "1503.02531", "section_title": "Introduction", "citation": "However, we found that the projection in CMPM can be regarded as a variable weight that adjusts the distribution of softmax output logits, similar to the temperature parameter #REFR for knowledge distillation.", "text_before_citation": ["Our main innovation is the design of a multimodal interaction encoder that can efficiently fuse visual and textual representations and align cross-modal fine-grained features through the MLM task.", "This design helps the backbone network to extract more discriminative global image-text representations without requiring additional supervision.", "To guide the image-text matching, commonly used loss functions include ranking loss and cross-modal projection matching (CMPM) #OTHEREFR loss.", "Compared to ranking loss, the CMPM loss does not require the selection of specific triplets or margin parameter tuning.", "It exhibits great stability with varying batch sizes, making it widely used in text-to-image person retrieval #OTHEREFR ."
similarity distribution compactness, which enables the model updates to focus on hard-negative samples and effectively enlarges the variance between non-matching pairs and the correlation between matching pairs.", "To address the limitations of separate pre-trained models on unimodal datasets, we leverage the Contrastive Language-Image Pre-training (CLIP) #OTHEREFR as the initialization of our model."], "citing_paper_content": {"title": "Cross-Modal Implicit Relation Reasoning And Aligning For Text-To-Image Person Retrieval", "abstract": "Text-to-image person retrieval aims to identify the target person based on a given textual description query. The primary challenge is to learn the mapping of visual and textual modalities into a common latent space. Prior works have attempted to address this challenge by leveraging separately pre-trained unimodal models to extract visual and textual features. However, these approaches lack the necessary underlying alignment capabilities required to match multimodal data effectively. Besides, these works use prior information to explore explicit part alignments, which may lead to the distortion of intra-modality information. To alleviate these issues, we present IRRA: a cross-modal Implicit Relation Reasoning and Aligning framework that learns relations between local visual-textual tokens and enhances global image-text matching without requiring additional prior supervision. Specifically, we first design an Implicit Relation Reasoning module in a masked language modeling paradigm. This achieves cross-modal interaction by integrating the visual cues into the textual tokens with a cross-modal multimodal interaction encoder. Secondly, to globally align the visual and textual embeddings, Similarity Distribution Matching is proposed to minimize the KL divergence between image-text similarity distributions and the normalized label matching distributions.
The proposed method achieves new state-of-the-art results on all three public datasets, with a notable margin of about 3%-9% for Rank-1 accuracy compared to prior methods."}, "cited_paper_content": {"title": "Distilling The Knowledge In A Neural Network", "abstract": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. 
Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel."}, "keywords": ["softmax output logits", "knowledge distillation"], "citation_intent": "background"} {"citing_id": "2304.13980v1", "cited_id": "2003.13867", "section_title": "Runtime Analysis", "citation": "Also 3D-MPA #REFR reaches a similar performance, confirming the difference between the indoor room and outdoor street scenarios.", "text_before_citation": ["The semantic segmentation results are almost the same (except for the weaker PointNet++ backbone).", "In terms of instance segmentation, PointGroup without the additional ScoreNet is significantly worse (>8 percentage points difference).", "With ScoreNet, but without the specialisation to individual rooms, the performance is practically the same as for our pipeline.", "The table also shows the result given in the original PointGroup paper, and several other recent instance segmentation works.", "When tuned to indoor environments, the full PointGroup does have the upper hand."], "text_after_citation": ["http://matterport.com/"], "citing_paper_content": {"title": "A Review Of Panoptic Segmentation For Mobile Mapping Point Clouds", "abstract": "3D point cloud panoptic segmentation is the combined task to (i) assign each point to a semantic class and (ii) separate the points in each class into object instances. Recently there has been an increased interest in such comprehensive 3D scene understanding, building on the rapid advances of semantic segmentation due to the advent of deep 3D neural networks. Yet, to date there is very little work about panoptic segmentation of outdoor mobile-mapping data, and no systematic comparisons. The present paper tries to close that gap. It reviews the building blocks needed to assemble a panoptic segmentation pipeline and the related literature.
Moreover, a modular pipeline is set up to perform comprehensive, systematic experiments to assess the state of panoptic segmentation in the context of street mapping. As a byproduct, we also provide the first public dataset for that task, by extending the NPM3D dataset to include instance labels."}, "cited_paper_content": {"title": "3D-Mpa: Multi Proposal Aggregation For 3D Semantic Instance Segmentation", "abstract": "We present 3D-MPA, a method for instance segmentation on 3D point clouds. Given an input point cloud, we propose an object-centric approach where each point votes for its object center. We sample object proposals from the predicted object centers. Then, we learn proposal features from grouped point features that voted for the same object center. A graph convolutional network introduces inter-proposal relations, providing higher-level feature learning in addition to the lower-level point features. Each proposal comprises a semantic label, a set of associated points over which we define a foreground-background mask, an objectness score and aggregation features. Previous works usually perform non-maximum-suppression (NMS) over proposals to obtain the final object detections or semantic instances. However, NMS can discard potentially correct predictions. Instead, our approach keeps all proposals and groups them together based on the learned aggregation features. 
We show that grouping proposals improves over NMS and outperforms previous state-of-the-art methods on the tasks of 3D object detection and semantic instance segmentation on the ScanNetV2 benchmark and the S3DIS dataset."}, "keywords": ["3D-MPA"], "citation_intent": "result"} {"citing_id": "2304.06531v1", "cited_id": "1811.10943", "section_title": "Related Work", "citation": "Another way is to introduce a-priori knowledge with the objective of preserving sharp features for surface reconstruction methods as done in DeepPrior #REFR .", "text_before_citation": ["Scanned 3D point sets are irregular and non-uniform, and need to be consolidated to enhance the surface reconstruction quality.", "One possible solution is to introduce edge-awareness in the consolidation of point sets in a data-driven manner.", "The EC-NET #OTHEREFR network processes patches of points and learns to consolidate points using an edge-aware joint loss function when learning from the data.", "The performance of the model was demonstrated on a very limited set of 12 manually labelled scans."], "text_after_citation": ["In both cases, the presence of high-frequency features and noise in the input scanned data makes it extremely challenging to recover the sharpness of the scans.", "The inference of edges as parametric curves is an alternative that arises directly from CAD surface parametrization as boundary-representation (b-rep).", "Following this direction, the PC2WF model [12] infers a wireframe of linear edges from a point cloud based on a vertex localization and an edge detector that identifies the pairs of vertices connected with an edge.", "The work [14] proposes a parametric approach to extract a wireframe based on an estimated scalar distance field DEF #OTHEREFR that represents the proximity to the nearest sharp feature curve.", "PIE-Net #OTHEREFR proposes to jointly detect edge and corner points, after which a curve proposal module generates an over-complete collection of curves that are further
ranked."], "citing_paper_content": {"title": "Sepicnet: Sharp Edges Recovery By Parametric Inference Of Curves In 3D Shapes", "abstract": "3D scanning as a technique to digitize objects in reality and create their 3D models, is used in many fields and areas. Though the quality of 3D scans depends on the technical characteristics of the 3D scanner, the common drawback is the smoothing of fine details, or the edges of an object. We introduce SepicNet, a novel deep network for the detection and parametrization of sharp edges in 3D shapes as primitive curves. To make the network end-to-end trainable, we formulate the curve fitting in a differentiable manner. We develop an adaptive point cloud sampling technique that captures the sharp features better than uniform sampling. The experiments were conducted on a newly introduced large-scale dataset of 50k 3D scans, where the sharp edge annotations were extracted from their parametric CAD models, and demonstrate significant improvement over state-of-the-art methods."}, "cited_paper_content": {"title": "Deep Geometric Prior For Surface Reconstruction", "abstract": "The reconstruction of a discrete surface from a point cloud is a fundamental geometry processing problem that has been studied for decades, with many methods developed. We propose the use of a deep neural network as a geometric prior for surface reconstruction. Specifically, we overfit a neural network representing a local chart parameterization to part of an input point cloud using the Wasserstein distance as a measure of approximation. By jointly fitting many such networks to overlapping parts of the point cloud, while enforcing a consistency condition, we compute a manifold atlas. By sampling this atlas, we can produce a dense reconstruction of the surface approximating the input cloud. 
The entire procedure does not require any training data or explicit regularization, yet we show that it is able to perform remarkably well: not introducing typical overfitting artifacts, and approximating sharp features closely at the same time. We experimentally show that this geometric prior produces good results for both man-made objects containing sharp features and smoother organic objects, as well as noisy inputs. We compare our method with a number of well-known reconstruction methods on a standard surface reconstruction benchmark."}, "keywords": ["surface reconstruction methods"], "citation_intent": "method"} {"citing_id": "2303.01480v1", "cited_id": "1709.01507", "section_title": "Parallel Pooling Mixer", "citation": "Inspired by this, we apply a Squeeze-and-Excitation (SE) module #REFR in the mixing part of PPX.", "text_before_citation": ["EQUATION", "EQUATION", "EQUATION", "EQUATION", "Previous cross-modal fusion methods show that channel information is crucial #OTHEREFR ."], "text_after_citation": ["This structure is crucial since some channels of certain modalities do capture more significant information than others.", "It can further engage more spatially-holistic knowledge in the channels of the cross-modal complements in SQ-Hub.", "Thus, the weighted feature f_w is passed to a Feed-Forward Network (FFN) and an SE module #OTHEREFR for enhancing the channel information. The second part of PPX can be written as:", "EQUATION", "After the PPX block, f_w is fused with the RGB feature to form the final fused feature f^l \u2208 {f^1, f^2, f^3, f^4} by using the FRM&FFM modules #OTHEREFR , as shown in Fig. 4."], "citing_paper_content": {"title": "Delivering Arbitrary-Modal Semantic Segmentation", "abstract": "Multimodal fusion can make semantic segmentation more robust. However, fusing an arbitrary number of modalities remains underexplored.
To delve into this problem, we create the DELIVER arbitrary-modal segmentation benchmark, covering Depth, LiDAR, multiple Views, Events, and RGB. Aside from this, we provide this dataset in four severe weather conditions as well as five sensor failure cases to exploit modal complementarity and resolve partial outages. To make this possible, we present the arbitrary cross-modal segmentation model CMNEXT. It encompasses a Self-Query Hub (SQ-Hub) designed to extract effective information from any modality for subsequent fusion with the RGB representation and adds only negligible amounts of parameters (\u223c0.01M) per additional modality. On top, to efficiently and flexibly harvest discriminative cues from the auxiliary modalities, we introduce the simple Parallel Pooling Mixer (PPX). With extensive experiments on a total of six benchmarks, our CMNEXT achieves state-of-the-art performance on the DELIVER, KITTI-360, MFNet, NYU Depth V2, UrbanLF, and MCubeS datasets, allowing it to scale from 1 to 81 modalities. On the freshly collected DELIVER, the quad-modal CMNEXT reaches up to 66.30% in mIoU with a +9.10% gain as compared to the mono-modal baseline."}, "cited_paper_content": {"title": "Squeeze-And-Excitation Networks", "abstract": "Convolutional neural networks are built upon the convolution operation, which extracts informative features by fusing spatial and channel-wise information together within local receptive fields. In order to boost the representational power of a network, much existing work has shown the benefits of enhancing spatial encoding. In this work, we focus on channels and propose a novel architectural unit, which we term the "Squeeze-and-Excitation" (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels. We demonstrate that by stacking these blocks together, we can construct SENet architectures that generalise extremely well across challenging datasets.
Crucially, we find that SE blocks produce significant performance improvements for existing state-of-the-art deep architectures at slight computational cost. SENets formed the foundation of our ILSVRC 2017 classification submission which won first place and significantly reduced the top-5 error to 2.251%, achieving a 25% relative improvement over the winning entry of 2016."}, "keywords": ["mixing part", "Squeeze-and-Excitation (SE) module"], "citation_intent": "method"} {"citing_id": "2304.14831v1", "cited_id": "1810.04805", "section_title": "Experimental Setup", "citation": "Last, we aim to tune BERT #REFR and its variants on the STS-B task under EXPECTED.", "text_before_citation": ["We pre-train a binary classifier on the records with the country of \"U.S\" and take \"non-U.S\" records as unobserved target data.", "The Amazon dataset is constructed from Amazon review data with two categories of products selected, i.e., \"Electronics\" and \"Watches\".", "In our experiments, the data-rich category \"Electronics\" is used to pre-train a prediction model which maps user comments to the rating score ranging from one to five, and \"Watches\" is treated as the target data.", "The settings of Adult and Amazon follow the work #OTHEREFR .", "In terms of CIFAR-10-C/CIFAR-100-C, the initial provided model is built on clean images, and it is then tuned to fit the disjoint corrupted images following the unsupervised tuning research #OTHEREFR , which mimics the unexpected distribution shift in the real world."], "text_after_citation": ["Following the research #OTHEREFR , the models are first trained on the sentence pairs from the genre of MSRvid and then tuned to fit the unknown target data which are extracted from Images, where the evaluation metric is Pearson's correlation coefficient.", "On the task of corrupted image classification, we treat all the corrupted images (target data) as the tuning data for a fair comparison with the unsupervised tuning methods.", "Throughout all
the remaining datasets, the target data are split into two sets.", "We randomly split Adult and Amazon into equal halves and use the default split for STS-B.", "One is the support set that is used for evaluating the query efficiency of tuning algorithms, and the other is the holdout set on which the model generalization is assessed."], "citing_paper_content": {"title": "Earning Extra Performance From Restrictive Feedbacks", "abstract": "Many machine learning applications encounter a situation where model providers are required to further refine the previously trained model so as to gratify the specific need of local users. This problem is reduced to the standard model tuning paradigm if the target data is permissibly fed to the model. However, it is rather difficult in a wide range of practical cases where target data is not shared with model providers but commonly some evaluations about the model are accessible. In this paper, we formally set up a challenge named Earning eXtra PerformancE from restriCTive feEDbacks (EXPECTED) to describe this form of model tuning problems. Concretely, EXPECTED admits a model provider to access the operational performance of the candidate model multiple times via feedback from a local user (or a group of users). The goal of the model provider is to eventually deliver a satisfactory model to the local user(s) by utilizing the feedbacks. Unlike existing model tuning methods where the target data is always ready for calculating model gradients, the model providers in EXPECTED only see some feedbacks which could be as simple as scalars, such as inference accuracy or usage rate. To enable tuning in this restrictive circumstance, we propose to characterize the geometry of the model performance with regard to model parameters through exploring the parameters' distribution.
In particular, for the deep models whose parameters distribute across multiple layers, a more query-efficient algorithm is further tailor-designed that conducts layerwise tuning with more attention to those layers which pay off better. Our theoretical analyses justify the proposed algorithms from the aspects of both efficacy and efficiency. Extensive experiments on different applications demonstrate that our work forges a sound solution to the EXPECTED problem, which establishes the foundation for future studies towards this direction."}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful.
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["variants", "BERT"], "citation_intent": "method"} {"citing_id": "2305.01855v1", "cited_id": "1905.10887", "section_title": "Experiment 3: Synthetic Data Filtering", "citation": "A similar discrepancy between generated data quality and downstream task performance has been reported in a prior image classification task #REFR .", "text_before_citation": ["A significant boost still exists when the volume of true data gets larger, which surpasses the improvement by SD_para.", "This indicates that a suitable data filtering can improve both training efficiency and image captioning performance when true labeled data is limited.", "However, the improvement for SD_base and SD_para is not significant.", "And the image captioning performance even decreases in some cases for the Transformer-based model (see Table 11 in Appendix).", "Therefore, CLIPScore and the other two criteria are not golden standards to select high-quality data that are suitable for the captioning task, even though they have been shown to perform well in image-caption quality evaluation #OTHEREFR ."], "text_after_citation": ["Authors found that although the GAN-generated images receive high scores close to those of true images, the classification model trained on fully synthetic images has a much lower accuracy than those trained on true images.", "Following this thread, we calculate the three metrics for both COCO data and synthetic data.", "We have a similar finding that the three quality measures under the synthetic data are at a close level to those under true data (see Table 5).", "However, when completely replacing the true data with
the synthetic data as the training data (e.g., in Experiment 1), the performance is considerably lower than that of the true data (i.e., CIDEr score: 81.0 vs. 92.4).", "In this way, we extend findings in #OTHEREFR 's study to the image captioning task."], "citing_paper_content": {"title": "Multimodal Data Augmentation For Image Captioning Using Diffusion Models", "abstract": "Image captioning, an important vision-language task, often requires a tremendous number of finely labeled image-caption pairs for learning the underlying alignment between images and texts. In this paper, we propose a multimodal data augmentation method, leveraging a recent text-to-image model called Stable Diffusion, to expand the training set via high-quality generation of image-caption pairs. Extensive experiments on the MS COCO dataset demonstrate the advantages of our approach over several benchmark methods, and particularly a significant boost when having fewer training instances. In addition, models trained on our augmented datasets also outperform prior unpaired image captioning methods by a large margin. Finally, further improvement regarding the training efficiency and effectiveness can be obtained after intentionally filtering the generated data based on quality assessment."}, "cited_paper_content": {"title": "Classification Accuracy Score For Conditional Generative Models", "abstract": "Deep generative models (DGMs) of images are now sufficiently mature that they produce nearly photorealistic samples and obtain scores similar to the data distribution on heuristics such as Frechet Inception Distance (FID). These results, especially on large-scale datasets such as ImageNet, suggest that DGMs are learning the data distribution in a perceptually meaningful space and can be used in downstream tasks.
To test this latter hypothesis, we use class-conditional generative models from a number of model classes\u2014variational autoencoders, autoregressive models, and generative adversarial networks (GANs)\u2014to infer the class labels of real data. We perform this inference by training an image classifier using only synthetic data and using the classifier to predict labels on real data. The performance on this task, which we call Classification Accuracy Score (CAS), reveals some surprising results not identified by traditional metrics and constitute our contributions. First, when using a state-of-the-art GAN (BigGAN-deep), Top-1 and Top-5 accuracy decrease by 27.9% and 41.6%, respectively, compared to the original data; and conditional generative models from other model classes, such as Vector-Quantized Variational Autoencoder-2 (VQ-VAE-2) and Hierarchical Autoregressive Models (HAMs), substantially outperform GANs on this benchmark. Second, CAS automatically surfaces particular classes for which generative models failed to capture the data distribution, and were previously unknown in the literature. Third, we find traditional GAN metrics such as Inception Score (IS) and FID neither predictive of CAS nor useful when evaluating non-GAN models. 
Furthermore, in order to facilitate better diagnoses of generative models, we open-source the proposed metric."}, "keywords": ["prior image classification"], "citation_intent": "result"} {"citing_id": "2304.03105v2", "cited_id": "1912.04838", "section_title": "Introduction", "citation": "LiDAR point clouds, which contain rich spatial information, offer superior performance on 3D detection benchmarks compared to camera-based methods #REFR (Caesar et al., 2020).", "text_before_citation": ["Although image backbones pretrained on ImageNet provide semantically rich features, they fail to capture critical structural information.", "The existing pre-training method #OTHEREFR introduces depth-relevant tasks, but this method excludes the view transformation module and benefits the backbone only.", "The view transformation module is of great significance, since it constructs 3D information and encodes 3D prior assumptions.", "As the view transformation module is optimized only by the detection loss, the spatial information from the depth-pretrained backbone is not fully utilized.", "Ideally, the image backbone and the view transformation should incorporate aligned geometry knowledge to enhance their performance."], "text_after_citation": ["Therefore, integrating LiDAR information into camera-based models is a natural step.", "Recent works #OTHEREFR have explored LiDAR information in two paradigms.", "The first paradigm #OTHEREFR projects LiDAR points onto the perspective view to provide additional depth supervision.", "However, the task of depth estimation in 2D space is inherently ill-posed and challenging.", "The second paradigm #OTHEREFR leverages teacher-student knowledge distillation."], "citing_paper_content": {"title": "Geometric-Aware Pretraining For Vision-Centric 3D Object Detection", "abstract": "Multi-camera 3D object detection for autonomous driving is a challenging problem that has garnered notable attention from both academia and industry.
An obstacle encountered in vision-based techniques involves the precise extraction of geometry-conscious features from RGB images. Recent approaches have utilized geometric-aware image backbones pretrained on depth-relevant tasks to acquire spatial information. However, these approaches overlook the critical aspect of view transformation, resulting in inadequate performance due to the misalignment of spatial knowledge between the image backbone and view transformation. To address this issue, we propose a novel geometric-aware pretraining framework called GAPretrain. Our approach incorporates spatial and structural cues to camera networks by employing the geometric-rich modality as guidance during the pretraining phase. The transference of modal-specific attributes across different modalities is non-trivial, but we bridge this gap by using a unified bird's-eye-view (BEV) representation and structural hints derived from LiDAR point clouds to facilitate the pretraining process. GAPretrain serves as a plug-and-play solution that can be flexibly applied to multiple state-of-the-art detectors. Our experiments demonstrate the effectiveness and generalization ability of the proposed method. We achieve 46.2 mAP and 55.5 NDS on the nuScenes val set using the BEVFormer method, with a gain of 2.7 and 2.1 points, respectively. We also conduct experiments on various image backbones and view transformations to validate the efficacy of our approach."}, "cited_paper_content": {"title": "Scalability In Perception For Autonomous Driving: Waymo Open Dataset", "abstract": "The research community has increasing interest in autonomous driving research, despite the resource intensity of obtaining representative real world data. Existing self-driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the overall viability of the technology.
In an effort to help align the research community's contributions with real-world self-driving problems, we introduce a new large-scale, high-quality, diverse dataset. Our new dataset consists of 1150 scenes that each span 20 seconds, consisting of well synchronized and calibrated high-quality LiDAR and camera data captured across a range of urban and suburban geographies. It is 15x more diverse than the largest camera+LiDAR dataset available based on our proposed diversity metric. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for 2D as well as 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code and more up-to-date information at http://www.waymo.com/open."}, "keywords": ["3D detection benchmarks"], "citation_intent": "background"} {"citing_id": "2303.05080v1", "cited_id": "1302.4099", "section_title": "Big Data As A Tool For Authors", "citation": "For example, it is known that given a book the number of nodes and the average shortest path length of the book's word network are correlated with the style of certain literary eras #REFR .", "text_before_citation": ["These can be found in many writing books and generally come by way of applied psychology and accumulated wisdom of previous writers #OTHEREFR .
Yet more books are being published than ever before.", "We have data, and by careful analysis of this data we can create new tools for fiction writers so that fiction is no longer only an art, but also a science.", "We've considered reader-communities and enjoyment-communities as ways to craft books readers will both choose to read and enjoy.", "We've also shown the potential of our maturity-realism plane to coarsely identify books one might enjoy, and more generally the potential of PCA analysis to find untapped combinations of genres.", "One can easily imagine an array of other data-inspired approaches. Natural language processing represents a major area here."], "text_after_citation": ["If an author wishes to mimic the style of an era but their prose doesn't feel quite right, they could look at the word network their book generates.", "Perhaps they find that the size of their word network is too small, meaning they are using too few unique words. This can help \"debug\" prose.", "Adapting this technique could help characters sound distinct.", "A common problem is that different characters' dialogue sounds the same simply because the same author wrote it.", "If one can extract each of the main characters' dialogue (or the prose of their point-of-view scenes), then one can run those texts through text-author identification or natural language processing methods."], "citing_paper_content": {"title": "Revisiting The Relevance Of Traditional Genres: A Network Analysis Of Fiction Readers' Preferences", "abstract": "We investigate how well traditional fiction genres like Fantasy, Thriller, and Literature represent readers' preferences. Using user data from Goodreads we construct a book network where two books are strongly linked if the same people tend to read or enjoy them both. We then partition this network into communities of similar books and assign each a list of subjects from The Open Library to serve as a proxy for traditional genres.
Our analysis reveals that the network communities correspond to existing combinations of traditional genres, but that the exact communities differ depending on whether we consider books that people read or books that people enjoy. In addition, we apply principal component analysis to the data and find that the variance in the book communities is best explained by two factors: the maturity/childishness and realism/fantastical nature of the books. We propose using this maturity-realism plane as a coarse classification tool for stories."}, "cited_paper_content": {"title": "Identification Of Literary Movements Using Complex Networks To Represent Texts", "abstract": "The use of statistical methods to analyze large databases of text has been useful in unveiling patterns of human behavior and establishing historical links between cultures and languages. In this study, we identified literary movements by treating books published from 1590 to 1922 as complex networks, whose metrics were analyzed with multivariate techniques to generate six clusters of books. The latter correspond to time periods coinciding with relevant literary movements over the last five centuries. The most important factor contributing to the distinctions between different literary styles was the average shortest path length, in particular the asymmetry of its distribution. Furthermore, over time there has emerged a trend toward larger average shortest path lengths, which is correlated with increased syntactic complexity, and a more uniform use of the words reflected in a smaller power-law coefficient for the distribution of word frequency. Changes in literary style were also found to be driven by opposition to earlier writing styles, as revealed by the analysis performed with geometrical concepts. 
The approaches adopted here are generic and may be extended to analyze a number of features of languages and cultures."}, "keywords": ["book's word network"], "citation_intent": "background"} {"citing_id": "2305.02997v1", "cited_id": "1603.02754", "section_title": "Related Work", "citation": "XGBoost (eXtreme Gradient Boosting) #REFR is powered by a novel sparsity-aware algorithm and weighted quantile sketch. This allows it to scale to large datasets.", "text_before_citation": ["Gradient-boosted decision trees.", "GBDTs have been a powerful technique to model tabular data, ever since their creation in 2001 #OTHEREFR .", "GBDTs work by building an ensemble of decision trees, incrementally updated using gradient descent.", "Due to their strong performance, many high-performing instantiations have been proposed."], "text_after_citation": ["LightGBM (Light Gradient Boosting Machine) #OTHEREFR is designed to be a lightweight gradient-boosted tree implementation.", "CatBoost (Categorical Boosting) #OTHEREFR combines a permutation-driven variant of boosting with a novel technique for processing categorical features.", "Neural networks for tabular data.", "In their survey on deep learning for tabular data, Borisov et al. described three types of tabular data #OTHEREFR .", "Data transformation methods #OTHEREFR seek to encode the data into a format that is better-suited for neural nets."], "citing_paper_content": {"title": "When Do Neural Nets Outperform Boosted Trees On Tabular Data?", "abstract": "Tabular data is one of the most commonly used types of data in machine learning. Despite recent advances in neural nets (NNs) for tabular data, there is still an active discussion on whether or not NNs generally outperform gradient-boosted decision trees (GBDTs) on tabular data, with several recent works arguing either that GBDTs consistently outperform NNs on tabular data, or vice versa. In this work, we take a step back and ask, 'does it matter?'
We conduct the largest tabular data analysis to date, by comparing 19 algorithms across 176 datasets, and we find that the 'NN vs. GBDT' debate is overemphasized: for a surprisingly high number of datasets, either the performance difference between GBDTs and NNs is negligible, or light hyperparameter tuning on a GBDT is more important than selecting the best algorithm. Next, we analyze 965 metafeatures to determine what properties of a dataset make NNs or GBDTs better-suited to perform well. For example, we find that GBDTs are much better than NNs at handling skewed feature distributions, heavy-tailed feature distributions, and other forms of dataset irregularities. Our insights act as a guide for practitioners to decide whether or not they need to run a neural net to reach top performance on their dataset. Our codebase and all raw results are available at https://github.com/naszilla/tabzilla."}, "cited_paper_content": {"title": "Xgboost: A Scalable Tree Boosting System", "abstract": "Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. 
By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems."}, "keywords": ["eXtreme Gradient Boosting", "XGBoost"], "citation_intent": "method"} {"citing_id": "2303.01052v1", "cited_id": "1608.04644", "section_title": "Validating Hypothesis Model And Test Function", "citation": "Intriguingly, we notice that both results from the hypothesis model generally show constant robustness even in a high-confidence adversarial attack #REFR fabricating unseen perturbation.", "text_before_citation": ["Here, we observe that the adversarial robustness of CF is inferior to that of CC, AC, and even Adv.", "Intuitively, it is an obvious result since the test function violating Eq. (7) forces the feature representation into the worst possible condition, extremely deviating from the correct prediction.", "For the prediction results of CC and AC, they show more impressive robustness than Adv by large margins.", "Since AC directly leverages the feature variation acquired from adversarial perturbation, they present better adversarial robustness than CC obtained from the test function outputting the worst-case counterfactuals on the feature variation."], "text_after_citation": ["Such robustness demonstrates that the estimated causal features have the ability to overcome various types of adversarial perturbation."], "citing_paper_content": {"title": "Demystifying Causal Features On Adversarial Examples And Causal Inoculation For Robust Network By Adversarial Instrumental Variable Regression", "abstract": "The origin of adversarial examples is still inexplicable in research fields, and it arouses arguments from various viewpoints, despite comprehensive investigations.
By deploying it, we estimate the causal relation of adversarial prediction under an unbiased environment dissociated from unknown confounders. Our approach aims to demystify inherent causal features on adversarial examples by leveraging a zero-sum optimization game between a causal feature estimator (i.e., hypothesis model) and worst-case counterfactuals (i.e., test function) disturbing to find causal features. Through extensive analyses, we demonstrate that the estimated causal features are highly related to the correct prediction for adversarial robustness, and the counterfactuals exhibit extreme features significantly deviating from the correct prediction. In addition, we present how to effectively inoculate CAusal FEatures (CAFE) into defense networks for improving adversarial robustness."}, "cited_paper_content": {"title": "Towards Evaluating The Robustness Of Neural Networks", "abstract": "Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input $x$ and any target classification $t$, it is possible to find a new input $x'$ that is similar to $x$ but classified as $t$. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network, and increase its robustness, reducing the success rate of current attacks' ability to find adversarial examples from $95\%$ to $0.5\%$. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with $100\%$ probability.
Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test we show can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples."}, "keywords": ["constant robustness"], "citation_intent": "result"} {"citing_id": "2303.12484v1", "cited_id": "1908.10555", "section_title": "Local Detection", "citation": "Finally, to address the problem that only image-level labels are provided in MIL, Xu et al. #REFR design an automatic instance-level label generation method.", "text_before_citation": ["Thus, they propose a MIL structure involving maximum likelihood estimation to predict multiple labels, i.e., bag-level labels and diagnostic scores; instance-level labels and informativeness, simultaneously.", "Similarly, when studying the classification of the retinal nerve fiber layer (RNFL), Manivannan et al.", "#OTHEREFR have observed that regions that contain the RNFL generally have strong intra-class variation, making them difficult to distinguish from other regions.", "Therefore, they map the instances into a discriminative subspace to increase the discrepancy for disentangled instance feature learning. 
Jia et al. #OTHEREFR incorporate the multi-scale image feature into the learning process to obtain more latent information on histopathology images."], "text_after_citation": ["Their work has led to an interesting MIL algorithm design direction and may shed light on how to improve the performance of local detection algorithms.", "Other studies on problems such as phenotype categorization #OTHEREFR , #OTHEREFR , #OTHEREFR and multi-label classification #OTHEREFR have also made promising progress with the MIL algorithm."], "citing_paper_content": {"title": "Label-Efficient Deep Learning In Medical Image Analysis: Challenges And Future Directions", "abstract": "Deep learning has seen rapid growth in recent years and achieved state-of-the-art performance in a wide range of applications. However, training models typically requires expensive and time-consuming collection of large quantities of labeled data. This is particularly true within the scope of medical imaging analysis (MIA), where data are limited and labels are expensive to acquire. Thus, label-efficient deep learning methods are developed to make comprehensive use of the labeled data as well as the abundance of unlabeled and weak-labeled data. In this survey, we extensively investigated over 300 recent papers to provide a comprehensive overview of recent progress on label-efficient learning strategies in MIA. We first present the background of label-efficient learning and categorize the approaches into different schemes. Next, we examine the current state-of-the-art methods in detail through each scheme. Specifically, we provide an in-depth investigation, covering not only canonical semi-supervised, self-supervised, and multi-instance learning schemes, but also recently emerged active and annotation-efficient learning strategies.
Moreover, as a comprehensive contribution to the field, this survey not only elucidates the commonalities and unique features of the surveyed methods but also presents a detailed analysis of the current challenges in the field and suggests potential avenues for future research."}, "cited_paper_content": {"title": "Camel: A Weakly Supervised Learning Framework For Histopathology Image Segmentation", "abstract": "Histopathology image analysis plays a critical role in cancer diagnosis and treatment. To automatically segment the cancerous regions, fully supervised segmentation algorithms require labor-intensive and time-consuming labeling at the pixel level. In this research, we propose CAMEL, a weakly supervised learning framework for histopathology image segmentation using only image-level labels. Using multiple instance learning (MIL)-based label enrichment, CAMEL splits the image into latticed instances and automatically generates instance-level labels. After label enrichment, the instance-level labels are further assigned to the corresponding pixels, producing the approximate pixel-level labels and making fully supervised training of segmentation models possible. CAMEL achieves comparable performance with the fully supervised approaches in both instance-level classification and pixel-level segmentation on CAMELYON16 and a colorectal adenoma dataset. 
Moreover, the generality of the automatic labeling methodology may benefit future weakly supervised learning studies for histopathology image analysis."}, "keywords": ["image-level labels"], "citation_intent": "method"} {"citing_id": "2303.07914v1", "cited_id": "1706.03762", "section_title": "Model Architecture", "citation": "Semantic encoder: The semantic encoder is composed of L e Transformer #REFR encoder layers, which aims to further encode the semantic information of speech representations.", "text_before_citation": ["To learn the correct acoustic boundaries, we use the source or target text length J as the supervised signal.", "L CIF = J \u2212 T t=1 \u03b1 t 2 (5)", "There are two benefits of using CIF as a boundary detector.", "For offline ST model, it can address the length gap between speech and text.", "It can also dynamically detect the acoustic boundaries of streaming audio to perform read/write policies for streaming inference."], "text_after_citation": ["Translation decoder: The translation decoder is composed of L e Transformer decoder layers, which generates the translations in an autoregressive way. 
The translation loss is defined as:", "L_{ST}(x, y) = -\sum_{j=1}^{J} \log p(y_j | y_{<j}, x) [...] (> 10m) to evaluate the agent's ability to plan for long-horizon goals.", "Each difficulty level is assessed over 1,000 trajectories using a maximum episode length of 500.", "To ensure a comprehensive assessment, we sample distinct starting positions and goals for each environment and difficulty, resulting in a total of 4,000 trajectories per environment."], "text_after_citation": ["A navigation trial is considered successful if the agent comes to a STOP within a maximum distance of 1m from the goal.", "We also use a Soft Success Rate (SSR), where a trial is successful if the agent is less than 1m away from the goal at any point during navigation.", "Lastly, we monitor the ratio of Collision-Free Trajectories (CFT) across all difficulties.", "A trajectory is deemed collision-free if it does not collide during the experiment.", "d) Results: We present quantitative navigation results in Table I ."], "citing_paper_content": {"title": "One-4-All: Neural Potential Fields For Embodied Navigation", "abstract": "A fundamental task in robotics is to navigate between two locations. In particular, real-world navigation can require long-horizon planning using high-dimensional RGB images, which poses a substantial challenge for end-to-end learning-based approaches. Current semi-parametric methods instead achieve long-horizon navigation by combining learned modules with a topological memory of the environment, often represented as a graph over previously collected images. However, using these graphs in practice typically involves tuning a number of pruning heuristics to avoid spurious edges, limit runtime memory usage and allow reasonably fast graph queries. In this work, we present One-4-All (O4A), a method leveraging self-supervised and manifold learning to obtain a graph-free, end-to-end navigation pipeline in which the goal is specified as an image.
Navigation is achieved by greedily minimizing a potential function defined continuously over the O4A latent space. Our system is trained offline on non-expert exploration sequences of RGB data and controls, and does not require any depth or pose measurements. We show that O4A can reach long-range goals in 8 simulated Gibson indoor environments, and further demonstrate successful real-world navigation using a Jackal UGV platform. a * Equal contribution. Author ordering determined by competitive duckcalling, where the winner was selected by a blind jury on their ability to recreate various duck calls."}, "cited_paper_content": {"title": "On Evaluation Of Embodied Navigation Agents", "abstract": "Skillful mobile operation in three-dimensional environments is a primary topic of study in Artificial Intelligence. The past two years have seen a surge of creative work on navigation. This creative output has produced a plethora of sometimes incompatible task definitions and evaluation protocols. To coordinate ongoing and future research in this area, we have convened a working group to study empirical methodology in navigation research. The present document summarizes the consensus recommendations of this working group. 
We discuss different problem statements and the role of generalization, present evaluation measures, and provide standard scenarios that can be used for benchmarking."}, "keywords": ["navigation performance"], "citation_intent": "method"} {"citing_id": "2304.14418v1", "cited_id": "1504.06852", "section_title": "Introduction", "citation": "FlowNet #REFR introduced a learning based two-frame optical flow estimation method using CNNs wherein a single flow field is estimated from a pair of image frames.", "text_before_citation": ["Horn and Schunck #OTHEREFR was a pioneering algorithm that introduced a variational energy minimization approach to estimate optical flow using a brightness consistency assumption (pixel intensity during pixel coordinate transformation assumed to be constant for a smaller temporal change) and a spatial smoothness constraint for the optical flow field as part of the objective function.", "Other classical multi-frame methods use a bank of hand-crafted motion filters tuned to capture moving patterns and texture characteristics from each image sequence to estimate the direction and magnitude of optical flow fields #OTHEREFR .", "With the recent advancement of deep learning and availability of large datasets with known ground truth, optical flow estimation is reformulated as an end-to-end learning problem without requiring assumptions regarding the characteristics of images and of motion patterns.", "Motivated by Heeger's #OTHEREFR approach to use hand crafted spatiotemporal Gabor filters, Teney et al.", "#OTHEREFR developed a learning-based multi-frame optical flow estimation model based on learnable 3D convolutional neural networks (CNN) and signal processing concepts."], "text_after_citation": ["Several other learning based two-frame methods improved upon the FlowNet model, namely the coarse-to-fine pyramid network #OTHEREFR , multiple intermediate flow estimates and warping based brightness error computation #OTHEREFR , receptive field guided 
motion feature extraction networks #OTHEREFR and the 4D all-pairs correlation volume with gated recurrent network #OTHEREFR .", "In dynamic scenes, limited temporal information available from only two frames causes the two-frame methods to have poor generalizations in and near occluded regions as well as in out-of-boundary regions.", "This is likely because in these regions the flow dynamics are often complex and cannot be fully captured using only two frames.", "In regions with disappearing scene elements, learning based two-frame methods do not have access to context Figure 1 : Examples of optical flow estimates (OFEs) using our methods compared with recent state-of-the-art two-frame and multi-frame methods.", "The regions bounded by red boxes in the input frames represent the regions where our method significantly outperformed the other methods."], "citing_paper_content": {"title": "Sstm: Spatiotemporal Recurrent Transformers For Multi-Frame Optical Flow Estimation", "abstract": "Inaccurate optical flow estimates in and near occluded regions, and out-ofboundary regions are two of the current significant limitations of optical flow estimation algorithms. Recent state-of-the-art optical flow estimation algorithms are two-frame based methods where optical flow is estimated sequentially for each consecutive image pair in a sequence. While this approach gives good flow estimates, it fails to generalize optical flows in occluded regions mainly due to limited local evidence regarding moving elements in a scene. In this work, we propose a learning-based multi-frame optical flow estimation method that estimates two or more consecutive optical flows in parallel from multi-frame image sequences. 
Our underlying hypothesis is that by understanding temporal scene dynamics from longer sequences with more than two frames, we can characterize pixel-wise dependencies in a larger spatiotemporal domain, generalize complex motion patterns and thereby improve the accuracy of optical flow estimates in occluded regions. We present learning-based spatiotemporal recurrent transformers for multi-frame based optical flow estimation (SSTMs). Our method utilizes 3D Convolutional Gated Recurrent Units (3D-ConvGRUs) and spatiotemporal transformers to learn recurrent space-time motion dynamics and global dependencies in the scene and provide a generalized optical flow estimation. When compared with recent state-of-the-art two-frame and multi-frame methods on real world and synthetic datasets, performance of the SSTMs was significantly higher in occluded and out-of-boundary regions. Among all published state-of-the-art multi-frame methods, SSTM achieved state-of-the-art results on the Sintel Final and KITTI2015 benchmark datasets."}, "cited_paper_content": {"title": "Flownet: Learning Optical Flow With Convolutional Networks", "abstract": "Convolutional neural networks (CNNs) have recently been very successful in a variety of computer vision tasks, especially on those linked to recognition. Optical flow estimation has not been among the tasks CNNs succeeded at. In this paper we construct CNNs which are capable of solving the optical flow estimation problem as a supervised learning task. We propose and compare two architectures: a generic architecture and another one including a layer that correlates feature vectors at different image locations. Since existing ground truth data sets are not sufficiently large to train a CNN, we generate a large synthetic Flying Chairs dataset. 
We show that networks trained on this unrealistic data still generalize very well to existing datasets such as Sintel and KITTI, achieving competitive accuracy at frame rates of 5 to 10 fps."}, "keywords": ["optical flow estimation"], "citation_intent": "method"} {"citing_id": "2303.10369v1", "cited_id": "1705.09406", "section_title": "Multimodal Quality Alignment", "citation": "Existing methods build the multimodal alignment by designing constraints across different modalities (called crossmodal constraints #REFR ).", "text_before_citation": ["In BMQA, multimodal quality alignment refers to finding the corresponding quality representation relationship between two modalities.", "The image and QSD are obtained for the same scene, and the quality description can be highly consistent #OTHEREFR .", "For example, the QSD keywords can directly indicate the image regions that the visual attention focuses on #OTHEREFR , thus improving the learning performance of an image feature encoder."], "text_after_citation": ["In the construction of training objective, it is necessary to define a special metric to measure the difference between two modalities.", "If two modalities come from different perspectives of a single sensor, the metric is defined as an absolute value error, such as mean absolute error (MAE) and mean square error (MSE).", "If two modalities come from different sensors, the metric is defined as a relative value error, such as cosine similarity #OTHEREFR .", "The image and QSD are heterogeneous modalities.", "Therefore, we adopt the cosine similarity to measure the relative difference, and design an attentive pooling for multimodal quality alignment."], "citing_paper_content": {"title": "Blind Multimodal Quality Assessment: A Brief Survey And A Case Study Of Low-Light Images", "abstract": "Blind image quality assessment (BIQA) aims at automatically and accurately forecasting objective scores for visual signals, which has been widely used to monitor product and 
service quality in low-light applications, covering smartphone photography, video surveillance, autonomous driving, etc. Recent developments in this field are dominated by unimodal solutions inconsistent with human subjective rating patterns, where human visual perception is simultaneously reflected by multiple sensory information (e.g., sight and hearing). In this article, we present a unique blind multimodal quality assessment (BMQA) of low-light images from subjective evaluation to objective score. To investigate the multimodal mechanism, we first establish a multimodal low-light image quality (MLIQ) database with authentic low-light distortions, containing image and audio modality pairs. Further, we specially design the key modules of BMQA, considering multimodal quality representation, latent feature alignment and fusion, and hybrid self-supervised and supervised learning. Extensive experiments show that our BMQA yields state-of-the-art accuracy on the proposed MLIQ benchmark database. In particular, we also build an independent single-image modality Dark-4K database, which is used to verify its applicability and generalization performance in mainstream unimodal applications. Qualitative and quantitative results on Dark-4K show that BMQA achieves superior performance to existing BIQA approaches as long as a pre-trained quality semantic description model is provided. The proposed framework and two databases as well as the collected BIQA methods and evaluation metrics are made publicly available."}, "cited_paper_content": {"title": "Multimodal Machine Learning: A Survey And Taxonomy", "abstract": "Our experience of the world is multimodal - we see objects, hear sounds, feel texture, smell odors, and taste flavors. Modality refers to the way in which something happens or is experienced and a research problem is characterized as multimodal when it includes multiple such modalities. 
In order for Artificial Intelligence to make progress in understanding the world around us, it needs to be able to interpret such multimodal signals together. Multimodal machine learning aims to build models that can process and relate information from multiple modalities. It is a vibrant multi-disciplinary field of increasing importance and with extraordinary potential. Instead of focusing on specific multimodal applications, this paper surveys the recent advances in multimodal machine learning itself and presents them in a common taxonomy. We go beyond the typical early and late fusion categorization and identify broader challenges that are faced by multimodal machine learning, namely: representation, translation, alignment, fusion, and co-learning. This new taxonomy will enable researchers to better understand the state of the field and identify directions for future research."}, "keywords": ["multimodal alignment"], "citation_intent": "method"} {"citing_id": "2303.06314v2", "cited_id": "1912.04977", "section_title": "Introduction", "citation": "Despite the merits, FL in IoT systems could also meet with plenty of challenges due to potentially heterogeneous device capabilities and data distributions, which are also known as system heterogeneity and non-IID data #REFR .", "text_before_citation": ["As a result, training deep learning models centrally by gathering data from user devices might be prohibited and impractical.", "To alleviate the above issues, it is attractive to adopt federated learning (FL) #OTHEREFR - #OTHEREFR to collectively train models, which can take full advantages of the local data and edge computational resources in the IoT systems.", "FL is a distributed computing paradigm where multiple edge devices, i.e., clients, collaboratively train a global model without disclosing local sensitive data.", "Conventional FL systems usually consist of a central server and a number of clients, where each client updates the model based on local data and the 
server is responsible for synchronizing model parameters.", "The primary goal of FL is to efficiently train a global model with the highest possible accuracy #OTHEREFR , #OTHEREFR ."], "text_after_citation": ["The non-IID data in FL means the underlying distributions of local data are not identical across clients #OTHEREFR .", "System heterogeneity could be caused by different resource constraints, including storage, computing capability, energy and network bandwidth #OTHEREFR .", "In synchronous FL, aggregation is conducted when all the selected clients fully complete their local training and return their model updates.", "However, the clients could have significantly different performances in real-world scenarios, which implies that some faster clients have to wait for the slower ones, i.e., stragglers, resulting in unnecessary waiting time.", "Moreover, some clients might be unresponsive due to inactive devices and/or communication failures and thus induce unavailability of some local updates, i.e., dropout."], "citing_paper_content": {"title": "Stabilizing And Improving Federated Learning With Non-Iid Data And Client Dropout", "abstract": "The label distribution skew induced data heterogeneity has been shown to be a significant obstacle that limits the model performance in federated learning, which is particularly developed for collaborative model training over decentralized data sources while preserving user privacy. This challenge could be more serious when the participating clients are in unstable circumstances and dropout frequently. Previous work and our empirical observations demonstrate that the classifier head for classification task is more sensitive to label skew and the unstable performance of FedAvg mainly lies in the imbalanced training samples across different classes. The biased classifier head will also impact the learning of feature representations. 
Therefore, maintaining a balanced classifier head is of significant importance for building a better global model. To this end, we propose a simple yet effective framework by introducing a prior-calibrated softmax function for computing the cross-entropy loss and a prototype-based feature augmentation scheme to re-balance the local training, which are lightweight for edge devices and can facilitate the global model aggregation. The improved model performance over existing baselines in the presence of non-IID data and client dropout is demonstrated by conducting extensive experiments on benchmark classification tasks."}, "cited_paper_content": {"title": "Advances And Open Problems In Federated Learning", "abstract": "Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. 
Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges."}, "keywords": ["IoT systems"], "citation_intent": "background"} {"citing_id": "2303.14836v1", "cited_id": "1909.03496", "section_title": "Gnn Explanation", "citation": "Using the AFGs and their corresponding labels (benign or vulnerable) as the training dataset, one can train a GNN model for vulnerability detection, e.g., Devign #REFR .", "text_before_citation": ["Example with code vulnerability detection.", "Figure 1(a) shows an example source code with a \"double free\" vulnerability, which happens when the second free (line 12) is called after the first free (line 9).", "Vulnerability detection methods firstly convert the source code to an attributed graph.", "For example, we construct the attributed graph from the source code as shown in Figure 1 (b) by building the Attributed control and data Flow Graph (AFG) and encoding the syntax attributes for each node.", "The node denotes the statement, the edge denotes control or data flow between two statements, and the attributes include syntax features, such as which keywords are used in a statement."], "text_after_citation": ["For the AFG generated from the example source code in Figure 1 , nodes 9, 12 and the keyword free should be identified in the final explanation results.", "Figure 1 (c) presents the output from two recent representative works and ILLUMINATI.", "GNNExplainer estimates the edge importance from the AFG by learning the soft continuous edge masks.", "In this example, GNNExplainer identifies #OTHEREFR and #OTHEREFR as important and considers this subgraph as the explanation result.", "This is not accurate because node 12 is missed due to none of its edges is considered important."], "citing_paper_content": {"title": "Illuminati: Towards Explaining Graph Neural Networks For Cybersecurity Analysis", "abstract": "Graph neural networks (GNNs) have 
been utilized to create multi-layer graph models for a number of cybersecurity applications from fraud detection to software vulnerability analysis. Unfortunately, like traditional neural networks, GNNs also suffer from a lack of transparency, that is, it is challenging to interpret the model predictions. Prior works focused on specific factor explanations for a GNN model. In this work, we have designed and implemented ILLUMINATI, a comprehensive and accurate explanation framework for cybersecurity applications using GNN models. Given a graph and a pre-trained GNN model, ILLUMINATI is able to identify the important nodes, edges, and attributes that are contributing to the prediction while requiring no prior knowledge of GNN models. We evaluate ILLUMINATI in two cybersecurity applications, i.e., code vulnerability detection and smart contract vulnerability detection. The experiments show that ILLUMINATI achieves more accurate explanation results than state-of-the-art methods, specifically, 87.6% of subgraphs identified by ILLUMINATI are able to retain their original prediction, an improvement of 10.3% over others at 77.3%. Furthermore, the explanation of ILLUMINATI can be easily understood by the domain experts, suggesting the significant usefulness for the development of cybersecurity applications."}, "cited_paper_content": {"title": "Devign: Effective Vulnerability Identification By Learning Comprehensive Program Semantics Via Graph Neural Networks", "abstract": "Vulnerability identification is crucial to protect the software systems from attacks for cyber security. It is especially important to localize the vulnerable functions among the source code to facilitate the fix. However, it is a challenging and tedious process, and also requires specialized security expertise. 
Inspired by the work on manually-defined patterns of vulnerabilities from various code representation graphs and the recent advance on graph neural networks, we propose Devign, a general graph neural network based model for graph-level classification through learning on a rich set of code semantic representations. It includes a novel Conv module to efficiently extract useful features in the learned rich node representations for graph-level classification. The model is trained over manually labeled datasets built on 4 diversified large-scale open-source C projects that incorporate high complexity and variety of real source code instead of synthesis code used in previous works. The results of the extensive evaluation on the datasets demonstrate that Devign outperforms the state of the arts significantly with an average of 10.51% higher accuracy and 8.68% F1 score, increases averagely 4.66% accuracy and 6.37% F1 by the Conv module."}, "keywords": ["GNN model", "vulnerability detection"], "citation_intent": "method"} {"citing_id": "2304.09453v1", "cited_id": "1810.05270", "section_title": "Retraining Subnetworks", "citation": "When retrained for enough epochs (e.g., 600 or 800 epochs), retraining from scratch is able to produce comparable performance, which is consistent with #REFR .", "text_before_citation": ["Comparison between rewinding and fine-tuning shows that their performance is almost the same after retraining 200 epochs.", "However, in a low-epoch retraining regime, fine-tuning is more efficient than rewinding for filter pruning.", "From scratch.", "We also conduct experiments that retrain pruned subnetworks from scratch, where the parameters of a subnetwork are re-initialized after pruning. 
The original learning rate schedule is adopted during retraining.", "As shown in Figure 1 (right, green line), retraining from scratch requires more epochs to reduce accuracy drop compared to fine-tuning and rewinding."], "text_after_citation": ["3 Moreover, our experimental results suggest that it is unfair to compare retraining techniques under different epochs, since all of them benefit from more epochs.", "Discussion.", "In this subsection, we empirically study existing retraining techniques, including finetuning, rewinding and retraining from scratch.", "Our experimental results show that these retraining techniques can achieve similar performance if we retrain pruned subnetworks with enough epochs.", "However, fine-tuning for a few epochs is a more efficient choice among them as our focus is not on recovering the original accuracy as much as possible in the following. Finally, we introduce a standard setting for further exploration."], "citing_paper_content": {"title": "Network Pruning Spaces", "abstract": "Network pruning techniques, including weight pruning and filter pruning, reveal that most state-of-the-art neural networks can be accelerated without a significant performance drop. This work focuses on filter pruning which enables accelerated inference with any off-the-shelf deep learning library and hardware. We propose the concept of network pruning spaces that parametrize populations of subnetwork architectures. Based on this concept, we explore the structure aspect of subnetworks that result in minimal loss of accuracy in different pruning regimes and arrive at a series of observations by comparing subnetwork distributions. We conjecture through empirical studies that there exists an optimal FLOPs-to-parameter-bucket ratio related to the design of original network in a pruning regime. Statistically, the structure of a winning subnetwork guarantees an approximately optimal ratio in this regime. 
Upon our conjectures, we further refine the initial pruning space to reduce the cost of searching a good subnetwork architecture. Our experimental results on ImageNet show that the subnetwork we found is superior to those from the state-of-the-art pruning methods under comparable FLOPs."}, "cited_paper_content": {"title": "Rethinking The Value Of Network Pruning", "abstract": "Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned \"important\" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited \"important\" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. 
We also compare with the \"Lottery Ticket Hypothesis\" (Frankle & Carbin 2019), and find that with optimal learning rate, the \"winning ticket\" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization."}, "keywords": ["enough epochs"], "citation_intent": "result"} {"citing_id": "2304.02247v1", "cited_id": "1909.02670", "section_title": "Structural Analysis", "citation": "We use the BASIL news bias prediction dataset which contains sentence-level annotations for two types of bias #REFR : lexical and informational bias.", "text_before_citation": ["In this section, we analyze the structural properties of biased news articles using the main sentences identified by the model.", "To do so, we collect the predicted main sentences and then assess whether they capture elements of the formalized discourse structures commonly used in journalism.", "By validating these from the viewpoints of both summarization and structure, we ensure the reliability of using multiple attention heads as a form of explanation mechanism"], "text_after_citation": ["Lexical bias refers to the bias mainly caused by the word choice of the journalist, such as using polarized words (e.g.", "Donald Trump is investing more in the conspiracy theories about President Obama's birth certificate as he explores his own bid for the presidency.) And informational bias refers to the biased elaboration of certain events or facts, which includes using selective quotations to strengthen their viewpoint. (e.g. The Arizona group said the call from Mr.", "Trump on Wednesday came unexpectedly, and the group had spent much of the day Thursday scurrying to make travel arrangements to New York.)"], "citing_paper_content": {"title": "Disentangling Structure And Style: Political Bias Detection In News By Inducing Document Hierarchy", "abstract": "We address an important gap in detection of political bias in news articles. 
Previous works that perform supervised document classification can be biased towards the writing style of each news outlet, leading to overfitting and limited generalizability. Our approach overcomes this limitation by considering both the sentence-level semantics and the document-level rhetorical structure, resulting in a more robust and style-agnostic approach to detecting political bias in news articles. We introduce a novel multi-head hierarchical attention model that effectively encodes the structure of long documents through a diverse ensemble of attention heads. While journalism follows a formalized rhetorical structure, the writing style may vary by news outlet. We demonstrate that our method overcomes this domain dependency and outperforms previous approaches for robustness and accuracy. Further analysis demonstrates the ability of our model to capture the discourse structures commonly used in the journalism domain."}, "cited_paper_content": {"title": "In Plain Sight: Media Bias Through The Lens Of Factual Reporting", "abstract": "The increasing prevalence of political bias in news media calls for greater public awareness of it, as well as robust methods for its detection. While prior work in NLP has primarily focused on the lexical bias captured by linguistic attributes such as word choice and syntax, other types of bias stem from the actual content selected for inclusion in the text. In this work, we investigate the effects of informational bias: factual content that can nevertheless be deployed to sway reader opinion. We first produce a new dataset, BASIL, of 300 news articles annotated with 1,727 bias spans and find evidence that informational bias appears in news articles more frequently than lexical bias. We further study our annotations to observe how informational bias surfaces in news articles by different media outlets. 
Lastly, a baseline model for informational bias prediction is presented by fine-tuning BERT on our labeled data, indicating the challenges of the task and future directions."}, "keywords": ["sentence-level annotations", "bias"], "citation_intent": "method"} {"citing_id": "2304.00673v1", "cited_id": "1903.11027", "section_title": "Setup", "citation": "For Car, we evaluate models on NuScenes #REFR , a driving dataset with 3D detection and tracking annotations.", "text_before_citation": ["For the additional experiment on shape reconstruction, we use Chamfer distance and F1 score, following #OTHEREFR .", "Datasets.", "We evaluate all methods on three categories of objects: Chair #OTHEREFR , Table #OTHEREFR , and Car #OTHEREFR .", "For all categories, we first pre-train each method on ShapeNet #OTHEREFR , a dataset of synthetic objects, to obtain a category-level prior.", "For Chair and Table, we evaluate models on ScanNet #OTHEREFR , a dataset of real-world indoor scene scans."], "text_after_citation": ["For more details on these datasets, refer to section C of the supplementary material.", "Baselines.", "For partial-view novel view synthesis, we compare FINV against Instant-NGP #OTHEREFR , pixelNeRF #OTHEREFR , IBRNet #OTHEREFR , IBRNet fine-tuned during test time, Au-toRF #OTHEREFR , and EG3D with pivotal tuning inversion (EG3D+PTI) #OTHEREFR .", "We use open-source implementations of each method when available, or private implementations shared by the authors.", "Note that AutoRF is originally proposed for reconstruction from one single view, but it supports multiple input views as well, which we find to improve reconstruction quality."], "citing_paper_content": {"title": "Partial-View Object View Synthesis Via Filtering Inversion", "abstract": "We propose Filtering Inversion (FINV), a learning framework and optimization process that predicts a renderable 3D object representation from one or few partial views. 
FINV addresses the challenge of synthesizing novel views of objects from partial observations, spanning cases where the object is not entirely in view, is partially occluded, or is only observed from similar views. To achieve this, FINV learns shape priors by training a 3D generative model. At inference, given one or more views of a novel real-world object, FINV first finds a set of latent codes for the object by inverting the generative model from multiple initial seeds. Maintaining the set of latent codes, FINV filters and resamples them after receiving each new observation, akin to particle filtering. The generator is then finetuned for each latent code on the available views in order to adapt to novel objects. We show that FINV successfully synthesizes novel views of real-world objects (e.g., chairs, tables, and cars), even if the generative prior is trained only on synthetic objects. The ability to address the sim-to-real problem allows FINV to be used for object categories without real-world datasets. FINV achieves state-of-the-art performance on multiple real-world datasets, recovers object shape and texture from partial and sparse views, is robust to occlusion, and is able to incrementally improves its representation with more observations. * This work was partly done during an internship at Nvidia."}, "cited_paper_content": {"title": "Nuscenes: A Multimodal Dataset For Autonomous Driving", "abstract": "Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. 
As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first published dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online at this http URL."}, "keywords": ["3D detection", "driving dataset"], "citation_intent": "method"} {"citing_id": "2303.18201v1", "cited_id": "1904.12274", "section_title": "Outlier Analysis", "citation": "Cauchy loss #REFR , however, performed the best as compared to the other loss functions, as evident from Table 7 .", "text_before_citation": ["We now present the robustness of TPMCF in the presence of outliers.", "Table 7 presents the performance of TPMCF with four different loss functions for various values of the outlier ratio (\u03bb) on RT-10 dataset.", "It may be noted that meansquared-error (MSE) #OTHEREFR is outlier sensitive, and hence, the performance of TPMCF with MSE as the loss function was the worst.", "Mean-absolute-error (MAE) #OTHEREFR , Huber loss #OTHEREFR , and Cauchy loss are comparatively outlier resilient."], "text_after_citation": ["Table 8 presents the impact of the outliers on the prediction accuracy of TPMCF for all four datasets.", "As observed from Table 8 , the outliers have a severe impact on the performance of TPMCF.", "In Figure 6 , we report the performance Figure 6 , when we removed the first 2% outliers (\u03bb 
= 0.02), we achieved maximum performance gain (PG) that is defined as follows:", "PG(m_1, m_2) = (m_1 \u2212 m_2) / m_2 \u00d7 100% (7)", "where m_1 and m_2 represent the MAE of TPMCF after removing outliers with the ratios \u03bb_1 and \u03bb_2, respectively (\u03bb_1 < \u03bb_2)."], "citing_paper_content": {"title": "Tpmcf: Temporal Qos Prediction Using Multi-Source Collaborative Features", "abstract": "In recent times, with the proliferation of online activities and the rapid deployment of web service APIs, personalized service recommendations have played a paramount role in the growth of the e-commerce industry. The performance of web services is one of the standard measures for service recommendation. Quality-of-Service (QoS) parameters determining the service performance fluctuate over time for a user. Thus, for a given user, the service QoS prediction over time plays an essential part in identifying a suitable service to invoke among a pool of services having the same functionality. The contemporary temporal QoS prediction methods hardly achieved the desired accuracy due to various limitations, such as data sparsity, the presence of outliers, and the inability to capture higher-order temporal relationships among user-service interactions. Even though some recent recurrent neural network-based sequential architectures can model temporal relationships among QoS data, the prediction accuracy degrades due to the absence of other features (e.g., collaborative self features, collaborative spatial features of users/services) to comprehend the relationship among the user-service interactions. In addition to the lack of effective representation of implicit features, having the same attention across all the features in every timestep may impede improving the prediction accuracy.
This paper addresses the above challenges and proposes a scalable strategy for Temporal QoS Prediction using Multi-source Collaborative Features (namely, TPMCF) enabling faster responsiveness while attaining high prediction accuracy. Our work combines the collaborative features of the user/service by exploiting the user-service relationship with the spatio-temporal auto-extracted features by employing graph convolution and a variant of transformer encoder with multi-head self-attention. While the graph convolutional is responsible for automatic feature extraction exploiting the spatial information, the transformer encoder is accountable for capturing the temporal dependency among QoS data for prediction. We validated our proposed method on WS-DREAM-2 benchmark datasets. Extensive experiments showed that TPMCF outperformed the major state-of-the-art approaches in terms of prediction accuracy while ensuring high scalability and reasonably faster responsiveness."}, "cited_paper_content": {"title": "Robust Subspace Clustering By Cauchy Loss Function", "abstract": "Subspace clustering is a problem of exploring the low-dimensional subspaces of high-dimensional data. State-of-the-art approaches are designed by following the model of spectral clustering-based method. These methods pay much attention to learn the representation matrix to construct a suitable similarity matrix and overlook the influence of the noise term on subspace clustering. However, the real data are always contaminated by the noise and the noise usually has a complicated statistical distribution. To alleviate this problem, in this paper, we propose a subspace clustering method based on Cauchy loss function (CLF). Particularly, it uses CLF to penalize the noise term for suppressing the large noise mixed in the real data. 
This is due to that the CLF\u2019s influence function has an upper bound that can alleviate the influence of a single sample, especially the sample with a large noise, on estimating the residuals. Furthermore, we theoretically prove the grouping effect of our proposed method, which means that highly correlated data can be grouped together. Finally, experimental results on five real data sets reveal that our proposed method outperforms several representative clustering methods."}, "keywords": ["loss functions", "Cauchy loss"], "citation_intent": "result"} {"citing_id": "2303.06884v1", "cited_id": "1412.6550", "section_title": "Results", "citation": "For instance, compared with FitNets #REFR which directly mimics the teacher features, our DSKD can bring more than 2 mIoU, showing the effectiveness of the proposed relationbased distillation algorithm.", "text_before_citation": ["Methods mIoU SCPNet w/o DSKD 34.4 + KD #OTHEREFR 33.8 + FitNets #OTHEREFR 33.8 + PKT #OTHEREFR 34.6 + PVKD #OTHEREFR 36.2 + NST #OTHEREFR 36.3 + DSKD 37.2 two large-scale benchmarks strongly demonstrate the superiority of our SCPNet .", "Besides, we use the trained weight of the segmentation sub-network as initialization to train Cylinder3D on the Se-manticKITTI semantic segmentation task.", "From Table 3 , Cylinder3D initialized from trained weight of the completion task outperforms the original Cylinder3D model by 2.6 mIoU, and achieves impressive segmentation performance among various competitive LiDAR segmentation models such as 2DPASS #OTHEREFR , PVKD #OTHEREFR and RPVNet #OTHEREFR .", "The encouraging results show that knowledge learned in the completion task is also beneficial to the segmentation task. 
Comparison with baseline KD algorithms.", "From Table 4 , it is evident that the proposed DSKD method can bring more gains than conventional knowledge distillation algorithms."], "text_after_citation": ["The vanilla KD objective and FitNets hamper the performance of the base model, indicating that directly mimicking the logits or features can not boost the completion performance. Qualitative results.", "We also provide visual comparison between JS3C-Net #OTHEREFR , SCPNet (single-frame) and SCP-Net (multi-frame). As can be seen from Fig.", "6 , our SCP-Net (single-frame) make more accurate completion predictions than JS3C-Net on road and vegetation.", "On long, thin objects such as poles, our single-frame model also yields high-quality completion results compared with JS3C-Net.", "The predictions of our single-frame model also resemble those of the multi-frame network, demonstrating the efficacy of the proposed DSKD algorithm."], "citing_paper_content": {"title": "Scpnet: Semantic Scene Completion On Point Cloud", "abstract": "Training deep models for semantic scene completion (SSC) is challenging due to the sparse and incomplete input, a large quantity of objects of diverse scales as well as the inherent label noise for moving objects. To address the above-mentioned problems, we propose the following three solutions: 1) Redesigning the completion sub-network. We design a novel completion sub-network, which consists of several Multi-Path Blocks (MPBs) to aggregate multi-scale features and is free from the lossy downsampling operations. 2) Distilling rich knowledge from the multi-frame model. We design a novel knowledge distillation objective, dubbed Dense-to-Sparse Knowledge Distillation (DSKD). It transfers the dense, relation-based semantic knowledge from the multi-frame teacher to the single-frame student, significantly improving the representation learning of the single-frame model. 3) Completion label rectification. 
We propose a simple yet effective label rectification strategy, which uses off-the-shelf panoptic segmentation labels to remove the traces of dynamic objects in completion labels, greatly improving the performance of deep models especially for those moving objects. Extensive experiments are conducted in two public SSC benchmarks, i.e., Se-manticKITTI and SemanticPOSS. Our SCPNet ranks 1st on SemanticKITTI semantic scene completion challenge and surpasses the competitive S3CNet [3] by 7.2 mIoU. SCP-Net also outperforms previous completion algorithms on the SemanticPOSS dataset. Besides, our method also achieves competitive results on SemanticKITTI semantic segmentation tasks, showing that knowledge learned in the scene completion is beneficial to the segmentation task."}, "cited_paper_content": {"title": "Fitnets: Hints For Thin Deep Nets", "abstract": "While depth tends to improve network performances, it also makes gradient-based training more difficult since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. 
For example, on CIFAR-10, a deep student network with almost 10.4 times less parameters outperforms a larger, state-of-the-art teacher network."}, "keywords": ["proposed relationbased distillation"], "citation_intent": "method"} {"citing_id": "2303.08848v1", "cited_id": "1901.02446", "section_title": "Ii. Related Works", "citation": "Panoptic FPN #REFR also uses a semantic segmentation branch and a Mask R-CNN instance segmentation branch, but it uses a shared Feature Pyramid Network network as the backbone.", "text_before_citation": ["Panoptic Segmentation.", "This segmentation task can generate a scene representation that contains the segmentation mask of \"stuff\" (non-instance such as sky and road) and \"thing\" (such as cars and humans which we need to identify different instances), therefore, unifying semantic segmentation and instance segmentation.", "Panoptic Segmentation also separates objects' instances with different instance ID #OTHEREFR .", "UPSNet #OTHEREFR proposes a unified panoptic segmentation network which combines a semantic segmentation head with a Mask R-CNN #OTHEREFR style instance segmentation head and introduces a parameter-free panoptic head to predict the panoptic label."], "text_after_citation": ["FPSNet #OTHEREFR solves the same problem as a dense pixel-wise classification problem, which predicts a class label or an instance ID for each pixel, which runs faster than the above methods.", "Panoptic-DeepLab #OTHEREFR uses a class-agnostic instance segmentation branch that includes a simple instance center regression, which improves the accuracy of panoptic segmentation with a simple and fast design.", "Our work is inspired by the design choice of Panoptic-DeepLab.", "However, rather than separating the prediction of the instance edges and semantic edges, we obtain the instance edges by combining the prediction of semantic edges, the center prediction of instances, and center regression of the instances.", "Finally, we fuse the semantic 
edges with the instance edges to produce the panoptic edges."], "citing_paper_content": {"title": "Penet: A Joint Panoptic Edge Detection Network", "abstract": "In recent years, compact and efficient scene understanding representations have gained popularity in increasing situational awareness and autonomy of robotic systems. In this work, we illustrate the concept of a panoptic edge segmentation and propose PENet, a novel detection network called that combines semantic edge detection and instancelevel perception into a compact panoptic edge representation. This is obtained through a joint network by multi-task learning that concurrently predicts semantic edges, instance centers and offset flow map without bounding box predictions exploiting the cross-task correlations among the tasks. The proposed approach allows extending semantic edge detection to panoptic edge detection which encapsulates both category-aware and instanceaware segmentation. We validate the proposed panoptic edge segmentation method and demonstrate its effectiveness on the real-world Cityscapes dataset."}, "cited_paper_content": {"title": "Panoptic Feature Pyramid Networks", "abstract": "The recently introduced panoptic segmentation task has renewed our community's interest in unifying the tasks of instance segmentation (for thing classes) and semantic segmentation (for stuff classes). However, current state-of-the-art methods for this joint task use separate and dissimilar networks for instance and semantic segmentation, without performing any shared computation. In this work, we aim to unify these methods at the architectural level, designing a single network for both tasks. Our approach is to endow Mask R-CNN, a popular instance segmentation method, with a semantic segmentation branch using a shared Feature Pyramid Network (FPN) backbone. 
Surprisingly, this simple baseline not only remains effective for instance segmentation, but also yields a lightweight, top-performing method for semantic segmentation. In this work, we perform a detailed study of this minimally extended version of Mask R-CNN with FPN, which we refer to as Panoptic FPN, and show it is a robust and accurate baseline for both tasks. Given its effectiveness and conceptual simplicity, we hope our method can serve as a strong baseline and aid future research in panoptic segmentation."}, "keywords": ["semantic segmentation branch"], "citation_intent": "method"} {"citing_id": "2305.01447v1", "cited_id": "1612.00837", "section_title": "Multimodal Neural Databases", "citation": "Recently, deep learning techniques, particularly large deep learning models, have shown excellent reasoning capabilities #REFR .", "text_before_citation": ["For instance, a set query can produce answers that include images, audio, and natural language (and their combination) seamlessly.", "Designing a Multimodal Neural Database presents several substantial challenges.", "First, it is crucial that the system is able to reason on the modalities given in input.", "For instance, if I were to look for images of cats and dogs fighting, I need to recognize both the presence of these animals and that the interactions between the two is indeed that of fighting (a poster of Mike Tyson boxing in the background is not sufficient).", "Similarly, if the query mentions someone whispering or yelling, the system must understand such subtleties in an audio frame."], "text_after_citation": ["The tasks of Visual Question Answering and multi-hop question answering have reached near human results #OTHEREFR for natural language processing, with promising candidates in the multimodal setting as well.", "However, these models are usually extremely large, with billions of parameters, leading to the next challenge, namely scale.", "Given a large collection of documents, it is infeasible to 
run such models on every query-document pair, or even on every document for that matter.", "Open domain question answering systems (ODQA), developed for answering queries from natural language text, provide a methodology for scaling to larger document collections.", "ODQA answers a query by first retrieving relevant documents from the document collection and feeding them as context to a transformer along with the query."], "citing_paper_content": {"title": "Multimodal Neural Databases", "abstract": "The rise in loosely-structured data available through text, images, and other modalities has called for new ways of querying them. Multimedia Information Retrieval has filled this gap and has witnessed exciting progress in recent years. Tasks such as search and retrieval of extensive multimedia archives have undergone massive performance improvements, driven to a large extent by recent developments in multimodal deep learning. However, methods in this field remain limited in the kinds of queries they support and, in particular, their inability to answer database-like queries. For this reason, inspired by recent work on neural databases, we propose a new framework, which we name Multimodal Neural Databases (MM-NDBs). MMNDBs can answer complex database-like queries that involve reasoning over different input modalities, such as text and images, at scale. In this paper, we present the first architecture able to fulfill this set of requirements and test it with several baselines, showing the limitations of currently available models. The results show the potential of these new techniques to process unstructured data coming from different modalities, paving the way for future research in the area. 
Code to replicate the experiments will be released at https://github.com/GiovanniTRA/MultimodalNeuralDatabases CCS CONCEPTS \u2022 Information systems \u2192 Information retrieval; Multimedia and multimodal retrieval."}, "cited_paper_content": {"title": "Making The V In Vqa Matter: Elevating The Role Of Image Understanding In Visual Question Answering", "abstract": "Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at this http URL as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners.
Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users."}, "keywords": ["deep learning techniques", "excellent reasoning capabilities"], "citation_intent": "background"} {"citing_id": "2303.17937v1", "cited_id": "1703.01780", "section_title": "Domain Adaptive Object Detection", "citation": "To enhance the robustness of the cross-domain model, both UMT and TDD utilize the teacher-student learning scheme, in which UMT adopts Mean Teacher #REFR and TDD adopts Dual-branch detection network.", "text_before_citation": ["Self-training is further regularized by distribution alignment for improvement robustness.", "consistency between image-level and instance-level predictions to re-weight the instance-level alignment. ii) Training from noisy labels, a.k.a. self-training, e.g., NL #OTHEREFR and WST-BSR #OTHEREFR .
iii) Based on sample generation strategy.", "In this line of works, DD-MRL #OTHEREFR leverages an imageto-image translation via GAN to generate various distinctive shifted domains from the source domain.", "AFAN #OTHEREFR obtains the intermediate domain (fusing the source and target domains) by interpolation.", "UMT #OTHEREFR and TDD #OTHEREFR utilize both the source-like and target-like images to perform the cross-domain distillation."], "text_after_citation": ["In this work, a momentum-updated Faster R-CNN is performed for more stability in test-time adaptation.", "Although the excellent performance is reached, all UDAOD methods require access to the source domain data during the adaptation process.", "When the source data is not accessible due to privacy issues or storage overhead, more challenging settings are emerged with source-free domain adaptation #OTHEREFR and test-time adaptation #OTHEREFR ."], "citing_paper_content": {"title": "Stfar: Improving Object Detection Robustness At Test-Time By Self-Training With Feature Alignment Regularization", "abstract": "Domain adaptation helps generalizing object detection models to target domain data with distribution shift. It is often achieved by adapting with access to the whole target domain data. In a more realistic scenario, target distribution is often unpredictable until inference stage. This motivates us to explore adapting an object detection model at test-time, a.k.a. test-time adaptation (TTA). In this work, we approach test-time adaptive object detection (TTAOD) from two perspective. First, we adopt a self-training paradigm to generate pseudo labeled objects with an exponential moving average model. The pseudo labels are further used to supervise adapting source domain model. As self-training is prone to incorrect pseudo labels, we further incorporate aligning feature distributions at two output levels as regularizations to self-training. 
To validate the performance on TTAOD, we create benchmarks based on three standard object detection datasets and adapt generic TTA methods to object detection task. Extensive evaluations suggest our proposed method sets the state-of-the-art on test-time adaptive object detection task."}, "cited_paper_content": {"title": "Weight-Averaged Consistency Targets Improve Semi-Supervised Deep Learning Results", "abstract": "The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when learning large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We also show that a good network architecture is crucial to performance. Combining Mean Teacher and Residual Networks, we improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%, and on ImageNet 2012 with 10% of the labels from 35.24% to 9.11%."}, "keywords": ["Dual-branch detection network"], "citation_intent": "method"} {"citing_id": "2305.01356v1", "cited_id": "1809.11147", "section_title": "Covering With Hyperbolic Quadtrees", "citation": "The following lemma was used to define these useful shifts. Lemma 8 (Chan et al.
#REFR ).", "text_before_citation": ["R a \u2022 2 2 \u22121 \u221a d \u2212 1 , 2 b\u20222 , 2 2 \u22121 \u221a d \u2212 1 , 2", "for each (a, b) where a \u2208 Z d\u22121 and b \u2208 Z.", "As a result we get the infinite quadtree Q d \u221e where for each integer the cells of the quadtree define a subdivision of H d .", "For x, y \u2208 R + we define x mod y = x \u2212 y x/y . Chan et al.", "#OTHEREFR observed that shifting a quadtree by certain special vectors results in a useful shifts also for levels with smaller cells."], "text_after_citation": ["Let n > 1 be a positive odd integer, and consider the set", "X = {i/n | i = 0, . . . , n \u2212 1}.", "Then, for any \u03b1 = 2 \u2212 , where \u2265 0 is an integer, we have that X mod \u03b1 = {i/n mod \u03b1 | i = 0, . . .", ", n \u2212 1} is equal to the set \u03b1X = {\u03b1i/n | i = 0, . . . , n \u2212 1}.", "We will look at the projection \u03c0 z (x, z) = z, or more precisely log \u03c0 z (x, z)."], "citing_paper_content": {"title": "A Quadtree For Hyperbolic Space", "abstract": "We propose a data structure in d-dimensional hyperbolic space that can be considered a natural counterpart to quadtrees in Euclidean spaces. Based on this data structure we propose a so-called L-order for hyperbolic point sets, which is an extension of the Z-order defined in Euclidean spaces. 
We demonstrate the usefulness of our hyperbolic quadtree data structure by giving an algorithm for constant-approximate closest pair and dynamic constant-approximate nearest neighbours in hyperbolic space of constant dimension d."}, "cited_paper_content": {"title": "On Locality-Sensitive Orderings And Their Applications", "abstract": "For any constant $d$ and parameter $\\varepsilon > 0$, we show the existence of (roughly) $1/\\varepsilon^d$ orderings on the unit cube $[0,1)^d$, such that any two points $p,q\\in [0,1)^d$ that are close together under the Euclidean metric are \"close together\" in one of these linear orderings in the following sense: the only points that could lie between $p$ and $q$ in the ordering are points with Euclidean distance at most $\\varepsilon\\| p - q \\|$ from $p$ or $q$. These orderings are extensions of the $\\mathcal{Z}$-order, and they can be efficiently computed. Functionally, the orderings can be thought of as a replacement to quadtrees and related structures (like well-separated pair decompositions). 
We use such orderings to obtain surprisingly simple algorithms for a number of basic problems in low-dimensional computational geometry, including (i) dynamic approximate bichromatic closest pair, (ii) dynamic spanners, (iii) dynamic approximate minimum spanning trees, (iv) static and dynamic fault-tolerant spanners, and (v) approximate nearest neighbor search."}, "keywords": ["Chan", "Lemma"], "citation_intent": "background"} {"citing_id": "2305.02299v1", "cited_id": "1706.02677", "section_title": "Resnet-50 Trained On Imagenet", "citation": "Linearly scaling the learning rate in this manner was included in the original RigL source code and is further motivated by #REFR .", "text_before_citation": ["Our training regimen for training on the ImageNet dataset follows #OTHEREFR", "(2021) with the exception of using a mini-batch size of 512 instead of 4096.", "We linearly scale the learning rate \u2206T to account for our smaller batch size.", "Our learning rate uses a linear warm-up to reach a maximum value of 0.2 at epoch five and is reduced by a factor of 10 at epochs 30, 70, and 90.", "Using a mini-batch of 512, we train the networks for 256,000 steps to match RigL's training duration."], "text_after_citation": ["We increase \u2206T to 800 and average the dense gradients over eight mini-batch steps to ensure that SRigL has the same quality of parameter saliency information available as RigL.", "We use a cosine connectivity update schedule with \u03b1 = 0.3.", "We initialize the sparse model weights per #OTHEREFR .", "We train the networks using SGD with momentum, L2 weight decay, and label smoothing #OTHEREFR coefficients of 0.9, 1e-4 and 0.1, respectively.", "We set the minimum percentage of salient weights per neuron to 30% based on our grid search presented in Fig. 
9."], "citing_paper_content": {"title": "Dynamic Sparse Training With Structured Sparsity", "abstract": "Dynamic Sparse Training (DST) methods achieve state-of-the-art results in sparse neural network training, matching the generalization of dense models while enabling sparse training and inference. Although the resulting models are highly sparse and theoretically cheaper to train, achieving speedups with unstructured sparsity on real-world hardware is challenging. In this work we propose a DST method to learn a variant of structured N:M sparsity, the acceleration of which in general is commonly supported in commodity hardware. Furthermore, we motivate with both a theoretical analysis and empirical results, the generalization performance of our specific N:M sparsity (constant fan-in), present a condensed representation with a reduced parameter and memory footprint, and demonstrate reduced inference time compared to dense models with a naive PyTorch CPU implementation of the condensed representation. Our source code is available at github.com/calgaryml/condensed-sparsity."}, "cited_paper_content": {"title": "Accurate, Large Minibatch Sgd: Training Imagenet In 1 Hour", "abstract": "Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images.
To achieve this result, we adopt a hyper-parameter-free linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves ~90% scaling efficiency when moving from 8 to 256 GPUs. Our findings enable training visual recognition models on internet-scale data with high efficiency."}, "keywords": ["learning rate"], "citation_intent": "background"} {"citing_id": "2303.11040v1", "cited_id": "1903.11027", "section_title": "Nuscenes-C", "citation": "The nuScenes dataset #REFR contains 1000 sequences of approximately 20s duration with a LiDAR frequency of 20 FPS. The box annotations are provided for every 0.5s.", "text_before_citation": [], "text_after_citation": ["Each frame has one point cloud and six images covering 360\u00b0 horizontal FOV.", "In total, there are 40k frames which are split into 28k, 6k, 6k for training, validation, and testing.", "As the dataset provides full annotations and information of vehicle pose and timestamp, we can simulate all corruptions.", "Thus, we apply all 27 corruptions to the nuScenes validation set with 5 severities to obtain nuScenes-C.", "For 3D object detection, the main evaluation metrics are mean Average Precision (mAP) and nuScenes detection score (NDS) computed on 10 object categories."], "citing_paper_content": {"title": "Benchmarking Robustness Of 3D Object Detection To Common Corruptions In Autonomous Driving", "abstract": "3D object detection is an important task in autonomous driving to perceive the surroundings.
Despite the excellent performance, the existing 3D detectors lack the robustness to real-world corruptions caused by adverse weathers, sensor noises, etc., provoking concerns about the safety and reliability of autonomous driving systems. To comprehensively and rigorously benchmark the corruption robustness of 3D detectors, in this paper we design 27 types of common corruptions for both LiDAR and camera inputs considering real-world driving scenarios. By synthesizing these corruptions on public datasets, we establish three corruption robustness benchmarks-KITTI-C, nuScenes-C, and Waymo-C. Then, we conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their corruption robustness. Based on the evaluation results, we draw several important findings, including: 1) motion-level corruptions are the most threatening ones that lead to significant performance drop of all models; 2) LiDAR-camera fusion models demonstrate better robustness; 3) camera-only models are extremely vulnerable to image corruptions, showing the indispensability of LiDAR point clouds. We release the benchmarks and codes at https://github.com/kkkcx/3D_Corruptions_AD. We hope that our benchmarks and findings can provide insights for future research on developing robust 3D object detection models."}, "cited_paper_content": {"title": "Nuscenes: A Multimodal Dataset For Autonomous Driving", "abstract": "Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images.
In this work we present nuTonomy scenes (nuScenes), the first published dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online at this http URL."}, "keywords": ["LiDAR frequency"], "citation_intent": "method"} {"citing_id": "2304.12190v1", "cited_id": "1709.02012", "section_title": "Inherent Tradeoffs", "citation": "Namely, the slope of the lines of equal accuracy is determined by the base rates of the outcome within the group #REFR .", "text_before_citation": ["The circle at the intersection of the two curves represents the single operating point for which the false positive and false negative rates of the model are equal for both groups (equalized odds), while still being at an \"optimal\" operating point for both groups.", "In other words, all fair operating points besides the circle require a strict trade-off with the model's FP and/or FN rates.", "Because the base rates differ between groups, an additional problem arises: operating at the circle means that the model is miscalibrated for one or both of the groups.", "Consider the dotted lines, which represent levels of accuracy for the model on each group.", "Accuracy is the number of correct predictions divided by the total number of predictions, and is tightly linked to calibration."], "text_after_citation": ["Therefore, the model is both calibrated and equally accurate for both groups wherever the lines of equal accuracy intersect within the feasible region.", "We denote such an intersection by the 
square, which, notably, is not on either group's ROC curve. The square and circle will only intersect, i.e.", "equalized odds and fair calibration be simultaneously achieved, if the two groups have identical base rates or the classifier is perfect on both groups.", "Thus we have not only an error-fairness, but also a fairness-fairness tradeoff among operating points in the model.", "We can relax equalized odds by dividing it into simpler constraints that only ask for equivalence among the positive samples or negative ones."], "citing_paper_content": {"title": "Optimizing Fairness Tradeoffs In Machine Learning With Multiobjective Meta-Models", "abstract": "Improving the fairness of machine learning models is a nuanced task that requires decision makers to reason about multiple, conflicting criteria. The majority of fair machine learning methods transform the error-fairness trade-off into a single objective problem with a parameter controlling the relative importance of error versus fairness. We propose instead to directly optimize the error-fairness tradeoff by using multi-objective optimization. We present a flexible framework for defining the fair machine learning task as a weighted classification problem with multiple cost functions. This framework is agnostic to the underlying prediction model as well as the metrics. We use multiobjective optimization to define the sample weights used in model training for a given machine learner, and adapt the weights to optimize multiple metrics of fairness and accuracy across a set of tasks. To reduce the number of optimized parameters, and to constrain their complexity with respect to population subgroups, we propose a novel meta-model approach that learns to map protected attributes to sample weights, rather than optimizing those weights directly. On a set of real-world problems, this approach outperforms current state-of-the-art methods by finding solution sets with preferable error/fairness trade-offs.
"}, "cited_paper_content": {"title": "On Fairness And Calibration", "abstract": "The machine learning community has become increasingly concerned with the potential for bias and discrimination in predictive models, and this has motivated a growing line of work on what it means for a classification procedure to be \"fair.\" In particular, we investigate the tension between minimizing error disparity across different population groups while maintaining calibrated probability estimates. We show that calibration is compatible only with a single error constraint (i.e. equal false-negatives rates across groups), and show that any algorithm that satisfies this relaxation is no better than randomizing a percentage of predictions for an existing classifier. These unsettling findings, which extend and generalize existing results, are empirically confirmed on several datasets."}, "keywords": ["equal accuracy"], "citation_intent": "background"} {"citing_id": "2304.02846v1", "cited_id": "1707.06347", "section_title": "Introduction", "citation": "We employ the proximal policy optimization (PPO) #REFR algorithm to update the model weights during training.", "text_before_citation": ["They also suffer from the problems of mode collapse #OTHEREFR , class imbalance #OTHEREFR , and computational expense #OTHEREFR when generating high-dimensional data.", "But what we care about most is that a lot of synthetic samples generated are used directly for training a classifier without studying if these samples actually help the classifier learn.
Instead, these samples are chosen based on \"realness\".", "Figure 1 shows a comparison of the standard pipeline and our proposed pipeline for feature-generating approaches.", "To address the limitations of GANs in synthetic feature selection, we propose a novel reinforcement learning-based approach that automatically selects generated features that improve model performance.", "Specifically, we use a transformer model #OTHEREFR for synthetic sample selection and use validation classification accuracy as the reward for RL training."], "text_after_citation": ["Our proposed approach aims to pick samples that help classification and not just generate real-looking samples.", "We dub our synthetic sample selection method as \"SPOT\" for Selection using Proximal policy OpTimization.", "Furthermore, our proposed approach is model-agnostic and data-agnostic, as we evaluate our method on multiple benchmark datasets in images and videos and various feature-generating models.", "Our comprehensive experiments demonstrate that our approach consistently improves model performance across different datasets and models, highlighting the effectiveness and versatility of our proposed method.", "By leveraging RL-based synthetic feature selection, we can more effectively generate synthetic data that captures the underlying structure of the data, improving the generalization performance of downstream models."], "citing_paper_content": {"title": "Synthetic Sample Selection For Generalized Zero-Shot Learning", "abstract": "Generalized Zero-Shot Learning (GZSL) has emerged as a pivotal research domain in computer vision, owing to its capability to recognize objects that have not been seen during training. Despite the significant progress achieved by generative techniques in converting traditional GZSL to fully supervised learning, they tend to generate a large number of synthetic features that are often redundant, thereby increasing training time and decreasing accuracy. 
To address this issue, this paper proposes a novel approach for synthetic feature selection using reinforcement learning. In particular, we propose a transformer-based selector that is trained through proximal policy optimization (PPO) to select synthetic features based on the validation classification accuracy of the seen classes, which serves as a reward. The proposed method is model-agnostic and data-agnostic, making it applicable to both images and videos and versatile for diverse applications. Our experimental results demonstrate the superiority of our approach over existing feature-generating methods, yielding improved overall performance on multiple benchmarks."}, "cited_paper_content": {"title": "Proximal Policy Optimization Algorithms", "abstract": "We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a \"surrogate\" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). 
Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time."}, "keywords": ["proximal policy optimization"], "citation_intent": "method"} {"citing_id": "2304.00395v1", "cited_id": "1906.05849", "section_title": "H.1.4 Datasets", "citation": "When extracting images from the original ImageNet-1K dataset to create the ImageNet-100 dataset, we select the 100 classes used in #REFR .", "text_before_citation": ["In the experiments, we use the following datasets: CIFAR-10 #OTHEREFR , STL-10 (Coates et al., 2011) , and ImageNet-100 #OTHEREFR .", "Note that ImageNet-100 is a subset of the ImageNet-1K dataset #OTHEREFR , where the ImageNet-100 dataset contains images categorized in 100 classes #OTHEREFR ."], "text_after_citation": ["We also remark that for the experiments with the STL-10 dataset, we use the mixed dataset that consists of the unlabeled images and the labeled training images for pretraining, the labeled training images for the training of the linear head in the stage of linear evaluation, and the labeled test images for computing the accuracy in linear evaluation.", "Throughout the experiments, we use the following image size for each dataset: 32 \u00d7 32 pixels for CIFAR-10, 96 \u00d7 96 pixels for STL-10, and 224 \u00d7 224 pixels for ImageNet-100, where the image sizes of CIFAR-10 and STL-10 are the same as the sizes of the original images, respectively, and the image sizes for ImageNet-100 are inspired by those for the ImageNet-1K dataset used in #OTHEREFR ; #OTHEREFR ."], "citing_paper_content": {"title": "Towards Understanding The Mechanism Of Contrastive Learning Via Similarity Structure: A Theoretical Analysis", "abstract": "Contrastive learning is an efficient approach to self-supervised representation learning.
Although recent studies have made progress in the theoretical understanding of contrastive learning, the investigation of how to characterize the clusters of the learned representations is still limited. In this paper, we aim to elucidate the characterization from theoretical perspectives. To this end, we consider a kernel-based contrastive learning framework termed Kernel Contrastive Learning (KCL), where kernel functions play an important role when applying our theoretical results to other frameworks. We introduce a formulation of the similarity structure of learned representations by utilizing a statistical dependency viewpoint. We investigate the theoretical properties of the kernel-based contrastive loss via this formulation. We first prove that the formulation characterizes the structure of representations learned with the kernel-based contrastive learning framework. We show a new upper bound of the classification error of a downstream task, which explains that our theory is consistent with the empirical success of contrastive learning. We also establish a generalization error bound of KCL. Finally, we show a guarantee for the generalization ability of KCL to the downstream classification task via a surrogate bound."}, "cited_paper_content": {"title": "Contrastive Multiview Coding", "abstract": "Humans view the world through many sensory channels, e.g., the long-wavelength light channel, viewed by the left eye, or the high-frequency vibrations channel, heard by the right ear. Each view is noisy and incomplete, but important factors, such as physics, geometry, and semantics, tend to be shared between all views (e.g., a \"dog\" can be seen, heard, and felt). We hypothesize that a powerful representation is one that models view-invariant factors. Based on this hypothesis, we investigate a contrastive coding scheme, in which a self-supervised representation is learned that aims to maximize mutual information between different views but is otherwise compact.
Our approach scales to any number of views, and is view-agnostic. The resulting learned representations perform above the state of the art for downstream tasks such as object classification, compared to formulations based on predictive learning or single view reconstruction, and improve as more views are added. On the Imagenet linear readoff benchmark, we achieve 68.4% top-1 and 88.2% top-5 accuracies. Code and reference implementations are released on our project page: this http URL."}, "keywords": ["ImageNet-100 dataset"], "citation_intent": "method"} {"citing_id": "2304.10103v1", "cited_id": "2004.00440", "section_title": "Experimental Results", "citation": "A (higher indicates better performance) and F (lower indicates less forgetting) are calculated averagely over all learned tasks, while the detailed ACC is calculated on per learned task #REFR . Compared Methods.", "text_before_citation": ["Each class has more than 1,000 images for training and 500 for testing.", "Each image is resized to 256\u00d7256 before randomly sampling 224 \u00d7 224 crops for training.", "For testing, the original center crop with 224 \u00d7 224 shapes is used.", "Evaluation Metrics.", "We employ average incremental accuracy (A), average forgetting measure (F ), and classification accuracies (ACC) for evaluation #OTHEREFR ."], "text_after_citation": ["We compare two reference methods, fine-tuning (Fine) and joint training (Joint) #OTHEREFR , and several SOTA methods on CIFAR-100 and ImageNet-sub.", "They include two parameter isolation methods, LwF #OTHEREFR and LwM #OTHEREFR , two regularization-based methods, EWC #OTHEREFR and MAS #OTHEREFR , two extractor-aimed methods, IL2M #OTHEREFR and Lucir #OTHEREFR , and the prototype-aimed method, GFR .", "Besides, we directly introduce the embedding distillation into each SOTA method to obtain a modified eXX method, where XX is LwF, LwM, MAS, IL2M, Lucir, or GFR.", "Fine updates the network on the newly arrived tasks, while Joint assumes
data of all previous tasks are available in each training task.", "EWC, MAS, LwF, LwM, and GFR are trained without exemplars, whereas IL2M and Lucir store 20 exemplars for each learned class."], "citing_paper_content": {"title": "Etag: Class-Incremental Learning With Embedding Distillation And Task-Oriented Generation", "abstract": "Class-Incremental Learning (CIL) aims to solve the neural networks' catastrophic forgetting problem, which refers to the fact that once the network updates on a new task, its performance on previously learned tasks drops dramatically. Most successful CIL methods incrementally train a feature extractor with the aid of stored exemplars, or estimate the feature distribution with the stored prototypes. However, the stored exemplars would violate the data privacy concerns, while the stored prototypes might not reasonably be consistent with a proper feature distribution, hindering the exploration of real-world CIL applications. In this paper, we propose a method of embedding distillation and Task-oriented generation (eTag) for CIL, which requires neither the exemplar nor the prototype. Instead, eTag achieves a data-free manner to train the neural networks incrementally. To prevent the feature extractor from forgetting, eTag distills the embeddings of the network's intermediate blocks. Additionally, eTag enables a generative network to produce suitable features, fitting the needs of the top incremental classifier. Experimental results confirmed that our proposed eTag considerably outperforms the state-of-the-art methods on CIFAR-100 and ImageNet-sub."}, "cited_paper_content": {"title": "Semantic Drift Compensation For Class-Incremental Learning", "abstract": "Class-incremental learning of deep networks sequentially increases the number of classes to be classified. During training, the network has only access to data of one task at a time, where each task contains several classes.
In this setting, networks suffer from catastrophic forgetting which refers to the drastic drop in performance on previous tasks. The vast majority of methods have studied this scenario for classification networks, where for each new task the classification layer of the network must be augmented with additional weights to make room for the newly added classes. Embedding networks have the advantage that new classes can be naturally included into the network without adding new weights. Therefore, we study incremental learning for embedding networks. In addition, we propose a new method to estimate the drift, called semantic drift, of features and compensate for it without the need of any exemplars. We approximate the drift of previous tasks based on the drift that is experienced by current task data. We perform experiments on fine-grained datasets, CIFAR100 and ImageNet-Subset. We demonstrate that embedding networks suffer significantly less from catastrophic forgetting. We outperform existing methods which do not require exemplars and obtain competitive results compared to methods which store exemplars. Furthermore, we show that our proposed SDC when combined with existing methods to prevent forgetting consistently improves results."}, "keywords": ["learned tasks"], "citation_intent": "method"} {"citing_id": "2303.00918v1", "cited_id": "1810.02334", "section_title": "I Comparison With Augmentation-Based Unsupervised", "citation": "However, unlike the image domain, they performed worse than CACTUs #REFR , as shown in Table 11 .", "text_before_citation": ["META-LEARNING SCHEMES Table 11 : Few-shot test accuracy (%) on 8 datasets from the OpenML-CC18 benchmark #OTHEREFR .", "We report the mean test accuracy over 100 different seeds. 
The bold denotes the highest mean score.", "We evaluate UMTRA #OTHEREFR and SES #OTHEREFR (also utilizing SNS proposed by Ye et al.", "(2022)) on few-shot tabular learning tasks, where we use augmentation strategies used in SubTab #OTHEREFR (i.e., Gaussian noise and marginal distribution masking).", "Here, we tried our best to improve the performance of SES and UMTRA (e.g., tune variance of Gaussian noise)."], "text_after_citation": ["We believe that the failures of SES and UMTRA are mainly due to the absence of effective augmentation strategies for tabular data, and developing them will be an interesting future direction."], "citing_paper_content": {"title": "Stunt: Few-Shot Tabular Learning With Self-Generated Tasks From Unlabeled Tables", "abstract": "Learning with few labeled tabular samples is often an essential requirement for industrial machine learning applications as varieties of tabular data suffer from high annotation costs or have difficulties in collecting new samples for novel tasks. Despite the utter importance, such a problem is quite under-explored in the field of tabular learning, and existing few-shot learning schemes from other domains are not straightforward to apply, mainly due to the heterogeneous characteristics of tabular data. In this paper, we propose a simple yet effective framework for few-shot semi-supervised tabular learning, coined Self-generated Tasks from UNlabeled Tables (STUNT). Our key idea is to self-generate diverse few-shot tasks by treating randomly chosen columns as a target label. We then employ a meta-learning scheme to learn generalizable knowledge with the constructed tasks. Moreover, we introduce an unsupervised validation scheme for hyperparameter search (and early stopping) by generating a pseudo-validation set using STUNT from unlabeled data.
Our experimental results demonstrate that our simple framework brings significant performance gain under various tabular few-shot learning benchmarks, compared to prior semi- and self-supervised baselines. Code is available at https://github.com/jaehyun513/STUNT."}, "cited_paper_content": {"title": "Unsupervised Learning Via Meta-Learning", "abstract": "A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data. Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disentanglement, prediction, and other metrics. Instead, we develop an unsupervised meta-learning method that explicitly optimizes for the ability to learn a variety of tasks from small amounts of data. To do so, we construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks. Surprisingly, we find that, when integrated with meta-learning, relatively simple task construction mechanisms, such as clustering embeddings, lead to good performance on a variety of downstream, human-specified tasks.
Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the embedding learned by four prior unsupervised learning methods."}, "keywords": ["Table", "image domain"], "citation_intent": "result"} {"citing_id": "2304.05127v1", "cited_id": "1911.00222", "section_title": "Introduction", "citation": "Furthermore, similar to #REFR , we find the optimal number of total communication rounds that achieves the best model performance at a fixed privacy budget.", "text_before_citation": ["Nevertheless, their algorithm requires performing the exact arg min on local clients, and their theory works only under very restrictive assumptions that implicitly require no collaboration, as each client is assumed to have the same unique minimizer.", "In this work, we bridge this gap and provide, to our knowledge, the first analysis of the standard FL algorithms with DP under realistic assumptions with the application to medical image analysis. 
Our contributions can be summarized as follows:", "Theoretical Guarantees.", "We analyze one of the most popular algorithms for FL with DP -DP-FedAvg #OTHEREFR under the standard non-restrictive assumptions.", "Our analysis reveals that clients benefit from running local steps, and there is an optimal number of local steps each client should take."], "text_after_citation": ["Practical Performance.", "We provide an extensive empirical evaluation of the analyzed method.", "We show optimal values for the number of local steps and communication rounds that maximize the practical performance of DP-FedAvg as predicted by our theory.", "On top of that, the tuned model achieves similar performance as the centralized model, i.e., the model that has access to the full dataset in a single location while providing rigorous privacy guarantees, which are essential for sensitive medical imaging applications."], "citing_paper_content": {"title": "Improving Performance Of Private Federated Models In Medical Image Analysis", "abstract": "Federated learning (FL) is a distributed machine learning (ML) approach that allows data to be trained without being centralized. This approach is particularly beneficial for medical applications because it addresses some key challenges associated with medical data, such as privacy, security, and data ownership. On top of that, FL can improve the quality of ML models used in medical applications. Medical data is often diverse and can vary significantly depending on the patient population, making it challenging to develop ML models that are accurate and generalizable. FL allows medical data to be used from multiple sources, which can help to improve the quality and generalizability of ML models. Differential privacy (DP) is a go-to algorithmic tool to make this process secure and private. 
In this work, we show that the model performance can be further improved by employing local steps, a popular approach to improving the communication efficiency of FL, and tuning the number of communication rounds. Concretely, given the privacy budget, we show an optimal number of local steps and communication rounds. We provide theoretical motivations further corroborated with experimental evaluations on real-world medical imaging tasks."}, "cited_paper_content": {"title": "Federated Learning With Differential Privacy: Algorithms And Performance Analysis", "abstract": "In this paper, to effectively prevent information leakage, we propose a novel framework based on the concept of differential privacy (DP), in which artificial noises are added to the parameters at the clients side before aggregating, namely, noising before model aggregation FL (NbAFL). First, we prove that the NbAFL can satisfy DP under distinct protection levels by properly adapting different variances of artificial noises. Then we develop a theoretical convergence bound of the loss function of the trained FL model in the NbAFL. Specifically, the theoretical bound reveals the following three key properties: 1) There is a tradeoff between the convergence performance and privacy protection levels, i.e., a better convergence performance leads to a lower protection level; 2) Given a fixed privacy protection level, increasing the number $N$ of overall clients participating in FL can improve the convergence performance; 3) There is an optimal number of maximum aggregation times (communication rounds) in terms of convergence performance for a given protection level. Furthermore, we propose a $K$-random scheduling strategy, where $K$ ($1 0 is a small constant to ensure the proper behavior of the measure.
We use this metric to conduct our experiments.", "13."], "citing_paper_content": {"title": "Revisiting Deepfool: Generalization And Improvement", "abstract": "Deep neural networks have been known to be vulnerable to adversarial examples, which are inputs that are modified slightly to fool the network into making incorrect predictions. This has led to a significant amount of research on evaluating the robustness of these networks against such perturbations. One particularly important robustness metric is the robustness to minimal $\ell_2$ adversarial perturbations. However, existing methods for evaluating this robustness metric are either computationally expensive or not very accurate. In this paper, we introduce a new family of adversarial attacks that strike a balance between effectiveness and computational efficiency. Our proposed attacks are generalizations of the well-known DeepFool (DF) attack, while they remain simple to understand and implement. We demonstrate that our attacks outperform existing methods in terms of both effectiveness and computational efficiency. Our proposed attacks are also suitable for evaluating the robustness of large models and can be used to perform adversarial training (AT) to achieve state-of-the-art robustness to minimal $\ell_2$ adversarial perturbations."}, "cited_paper_content": {"title": "Robustness Via Curvature Regularization, And Vice Versa", "abstract": "State-of-the-art classifiers have been shown to be largely vulnerable to adversarial perturbations. One of the most effective strategies to improve robustness is adversarial training. In this paper, we investigate the effect of adversarial training on the geometry of the classification landscape and decision boundaries. We show in particular that adversarial training leads to a significant decrease in the curvature of the loss surface with respect to inputs, leading to a drastically more \"linear\" behaviour of the network.
Using a locally quadratic approximation, we provide theoretical evidence on the existence of a strong relation between large robustness and small curvature. To further show the importance of reduced curvature for improving the robustness, we propose a new regularizer that directly minimizes curvature of the loss surface, and leads to adversarial robustness that is on par with adversarial training. Besides being a more efficient and principled alternative to adversarial training, the proposed regularizer confirms our claims on the importance of exhibiting quasi-linear behavior in the vicinity of data points in order to achieve robustness."}, "keywords": ["Hessian", "function's curvature"], "citation_intent": "background"} {"citing_id": "2304.07527v1", "cited_id": "1807.11590", "section_title": "Introduction", "citation": "To address it, IoU-Net #REFR proposes an individual IoU prediction branch and an IoU-guided NMS to align the classification confidence and regression precision.", "text_before_citation": ["1a shows an empirical study on a strong baseline DINO #OTHEREFR , and the recall of BR samples by the well-trained model on the top-k confident outputs in an image is calculated, where a higher recall indicates that more BR samples are selected in final prediction.", "As we can see, only 45% and 48% BR samples are covered by HC samples with top-N and top-2N scores, respectively, suggesting that more than half of the well-localized predictions have low confidence scores. 
In Fig.", "1b , the frequency histogram of the HC and BR samples is plotted from 5,000 samples on COCO val.", "and a clear discrepancy between the two distributions is observed, revealing that even the highly-optimized model DINO #OTHEREFR suffers from the misalignment problem.", "In fact, this problem also appears in CNN-based detectors #OTHEREFR ."], "text_after_citation": ["A number of alternatives #OTHEREFR introduce an IoU-aware loss or weight to integrate the IoU branch into the original classification branch and adopt a joint training strategy.", "These methods are specially designed for NMS-based detectors, but DETR implicitly selects samples by modeling query relations under the guidance of one-to-one matching without an explicit query selection step like NMS, making them less applicable to DETR #OTHEREFR .", "To the best of our knowledge, the misalignment problem still remains unexplored in the DETR series.", "This paper investigates the solution to the misalignment problem in DETR.", "To this end, we propose a novel method, namely Align-DETR."], "citing_paper_content": {"title": "Align-Detr: Improving Detr With Simple Iou-Aware Bce Loss", "abstract": "DETR has set up a simple end-to-end pipeline for object detection by formulating this task as a set prediction problem, showing promising potential. However, despite the significant progress in improving DETR, this paper identifies a problem of misalignment in the output distribution, which prevents the best-regressed samples from being assigned with high confidence, hindering the model's accuracy. We propose a metric, recall of best-regressed samples, to quantitively evaluate the misalignment problem. Observing its importance, we propose a novel Align-DETR that incorporates a localization precision aware classification loss in optimization. The proposed loss, IA-BCE, guides the training of DETR to build a strong correlation between classification score and localization precision. 
We also adopt the mixed-matching strategy, to facilitate DETR-based detectors with faster training convergence while keeping an end-to-end scheme. Moreover, to overcome the dramatic decrease in sample quality induced by the sparsity of queries, we introduce a prime sample weighting mechanism to suppress the interference of unimportant samples. Extensive experiments are conducted with very competitive results reported. In particular, it delivers a 46 (+3.8)% AP on the DAB-DETR baseline with the ResNet-50 backbone and reaches a new SOTA performance of 50.2% AP in the 1x setting on the COCO validation set when employing the strong baseline DINO. Our code is available at https://github.com/FelixCaae/AlignDETR."}, "cited_paper_content": {"title": "Acquisition Of Localization Confidence For Accurate Object Detection", "abstract": "Modern CNN-based object detectors rely on bounding box regression and non-maximum suppression to localize objects. While the probabilities for class labels naturally reflect classification confidence, localization confidence is absent. This makes properly localized bounding boxes degenerate during iterative regression or even suppressed during NMS. In the paper we propose IoU-Net learning to predict the IoU between each detected bounding box and the matched ground-truth. The network acquires this confidence of localization, which improves the NMS procedure by preserving accurately localized bounding boxes. Furthermore, an optimization-based bounding box refinement method is proposed, where the predicted IoU is formulated as the objective. 
Extensive experiments on the MS-COCO dataset show the effectiveness of IoU-Net, as well as its compatibility with and adaptivity to several state-of-the-art object detectors."}, "keywords": ["classification confidence"], "citation_intent": "method"} {"citing_id": "2303.00914v2", "cited_id": "1206.5538", "section_title": "Challenges In Fully Test-Time Uda", "citation": "From the perspective of machine learning, early representations through the lower layer play an important role to capture the pos-terior distribution of the underlying explanatory factors for the observed input #REFR .", "text_before_citation": ["We recognize that most of the domain variations, such as changes in the visual scenes and image transformations or corruptions, are early layers of features in the semantic hierarchy #OTHEREFR .", "They can be effectively captured and modeled by lower layers of the network model."], "text_after_citation": ["For instance, in deep neural network models, the early layers of the network tend to respond to corners, edges, or colors.", "In contrast, deeper layers respond to more class-specific features #OTHEREFR .", "In the corruption testtime adaptation scenario, the class-specific features are always the same because the testing datasets are the corruption of the training domain.", "However, the early layers of models can be failed due to corruption.", "Therefore, the central challenge in fully test-time UDA lies in how to learn useful early layer representations of the test samples without supervision."], "citing_paper_content": {"title": "Neuro-Modulated Hebbian Learning For Fully Test-Time Adaptation", "abstract": "Fully test-time adaptation aims to adapt the network model based on sequential analysis of input samples during the inference stage to address the cross-domain performance degradation problem of deep neural networks. 
We take inspiration from the biological plausibility learning where the neuron responses are tuned based on a local synapse-change procedure and activated by competitive lateral inhibition rules. Based on these feed-forward learning rules, we design a soft Hebbian learning process which provides an unsupervised and effective mechanism for online adaptation. We observe that the performance of this feed-forward Hebbian learning for fully test-time adaptation can be significantly improved by incorporating a feedback neuro-modulation layer. It is able to fine-tune the neuron responses based on the external feedback generated by the error back-propagation from the top inference layers. This leads to our proposed neuro-modulated Hebbian learning (NHL) method for fully test-time adaptation. With the unsupervised feed-forward soft Hebbian learning being combined with a learned neuro-modulator to capture feedback from external responses, the source model can be effectively adapted during the testing process. Experimental results on benchmark datasets demonstrate that our proposed method can significantly improve the adaptation performance of network models and outperforms existing state-of-the-art methods."}, "cited_paper_content": {"title": "Representation Learning: A Review And New Perspectives", "abstract": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors.
This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning."}, "keywords": ["machine learning"], "citation_intent": "background"} {"citing_id": "2303.12930v1", "cited_id": "1803.08842", "section_title": "Results And Analysis", "citation": "Even though it is proved in #REFR that motion features are useless for audio-visual event localization, we argue that our experiment clearly demonstrates their significance for dense-localizing audio-visual events.", "text_before_citation": ["In addition, we found that the appropriate number of uni-modal blocks is also important, which reveals that applying self-attention before cross-modal interaction can help the model to focus on informative signals and eliminate noise from each modality. Dependency Modeling and Class-Aware Regression. As shown in Tab.", "4, applying temporal dependency modeling and class-aware regression separately can both achieve higher performances than the base model that just contains our transformer encoder with a class-agnostic regression head in the decoder.", "Besides, we found that when using both of them, they can promote each other and achieve a further significant performance boost, which clearly demonstrates their effectiveness. The Impact of Motion Features.
In Tab.", "5, we observe that utilizing both RGB and optical flow features extracted by I3D #OTHEREFR achieves the best performance.", "It outperforms the model that uses visual features extracted by ResNet50 #OTHEREFR pre-trained on ImageNet by a large margin (+3.5% at the average mAP)."], "text_after_citation": ["The Capability of Localizing Concurrent Events.", "We further evaluate our models and the state-of-the-art TAL method #OTHEREFR on the videos that contain concurrent events with different overlap rates in Fig. 6 .", "We observe that our model equipped with dependency modeling and class-aware regression obviously gains more performance improvement on the videos with higher overlap rates, compared with our baseline and ActionFormer #OTHEREFR .", "It suggests that our model has a better ability to localize overlapping audiovisual events in untrimmed videos. Qualitative Results. In Fig.", "5 , we present the qualitative results of our model variants that utilize different modalities as input."], "citing_paper_content": {"title": "Dense-Localizing Audio-Visual Events In Untrimmed Videos: A Large-Scale Benchmark And Baseline", "abstract": "Existing audiovisual event localization (AVE) handles manually trimmed videos with only a single instance in each of them. However, this setting is unrealistic as natural videos often contain numerous audiovisual events with different categories. To better adapt to real-life applications, in this paper we focus on the task of denselocalizing audiovisual events, which aims to jointly localize and recognize all audiovisual events occurring in an untrimmed video. The problem is challenging as it requires fine-grained audiovisual scene and context understanding. To tackle this problem, we introduce the first Untrimmed AudioVisual (UnAV-100) dataset, which contains 10K untrimmed videos with over 30K audiovisual events. 
Each video has 2.8 audiovisual events on average, and the events are usually related to each other and might co-occur as in real-life scenes. Next, we formulate the task using a new learning-based framework, which is capable of fully integrating audio and visual modalities to localize audiovisual events with various lengths and capture dependencies between them in a single pass. Extensive experiments demonstrate the effectiveness of our method as well as the significance of multi-scale cross-modal perception and dependency modeling for this task."}, "cited_paper_content": {"title": "Audio-Visual Event Localization In Unconstrained Videos", "abstract": "In this paper, we introduce a novel problem of audio-visual event localization in unconstrained videos. We define an audio-visual event as an event that is both visible and audible in a video segment. We collect an Audio-Visual Event (AVE) dataset to systemically investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization. We develop an audio-guided visual attention mechanism to explore audio-visual correlations, propose a dual multimodal residual network (DMRN) to fuse information over the two modalities, and introduce an audio-visual distance learning network to handle the cross-modality localization. 
Our experiments support the following findings: joint modeling of auditory and visual modalities outperforms independent modeling, the learned attention can capture semantics of sounding objects, temporal alignment is important for audio-visual fusion, the proposed DMRN is effective in fusing audio-visual features, and strong correlations between the two modalities enable cross-modality localization."}, "keywords": ["dense-localizing audio-visual events", "audio-visual event localization"], "citation_intent": "background"} {"citing_id": "2304.09623v2", "cited_id": "1912.03699", "section_title": "Effect Of Mcc And Transport Losses", "citation": "In our ablation studies, we observe that CHATTY, when combined with MCC #REFR gives the best result on most of the domain adaptation tasks.", "text_before_citation": ["The sensitivity is reduced by replacing $L_{TL}$ by $L_{TL}^{Cos}$ which normalizes every entry in the matrix Y by the product of the norms.", "Empirically, the best adaptation algorithm was seen to be the loss given by Equation 18.", "When the loss was modified to Equation 20 it was observed that the convergence was slower than when the class confusion loss was included separately in the total loss.", "This could be possibly attributed to the back-propagation updates of M being very slow, owing to the camouflaged nature of M in $T_1 M T_2^T$.", "Nevertheless, it is interesting to study an optimal choice of M."], "text_after_citation": ["The evolution of the target classifier accuracy on the domain adaptation from Ar to Cl is plotted in Figure 4 .", "We observe that by shifting the samples in the classification (logit) space itself produces a significantly improved accuracy compared to SOTA methods, as shown in Table 1 , and tuning the transport vectors increases the accuracy.", "Furthermore, using the MCC loss in synergy with the transport loss favoured accuracy even more.", "When using just the transport loss as a means of reducing class confusion, we observe that it
does a better job than the MCC loss."], "citing_paper_content": {"title": "Chatty: Coupled Holistic Adversarial Transport Terms With Yield For Unsupervised Domain Adaptation", "abstract": "We propose a new technique called CHATTY: Coupled Holistic Adversarial Transport Terms with Yield for Unsupervised Domain Adaptation. Adversarial training is commonly used for learning domain-invariant representations by reversing the gradients from a domain discriminator head to train the feature extractor layers of a neural network. We propose significant modifications to the adversarial head, its training objective, and the classifier head. With the aim of reducing class confusion, we introduce a sub-network which displaces the classifier outputs of the source and target domain samples in a learnable manner. We control this movement using a novel transport loss that spreads class clusters away from each other and makes it easier for the classifier to find the decision boundaries for both the source and target domains. The results of adding this new loss to a careful selection of previously proposed losses lead to improvement in UDA results compared to the previous state-of-the-art methods on benchmark datasets. We show the importance of the proposed loss term using ablation studies and visualization of the movement of target domain samples in representation space."}, "cited_paper_content": {"title": "Minimum Class Confusion For Versatile Domain Adaptation", "abstract": "There are a variety of DA scenarios subject to label sets and domain configurations, including closed-set and partial-set DA, as well as multi-source and multi-target DA. It is notable that existing DA methods are generally designed only for a specific scenario, and may underperform for scenarios they are not tailored to. A versatile method, which can handle several different scenarios without any extra modifications, still remains to be explored.
Towards such purpose, a more general inductive bias other than the domain alignment should be explored. In this paper, we delve into a missing piece of existing methods: class confusion, the tendency that a classifier confuses the predictions between the correct and ambiguous classes for target examples, which exists in all of the scenarios above. We unveil that reducing such pair-wise class confusion brings about significant transfer gains. Based on this, we propose a general loss function: Minimum Class Confusion (MCC). It can be characterized by (1) a non-adversarial DA method without explicitly deploying domain alignment, enjoying fast convergence speed (about 3 times faster than mainstream adversarial methods); (2) a versatile approach that can handle the four existing scenarios: Closed-Set, Partial-Set, Multi-Source, and Multi-Target DA, outperforming the state-of-the-art methods in these scenarios, especially on the largest and hardest dataset to date (7.25% on DomainNet). Strong performance in the two scenarios proposed in this paper: Multi-Source Partial and Multi-Target Partial DA, further proves its versatility. 
In addition, it can also be used as a general regularizer that is orthogonal and complementary to a variety of existing DA methods, accelerating convergence and pushing those readily competitive methods to a stronger level."}, "keywords": ["domain adaptation tasks"], "citation_intent": "result"} {"citing_id": "2303.15023v1", "cited_id": "1905.02249", "section_title": "Related Work", "citation": "MixMatch #REFR combines the consistency regularization with the entropy minimization to encourage the network to output confident predictions for unlabeled data.", "text_before_citation": ["In view of this, we focus on learning from scarce annotations to minimize human labor and achieve accurate animal pose estimation at the same time.", "Semi-supervised Learning.", "Semi-supervised learning is powerful in leveraging unlabeled data to improve a model's performance when labeled data is limited.", "One of the most widely used SSL techniques is pseudo labeling #OTHEREFR , which generates artificial labels for unlabeled images from model predictions.", "Another technique is consistency regularization #OTHEREFR , which enforces that the model output should be consistent when the input is randomly perturbed."], "text_after_citation": ["More recent Unsupervised Data Augmentation (UDA) #OTHEREFR and FixMatch #OTHEREFR combine pseudo labeling, consistency regularization with strong augmentations #OTHEREFR and achieve superior performance.", "FlexMatch #OTHEREFR further improves over UDA and Fixmatch by introducing a curriculum learning scheme.", "We also adopt the PL-based techniques in this work given its effectiveness.", "To mitigate the negative effect of noisy pseudo labels, we combine PL with reliable pseudo label selection and reusable sample re-labeling.", "Learning from Noisy Labels."], "citing_paper_content": {"title": "Scarcenet: Animal Pose Estimation With Scarce Annotations", "abstract": "Animal pose estimation is an important but underexplored task due to the lack of 
labeled data. In this paper, we tackle the task of animal pose estimation with scarce annotations, where only a small set of labeled data and unlabeled images are available. At the core of the solution to this problem setting is the use of the unlabeled data to compensate for the lack of well-labeled animal pose data. To this end, we propose the ScarceNet, a pseudo label-based approach to generate artificial labels for the unlabeled images. The pseudo labels, which are generated with a model trained with the small set of labeled images, are generally noisy and can hurt the performance when directly used for training. To solve this problem, we first use a small-loss trick to select reliable pseudo labels. Although effective, the selection process is improvident since numerous high-loss samples are left unused. We further propose to identify reusable samples from the high-loss samples based on an agreement check. Pseudo labels are regenerated to provide supervision for those reusable samples. Lastly, we introduce a student-teacher framework to enforce a consistency constraint since there are still samples that are neither reliable nor reusable. By combining the reliable pseudo label selection with the reusable sample re-labeling and the consistency constraint, we can make full use of the unlabeled data. We evaluate our approach on the challenging AP-10K dataset, where our approach outperforms existing semi-supervised approaches by a large margin. We also test on the TigDog dataset, where our approach can achieve better performance than domain adaptation based approaches when only very few annotations are available. Our code is available at the project website."}, "cited_paper_content": {"title": "Mixmatch: A Holistic Approach To Semi-Supervised Learning", "abstract": "Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets.
In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success."}, "keywords": ["unlabeled data"], "citation_intent": "method"} {"citing_id": "2303.02153v1", "cited_id": "1901.02446", "section_title": "Experiment Setups", "citation": "Since our method can adapt faster to the downstream tasks, we train our model for 80K iterations using a Semantic FPN #REFR by default.", "text_before_citation": ["To fully preserve the pre-trained knowledge of the \u03b8 , we always set the learning rate of \u03b8 as 1/10 of the base learning rate. 
We use \u03b3=1e-4 for the text adapter.", "The task-specific settings and training details are elaborated as follows.", "Semantic Segmentation.", "The goal of semantic segmentation is to assign pixel-level labels to a given image, which requires a fine-grained high-level understanding of visual content.", "We evaluate our method on ADE20K #OTHEREFR , which consists of 20K images for training and 2K images for validation."], "text_after_citation": ["We use a global batch size of 16 and set the learning rate as 1e-4.", "We use the AdamW optimizer with a weight decay of 1e-4 and warming-up iterations of 1500.", "We adopt the polynomial learning rate scheduler with a power of 0.9 and a minimum learning rate of 1e-6.", "For the fast schedule (8K iterations), we linearly scale the learning rate schedule and set the warming-up iterations to 150.", "During inference, we use the slide inference with a crop size 512 \u00d7 512 and a stride of 341 \u00d7 341."], "citing_paper_content": {"title": "Unleashing Text-To-Image Diffusion Models For Visual Perception", "abstract": "Diffusion models (DMs) have become the new trend of generative models and have demonstrated a powerful ability of conditional synthesis. Among those, text-to-image diffusion models pre-trained on large-scale image-text pairs are highly controllable by customizable prompts. Unlike the unconditional generative models that focus on low-level attributes and details, text-to-image diffusion models contain more high-level knowledge thanks to the vision-language pre-training. In this paper, we propose VPD (Visual Perception with a pre-trained Diffusion model), a new framework that exploits the semantic information of a pre-trained text-to-image diffusion model in visual perception tasks. Instead of using the pre-trained denoising autoencoder in a diffusion-based pipeline, we simply use it as a backbone and aim to study how to take full advantage of the learned knowledge.
Specifically, we prompt the denoising decoder with proper textual inputs and refine the text features with an adapter, leading to a better alignment to the pre-trained stage and making the visual contents interact with the text prompts. We also propose to utilize the cross-attention maps between the visual features and the text features to provide explicit guidance. Compared with other pre-training methods, we show that vision-language pre-trained diffusion models can be faster adapted to downstream visual perception tasks using the proposed VPD. Extensive experiments on semantic segmentation, referring image segmentation and depth estimation demonstrate the effectiveness of our method. Notably, VPD attains 0.254 RMSE on NYUv2 depth estimation and 73.3% oIoU on RefCOCO val referring image segmentation, establishing new records on these two benchmarks. Code is available at https://github.com/wl-zhao/VPD."}, "cited_paper_content": {"title": "Panoptic Feature Pyramid Networks", "abstract": "The recently introduced panoptic segmentation task has renewed our community's interest in unifying the tasks of instance segmentation (for thing classes) and semantic segmentation (for stuff classes). However, current state-of-the-art methods for this joint task use separate and dissimilar networks for instance and semantic segmentation, without performing any shared computation. In this work, we aim to unify these methods at the architectural level, designing a single network for both tasks. Our approach is to endow Mask R-CNN, a popular instance segmentation method, with a semantic segmentation branch using a shared Feature Pyramid Network (FPN) backbone. Surprisingly, this simple baseline not only remains effective for instance segmentation, but also yields a lightweight, top-performing method for semantic segmentation.
In this work, we perform a detailed study of this minimally extended version of Mask R-CNN with FPN, which we refer to as Panoptic FPN, and show it is a robust and accurate baseline for both tasks. Given its effectiveness and conceptual simplicity, we hope our method can serve as a strong baseline and aid future research in panoptic segmentation."}, "keywords": ["model", "Semantic FPN"], "citation_intent": "method"} {"citing_id": "2303.09660v1", "cited_id": "1704.02685", "section_title": "Literature Review", "citation": "However, the nonlinearity of the fully connected layers may cause gradient saturation or undesirable artifacts during backpropagation #REFR .", "text_before_citation": ["In this way, Grad-CAM can derive the saliency map without needing to modify and retrain the CNN models.", "Methods that have followed up Grad-CAM include Grad-CAM++ #OTHEREFR", "2018) for the localization of multi-objects belonging to the same class and Score-CAM #OTHEREFR", "2020) , which obtains weights for corresponding feature maps without gradient calculation.", "These gradient-based methods require only one forward-backward pass and are computationally efficient compared to the perturbation-based methods."], "text_after_citation": ["Recent works have developed new strategies to address this issue. 
For example, Sundararajan et al.", "2017) developed an integrated gradient method to aggregate the gradients over an entire image as it goes through continuous modifications, to avoid gradient saturation and unexpected artifacts.", "The entire image is gradually altered from the original image to an all-black image (i.e., baseline).", "The result is an aggregation of all the intermediate results after each alteration.", "This method can be considered as a combination of perturbation-based and gradient-based methods and has shown promising results in many tasks #OTHEREFR ."], "citing_paper_content": {"title": "Explainable Geoai: Can Saliency Maps Help Interpret Artificial Intelligence'S Learning Process? An Empirical Study On Natural Feature Detection", "abstract": "Improving the interpretability of geospatial artificial intelligence (GeoAI) models has become critically important to open the \"black box\" of complex AI models, such as deep learning. This paper compares popular saliency map generation techniques and their strengths and weaknesses in interpreting GeoAI and deep learning models' reasoning behaviors, particularly when applied to geospatial analysis and image processing tasks. We surveyed two broad classes of model explanation methods: perturbation-based and gradient-based methods. The former identifies important image areas, which help machines make predictions by modifying a localized area of the input image. The latter evaluates the contribution of every single pixel of the input image to the model's prediction results through gradient backpropagation. In this study, three algorithms-the occlusion method, the integrated gradients method, and the class activation map method-are examined for a natural feature detection task using deep learning. The algorithms' strengths and weaknesses are discussed, and the consistency between model-learned and human-understandable concepts for object recognition is also compared.
The experiments used two GeoAI-ready datasets to demonstrate the generalizability of the research findings."}, "cited_paper_content": {"title": "Learning Important Features Through Propagating Activation Differences", "abstract": "The purported \"black box\" nature of neural networks is a barrier to adoption in applications where interpretability is essential. Here we present DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input. DeepLIFT compares the activation of each neuron to its 'reference activation' and assigns contribution scores according to the difference. By optionally giving separate consideration to positive and negative contributions, DeepLIFT can also reveal dependencies which are missed by other approaches. Scores can be computed efficiently in a single backward pass. We apply DeepLIFT to models trained on MNIST and simulated genomic data, and show significant advantages over gradient-based methods. 
Video tutorial: http://goo.gl/qKb7pL, code: http://goo.gl/RM8jvH."}, "keywords": ["backpropagation"], "citation_intent": "background"} {"citing_id": "2303.05093v1", "cited_id": "1810.07212", "section_title": "Datasets", "citation": "We follow the approach proposed in #REFR , which concatenates all the descriptions of a video to a paragraph.", "text_before_citation": ["The second one is from #OTHEREFR with 9000 clips for training and 1000 clips for testing.", "The last one is the partitions from #OTHEREFR , which uses 9000 clips for training and 1000 clips for testing.", "To increase the persuasiveness of the experiment, we conducted experiments on all data partitions.", "ActivityNet #OTHEREFR : It is an increasingly popular dataset consisting of densely annotated temporal segments of 20000 YouTube videos.", "Each video is an average of 2 minutes long, and there are 72000 video-text pairs in this dataset."], "text_after_citation": ["We trained the model with 10009 videos and evaluated our model on the \"val1\" split, which contains 4917 videos.", "LSMDC #OTHEREFR : It contains 118,081 short video clips( 45 s) extracted from 202 movies.", "Unlike other datasets, each video in LSMDC only has one description, either extracted from the movie script or the transcribed audio description.", "The test set consisted of 1000 videos that were not present in the training set."], "citing_paper_content": {"title": "Improving Video Retrieval By Adaptive Margin", "abstract": "Video retrieval is becoming increasingly important owing to the rapid emergence of videos on the Internet. The dominant paradigm for video retrieval learns video-text representations by pushing the distance between the similarity of positive pairs and that of negative pairs apart from a fixed margin. 
However, negative pairs used for training are sampled randomly, which indicates that the semantics between negative pairs may be related or even equivalent, while most methods still enforce dissimilar representations to decrease their similarity. This phenomenon leads to inaccurate supervision and poor performance in learning video-text representations. While most video retrieval methods overlook that phenomenon, we propose an adaptive margin changed with the distance between positive and negative pairs to solve the aforementioned issue. First, we design the calculation framework of the adaptive margin, including the method of distance measurement and the function between the distance and the margin. Then, we explore a novel implementation called \"Cross-Modal Generalized Self-Distillation\" (CMGSD), which can be built on the top of most video retrieval models with few modifications. Notably, CMGSD adds few computational overheads at train time and adds no computational overhead at test time. Experimental results on three widely used datasets demonstrate that the proposed method can yield significantly better performance than the corresponding backbone model, and it outperforms state-of-the-art methods by a large margin. CCS CONCEPTS \u2022 Information systems \u2192 Video search."}, "cited_paper_content": {"title": "Cross-Modal And Hierarchical Modeling Of Video And Text", "abstract": "Visual data and text data are composed of information at multiple granularities. A video can describe a complex scene that is composed of multiple clips or shots, where each depicts a semantically coherent event or action. Similarly, a paragraph may contain sentences with different topics, which collectively conveys a coherent message or story. In this paper, we investigate the modeling techniques for such hierarchical sequential data where there are correspondences across multiple modalities. 
Specifically, we introduce hierarchical sequence embedding (HSE), a generic model for embedding sequential data of different modalities into hierarchically semantic spaces, with either explicit or implicit correspondence information. We perform empirical studies on large-scale video and paragraph retrieval datasets and demonstrated superior performance by the proposed methods. Furthermore, we examine the effectiveness of our learned embeddings when applied to downstream tasks. We show its utility in zero-shot action recognition and video captioning."}, "keywords": ["video"], "citation_intent": "method"} {"citing_id": "2303.18072v1", "cited_id": "1407.6118", "section_title": "2D Linear Wave Equation", "citation": "In contrast, with 512 or more basis vectors the relative error in the Hamiltonian is about 10 \u22126 for the standard cSVD which is in agreement with the experiments from #REFR for the linear wave equation.", "text_before_citation": ["The temporary increase in the error curves for m s \u2265 120 and an average basis size of 70 is due to the fact that the basis is too small for the window size m s \u2265 120. 
Larger window sizes require larger basis sizes.", "In Figures 3 and 4 we present the relative error in the Hamiltonian", "EQUATION", "averaged over the three training parameters as a function of the time-step.", "In Figure 3 we see that using a non-symplectic, standard POD-basis leads to unstable models in the sense that the energy drastically increases if more than 256 basis vectors are used."], "text_after_citation": ["As expected, the error in the Hamiltonian is constant for the standard cSVD up to numerical inaccuracies.", "These inaccuracies can only be seen in the bottom two curves due to the logarithmic axis.", "In Figure 4, we plot the same errors for the dictionary-based methods.", "With n_s = 100 selected snapshots per basis update, we observe high error jumps across the basis changes for DB-cSVD for window sizes m_s ≥ 120.", "However, with n_s = 250 snapshots selected, this behavior is no longer observed and a relative error of 10^{-6} is achieved as with the standard cSVD."], "citing_paper_content": {"title": "Dictionary-Based Online-Adaptive Structure-Preserving Model Order Reduction For Parametric Hamiltonian Systems", "abstract": "Classical model order reduction (MOR) for parametric problems may become computationally inefficient due to large sizes of the required projection bases, especially for problems with slowly decaying Kolmogorov n-widths. Additionally, Hamiltonian structure of dynamical systems may be available and should be preserved during the reduction. In the current presentation, we address these two aspects by proposing a corresponding dictionary-based, online-adaptive MOR approach. The method requires dictionaries for the state-variable, non-linearities and discrete empirical interpolation (DEIM) points. During the online simulation, local basis extensions/simplifications are performed in an online-efficient way, i.e. 
the runtime complexity of basis modifications and online simulation of the reduced models do not depend on the full state dimension. Experiments on a linear wave equation and a non-linear Sine-Gordon example demonstrate the efficiency of the approach."}, "cited_paper_content": {"title": "Symplectic Model Reduction Of Hamiltonian Systems", "abstract": "In this paper, a symplectic model reduction technique, proper symplectic decomposition (PSD) with symplectic Galerkin projection, is proposed to save the computational cost for the simplification of large-scale Hamiltonian systems while preserving the symplectic structure. As an analogy to the classical proper orthogonal decomposition (POD)-Galerkin approach, PSD is designed to build a symplectic subspace to fit empirical data, while the symplectic Galerkin projection constructs a reduced Hamiltonian system on the symplectic subspace. For practical use, we introduce three algorithms for PSD, which are based upon: the cotangent lift, complex singular value decomposition, and nonlinear programming. The proposed technique has been proven to preserve system energy and stability. Moreover, PSD can be combined with the discrete empirical interpolation method to reduce the computational cost for nonlinear Hamiltonian systems. Owing to these properties, the proposed technique is better suited than the classical POD-Galerkin approach for model reduction of Hamiltonian systems, especially when long-time integration is required. 
The stability, accuracy, and efficiency of the proposed technique are illustrated through numerical simulations of linear and nonlinear wave equations."}, "keywords": ["Hamiltonian"], "citation_intent": "result"} {"citing_id": "2304.09068v1", "cited_id": "2003.09758", "section_title": "• Causal Inference:", "citation": "Notably, the utility achieved on these tasks is comparable to the values reported in prior work that use these datasets #REFR , demonstrating the effectiveness of METAM in achieving competitive quality while generalizing to a wide variety of tasks.", "text_before_citation": ["Regression.", "The goal is to predict the number of collisions in NYC using data such as the number of daily taxi trips #OTHEREFR over a dataset containing 350 records.", "The task uses a random forest regressor and computes the mean absolute error (MAE), returning 1 − MAE as utility. METAM outperforms all baselines.", "With only 300 queries, METAM reduces MAE from 0.66 to 0.55.", "Other techniques require three times more queries to achieve similar MAE."], "text_after_citation": ["We further evaluate predictive analytics (classification and regression) tasks with informative domain-specific data profiles such as feature importance #OTHEREFR and uninformative generic data profiles to understand METAM's flexibility in Section VI-C.", "What-if analysis.", "The task takes an input dataset along with a hypothetical update, and outputs the causal impact of the update query on other attributes.", "We consider an initial table containing SAT scores of 450 students [43] and ask what attributes would be causally affected if the \"critical reading score\" of students is updated.", "Understanding the attributes sheds light on what affects students' reading scores, paving the way for the implementation of interventions."], "citing_paper_content": {"title": "Metam: Goal-Oriented Data Discovery", "abstract": "Data is a central component of machine learning and causal inference tasks. 
The availability of large amounts of data from sources such as open data repositories, data lakes and data marketplaces creates an opportunity to augment data and boost those tasks' performance. However, augmentation techniques rely on a user manually discovering and shortlisting useful candidate augmentations. Existing solutions do not leverage the synergy between discovery and augmentation, thus underexploiting data. In this paper, we introduce METAM, a novel goal-oriented framework that queries the downstream task with a candidate dataset, forming a feedback loop that automatically steers the discovery and augmentation process. To select candidates efficiently, METAM leverages properties of the: i) data, ii) utility function, and iii) solution set size. We show METAM's theoretical guarantees and demonstrate those empirically on a broad set of tasks. All in all, we demonstrate the promise of goal-oriented data discovery to modern data science applications."}, "cited_paper_content": {"title": "Arda: Automatic Relational Data Augmentation For Machine Learning", "abstract": "Automatic machine learning (\\AML) is a family of techniques to automate the process of training predictive models, aiming to both improve performance and make machine learning more accessible. While many recent works have focused on aspects of the machine learning pipeline like model selection, hyperparameter tuning, and feature selection, relatively few works have focused on automatic data augmentation. Automatic data augmentation involves finding new features relevant to the user's predictive task with minimal ``human-in-the-loop'' involvement. We present \\system, an end-to-end system that takes as input a dataset and a data repository, and outputs an augmented data set such that training a predictive model on this augmented dataset results in improved performance. 
Our system has two distinct components: (1) a framework to search and join data with the input data, based on various attributes of the input, and (2) an efficient feature selection algorithm that prunes out noisy or irrelevant features from the resulting join. We perform an extensive empirical evaluation of different system components and benchmark our feature selection algorithm on real-world datasets."}, "keywords": ["datasets"], "citation_intent": "result"} {"citing_id": "2304.13718v1", "cited_id": "1902.09574", "section_title": "Magnitude Pruning Outperforms Variational Dropout", "citation": "Related work found that MP can outperform VD, especially for moderate sparsity ratios #REFR . Our results confirm that on population level.", "text_before_citation": [], "text_after_citation": ["MP outperforms VD for sparsification levels of up to 80% consistently, see Figure 5 , Appendix B and C.", "At higher sparsification levels, MP shows steep drops in performance.", "VD on some zoos is more stable and thus shows higher performance at higher sparsification levels, justifying the larger parameter count and computational load."], "citing_paper_content": {"title": "Sparsified Model Zoo Twins: Investigating Populations Of Sparsified Neural Network Models", "abstract": "With growing size of Neural Networks (NNs), model sparsification to reduce the computational cost and memory demand for model inference has become of vital interest for both research and production. While many sparsification methods have been proposed and successfully applied on individual models, to the best of our knowledge their behavior and robustness has not yet been studied on large populations of models. With this paper, we address that gap by applying two popular sparsification methods on populations of models (so called model zoos) to create sparsified versions of the original zoos. 
We investigate the performance of these two methods for each zoo, compare sparsification layer-wise, and analyse agreement between original and sparsified populations. We find both methods to be very robust, with magnitude pruning able to outperform variational dropout with the exception of high sparsification ratios above 80%. Further, we find sparsified models agree to a high degree with their original non-sparsified counterparts, and that the performance of original and sparsified models is highly correlated. Finally, all models of the model zoos and their sparsified model twins are publicly available: modelzoos.cc."}, "cited_paper_content": {"title": "The State Of Sparsity In Deep Neural Networks", "abstract": "We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet. Across thousands of experiments, we demonstrate that complex techniques (Molchanov et al., 2017; Louizos et al., 2017b) shown to yield high compression rates on smaller datasets perform inconsistently, and that simple magnitude pruning approaches achieve comparable or better results. Additionally, we replicate the experiments performed by (Frankle & Carbin, 2018) and (Liu et al., 2018) at scale and show that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with joint sparsification and optimization. Together, these results highlight the need for large-scale benchmarks in the field of model compression. We open-source our code, top performing model checkpoints, and results of all hyperparameter configurations to establish rigorous baselines for future work on compression and sparsification."}, "keywords": ["moderate sparsity ratios"], "citation_intent": "result"} {"citing_id": "2304.05498v1", "cited_id": "1810.11203", "section_title": "B. 
Hardware And Hyperparameter Configurations", "citation": "In addition, for a medium dataset like QM8, a medium discriminator dimension, such as [64, 128] , 64, [128, #REFR or slightly smaller, is preferred to generate the molecules that have considerable metric values.", "text_before_citation": ["On the other hand, training a higher complexity model over a small dataset may lead to mode collapse, which reduces the values of QED, Uniqueness, and LogP.", "Thus, there is a tradeoff among Validity and QED/Uniqueness/LogP/Similarity, and the tradeoff can be adjusted by changing the discriminator For QM8, which is considered a medium dataset that has more compounds than ESOL but less than QM9, we observe the results for the IID setting in Table II and find that as the discriminator complexity increases, the values of QED, Diversity, and Validity decrease, i.e., (0.49, 0.46, 0.45), (1.00, 0.99, 0.99), and (78.4, 26.9, 13.7), respectively, but the values of Uniqueness, LogP, and similarity increases, i.e., (18.1, 51.4, 67.9), (0.54, 0.59, 0.62), and (0.0067, 0.0261, 0.0336), respectively.", "The tradeoff changes from \"between Validity and QED/Uniqueness/LogP\" for ESOL into \"between QED/Diversity/Validity and Uniqueness/LogP/Similarity\" for QM8.", "Also, if we observe the results for non-IID in Table III , we can find that as the discriminator complexity increases, only Uniqueness decreases, while QED, Validity, LogP, and Similarity increase.", "Hence, we conclude that the tradeoff among different evaluation metrics may change based on the size of the dataset and the data sample distribution among clients."], "text_after_citation": ["For QM9, which is considered a large dataset, we can observe from the results that as we increase the discriminator complexity, QED, Validity, and Similarity increase, but Uniqueness reduces in the IID setting as shown in Table II .", "In the non-IID setting, as shown in Table III , only Uniqueness decreases, but QED, Validity, LogP, and 
Similarity increase as we increase the discriminator dimension.", "Hence, we can derive the same conclusion that the tradeoff among different metrics may change based on the size of the data set and the data sample distribution among clients.", "In addition, for a large dataset like QM9, a large discriminator complexity is preferred to generate the molecules that have considerable metric values.", "3) Effect of different client settings on the metrics: In this section, we evaluate how different numbers of clients affect the performance of GraphGANFed."], "citing_paper_content": {"title": "Graphganfed: A Federated Generative Framework For Graph-Structured Molecules Towards Efficient Drug Discovery", "abstract": "Recent advances in deep learning have accelerated its use in various applications, such as cellular image analysis and molecular discovery. In molecular discovery, a generative adversarial network (GAN), which comprises a discriminator to distinguish generated molecules from existing molecules and a generator to generate new molecules, is one of the premier technologies due to its ability to learn from a large molecular data set efficiently and generate novel molecules that preserve similar properties. However, different pharmaceutical companies may be unwilling or unable to share their local data sets due to the geodistributed and sensitive nature of molecular data sets, making it impossible to train GANs in a centralized manner. In this paper, we propose a Graph convolutional network in Generative Adversarial Networks via Federated learning (GraphGANFed) framework, which integrates graph convolutional neural Network (GCN), GAN, and federated learning (FL) as a whole system to generate novel molecules without sharing local data sets. 
In GraphGANFed, the discriminator is implemented as a GCN to better capture features from molecules represented as molecular graphs, and FL is used to train both the discriminator and generator in a distributive manner to preserve data privacy. Extensive simulations are conducted based on the three benchmark data sets to demonstrate the feasibility and effectiveness of GraphGANFed. The molecules generated by GraphGANFed can achieve high novelty (\u2248 100) and diversity (> 0.9). The simulation results also indicate that 1) a lower complexity discriminator model can better avoid mode collapse for a smaller data set, 2) there is a tradeoff among different evaluation metrics, and 3) having the right dropout ratio of the generator and discriminator can avoid mode collapse. Index Terms-Generative adversarial networks, Graph convolutional networks, Federated learning, Drug discovery I. INTRODUCTION The discovery of new organic and inorganic molecules remains a challenge in medicine, chemistry, and materials sciences. Traditional approaches to molecular discovery involve mathematical frameworks derived from related properties calculated from chemical structures with different physical or biological reactions [1, 2]. However, these mathematical frameworks may not fully capture the properties of the chemical structures, limiting the ability to fully explore novel D. Manu and X. Sun are with the"}, "cited_paper_content": {"title": "Crystalgan: Learning To Discover Crystallographic Structures With Generative Adversarial Networks", "abstract": "Our main motivation is to propose an efficient approach to generate novel multi-element stable chemical compounds that can be used in real world applications. This task can be formulated as a combinatorial problem, and it takes many hours of human experts to construct, and to evaluate new data. Unsupervised learning methods such as Generative Adversarial Networks (GANs) can be efficiently used to produce new data. 
Cross-domain Generative Adversarial Networks were reported to achieve exciting results in image processing applications. However, in the domain of materials science, there is a need to synthesize data with higher order complexity compared to observed samples, and the state-of-the-art cross-domain GANs can not be adapted directly. In this contribution, we propose a novel GAN called CrystalGAN which generates new chemically stable crystallographic structures with increased domain complexity. We introduce an original architecture, we provide the corresponding loss functions, and we show that the CrystalGAN generates very reasonable data. We illustrate the efficiency of the proposed method on a real original problem of novel hydrides discovery that can be further used in development of hydrogen storage materials."}, "keywords": ["molecules"], "citation_intent": "method"} {"citing_id": "2304.10333v1", "cited_id": "2003.02752", "section_title": "B. Overall Concept", "citation": "We further decreased the divergence of the selected clean source samples, which maximized the agreement of the multiple classifiers to achieve better results, similar to #REFR .", "text_before_citation": ["The probability that samples x i belong to class k predicted by each classifier are denoted as p k 1 (y|x i ) and p k 2 (y|x i ), respectively.", "It is also possible to build these multiple independent classifiers by using dropout regularization #OTHEREFR , #OTHEREFR repeatedly to a single classifier. 
Section III-F goes into more detail about this.", "To solve the noisy label problem of source samples, we calculated the divergence between the multiple classifier outputs for each source sample at the mini-batch level.", "The multiple classifiers tend to output similar predictions on clean samples and different predictions on noisy data because the multiple classifiers that are trained individually have varying capacities to learn the noisy label.", "In each mini-batch, we chose samples with small divergences in addition to the well-known small-loss strategy to filter out the noisy samples."], "text_after_citation": ["We propose a divergence separation loss for the target samples to address the issues with the source and target private samples.", "The target private samples will have larger divergences than the target common samples because they can also be thought of as noisy samples with wrong labels.", "Therefore, we can filter out some target private samples to achieve stable performance by separating the divergences of the target samples.", "Derived from the existing methods #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR that deploy multiple classifiers with different parameters to achieve domain adaptation, we further used the multiple classifiers as a discriminator to detect the target samples that are far from the cluster of the source domain.", "After that, we trained the generator to minimize the divergence to prevent it from generating target features that are not supported by the source samples."], "citing_paper_content": {"title": "Noisy Universal Domain Adaptation Via Divergence Optimization For Visual Recognition", "abstract": "To transfer the knowledge learned from a labeled source domain to an unlabeled target domain, many studies have worked on universal domain adaptation (UniDA), where there is no constraint on the label sets of the source domain and target domain. 
However, the existing UniDA methods rely on source samples with correct annotations. Due to the limited resources in the real world, it is difficult to obtain a large amount of perfectly clean labeled data in a source domain in some applications. As a result, we propose a novel realistic scenario named Noisy UniDA, in which classifiers are trained using noisy labeled data from the source domain as well as unlabeled domain data from the target domain that has an uncertain class distribution. A multi-head convolutional neural network framework is proposed in this paper to address all of the challenges faced in the Noisy UniDA at once. Our network comprises a single common feature generator and multiple classifiers with various decision bounds. We can detect noisy samples in the source domain, identify unknown classes in the target domain, and align the distribution of the source and target domains by optimizing the divergence between the outputs of the various classifiers. The proposed method outperformed the existing methods in most of the settings after a thorough analysis of the various domain adaptation scenarios."}, "cited_paper_content": {"title": "Combating Noisy Labels By Agreement: A Joint Training Method With Co-Regularization", "abstract": "Deep Learning with noisy labels is a practically challenging problem in weakly-supervised learning. The state-of-the-art approaches \"Decoupling\" and \"Co-teaching+\" claim that the \"disagreement\" strategy is crucial for alleviating the problem of learning with noisy labels. In this paper, we start from a different perspective and propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training. Specifically, we first use two networks to make predictions on the same mini-batch data and calculate a joint loss with Co-Regularization for each training example. Then we select small-loss examples to update the parameters of both two networks simultaneously. 
Trained by the joint loss, these two networks would be more and more similar due to the effect of Co-Regularization. Extensive experimental results on corrupted data from benchmark datasets including MNIST, CIFAR-10, CIFAR-100 and Clothing1M demonstrate that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels."}, "keywords": ["multiple classifiers"], "citation_intent": "result"} {"citing_id": "2304.01950v1", "cited_id": "1909.00560", "section_title": "Because Of The Challenges Of Limitation In Network Bandwidth", "citation": "Thanks to the improvements in storage and computing capabilities of edge intelligence devices, most computing tasks can now be completed directly at the edge, making the Mobile Edge Computing (MEC) paradigm the next-generation computing network #REFR .", "text_before_citation": ["Md.", "Shirajum Munir is with the Virginia Modeling, Analysis, and Simulation Center, Department of Electrical and Computer Engineering, Old Dominion University, Suffolk, VA 23435, USA, and also with the Department of Computer Science and Engineering, Kyung Hee University, Yongin-si 17104, Republic of Korea (e-mail: munir@khu.ac.kr).", "Apurba Adhikary, Huy Q.", "Le, Avi Deb Raha, and Choong Seon Hong are with the Department of Computer Science and Engineering, School of Computing, Kyung Hee University, Yongin-si 17104, Republic of Korea (e-mail: apurba@khu.ac.kr; quanghuy69@khu.ac.kr; avi@khu.ac.kr; cshong@khu.ac.kr).", "Corresponding author: Choong Seon Hong (e-mail: cshong@khu.ac.kr) or the requirements for transmission delay, the traditional cloud computing paradigm that uploads such big data to a cloud centre for data processing can no longer meet these demands #OTHEREFR ."], "text_after_citation": ["Further, collecting data from distributed devices poses risks and challenges due to the sensitive nature of a large amount of data, as well as regulations such as the General Data Protection Regulation (GDPR) #OTHEREFR in Europe.", 
"Therefore, as edge devices' storage and computing power continue to grow, coupled with concerns about privacy issues, it becomes more attractive to implement edge intelligence in MEC systems in a distributed manner #OTHEREFR .", "To this end, Federated learning (FL), as one application of edge computing in distributed machine learning, was first proposed by #OTHEREFR to simultaneously achieve edge intelligence and address privacy concerns.", "It trains a global model through the cooperation between local clients and an edge server while keeping the clients' raw data within their respective local environments.", "In general, the typical federated training process consists of the following four steps #OTHEREFR : (1) the server chooses a certain network architecture such as a Convolutional Neural Network (CNN) as the global model to be optimized and sends it to local clients; (2) the clients update the received model parameters of the global model based on their local data; (3) all clients send their updated model parameters back to the edge server for aggregation; (4) the server averages all the sent parameters as the new global model parameters for the next global round, repeating these four steps until convergence."], "citing_paper_content": {"title": "Mp-Fedcl: Multi-Prototype Federated Contrastive Learning For Edge Intelligence", "abstract": "Federated learning-assisted edge intelligence enables privacy protection in modern intelligent services. However, not Independent and Identically Distributed (non-IID) distribution among edge clients can impair the local model performance. The existing single prototype-based strategy represents a sample by using the mean of the feature space. However, feature spaces are usually not clustered, and a single prototype may not represent a sample well. 
Motivated by this, this paper proposes a multi-prototype federated contrastive learning approach (MP-FedCL) which demonstrates the effectiveness of using a multi-prototype strategy over a single-prototype under non-IID settings, including both label and feature skewness. Specifically, a multi-prototype computation strategy based on k-means is first proposed to capture different embedding representations for each class space, using multiple prototypes (k centroids) to represent a class in the embedding space. In each global round, the computed multiple prototypes and their respective model parameters are sent to the edge server for aggregation into a global prototype pool, which is then sent back to all clients to guide their local training. Finally, local training for each client minimizes their own supervised learning tasks and learns from shared prototypes in the global prototype pool through supervised contrastive learning, which encourages them to learn knowledge related to their own class from others and reduces the absorption of unrelated knowledge in each global iteration. Experimental results on MNIST, Digit-5, Office-10, and DomainNet show that our method outperforms multiple baselines, with an average test accuracy improvement of about 4.6% and 10.4% under feature and label non-IID distributions, respectively."}, "cited_paper_content": {"title": "Edge Intelligence: The Confluence Of Edge Computing And Artificial Intelligence", "abstract": "Along with the deepening development in communication technologies and the surge of mobile devices, a brand-new computation paradigm, Edge Computing, is surging in popularity. Meanwhile, Artificial Intelligence (AI) applications are thriving with the breakthroughs in deep learning and the upgrade of hardware architectures. Billions of bytes of data, generated at the network edge, put great demands on data processing and structural optimization.
Therefore, there exists a strong demand to integrate Edge Computing and AI, which gives birth to Edge Intelligence. In this article, we divide Edge Intelligence into AI for edge (Intelligence-enabled Edge Computing) and AI on edge (Artificial Intelligence on Edge). The former focuses on providing a more optimal solution to the key concerns in Edge Computing with the help of popular and effective AI technologies while the latter studies how to carry out the entire process of building AI models, i.e., model training and inference, on edge. This article focuses on giving insights into this new inter-disciplinary field from a broader vision and perspective. It discusses the core concepts and the research road-map, which should provide the necessary background for potential future research programs in Edge Intelligence."}, "keywords": ["edge intelligence devices"], "citation_intent": "background"} {"citing_id": "2304.08662v1", "cited_id": "2002.05200", "section_title": "Pastis", "citation": "ELBA also provides support for the GPU-based X-Drop alignment called LOGAN #REFR . LOGAN does not support protein alignment.", "text_before_citation": ["Thus, the overlap detection phase has the form of ASA^T .", "Once the output matrix is formed, PASTIS computes an alignment step on each non-zero, similar to ELBA.", "PASTIS has two alignment modes: seed-and-extend with X-Drop and Smith-Waterman alignment.", "Using X-Drop, PASTIS initiates the alignment from the k-mer match.", "Both PASTIS and ELBA defer implementation of X-Drop to the Library for Sequence Analysis (SeqAn) C++ library for CPU #OTHEREFR ."], "text_after_citation": [], "citing_paper_content": {"title": "Space Efficient Sequence Alignment For Sram-Based Computing: X-Drop On The Graphcore Ipu", "abstract": "Dedicated accelerator hardware has become essential for processing AI-based workloads, leading to the rise of novel accelerator architectures.
Furthermore, fundamental differences in memory architecture and parallelism have made these accelerators targets for scientific computing. The sequence alignment problem is fundamental in bioinformatics; we have implemented the X-Drop algorithm, a heuristic method for pairwise alignment that reduces search space, on the Graphcore Intelligence Processor Unit (IPU) accelerator. The X-Drop algorithm has an irregular computational pattern, which makes it difficult to accelerate due to load balancing. Here, we introduce a graph-based partitioning and queue-based batch system to improve load balancing. Our implementation achieves 10\u00d7 speedup over a state-of-the-art GPU implementation and up to 4.65\u00d7 compared to CPU. In addition, we introduce a memory-restricted X-Drop algorithm that reduces memory footprint by 55\u00d7 and efficiently uses the IPU's limited low-latency SRAM. This optimization further improves the strong scaling performance by 3.6\u00d7."}, "cited_paper_content": {"title": "Logan: High-Performance Gpu-Based X-Drop Long-Read Alignment", "abstract": "Pairwise sequence alignment is one of the most computationally intensive kernels in genomic data analysis, accounting for more than 90% of the runtime for key bioinformatics applications. This method is particularly expensive for third-generation sequences due to the high computational cost of analyzing sequences of length between 1Kb and 1Mb. Given the quadratic overhead of exact pairwise algorithms for long alignments, the community primarily relies on approximate algorithms that search only for high-quality alignments and stop early when one is not found. In this work, we present the first GPU optimization of the popular X-drop alignment algorithm, that we named LOGAN.
Results show that our high-performance multi-GPU implementation achieves up to 181.6 GCUPS and speed-ups up to 6.6x and 30.7x using 1 and 6 NVIDIA Tesla V100, respectively, over the state-of-the-art software running on two IBM Power9 processors using 168 CPU threads, with equivalent accuracy. We also demonstrate a 2.3x LOGAN speed-up versus ksw2, a state-of-the-art vectorized algorithm for sequence alignment implemented in minimap2, a long-read mapping software. To highlight the impact of our work on a real-world application, we couple LOGAN with a many-to-many long-read alignment software called BELLA, and demonstrate that our implementation improves the overall BELLA runtime by up to 10.6x. Finally, we adapt the Roofline model for LOGAN and demonstrate that our implementation is near-optimal on the NVIDIA Tesla V100s."}, "keywords": ["GPU-based X-Drop alignment"], "citation_intent": "method"} {"citing_id": "2303.11575v2", "cited_id": "1802.05797", "section_title": "Related Work", "citation": "Many users center their concern around the threats from other users, e.g., as a bystander #REFR .", "text_before_citation": ["#OTHEREFR discovered that users are concerned about the security issues in losing the authentication token.", "Lassak et al.'s study #OTHEREFR indicated that users have misconceptions about how biometric data is stored using FIDO2 biometric authentication.", "Both studies showed that users' security perception of authentication does not necessarily match the inherent security.", "Recent research also focused on users' security and privacy perception in VR.", "VR developers and users felt the lack of privacy due to opaque data collection policies #OTHEREFR ."], "text_after_citation": ["Users are also concerned about being deceived by the digital content in VR #OTHEREFR .", "User authentication for payment.
Authentication requirements can differ in different settings.", "For example, using a chip card may suffice in a physical store, whereas additional one-time passwords (OTPs) may be required while shopping online #OTHEREFR .", "The perceived security of authentication also impact users' adoption and use of payment services.", "For example, there is a significant uptake of mobile payments because users associate perceived control and security with user authentication on their devices #OTHEREFR . Voskobojnikov et al."], "citing_paper_content": {"title": "\"I Want The Payment Process To Be Cool\": Understanding How Interaction Factors Into Security And Privacy Perception Of Authentication In Virtual Reality", "abstract": "Users embrace the rapid development of virtual reality (VR) technology. We are witnessing a widespread adoption of VR technology in more routine settings, such as gaming, social interactions, shopping, and commerce. VR systems access sensitive user data and assets when handling these routine activities, including payment, which raises the need for user authentication in VR. However, there is a limited understanding of how users perceive user authentication in VR, in particular, how users' interaction experiences factor into their perception of security and privacy. Our work adopts a \"technology probe\" approach to understand this question. We design technology probes of authentication in VR based on existing authentication interactions in both VR and the physical world. Further, we embed these probes in the routine payment of a VR game. Our qualitative analysis reveals that users face unique usability challenges in VR authentication, e.g., in motion control. Such challenges also hinder users from accessing security and privacy accurately in VR authentication. Users' expectations for VR authentication mainly center on improvements in interaction. However, their expectations could appear nonspecific and conflicting. 
We provide recommendations to accommodate users' expectations and resolve conflicts between usability and security."}, "cited_paper_content": {"title": "Security And Privacy Approaches In Mixed Reality: A Literature Survey", "abstract": "Mixed reality (MR) technology development is now gaining momentum due to advances in computer vision, sensor fusion, and realistic display technologies. With most of the research and development focused on delivering the promise of MR, the privacy and security implications of this technology are yet to be thoroughly investigated. This survey article aims to put in to light these risks and to look into the latest security and privacy work on MR. Specifically, we list and review the different protection approaches that have been proposed to ensure user and data security and privacy in MR. We extend the scope to include work on related technologies such as augmented reality, virtual reality, and human-computer interaction as crucial components, if not the origins, of MR, as well as numerous related work from the larger area of mobile devices, wearables, and Internet-of-Things. We highlight the lack of investigation, implementation, and evaluation of data protection approaches in MR. 
Further challenges and directions on MR security and privacy are also discussed."}, "keywords": ["users", "threats"], "citation_intent": "background"} {"citing_id": "2303.09455v1", "cited_id": "1808.06226", "section_title": "Fine-Tuning", "citation": "For labels, we use a character set for Mandarin and subword units #REFR of vocabulary size 1000 for all other languages.", "text_before_citation": ["Following #OTHEREFR , we fine-tune the pre-trained visual student encoder to perform visual speech recognition by attaching a linear layer and a Transformer decoder with attention dimension 256, 4 heads, 6 layers and 2048 linear units.", "We apply a joint CTC / attention loss #OTHEREFR .", "The beam size and CTC weight are fixed to 40 and 0.1 respectively, as in #OTHEREFR ."], "text_after_citation": [], "citing_paper_content": {"title": "Learning Cross-Lingual Visual Speech Representations", "abstract": "Cross-lingual self-supervised learning has been a growing research topic in the last few years. However, current works only explored the use of audio signals to create representations. In this work, we study cross-lingual self-supervised visual representation learning. We use the recently-proposed Raw AudioVisual Speech Encoders (RAVEn) framework to pre-train an audiovisual model with unlabelled multilingual data, and then fine-tune the visual model on labelled transcriptions. Our experiments show that: (1) multilingual models with more data outperform monolingual ones, but, when keeping the amount of data fixed, monolingual models tend to reach better performance; (2) multilingual outperforms English-only pre-training; (3) using languages which are more similar yields better results; and (4) fine-tuning on unseen languages is competitive to using the target language in the pre-training set. 
We hope our study inspires future research on non-English-only speech representation learning."}, "cited_paper_content": {"title": "Sentencepiece: A Simple And Language Independent Subword Tokenizer And Detokenizer For Neural Text Processing", "abstract": "This paper describes SentencePiece, a language-independent subword tokenizer and detokenizer designed for Neural-based text processing, including Neural Machine Translation. It provides open-source C++ and Python implementations for subword units. While existing subword segmentation tools assume that the input is pre-tokenized into word sequences, SentencePiece can train subword models directly from raw sentences, which allows us to make a purely end-to-end and language independent system. We perform a validation experiment of NMT on English-Japanese machine translation, and find that it is possible to achieve comparable accuracy to direct subword training from raw sentences. We also compare the performance of subword training and segmentation with various configurations. SentencePiece is available under the Apache 2 license at https://github.com/google/sentencepiece."}, "keywords": ["Mandarin", "subword units"], "citation_intent": "method"} {"citing_id": "2303.08697v1", "cited_id": "1910.10683", "section_title": "Related Works", "citation": "Without any training, Codex has shown near state-of-the-art performance that is comparable with a fine-tuned T5 model #REFR .", "text_before_citation": ["Some concurrent works also explored pretrained language models for data querying. Rajkumar et al.", "#OTHEREFR The code editor displays SQL command generated by the pretrained code model.", "It allows users to edit and re-run the SQL command. 
(Bottom right) The automatically generated visualization is interactive.", "Users can easily export the generated visualization with the embedded menu.", "GPT-3 #OTHEREFR and Codex #OTHEREFR on the Spider #OTHEREFR benchmark."], "text_after_citation": ["Notably, this work serves as a proof-of-concept and lays a solid foundation for our work.", "Binder #OTHEREFR is a pipeline for few-shot data querying that generates SQL by utilizing the Codex API to fill in designated slots.", "Note that different from this work, Mirror focuses on building a human-in-the-loop system for data querying and analysis.", "Thus, Mirror is orthogonal to Binder and is designed to be generic and compatible with different prompting techniques with minimal modification.", "Business intelligence (BI) Tools are a type of data analysis applications that have user-friendly interfaces which allow people to connect to and query data sources, build dashboards to visualize data and share the results."], "citing_paper_content": {"title": "Mirror: A Natural Language Interface For Data Querying, Summarization, And Visualization", "abstract": "We present Mirror, an open-source platform for data exploration and analysis powered by large language models. Mirror offers an intuitive natural language interface for querying databases, and automatically generates executable SQL commands to retrieve relevant data and summarize it in natural language. In addition, users can preview and manually edit the generated SQL commands to ensure the accuracy of their queries. Mirror also generates visualizations to facilitate understanding of the data. Designed with flexibility and human input in mind, Mirror is suitable for both experienced data analysts and non-technical professionals looking to gain insights from their data. 
"}, "cited_paper_content": {"title": "Exploring The Limits Of Transfer Learning With A Unified Text-To-Text Transformer", "abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code."}, "keywords": ["fine-tuned T5 model"], "citation_intent": "result"} {"citing_id": "2304.11073v1", "cited_id": "1907.01669", "section_title": "Speech-Aware Dialog Systems Technology Challenge", "citation": "The organizers released a new version of MultiWOZ 2.1 #REFR with user utterances voiced by crowdworkers, as illustrated in Figure 1 .", "text_before_citation": ["The lack of recent work on spoken dialogue can be attributed in part to the lack of available datasets.", "Track 3 of the Dialog Systems Technology Challenge 11 4 seeks to promote work on spoken dialogue by releasing a spoken version of Multi-WOZ.", "This Multi-domain (restaurant, hotel, attraction, taxi, train, hospital and police) Wizard-of-Oz dataset is a large-scale human-human task-oriented conversational corpus commonly used for training and evaluating dialogue state tracking (DST), policy optimization and end-to-end dialogue modeling systems.", "The goal of this track is to characterize the performance of DST models in the presence of ASR errors and speech phenomena such as disfluencies."], "text_after_citation": ["Despite being widely used by the research community, MultiWOZ has been shown to exhibit an entity bias and a large overlap in the distribution of slot-values between the training and the evaluation sets which can lead to memorization in generative models (Qian et al., 2021).", "To encourage generalization, the organizers introduced modifications in the dev and test sets: the values for the slots hotel-name, restaurant-name, train-departure and train-destination were replaced with unseen entities, and time mentions were offset by a constant amount.", "User utterances in the dev and test sets are vocalized by crowdworkers.", "A speech synthesized version of the training data is also provided in the aim of assessing the validity of such data to mitigate the lack of real spoken 
conversations.", "Two verbatim versions of the dev set are provided to the participants, i.e."], "citing_paper_content": {"title": "Olisia: A Cascade System For Spoken Dialogue State Tracking", "abstract": "Though Dialogue State Tracking (DST) is a core component of spoken dialogue systems, recent work on this task mostly deals with chat corpora, disregarding the discrepancies between spoken and written language. In this paper, we propose OLISIA, a cascade system which integrates an Automatic Speech Recognition (ASR) model and a DST model. We introduce several adaptations in the ASR and DST modules to improve integration and robustness to spoken conversations. With these adaptations, our system ranked first in DSTC11 Track 3, a benchmark to evaluate spoken DST. We conduct an in-depth analysis of the results and find that normalizing the ASR outputs and adapting the DST inputs through data augmentation, along with increasing the pre-trained models size all play an important role in reducing the performance discrepancy between written and spoken conversations. 1"}, "cited_paper_content": {"title": "Multiwoz 2.1: A Consolidated Multi-Domain Dialogue Dataset With State Corrections And State Tracking Baselines", "abstract": "MultiWOZ 2.0 (Budzianowski et al., 2018) is a recently released multi-domain dialogue dataset spanning 7 distinct domains and containing over 10,000 dialogues. Though immensely useful and one of the largest resources of its kind to-date, MultiWOZ 2.0 has a few shortcomings. Firstly, there is substantial noise in the dialogue state annotations and dialogue utterances which negatively impact the performance of state-tracking models. Secondly, follow-up work (Lee et al., 2019) has augmented the original dataset with user dialogue acts. This leads to multiple co-existent versions of the same dataset with minor modifications. In this work we tackle the aforementioned issues by introducing MultiWOZ 2.1. 
To fix the noisy state annotations, we use crowdsourced workers to re-annotate state and utterances based on the original utterances in the dataset. This correction process results in changes to over 32% of state annotations across 40% of the dialogue turns. In addition, we fix 146 dialogue utterances by canonicalizing slot values in the utterances to the values in the dataset ontology. To address the second problem, we combined the contributions of the follow-up works into MultiWOZ 2.1. Hence, our dataset also includes user dialogue acts as well as multiple slot descriptions per dialogue state slot. We then benchmark a number of state-of-the-art dialogue state tracking models on the MultiWOZ 2.1 dataset and show the joint state tracking performance on the corrected state annotations. We are publicly releasing MultiWOZ 2.1 to the community, hoping that this dataset resource will allow for more effective models across various dialogue subproblems to be built in the future."}, "keywords": ["user utterances"], "citation_intent": "method"} {"citing_id": "2303.04906v1", "cited_id": "1910.13796", "section_title": "Introduction", "citation": "For example, traditional ML models can offer a better performance-to-complexity ratio on tabular data than DNNs #REFR . 
Third, DNNs act in a nontransparent black-box manner.", "text_before_citation": ["The popularity of FL caused the development of a plethora of FL frameworks, e.g., Intel \u00ae OpenFL #OTHEREFR , Flower #OTHEREFR , TensorFlow Federated [1] , and HPE Swarm Learning #OTHEREFR to cite a few.", "This software only supports one ML model type: Deep Neural Networks (DNNs).", "While DNNs have shown unprecedented results across a wide range of applications, from image recognition #OTHEREFR to natural language processing #OTHEREFR , from drug discovery #OTHEREFR to fraud detection #OTHEREFR , they are not the best ML model for every use case.", "First, DNNs require massive amounts of data, and collecting and labelling enough high-quality samples is often prohibitive.", "Second, DNNs are not the best for all types of data."], "text_after_citation": ["This property makes them undesirable when the model's output has to be explained or justified #OTHEREFR .", "Lastly, DNNs require high computational resources, and modern security-preserving approaches, e.g. #OTHEREFR , only exacerbate this issues #OTHEREFR .", "An FL system leveraging lightweight models could open new possibilities for the industry and allow research where such resources are hardly available.", "We propose the MAFL (Model-Agnostic Federated Learning) framework to alleviate these problems.", "This framework leverages Ensemble Learning to work with all types of ML models."], "citing_paper_content": {"title": "Model-Agnostic Federated Learning", "abstract": "Since its debut in 2016, Federated Learning (FL) has been tied to the inner workings of Deep Neural Networks (DNNs). On the one hand, this allowed its development and widespread use as DNNs proliferated. On the other hand, it neglected all those scenarios in which using DNNs is not possible or advantageous. The fact that most current FL frameworks only allow training DNNs reinforces this problem. 
To address the lack of FL solutions for non-DNN-based use cases, we propose MAFL (Model-Agnostic Federated Learning). MAFL marries a model-agnostic FL algorithm, AdaBoost.F, with an open industry-grade FL framework: Intel \u00ae OpenFL. MAFL is the first FL system not tied to any specific type of machine learning model, allowing exploration of FL scenarios beyond DNNs and trees. We test MAFL from multiple points of view, assessing its correctness, flexibility and scaling properties up to 64 nodes. We optimised the base software achieving a 5.5x speedup on a standard FL scenario. MAFL is compatible with x86-64, ARM-v8, Power and RISC-V."}, "cited_paper_content": {"title": "Deep Learning Vs. Traditional Computer Vision", "abstract": "Deep Learning has pushed the limits of what was possible in the domain of Digital Image Processing. However, that is not to say that the traditional computer vision techniques which had been undergoing progressive development in years prior to the rise of DL have become obsolete. This paper will analyse the benefits and drawbacks of each approach. The aim of this paper is to promote a discussion on whether knowledge of classical computer vision techniques should be maintained. The paper will also explore how the two sides of computer vision can be combined. Several recent hybrid methodologies are reviewed which have demonstrated the ability to improve computer vision performance and to tackle problems not suited to Deep Learning. For example, combining traditional computer vision techniques with Deep Learning has been popular in emerging domains such as Panoramic Vision and 3D vision for which Deep Learning models have not yet been fully optimised."}, "keywords": ["traditional ML models"], "citation_intent": "background"} {"citing_id": "2304.14611v1", "cited_id": "1703.01467", "section_title": "I.
Introduction", "citation": "However, it has been shown that in practice, minimizing the distortion does not result in perceptually satisfying output for human subjects #REFR .", "text_before_citation": ["Lossy compression plays a vital role in the communication and storage of images, videos, and audio data #OTHEREFR - #OTHEREFR .", "As the cornerstone of lossy compression, the classical Rate-Distortion (RD) theory #OTHEREFR studies the tradeoff between the bit rate used for representing data and the distortion caused by compression #OTHEREFR .", "The reconstruction quality is traditionally measured by a per-letter distortion metric, such as the mean-squared error."], "text_after_citation": ["Since high perceptual quality may come at the expense of distortion #OTHEREFR , #OTHEREFR , researchers are motivated to extend the RD theory by bringing perception into account #OTHEREFR - #OTHEREFR .", "Blau and Michaeli first proposed and studied the information Rate-Distortion-Perception (RDP) functions in #OTHEREFR .", "Theoretical solutions with closed form expressions to the RDP problem are often intractable, except for some special cases such as the Gaussian source with squared error distortion and Wasserstein-2 metric perception #OTHEREFR . Therefore, a computation method for RDP functions is desirable.", "Traditionally, the Blahut-Arimoto (BA) algorithm #OTHEREFR , #OTHEREFR has been successful in the computation of capacities and RD functions.", "However, to our best knowledge, we have not seen any generalization of the BA algorithm to computing RDP functions."], "citing_paper_content": {"title": "Computation Of Rate-Distortion-Perception Functions With Wasserstein Barycenter", "abstract": "The nascent field of Rate-Distortion-Perception (RDP) theory is seeing a surge of research interest due to the application of machine learning techniques in the area of lossy compression. 
The information RDP function characterizes the three-way trade-off between description rate, average distortion, and perceptual quality measured by discrepancy between probability distributions. However, computing RDP functions has been a challenge due to the introduction of the perceptual constraint, and existing research often resorts to data-driven methods. In this paper, we show that the information RDP function can be transformed into a Wasserstein Barycenter problem. The non-strict convexity brought by the perceptual constraint can be regularized by an entropy regularization term. We prove that the entropy regularized model converges to the original problem. Furthermore, we propose an alternating iteration method based on the Sinkhorn algorithm to numerically solve the regularized optimization problem. Experimental results demonstrate the efficiency and accuracy of the proposed algorithm."}, "cited_paper_content": {"title": "Generative Compression", "abstract": "Traditional image and video compression algorithms rely on hand-crafted encoder/decoder pairs (codecs) that lack adaptability and are agnostic to the data being compressed. Here we describe the concept of generative compression, the compression of data using generative models, and show its potential to produce more accurate and visually pleasing reconstructions at much deeper compression levels for both image and video data. We also demonstrate that generative compression is orders-of-magnitude more resilient to bit error rates (e.g.
from noisy wireless channels) than traditional variable-length entropy coding schemes."}, "keywords": ["distortion", "output"], "citation_intent": "background"} {"citing_id": "2304.08889v1", "cited_id": "1210.3184", "section_title": "Getting Started With Two Examples 4.1 A Standard Polynomial System", "citation": "For inner RoA approximation #REFR , the commands are Here the last argument with value 1 is an optional argument that asks SOStab to also represent the target set in the figure.", "text_before_citation": ["It returns the volume of the calculated RoA appproximation and the coefficients of the polynomial variables v k and w k .", "Once the optimization is done, the results can be plotted in two dimensions using:", "VdP.plot_roa(1, 2, 'outer');", "where the first two arguments indicate the indices of the plotted variables (respectively in abscissa and ordinate).", "The string \"outer\" indicates that the outer approximation RoA is plotted."], "text_after_citation": ["This gives the plot represented in Figure 1 , which reproduces results presented in [7, Figure 2 ] and [13, Figure 3] .", "The 3d-plots of polynomials v k and w k can be displayed with:", "VdP.plot_v(1, 2, 'outer'); VdP.plot_w(1, 2, 'outer');", "of course, one can also represent the certificates v k and w k obtained in inner approximation, simply by setting the last argument at 'inner'."], "citing_paper_content": {"title": "Sostab: A Matlab Toolbox For Approximating Regions Of Attraction Of Nonlinear Systems", "abstract": "This paper presents a novel Matlab toolbox, aimed at facilitating the use of polynomial optimization for stability analysis of nonlinear systems. Indeed, in the past decade several decisive contributions made it possible to recast the difficult problem of computing stability regions of nonlinear systems, under the form of convex optimization problems that are tractable in modest dimensions. 
However, these techniques combine sophisticated frameworks such as algebraic geometry, measure theory and mathematical programming, and existing software still requires their user to be fluent in Sum-of-Squares and Moment programming, preventing these techniques from being used more widely in the control community. To address this issue, SOStab entirely automates the writing and solving of optimization problems, and directly outputs relevant data for the user, while requiring minimal input. In particular, no specific knowledge of optimization is needed to use it."}, "cited_paper_content": {"title": "Inner Approximations Of The Region Of Attraction For Polynomial Dynamical Systems", "abstract": "In a previous work we developed a convex infinite dimensional linear programming (LP) approach to approximating the region of attraction (ROA) of polynomial dynamical systems subject to compact basic semialgebraic state constraints. Finite dimensional relaxations to the infinite-dimensional LP lead to a truncated moment problem in the primal and a polynomial sum-of-squares problem in the dual. This primal-dual linear matrix inequality (LMI) problem can be solved numerically with standard semidefinite programming solvers, producing a hierarchy of outer (i.e. exterior) approximations of the ROA by polynomial sublevel sets, with a guarantee of almost uniform and set-wise convergence. 
In this companion paper, we show that our approach is flexible enough to be modified so as to generate a hierarchy of polynomial inner (i.e.\\,interior) approximations of the ROA with similar convergence guarantees."}, "keywords": ["inner RoA approximation"], "citation_intent": "background"} {"citing_id": "2304.02729v1", "cited_id": "1907.08474", "section_title": "Experiments On Real Data", "citation": "In Figure 10 we show results only for the instance groups for which Hybroscale or TreeChild could output a solution within 1 hour, consistent with the experiments in #REFR .", "text_before_citation": ["For sufficiently small instances, we compared the results of our heuristics with the results of two existing tools for reconstructing networks from binary trees: TreeChild #OTHEREFR and Hybroscale #OTHEREFR .", "Hybroscale is an exact method performing an exhaustive search on the networks displaying the input trees, therefore it can only handle reasonably small instances in terms of the number of input trees.", "TreeChild is a fixed-parameter (in the number of reticulations of the output) exact algorithm that reconstructs the best tree-child network, a restricted class of phylogenetic networks, and due to its fast-growing computation time cannot handle large instances either.", "We tested ML and TrivialRand against Hybroscale and TreeChild using the same dataset used in #OTHEREFR , in turn taken from #OTHEREFR .", "The dataset consists of ten instances for each possible combination of the parameters L \u2208 {10, 20, 30, 40, 50, 60, 80, 100, 150} and |T | \u2208 #OTHEREFR ."], "text_after_citation": ["As a consequence of Hybroscale and TreeChild being exact methods (TreeChild only for a restricted class of networks), they performed better than both ML and TrivialRand on all instances they could solve, although the best results of TrivialRand are often close (no worse than 15%) and sometimes match the optimal value.", "The main advantage of our heuristics is that they 
can handle much larger instances than the exact methods.", "In the conference version of this paper #OTHEREFR we showed the results of our heuristics on large real instances, using a ML model trained on 10 networks with at most 100 leaves each.", "These results demonstrated that consistently with the simulated data, the machine-learned heuristics gave significantly better results than the randomised ones for the largest instances.", "When we first repeated the experiments with the new models trained on 1000 networks with maxL = 100, however, we did not obtain similar results: instead, the results of the randomised heuristics were better or only marginally worse than the machine-learned ones on almost all the instance groups, including the largest."], "citing_paper_content": {"title": "Constructing Phylogenetic Networks Via Cherry Picking And Machine Learning *", "abstract": "Combining a set of phylogenetic trees into a single phylogenetic network that explains all of them is a fundamental challenge in evolutionary studies. Existing methods are computationally expensive and can either handle only small numbers of phylogenetic trees or are limited to severely restricted classes of networks. In this paper, we apply the recently-introduced theoretical framework of cherry picking to design a class of efficient heuristics that are guaranteed to produce a network containing each of the input trees, for datasets consisting of binary trees. Some of the heuristics in this framework are based on the design and training of a machine learning model that captures essential information on the structure of the input trees and guides the algorithms towards better solutions. We also propose simple and fast randomised heuristics that prove to be very effective when run multiple times. 
Unlike the existing exact methods, our heuristics are applicable to datasets of practical size, and the experimental study we conducted on both simulated and real data shows that these solutions are qualitatively good, always within some small constant factor from the optimum. Moreover, our machine-learned heuristics are one of the first applications of machine learning to phylogenetics and show its promise."}, "cited_paper_content": {"title": "A Practical Fixed-Parameter Algorithm For Constructing Tree-Child Networks From Multiple Binary Trees", "abstract": "We present the first fixed-parameter algorithm for constructing a tree-child phylogenetic network that displays an arbitrary number of binary input trees and has the minimum number of reticulations among all such networks. The algorithm uses the recently introduced framework of cherry picking sequences and runs in $O((8k)^k \\mathrm{poly}(n, t))$ time, where $n$ is the number of leaves of every tree, $t$ is the number of trees, and $k$ is the reticulation number of the constructed network. Moreover, we provide an efficient parallel implementation of the algorithm and show that it can deal with up to $100$ input trees on a standard desktop computer, thereby providing a major improvement over previous phylogenetic network construction methods."}, "keywords": ["TreeChild"], "citation_intent": "result"} {"citing_id": "2303.07236v1", "cited_id": "1611.03631", "section_title": "B. 
Volumetric Exploration", "citation": "It operates over a volumetric representation of the environment based on #REFR and functions in a bifurcated architecture of local-and global path planning.", "text_before_citation": ["SWAP implements its autonomous exploration functionality by interfacing our previous and open-sourced work on graph-based exploration (GBPlanner) #OTHEREFR .", "The method, verified extensively in subterranean and industrial environments, offers efficient exploration within a volume of set bounds assuming no prior map knowledge."], "text_after_citation": ["At the local stage the method exploits a dense random graph G L E around the robot to identify collisionfree paths maximizing volumetric exploration.", "Simultaneously, as such local steps take place, the algorithm builds a sparse global graph G G E , used by the global stage that is invoked when local exploration reports inability to find a path of significant gain or when the robot approaches its endurance limits.", "Accordingly, the method offers repositioning to previously detected unexploited frontiers of the exploration space or timely auto-homing.", "In SWAP, the autonomous exploration behavior is invoked for T e seconds, before the system possibly switches to its semantically-driven behaviors. Fig. 3 .", "Viewpoint generation procedure for a boundary edge of a hole during Semantics Hole Coverage mode."], "citing_paper_content": {"title": "Semantics-Aware Exploration And Inspection Path Planning", "abstract": "This paper contributes a novel strategy for semantics-aware autonomous exploration and inspection path planning. 
Attuned to the fact that environments that need to be explored often involve a sparse set of semantic entities of particular interest, the proposed method offers volumetric exploration combined with two new planning behaviors that together ensure that a complete mesh model is reconstructed for each semantic, while its surfaces are observed at appropriate resolution and through suitable viewing angles. Evaluated in extensive simulation studies and experimental results using a flying robot, the planner delivers efficient combined exploration and high-fidelity inspection planning that is focused on the semantics of interest. Comparisons against relevant methods of the state-of-the-art are further presented."}, "cited_paper_content": {"title": "Voxblox: Incremental 3D Euclidean Signed Distance Fields For On-Board Mav Planning", "abstract": "Micro Aerial Vehicles (MAVs) that operate in unstructured, unexplored environments require fast and flexible local planning, which can replan when new parts of the map are explored. Trajectory optimization methods fulfill these needs, but require obstacle distance information, which can be given by Euclidean Signed Distance Fields (ESDFs). We propose a method to incrementally build ESDFs from Truncated Signed Distance Fields (TSDFs), a common implicit surface representation used in computer graphics and vision. TSDFs are fast to build and smooth out sensor noise over many observations, and are designed to produce surface meshes. We show that we can build TSDFs faster than Octomaps, and that it is more accurate to build ESDFs out of TSDFs than occupancy maps. Our complete system, called voxblox, is available as open source and runs in real-time on a single CPU core. 
We validate our approach on-board an MAV, by using our system with a trajectory optimization local planner, entirely on-board and in real-time."}, "keywords": ["planning"], "citation_intent": "background"} {"citing_id": "2303.08904v1", "cited_id": "0709.4118", "section_title": "Exploring Minimizations", "citation": "Like our implementation, the prototype of SA #REFR ran out of memory while determining similarity for vasy_18_73.", "text_before_citation": ["Our algorithm can deal with bigger examples than #OTHEREFR (which fails at peterson, vasy_10_56 and cwi_1_2, and takes more than 500 seconds for vasy_8_24). Even where #OTHEREFR has a smaller game graph (e.g. cwi_3_14), the exponential formula construction renders it slower.", "Also, the clever game graph indeed is much smaller than for examples with a lot of nondeterminism such as peterson.", "Of those terminating, the heavily nondeterministic cwi_1_2 is the most expensive example.", "As many coarse notions must record the nondeterministic options, this blowup is to be expected.", "If we compare to the best similarity algorithm by Ranzato and Tapparo #OTHEREFR , they report their algorithm SA to tackle cwi_1_2 single-handedly."], "text_after_citation": ["This is in spite of SA theoretically having optimal complexity and similarity being less complex (cubic) than trace equivalence, which we need to cover.", "The benchmarks in #OTHEREFR failed at vasy_10_56, and vasy_25_25, which might be due to 2010's tighter memory requirements (they used 2 GB of RAM) or the degree to which bisimilarity and enabledness in the models is exploited."], "citing_paper_content": {"title": "Process Equivalence Problems As Energy Games", "abstract": "We characterize all common notions of behavioral equivalence by one 6-dimensional energy game, where energies bound capabilities of an attacker trying to tell processes apart. 
The defender-winning initial credits determine exhaustively which preorders and equivalences from the (strong) linear-time-branching-time spectrum relate processes. The time complexity is exponential, which is optimal due to trace equivalence being covered. This complexity improves drastically on our recent approach for deciding groups of equivalences where exponential sets of distinguishing HML formulas are constructed on top of a super-exponential reachability game. In experiments using the VLTS benchmarks, the algorithm performs on par with the best similarity algorithm."}, "cited_paper_content": {"title": "An Efficient Simulation Algorithm Based On Abstract Interpretation", "abstract": "A number of algorithms for computing the simulation preorder are available. Let Sigma denote the state space, ->the transition relation and Psim the partition of Sigma induced by simulation equivalence. The algorithms by Henzinger, Henzinger, Kopke and by Bloom and Paige run in O(|Sigma||->|)-time and, as far as time-complexity is concerned, they are the best available algorithms. However, these algorithms have the drawback of a space complexity that is more than quadratic in the size of the state space. The algorithm by Gentilini, Piazza, Policriti--subsequently corrected by van Glabbeek and Ploeger--appears to provide the best compromise between time and space complexity. Gentilini et al.'s algorithm runs in O(|Psim|^2|->|)-time while the space complexity is in O(|Psim|^2 + |Sigma|log|Psim|). We present here a new efficient simulation algorithm that is obtained as a modification of Henzinger et al.'s algorithm and whose correctness is based on some techniques used in applications of abstract interpretation to model checking. Our algorithm runs in O(|Psim||->|)-time and O(|Psim||Sigma|log|Sigma|)-space. Thus, this algorithm improves the best known time bound while retaining an acceptable space complexity that is in general less than quadratic in the size of the state space. 
An experimental evaluation showed good comparative results with respect to Henzinger, Henzinger and Kopke's algorithm."}, "keywords": ["similarity", "implementation"], "citation_intent": "method"} {"citing_id": "2303.01265v1", "cited_id": "1806.03536", "section_title": "Introduction", "citation": "However, it is still challenging for the labeled nodes to propagate their information far away using a conventional message passing algorithm, since the influence of labeled nodes decays as the topological distance increases #REFR .", "text_before_citation": ["Graph or network is widely used for describing the interactions between elements of a complex system, such as those in social networks #OTHEREFR , knowledge graphs #OTHEREFR , molecular graphs #OTHEREFR , and recommender systems #OTHEREFR .", "To deal with those non-Euclidean data for various graph analytical tasks such as node classification #OTHEREFR and link prediction #OTHEREFR , graph neural networks (GNNs) #OTHEREFR , #OTHEREFR have been developed and shown having superior performances.", "The core of current GNNs such as GCN #OTHEREFR is message passing.", "In message passing, feature representations are learned for each node by recursively performing aggregation and transformation on the representations of its immediate neighbors, revealing that information about long-distance neighbors can be captured this way."], "text_after_citation": ["Moreover, increasing message passing number will lead to oversmoothing #OTHEREFR , #OTHEREFR , i.e., the case where representations are determined by the graph structure.", "While techniques like residual connections used in GCNII #OTHEREFR allow the network architecture to be deeper, they substantially increase the number of learnable parameters and computational complexity of the GNN.", "Another shortcoming of message passing is its negative smoothing effect in the circumstances where the nodes of the same type are discontinuously distributed in the topology space.", "For
instance, in heterophilious graphs, the immediate neighbors of a node come from different classes.", "It has been revealed #OTHEREFR that in smoothing such nodes, message passing forcefully makes the feature representations of the nodes with different labels approximate the average of the local neighborhood, thus deteriorating the representation learning."], "citing_paper_content": {"title": "Steering Graph Neural Networks With Pinning Control", "abstract": "In the semi-supervised setting where labeled data are largely limited, it remains to be a big challenge for message passing based graph neural networks (GNNs) to learn feature representations for the nodes with the same class label that is distributed discontinuously over the graph. To resolve the discontinuous information transmission problem, we propose a control principle to supervise representation learning by leveraging the prototypes (i.e., class centers) of labeled data. Treating graph learning as a discrete dynamic process and the prototypes of labeled data as \"desired\" class representations, we borrow the pinning control idea from automatic control theory to design learning feedback controllers for the feature learning process, attempting to minimize the differences between message passing derived features and the class prototypes in every round so as to generate class-relevant features. Specifically, we equip every node with an optimal controller in each round through learning the matching relationships between nodes and the class prototypes, enabling nodes to rectify the aggregated information from incompatible neighbors in a graph with strong heterophily.
Our experiments demonstrate that the proposed PCGCN model achieves better performances than deep GNNs and other competitive heterophily-oriented methods, especially when the graph has very few labels and strong heterophily."}, "cited_paper_content": {"title": "Representation Learning On Graphs With Jumping Knowledge Networks", "abstract": "Recent deep learning approaches for representation learning on graphs follow a neighborhood aggregation procedure. We analyze some important properties of these models, and propose a strategy to overcome those. In particular, the range of \"neighboring\" nodes that a node's representation draws from strongly depends on the graph structure, analogous to the spread of a random walk. To adapt to local neighborhood properties and tasks, we explore an architecture -- jumping knowledge (JK) networks -- that flexibly leverages, for each node, different neighborhood ranges to enable better structure-aware representation. In a number of experiments on social, bioinformatics and citation networks, we demonstrate that our model achieves state-of-the-art performance. Furthermore, combining the JK framework with models like Graph Convolutional Networks, GraphSAGE and Graph Attention Networks consistently improves those models' performance."}, "keywords": ["labeled nodes"], "citation_intent": "method"} {"citing_id": "2304.04784v1", "cited_id": "1611.01232", "section_title": "", "citation": "Second, what role does the particular distribution of post-activations in a given layer play in determining network performance? 
For example, the activation function considered in #REFR is hyperbolic tangent, which we adopt henceforth.", "text_before_citation": ["The boundary between these two phases is a critical line called the edge of chaos, which is a continuous phase transition characterized by a diverging correlation length ξ for the layer-to-layer two-point function of the neurons.", "Since the correlation length sets the depth scale at which information can propagate, this theoretically enables networks of arbitrary depth to be trained at criticality (more generally, networks are trainable provided their depth does not exceed the scale set by ξ).", "In other words, the deeper the network, the closer one must lie to the edge of chaos; this was demonstrated in #OTHEREFR along a slice of parameter space at bias variance 0.05 and weight variance ranging from 1 to 4, and subsequently generalized/corroborated in, e.g., #OTHEREFR . Several questions naturally arise from the above work.", "First, given that the network parameters will evolve under training in order to minimize the specified cost function and, in particular, develop interdependencies, why does the choice of initialization have such a decisive effect on network performance? Indeed, it was observed in #OTHEREFR that the hidden-layer pre-activation distributions (as quantified by their variance) rapidly approach some asymptotic value within 10 or fewer layers, and then remain relatively unchanged for arbitrarily many additional layers.", "We corroborate this fact at the level of the post-activation in fig. 6 of appendix A."], "text_after_citation": ["When σ_b^2 ≪ 1 and σ_w^2 ≪ 1, the pre-activations z of the hidden layers are approximately Gaussian-distributed with small variance (cf.
(8)).", "In this case, tanh(z) ≈ z, so the network behaves like a linear network.", "These are quite restrictive, being incapable of representing functions whose output data are nonlinearly separable and cannot be generated by a combination of linearly separable data.", "In the opposite extreme, for large values of σ_w^2 and σ_b^2, the pre-activation variance becomes so large that the post-activation distribution becomes peaked at ±1.", "In other words, large pre-activation variance saturates the tanh, causing it to behave like a discrete step-function."], "citing_paper_content": {"title": "Criticality Versus Uniformity In Deep Neural Networks", "abstract": "Deep feedforward networks initialized along the edge of chaos exhibit exponentially superior training ability as quantified by maximum trainable depth. In this work, we explore the effect of saturation of the tanh activation function along the edge of chaos. In particular, we determine the line of uniformity in phase space along which the post-activation distribution has maximum entropy. This line intersects the edge of chaos, and indicates the regime beyond which saturation of the activation function begins to impede training efficiency. Our results suggest that initialization along the edge of chaos is a necessary but not sufficient condition for optimal trainability."}, "cited_paper_content": {"title": "Deep Information Propagation", "abstract": "We study the behavior of untrained neural networks whose weights and biases are randomly distributed using mean field theory. We show the existence of depth scales that naturally limit the maximum depth of signal propagation through these random networks. Our main practical result is to show that random networks may be trained precisely when information can travel through them. Thus, the depth scales that we identify provide bounds on how deep a network may be trained for a specific choice of hyperparameters.
As a corollary to this, we argue that in networks at the edge of chaos, one of these depth scales diverges. Thus arbitrarily deep networks may be trained only sufficiently close to criticality. We show that the presence of dropout destroys the order-to-chaos critical point and therefore strongly limits the maximum trainable depth for random networks. Finally, we develop a mean field theory for backpropagation and we show that the ordered and chaotic phases correspond to regions of vanishing and exploding gradient respectively."}, "keywords": ["activation function"], "citation_intent": "background"} {"citing_id": "2304.14738v1", "cited_id": "1804.00344", "section_title": "J Algorithms", "citation": "Table 11 : Comparing CSST(FixMatch) against other Semi-Supervised Learning Methods for long tailed data distribution for the objectives (4), #REFR .", "text_before_citation": ["We provide a detailed description of algorithms used for optimizing non-decomposable objectives through CSST(FixMatch) and CSST(UDA).", "Algorithm 1 is used for experiments in Section 5 for maximizing worst-case recall (i.e. min recall using CSST(FixMatch) and CSST(UDA)).", "Algorithm 2 is used for experiments in Section 5 for maximizing recall under coverage constraints (i.e. min coverage experiments on CIFAR10-LT, CIFAR100-LT and ImageNet100-LT)."], "text_after_citation": ["Only CSST(FixMatch) is the closest to satisfying the constraint yet suffers very little on the avg.
recall.", "these methods on the long-tailed CIFAR-10 (\u03c1 = 100) and CIFAR-100 (\u03c1 = 10) datasets.", "The objective for long-tailed CIFAR-10 dataset was to maximise the worst-case recall (2.1) and average recall, subject to a per-class coverage constraint (4).", "For CIFAR-100 LT dataset, we compare these methods for the objectives maximizing HT recall #OTHEREFR and recall under HT coverage constraints #OTHEREFR .", "For the objectives (2.1) and #OTHEREFR , DARP achieves the best average recall yet it suffers on the worst-case recall."], "citing_paper_content": {"title": "Cost-Sensitive Self-Training For Optimizing Non-Decomposable Metrics", "abstract": "Self-training based semi-supervised learning algorithms have enabled the learning of highly accurate deep neural networks, using only a fraction of labeled data. However, the majority of work on self-training has focused on the objective of improving accuracy whereas practical machine learning systems can have complex goals (e.g. maximizing the minimum of recall across classes etc.) that are non-decomposable in nature. In this work, we introduce the Cost-Sensitive Self-Training (CSST) framework which generalizes the self-training-based methods for optimizing non-decomposable metrics. We prove that our framework can better optimize the desired non-decomposable metric utilizing unlabeled data, under similar data distribution assumptions made for the analysis of self-training. Using the proposed CSST framework we obtain practical self-training methods (for both vision and NLP tasks) for optimizing different non-decomposable metrics using deep neural networks. 
Our results demonstrate that CSST achieves an improvement over the state-of-the-art in majority of the cases across datasets and objectives."}, "cited_paper_content": {"title": "Marian: Fast Neural Machine Translation In C++", "abstract": "We present Marian, an efficient and self-contained Neural Machine Translation framework with an integrated automatic differentiation engine based on dynamic computation graphs. Marian is written entirely in C++. We describe the design of the encoder-decoder framework and demonstrate that a research-friendly toolkit can achieve high training and translation speed."}, "keywords": ["Semi-Supervised Learning Methods"], "citation_intent": "method"} {"citing_id": "2303.04940v2", "cited_id": "1803.02077", "section_title": "Non-Aligned Supervision", "citation": "To better explore the unaligned and clear reference image, we are inspired by the contextual loss #REFR which aims to compute the cosine similar distance between the unaligned images for imageto-image translation tasks.", "text_before_citation": ["All layers use 4 \u00d7 4 Convolution-BatchNorm-LeakyReLU layers with stride 2, except that the first and last layers do not use the BatchNorm.", "Furthermore, we extend the adversarial loss to a multi-scale adversarial loss:", "EQUATION", "where i represents the different scales, and J is generated by the dehazing network so that it is trained by using the above loss.", "Multi-scale contextual loss."], "text_after_citation": ["We extend it to a multi-scale contextual loss, which is defined as", "EQUATION", "where S is the contextual similarity between image features, and \u03a6 l (J) and \u03a6 l (J ref ) represent the feature maps extracted from layer l of the VGG-16 network \u03a6 with the inputs J and J ref , respectively.", "Our non-aligned supervision brings an important benefit to relaxing the strict alignment requirement on the hazy/clear image pair.", "This leads us to easily collect the non-aligned hazy/clear image pairs in 
the same real scenes under some relaxed conditions, e.g., pixel misalignment, and shift views."], "citing_paper_content": {"title": "Non-Aligned Supervision For Real Image Dehazing", "abstract": "Removing haze from real-world images is challenging due to unpredictable weather conditions, resulting in misaligned hazy and clear image pairs. In this paper, we propose a non-aligned supervision framework that consists of three networks-dehazing, airlight, and transmission. In particular, we explore a non-alignment setting by utilizing a clear reference image that is not aligned with the hazy input image to supervise the dehazing network through a multi-scale reference loss that compares the features of the two images. Our setting makes it easier to collect hazy/clear image pairs in real-world environments, even under conditions of misalignment and shift views. To demonstrate this, we have created a new hazy dataset called \"Phone-Hazy\", which was captured using mobile phones in both rural and urban areas. Additionally, we present a mean and variance self-attention network to model the infinite airlight using dark channel prior as position guidance, and employ a channel attention network to estimate the three-channel transmission. Experimental results show that our framework outperforms current state-of-the-art methods in the real-world image dehazing. Phone-Hazy and code will be available at here."}, "cited_paper_content": {"title": "The Contextual Loss For Image Transformation With Non-Aligned Data", "abstract": "Feed-forward CNNs trained for image transformation problems rely on loss functions that measure the similarity between the generated image and a target image. Most of the common loss functions assume that these images are spatially aligned and compare pixels at corresponding locations. However, for many tasks, aligned training pairs of images will not be available. 
We present an alternative loss function that does not require alignment, thus providing an effective and simple solution for a new space of problems. Our loss is based on both context and semantics -- it compares regions with similar semantic meaning, while considering the context of the entire image. Hence, for example, when transferring the style of one face to another, it will translate eyes-to-eyes and mouth-to-mouth. Our code can be found at https://www.github.com/roimehrez/contextualLoss"}, "keywords": ["contextual loss"], "citation_intent": "method"} {"citing_id": "2304.10637v1", "cited_id": "1910.03771", "section_title": "Entity Boundary Detection", "citation": "Our implementation is based on the sequence labelling implementation of the Huggingface open-source library #REFR .", "text_before_citation": ["Given unlabelled text as input, we predict named entity boundaries by analyzing the input sentence structure ( Figure 2 ).", "We treat this task as a sequence labelling task in which the model predicts if a given token is part of an entity or not (\"B-ENTITY\", \"I-ENTITY\",\"O\").", "We use the multilingual XLM-RoBERTa-large model #OTHEREFR ) with a token classification layer (a linear layer) on top of each token representation."], "text_after_citation": ["We evaluate the model in the development set at the end of each epoch and then select the best performing checkpoint.", "We train five independent models and then use majority vote as the ensembling strategy at inference time."], "citing_paper_content": {"title": "Ixa/Cogcomp At Semeval-2023 Task 2: Context-Enriched Multilingual Named Entity Recognition Using Knowledge Bases", "abstract": "Named Entity Recognition (NER) is a core natural language processing task in which pretrained language models have shown remarkable performance. 
However, standard benchmarks like CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003) do not address many of the challenges that deployed NER systems face, such as having to classify emerging or complex entities in a fine-grained way. In this paper we present a novel NER cascade approach comprising three steps: first, identifying candidate entities in the input sentence; second, linking each candidate to an existing knowledge base; third, predicting the fine-grained category for each entity candidate. We empirically demonstrate the significance of external knowledge bases in accurately classifying fine-grained and emerging entities. Our system exhibits robust performance in the MultiCoNER2 (Fetahu et al., 2023) shared task, even in the low-resource language setting where we leverage knowledge bases of high-resource languages."}, "cited_paper_content": {"title": "Huggingface'S Transformers: State-Of-The-Art Natural Language Processing", "abstract": "Recent advances in modern Natural Language Processing (NLP) research have been dominated by the combination of Transfer Learning methods with large-scale language models, in particular based on the Transformer architecture. With them came a paradigm shift in NLP with the starting point for training a model on a downstream task moving from a blank specific model to a general-purpose pretrained architecture. Still, creating these general-purpose models remains an expensive and time-consuming process restricting the use of these methods to a small sub-set of the wider NLP community. In this paper, we present HuggingFace's Transformers library, a library for state-of-the-art NLP, making these developments available to the community by gathering state-of-the-art general-purpose pretrained models under a unified API together with an ecosystem of libraries, examples, tutorials and scripts targeting many downstream NLP tasks.
HuggingFace's Transformers library features carefully crafted model implementations and high-performance pretrained weights for two main deep learning frameworks, PyTorch and TensorFlow, while supporting all the necessary tools to analyze, evaluate and use these models in downstream tasks such as text/token classification, question answering and language generation among others. The library has gained significant organic traction and adoption among both the researcher and practitioner communities. We are committed at HuggingFace to pursue the efforts to develop this toolkit with the ambition of creating the standard library for building NLP systems. HuggingFace's Transformers library is available at \\url{https://github.com/huggingface/transformers}."}, "keywords": ["sequence labelling implementation", "Huggingface open-source library"], "citation_intent": "method"} {"citing_id": "2303.17912v1", "cited_id": "1904.03278", "section_title": "Post-Processing", "citation": "We run MoSh++ #REFR (henceforth referred to as MoSh) on the C3D files to acquire the SMPL-X parameters corresponding to each frame in the sequence (Fig. 3, right). Synthetic sensor information.", "text_before_citation": ["After collection, CIRCLE data are passed through a variety of post-processing steps.
Mocap data processing.", "We use Shogun Post to process and export the captured clips to BVH and C3D formats.", "Offline synchronization. Due to latency between the headset and the webserver communicating with the Vicon machine, the start times of the headset and mocap data are misaligned and must be synchronized.", "By assuming that the offset between the head bone and the headset remains constant during each sequence, this can be accomplished by solving for the time offset which maximizes the convolution between the velocity profiles of the head bone and headset.", "With start times aligned, we then trim the sequences to the same length and linearly interpolate the headset poses such that every mocap frame has a corresponding headset pose. Human mesh fitting."], "text_after_citation": ["After synchronization, we load both the mocap data in BVH format and the VR trajectories in Habitat and extract synthetic sensor information such as ego-centric RGB-D videos (Fig. 3, middle).", "Additionally, we can use Blender to render first-person RGB-D videos with the SMPL-X meshes calculated by MoSh. Quality assurance.", "Identifying and fixing sequences with artifacts is a demanding manual process.", "We find that our pipeline has a very high yield of data that does not need to be fixed, so our focus is on identifying sequences with problems so that they can be collected again.", "To help prioritize, we develop a suite of tools that automatically check for common problems, such as:"], "citing_paper_content": {"title": "Circle: Capture In Rich Contextual Environments", "abstract": "Figure 1. Example poses from CIRCLE captured from real human motion in a virtual environment."}, "cited_paper_content": {"title": "Amass: Archive Of Motion Capture As Surface Shapes", "abstract": "Large datasets are the cornerstone of recent advances in computer vision using deep learning.
In contrast, existing human motion capture (mocap) datasets are small and the motions limited, hampering progress on learning models of human motion. While there are many different datasets available, they each use a different parameterization of the body, making it difficult to integrate them into a single meta dataset. To address this, we introduce AMASS, a large and varied database of human motion that unifies 15 different optical marker-based mocap datasets by representing them within a common framework and parameterization. We achieve this using a new method, MoSh++, that converts mocap data into realistic 3D human meshes represented by a rigged body model. Here we use SMPL [Loper et al., 2015], which is widely used and provides a standard skeletal representation as well as a fully rigged surface mesh. The method works for arbitrary marker sets, while recovering soft-tissue dynamics and realistic hand motion. We evaluate MoSh++ and tune its hyperparameters using a new dataset of 4D body scans that are jointly recorded with marker-based mocap. The consistent representation of AMASS makes it readily useful for animation, visualization, and generating training data for deep learning.
Our dataset is significantly richer than previous human motion collections, having more than 40 hours of motion data, spanning over 300 subjects, more than 11000 motions, and will be publicly available to the research community."}, "keywords": ["frame"], "citation_intent": "method"} {"citing_id": "2303.15361v1", "cited_id": "1704.08509", "section_title": "Beyond Vanilla Source Model", "citation": "In addition to the source model, MAS #REFR [324] also provides the estimated GMM of source features for the target domain.", "text_before_citation": ["TAN #OTHEREFR individually learns the feature encoder and the classifier in the source training stage in which the encoder is optimized based on a reconstruction objective.", "StickerDA #OTHEREFR and TTT++ #OTHEREFR introduce an auxiliary self-supervised classification head in the source model, and GarDA #OTHEREFR and ADV-M #OTHEREFR need to learn a domain discriminator from the source domain to help the semantic model adapt to the target domain.", "Besides, a few methods #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR employ the multi-head classifier learning strategy by learning multiple classification heads with one shared feature encoder in the source domain.", "On top of the prior-enforcing auto-encoder, SoMAN-cPAE #OTHEREFR also learns a global classification head and multiple local classification heads for different data augmentations.", "Extra supervision."], "text_after_citation": ["BUFR #OTHEREFR further requires the marginal feature distributions in softly-binned histograms for measurement shifts.", "U-SFAN #OTHEREFR utilizes the distribution of the Bayesian source classifier to quantify the uncertainty for more accurate adaptation.", "Moreover, Prototype-DA #OTHEREFR and OnDA #OTHEREFR need the existence of class prototypes calculated in the source domain.", "FTA-FDA #OTHEREFR needs to retain some random spectrum maps of source data for the following data translation step, while CATTAn #OTHEREFR transfers the 
subspace learned from source features to the target domain.", "Some approaches #OTHEREFR , #OTHEREFR achieve feature alignment using different orders of moments for features in the source domain."], "citing_paper_content": {"title": "A Comprehensive Survey On Test-Time Adaptation Under Distribution Shifts", "abstract": "Machine learning methods strive to acquire a robust model during training that can generalize well to test samples, even under distribution shifts. However, these methods often suffer from a performance drop due to unknown test distributions. Test-time adaptation (TTA), an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions. Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference. In this survey, we divide TTA into several distinct categories, namely, test-time (source-free) domain adaptation, test-time batch adaptation, online test-time adaptation, and test-time prior adaptation. For each category, we provide a comprehensive taxonomy of advanced algorithms, followed by a discussion of different learning scenarios. Furthermore, we analyze relevant applications of TTA and discuss open challenges and promising areas for future research. A comprehensive list of TTA methods can be found at https://github.com/tim-learn/awesome-test-time-adaptation."}, "cited_paper_content": {"title": "No More Discrimination: Cross City Adaptation Of Road Scene Segmenters", "abstract": "Despite the recent success of deep-learning based semantic segmentation, deploying a pre-trained road scene segmenter to a city whose images are not presented in the training set would not achieve satisfactory performance due to dataset biases. 
Instead of collecting a large number of annotated images of each city of interest to train or refine the segmenter, we propose an unsupervised learning approach to adapt road scene segmenters across different cities. By utilizing Google Street View and its time-machine feature, we can collect unannotated images for each road scene at different times, so that the associated static-object priors can be extracted accordingly. By advancing a joint global and class-specific domain adversarial learning framework, adaptation of pre-trained segmenters to that city can be achieved without the need of any user annotation or interaction. We show that our method improves the performance of semantic segmentation in multiple cities across continents, while it performs favorably against state-of-the-art approaches requiring annotated training data."}, "keywords": ["target domain"], "citation_intent": "background"} {"citing_id": "2303.06261v1", "cited_id": "1603.00567", "section_title": "Introduction", "citation": "Macrobase #REFR explains outliers by correlating them to some external attributes which are not used to detect anomalies such as location, time of occurrence, software version, etc.", "text_before_citation": ["If a system is able to summarize anomaly detection results into groups and explain why each group of objects is considered to be abnormal or normal, this will greatly reduce the effort of users in evaluating anomaly detection results. 
State-of-the-Art.", "To the best of our knowledge, the problem of summarizing and interpreting outlier detection results is yet to be addressed.", "Scorpion #OTHEREFR produces meaningful explanations for anomalies in aggregation queries when the 'cause' of an outlier is contained in its provenance.", "Similar to Scorpion, Cape #OTHEREFR aims to explain the outliers in aggregation queries, but using the objects that counterbalance the outliers.", "Both works do not tackle the problem of summarizing outliers."], "text_after_citation": ["However, Macrobase only targets explaining the outliers captured by its default density-based outlier detector and does not generalize to other outlier detection methods.", "In the broader field of interpretable AI, LIME #OTHEREFR explains the predictions of a classifier by learning a linear model locally around the prediction with respect to each testing object and pointing out the attributes that are most important to the prediction of the linear model.", "However, rather than use one model to represent a set of objects, LIME has to learn a linear model for each individual object.", "Therefore, using LIME to explain a large number of prediction results will be prohibitively expensive.", "Other methods #OTHEREFR explain classification results in the similar way to LIME. Challenges."], "citing_paper_content": {"title": "Interpretable Outlier Summarization", "abstract": "Outlier detection is critical in real applications to prevent financial fraud, defend network intrusions, or detecting imminent device failures. To reduce the human effort in evaluating outlier detection results and effectively turn the outliers into actionable insights, the users often expect a system to automatically produce interpretable summarizations of subgroups of outlier detection results. Unfortunately, to date no such systems exist. 
To fill this gap, we propose STAIR which learns a compact set of human understandable rules to summarize and explain the anomaly detection results. Rather than use the classical decision tree algorithms to produce these rules, STAIR proposes a new optimization objective to produce a small number of rules with least complexity, hence strong interpretability, to accurately summarize the detection results. The learning algorithm of STAIR produces a rule set by iteratively splitting the large rules and is optimal in maximizing this objective in each iteration. Moreover, to effectively handle high dimensional, highly complex data sets which are hard to summarize with simple rules, we propose a localized STAIR approach, called L-STAIR. Taking data locality into consideration, it simultaneously partitions data and learns a set of localized rules for each partition. Our experimental study on many outlier benchmark datasets shows that STAIR significantly reduces the complexity of the rules required to summarize the outlier detection results, thus more amenable for humans to understand and evaluate, compared to the decision tree methods."}, "cited_paper_content": {"title": "Macrobase: Prioritizing Attention In Fast Data", "abstract": "As data volumes continue to rise, manual inspection is becoming increasingly untenable. In response, we present MacroBase, a data analytics engine that prioritizes end-user attention in high-volume fast data streams. MacroBase enables efficient, accurate, and modular analyses that highlight and aggregate important and unusual behavior, acting as a search engine for fast data. MacroBase is able to deliver order-of-magnitude speedups over alternatives by optimizing the combination of explanation and classification tasks and by leveraging a new reservoir sampler and heavy-hitters sketch specialized for fast data streams. As a result, MacroBase delivers accurate results at speeds of up to 2M events per second per query on a single core. 
The system has delivered meaningful results in production, including at a telematics company monitoring hundreds of thousands of vehicles."}, "keywords": ["outliers", "Macrobase"], "citation_intent": "background"} {"citing_id": "2303.16173v1", "cited_id": "1909.01326", "section_title": "Limitations, Challenges, And Future Directions", "citation": "Since language models often encode biases and stereotypes derived from training corpora #REFR , they may have difficulty producing relevant individuals who are not prototypical (i.e., they do not have a particular stereotype).", "text_before_citation": ["Therefore, future work should investigate more diverse annotator pools or matching annotators to targeted groups, as well as examining how annotator's familiarity with essentialist beliefs and identities affect their judgements.", "Furthermore, prior work in countering hatespeech has shown that effective strategies can vary widely depending on the target group #OTHEREFR .", "In our work, we consider results aggregated across all groups.", "However, community-specific investigations are an important future step towards developing effective counter-statements.", "Accuracy of generated exceptions. The selection of specific individuals for direct exceptions presents an ongoing challenge, based on the high number of DIR-IND marked incorrect."], "text_after_citation": ["We illustrate incorrect individuals and subgroups in the bottom two examples of Table 1.", "Additionally, as mentioned in \u00a74, many stereotypes are subjective (e.g., "women are vain").", "Therefore, individuals who are counterexamples to the stereotype may be judged differently by different people (e.g., our system proposes that "taylor swift, sarah palin, and scarlett johansson" are not vain).", "Producing accurate and relevant direct exceptions to a stereotype is important for understanding the role of such examples to counter essentialist beliefs.", "Our results and discussion highlight the complexity
of countering essentialist beliefs."], "citing_paper_content": {"title": "Towards Countering Essentialism Through Social Bias Reasoning", "abstract": "Essentialist beliefs (i.e., believing that members of the same group are fundamentally alike) play a central role in social stereotypes and can lead to harm when left unchallenged. In our work, we conduct exploratory studies into the task of countering essentialist beliefs (e.g., \"liberals are stupid\"). Drawing on prior work from psychology and NLP, we construct five types of counterstatements and conduct human studies on the effectiveness of these different strategies. Our studies also investigate the role in choosing a counterstatement of the level of explicitness with which an essentialist belief is conveyed. We find that statements that broaden the scope of a stereotype (e.g., to other groups, as in \"conservatives can also be stupid\") are the most popular countering strategy. We conclude with a discussion of challenges and open questions for future work in this area (e.g., improving factuality, studying community-specific variation) and we emphasize the importance of work at the intersection of NLP and psychology."}, "cited_paper_content": {"title": "The Woman Worked As A Babysitter: On Biases In Language Generation", "abstract": "We present a systematic study of biases in natural language generation (NLG) by analyzing text generated from prompts that contain mentions of different demographic groups. In this work, we introduce the notion of the regard towards a demographic, use the varying levels of regard towards different demographics as a defining metric for bias in NLG, and analyze the extent to which sentiment scores are a relevant proxy metric for regard. To this end, we collect strategically-generated text from language models and manually annotate the text with both sentiment and regard scores. 
Additionally, we build an automatic regard classifier through transfer learning, so that we can analyze biases in unseen text. Together, these methods reveal the extent of the biased nature of language model generations. Our analysis provides a study of biases in NLG, bias metrics and correlated human judgments, and empirical evidence on the usefulness of our annotated dataset."}, "keywords": ["stereotypes", "language models"], "citation_intent": "background"} {"citing_id": "2303.13004v1", "cited_id": "1901.05761", "section_title": "Synthesized 1D Data Regression", "citation": "Notably, while ACNP is arguably more powerful than CNP and indeed reports better results as expected for GP data #REFR , it performs worse with sine and oscillator data, probably due to the number of functions being too limited. Also, CCNP does not improve CNP consistently either.", "text_before_citation": ["For sine waves, we vary the values of amplitude U[\u22121, 1] Results.", "Table 1 presents the quantitative results of various baseline CNPs with and without adversarial training.", "The baseline models include CNP #OTHEREFR , ACNP #OTHEREFR , and CCNP #OTHEREFR , where the first two are commonly adopted as per most CNPs literature.", "We consider CCNP as a comparison to the previous NCE setting, where NCE is also applied but on the context set as a regularization term to CNPs.", "We find that adversarial training consistently results in performance improvements to baselines in most cases."], "text_after_citation": ["Further, we observe that adversarial training can be adapted to CCNP, showing potential in improving generative CNPs by leveraging composite contrastive objectives."], "citing_paper_content": {"title": "Adversarially Contrastive Estimation Of Conditional Neural Processes", "abstract": "Conditional Neural Processes (CNPs) formulate distributions over functions and generate function observations with exact conditional likelihoods. 
CNPs, however, have limited expressivity for high-dimensional observations, since their predictive distribution is factorized into a product of unconstrained (typically) Gaussian outputs. Previously, this could be handled using latent variables or autoregressive likelihood, but at the expense of intractable training and quadratically increased complexity. Instead, we propose calibrating CNPs with an adversarial training scheme besides regular maximum likelihood estimates. Specifically, we train an energy-based model (EBM) with noise contrastive estimation, which enforces EBM to identify true observations from the generations of CNP. In this way, CNP must generate predictions closer to the ground-truth to fool EBM, instead of merely optimizing with respect to the fixed-form likelihood. From generative function reconstruction to downstream regression and classification tasks, we demonstrate that our method fits mainstream CNP members, showing effectiveness when unconstrained Gaussian likelihood is defined, requiring minimal computation overhead while preserving foundation properties of CNPs."}, "cited_paper_content": {"title": "Attentive Neural Processes", "abstract": "Neural Processes (NPs) (Garnelo et al 2018a;b) approach regression by learning to map a context set of observed input-output pairs to a distribution over regression functions. Each function models the distribution of the output given an input, conditioned on the context. NPs have the benefit of fitting observed data efficiently with linear complexity in the number of context input-output pairs, and can learn a wide family of conditional distributions; they learn predictive distributions conditioned on context sets of arbitrary size. Nonetheless, we show that NPs suffer a fundamental drawback of underfitting, giving inaccurate predictions at the inputs of the observed data they condition on. 
We address this issue by incorporating attention into NPs, allowing each input location to attend to the relevant context points for the prediction. We show that this greatly improves the accuracy of predictions, results in noticeably faster training, and expands the range of functions that can be modelled."}, "keywords": ["CNP", "better results"], "citation_intent": "result"} {"citing_id": "2304.04918v1", "cited_id": "1910.01108", "section_title": "Smart Reply For Customer Support", "citation": "Customized tokenization was first applied to message pairs and canned replies, and then DistilBERT #REFR was used to vectorize the queries and documents for ranking.", "text_before_citation": ["The Smart Reply task is to take the most recent support agent message and the most recent customer message in a customer support conversation, and choose the best reply from a set of canned reply templates.", "An example Smart Reply could be \"This link has step-by-step instructions for how to activate Microsoft 365.\" An efficient CPU-based classifier is applied at runtime to classify which specific product an incoming support message interaction is for, and then our learning-to-rank model is used to select the best reply from the canned replies for that product.", "Smart Reply is able to support customer support interactions across 22 Microsoft products such as Office 365, Teams, Surface, and Remote Assistance. Table 1 describes statistics for the Smart Reply task.", "Training and test data were based on an 80%:20% split ratio."], "text_after_citation": ["One difference from traditional retrieval systems which always retrieve the top-k documents is that we do not want to overwhelm support agents with replies when there are no good canned replies for a customer message.", "To achieve this, we added a \"Silent\" class in the product classifier and an \"Empty\" canned reply in each candidate reply set. 
We used data augmentation to generate synthetic conversations.", "For instance, appending a message pair that returns the "Silent" class or "Empty" reply to non-empty questions enriched non-empty triplets, and only using agent or customer messages further enlarged the data size.", "Table 2: SR offline and online metric gains (%) Table 2 shows the 11.7% offline top-one accuracy gain of sRank compared to our previous DSSM-based #OTHEREFR system that took transformer embeddings as inputs.", "We exposed Smart Reply to insider agents for initial feedback then to 50% global agents during A/B testing."], "citing_paper_content": {"title": "Explicit And Implicit Semantic Ranking Framework", "abstract": "The core challenge in numerous real-world applications is to match an inquiry to the best document from a mutable and finite set of candidates. Existing industry solutions, especially latency-constrained services, often rely on similarity algorithms that sacrifice quality for speed. In this paper we introduce a generic semantic learning-to-rank framework, Self-training Semantic Cross-attention Ranking (sRank). This transformer-based framework uses linear pairwise loss with mutable training batch sizes and achieves quality gains and high efficiency, and has been applied effectively to show gains on two industry tasks at Microsoft over real-world large-scale data sets: Smart Reply (SR) and Ambient Clinical Intelligence (ACI). In Smart Reply, sRank assists live customers with technical support by selecting the best reply from predefined solutions based on consumer and support agent messages. It achieves 11.7% gain in offline top-one accuracy on the SR task over the previous system, and has enabled 38.7% time reduction in composing messages in telemetry recorded since its general release in January 2021. In the ACI task, sRank selects relevant historical physician templates that serve as guidance for a text summarization model to generate higher quality medical notes.
It achieves 35.5% top-one accuracy gain, along with 46% relative ROUGE-L gain in generated medical notes. CCS CONCEPTS \u2022 Information systems \u2192 Learning to rank; \u2022 Computing methodologies \u2192 Learning to rank."}, "cited_paper_content": {"title": "Distilbert, A Distilled Version Of Bert: Smaller, Faster, Cheaper And Lighter", "abstract": "As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. 
Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study."}, "keywords": ["canned replies", "DistilBERT"], "citation_intent": "method"} {"citing_id": "2304.00610v1", "cited_id": "1601.01487", "section_title": "Introduction", "citation": "With R redefined to have a sparse complement (a string is in R unless exponentially compressible), the complement of the #REFR .", "text_before_citation": ["#OTHEREFR We show: if it is possible to efficiently rule out length t proofs of some unprovable sentence \u03c6, it is also possible to efficiently rule out a slightly shorter proof of inconsistency, which could be used in a length t proof of \u03c6 by contradiction.", "This implies a powerful generalization-if it is hard to rule out length t proofs of inconsistency, it is hard to rule out length t proofs of any unprovable sentence.", "This in turn implies that facts about unprovability and noncomputability, which are well understood, can be imported into complexity theory.", "This has wide ramifications-diverse types of unprovable sentences translate into assertions that open questions in complexity theory have the expected answers.", "For instance, unprovable sentences of the form x\u2208R are dense, so hard families of tautologies encoding "no length t proof shows x\u2208R" are also dense."], "text_after_citation": ["Pudl\u00e1k #OTHEREFR shows the initial conjecture was incorrect-a theory T can efficiently prove that T lacks a length t proof of '0=1'.", "The 1989 reformulation refers to the lack of efficient proofs in a weaker theory.
See also Theorem 59 of Pudl\u00e1k #OTHEREFR .", "See also Kraj\u00ed\u010dek #OTHEREFR Section 21.3.", "language {(x, 1^t) | no length t proof exists of x\u2208R} is neither in P nor NP-complete, but is NP-intermediate.", "The hardness of ruling out length t proofs of any unprovable sentence implies a deep linkage between noncomputability and complexity."], "citing_paper_content": {"title": "Ruling Out Short Proofs Of Unprovable Sentences Is Hard", "abstract": "If no optimal propositional proof system exists, we (and independently Pudl\u00e1k) prove that ruling out length t proofs of any unprovable sentence is hard. This mapping from unprovable to hard-to-prove sentences powerfully translates facts about noncomputability into complexity theory. For instance, because proving string x is Kolmogorov random (x\u2208R) is typically impossible, it is typically hard to prove "no length t proof shows x\u2208R", or tautologies encoding this. Therefore, a proof system with one family of hard tautologies has these densely in an enumeration of families. The assumption also implies that a natural language is NP-intermediate: with R redefined to have a sparse complement, the complement of the language {(x, 1^t) | no length t proof exists of x\u2208R} is also sparse. Efficiently ruling out length t proofs of x\u2208R might violate the constraint on using the fact of x\u2208R's unprovability. We conjecture: any computable predicate on R that might be used in if-then statements (or case-based proofs) does no better than branching at random, because R appears random by any effective test. This constraint could also inhibit the usefulness in circuits and propositional proofs of NOT gates and cancellation-needed to encode if-then statements.
If R defeats if-then logic, exhaustive search is necessary."}, "cited_paper_content": {"title": "Incompleteness In The Finite Domain", "abstract": "Motivated by the problem of finding finite versions of classical incompleteness theorems, we present some conjectures that go beyond ${\bf NP \neq coNP}$. These conjectures formally connect computational complexity with the difficulty of proving some sentences, which means that high computational complexity of a problem associated with a sentence implies that the sentence is not provable in a weak theory, or requires a long proof. Another reason for putting forward these conjectures is that some results in proof complexity seem to be special cases of such general statements and we want to formalize and fully understand these statements. In this paper we review some conjectures that we have presented earlier, introduce new conjectures, systematize them and prove new connections between them and some other statements studied before."}, "keywords": ["sparse complement"], "citation_intent": "background"} {"citing_id": "2304.09167v1", "cited_id": "1507.00473", "section_title": "Theorem 2.8 ([Hlw94, Hau95]).
Fix A Hypothesis Class", "citation": "Corollary 2.9 recovers the known optimal risk upper bound for binary classification first proven by Hanneke #REFR .", "text_before_citation": ["Corollary 2.9.", "Fix a hypothesis class H \u2286 {0, 1}^X with VC dimension d.", "There is a predictor f : X \u00d7 U \u2192 {0, 1} which, for any \u03b4 \u2208 (0, 1), and S \u223c P^n sampled from any realizable distribution P, satisfies", "err_P(f(\u2022; S)) \u2264 9.64 d/n + (1/n) log(2/\u03b4),", "with probability at least 1 \u2212 \u03b4 over the randomness of S."], "text_after_citation": ["Recently, Larsen #OTHEREFR showed that an implementation of the natural bagging heuristic also achieves an optimal risk bound.", "Our proof of the optimal bound is remarkably simpler than both of their proofs.", "Furthermore, both Hanneke and Larsen state that the constant factors in their upper bounds are very large and explicitly ask whether these constants can be reduced.", "Our new analysis reduces the constant factors by a few orders of magnitude.", "In addition, our result can be seen as a partial answer to a question of Warmuth #OTHEREFR who asked whether the one-inclusion graph algorithm can achieve an optimal PAC risk bound."], "citing_paper_content": {"title": "Optimal Pac Bounds Without Uniform Convergence", "abstract": "In statistical learning theory, determining the sample complexity of realizable binary classification for VC classes was a long-standing open problem. The results of Simon [Sim15] and Hanneke [Han16a] established sharp upper bounds in this setting. However, the reliance of their argument on the uniform convergence principle limits its applicability to more general learning settings such as multiclass classification. In this paper, we address this issue by providing optimal high probability risk bounds through a framework that surpasses the limitations of uniform convergence arguments.
Our framework converts the leave-one-out error of permutation invariant predictors into high probability risk bounds. As an application, by adapting the one-inclusion graph algorithm of Haussler, Littlestone, and Warmuth [HLW94], we propose an algorithm that achieves an optimal PAC bound for binary classification. Specifically, our result shows that certain aggregations of one-inclusion graph algorithms are optimal, addressing a variant of a classic question posed by Warmuth [War04]. We further instantiate our framework in three settings where uniform convergence is provably suboptimal. For multiclass classification, we prove an optimal risk bound that scales with the one-inclusion hypergraph density of the class, addressing the suboptimality of the analysis of Daniely and Shalev-Shwartz [DS14]. For partial hypothesis classification, we determine the optimal sample complexity bound, resolving a question posed by Alon, Hanneke, Holzman, and Moran [AHHM22]. For realizable bounded regression with absolute loss, we derive an optimal risk bound that relies on a modified version of the scale-sensitive dimension, refining the results of Bartlett and Long [BL98]. Our rates surpass standard uniform convergence-based results due to the smaller complexity measure in our risk bound."}, "cited_paper_content": {"title": "The Optimal Sample Complexity Of Pac Learning", "abstract": "This work establishes a new upper bound on the number of samples sufficient for PAC learning in the realizable case. The bound matches known lower bounds up to numerical constant factors. This solves a long-standing open problem on the sample complexity of PAC learning. 
The technique and analysis build on a recent breakthrough by Hans Simon."}, "keywords": ["binary classification"], "citation_intent": "background"} {"citing_id": "2303.14595v2", "cited_id": "1611.07725", "section_title": "Baselines", "citation": "Incremental Classifier and Representation Learning (iCaRL) #REFR performs classification using the nearest mean-of-exemplars, where the exemplars are selected by a herding algorithm in the feature space.", "text_before_citation": ["In our evaluation, we test our method by combining it with two popular experience replay methods, ER #OTHEREFR and DER++ #OTHEREFR .", "ER uses a memory buffer to store the training examples from past tasks and interleaves them with the current task data for training.", "In addition to this, DER++ records the output logits of the examples in the memory and performs logit distillation when doing experience replay.", "We combine the proposed BFP loss with ER and DER++ and denote them as ER w/ BFP and DER++ w/ BFP, respectively.", "We also compare the proposed method with some other state-of-the-art CL baselines as listed in Table 1."], "text_after_citation": ["Functional Distance Regularization (FDR) #OTHEREFR regularizes the output of the network to its past value.", "Different from DER/DER++, FDR applies the regularization on the output classification probability.", "Learning a Unified Classifier Incrementally via Rebalancing (LUCIR) #OTHEREFR augments experience replay with multiple modifications to preserve old knowledge and enforce class separation in continual learning.", "Bias Correction (BiC) #OTHEREFR augments the experience replay by learning a separate layer to correct the bias in the output logits.", "ER with Asymmetric Cross-Entropy (ER-ACE) #OTHEREFR proposes to reduce representation drift by using separate cross-entropy loss for online and replayed training data."], "citing_paper_content": {"title": "Preserving Linear Separability In Continual Learning By Backward Feature
Projection", "abstract": "Catastrophic forgetting has been a major challenge in continual learning, where the model needs to learn new tasks with limited or no access to data from previously seen tasks. To tackle this challenge, methods based on knowledge distillation in feature space have been proposed and shown to reduce forgetting [16, 18, 26]. However, most feature distillation methods directly constrain the new features to match the old ones, overlooking the need for plasticity. To achieve a better stability-plasticity trade-off, we propose Backward Feature Projection (BFP), a method for continual learning that allows the new features to change up to a learnable linear transformation of the old features. BFP preserves the linear separability of the old classes while allowing the emergence of new feature directions to accommodate new classes. BFP can be integrated with existing experience replay methods and boost performance by a significant margin. We also demonstrate that BFP helps learn a better representation space, in which linear separability is well preserved during continual learning and linear probing achieves high classification accuracy. The code can be found at https://github.com/rvl-lab-utoronto/BFP."}, "cited_paper_content": {"title": "Icarl: Incremental Classifier And Representation Learning", "abstract": "A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. 
This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail."}, "keywords": ["Incremental Classifier"], "citation_intent": "method"} {"citing_id": "2303.16424v1", "cited_id": "1911.03038", "section_title": "Y N2\u00d7N1", "citation": "Additionally, our results demonstrate meaningful gains over Turbo Autoencoder (TurboAE) #REFR and state-of-the-art classical codes.", "text_before_citation": ["k M = k for some positive integer M .", "In other words, ProductAE boils down the complex problem of training autoencoders of large dimensions and lengths to less-complex sub-problems of training encoders and decoders for smaller dimensions and lengths.", "Our training results for a relatively short-length ProductAE of dimension k = 100 show significant performance gains compared to the polar code under successive cancellation (SC) decoding.", "More importantly, we demonstrate achieving similar gains for moderate-length ProductAEs of dimensions as large as k = 300 bits.", "This clearly establishes the generalization of our proposed architecture for training higher-dimension codes."], "text_after_citation": ["These achievements are attained by applying innovative ideas from deep learning and intuitions from coding theory to further improve the training performance of our proposed ProductAE architecture.", "The main contributions of the paper are summarized as follows.", "\u2022 We introduce a new class of neural error-correction codes, namely ProductAE, aimed at enabling the training of higher-dimension codes.", "Building upon ideas from classical product codes, ProductAE boils down the complex problem of training large autoencoders to less-complex sub-problems of training smaller encoding and decoding 
components.", "\u2022 We present several useful modifications to our proposed ProductAE architecture, which significantly contribute to achieving excellent training performances."], "citing_paper_content": {"title": "Productae: Toward Deep Learning Driven Error-Correction Codes Of Large Dimensions", "abstract": "While decades of theoretical research have led to the invention of several classes of error-correction codes, the design of such codes is an extremely challenging task, mostly driven by human ingenuity. Recent studies demonstrate that such designs can be effectively automated and accelerated via tools from machine learning (ML), thus enabling ML-driven classes of error-correction codes with promising performance gains compared to classical designs. A fundamental challenge, however, is that it is prohibitively complex, if not impossible, to design and train fully ML-driven encoder and decoder pairs for large code dimensions. In this paper, we propose Product Autoencoder (ProductAE)-a computationally-efficient family of deep learning driven (encoder, decoder) pairs-aimed at enabling the training of relatively large codes (both encoder and decoder) with a manageable training complexity. We build upon ideas from classical product codes and propose constructing large neural codes using smaller code components. ProductAE boils down the complex problem of training the encoder and decoder for a large code dimension k and blocklength n to less-complex sub-problems of training encoders and decoders for smaller dimensions and blocklengths. Our training results show successful training of ProductAEs of dimensions as large as k = 300 bits with meaningful performance gains compared to state-of-the-art classical and neural designs.
Moreover, we demonstrate excellent robustness and adaptivity of ProductAEs to channel models different than the ones used for training."}, "cited_paper_content": {"title": "Turbo Autoencoder: Deep Learning Based Channel Codes For Point-To-Point Communication Channels", "abstract": "Designing codes that combat the noise in a communication medium has remained a significant area of research in information theory as well as wireless communications. Asymptotically optimal channel codes have been developed by mathematicians for communicating under canonical models after over 60 years of research. On the other hand, in many non-canonical channel settings, optimal codes do not exist and the codes designed for canonical models are adapted via heuristics to these channels and are thus not guaranteed to be optimal. In this work, we make significant progress on this problem by designing a fully end-to-end jointly trained neural encoder and decoder, namely, Turbo Autoencoder (TurboAE), with the following contributions: ($a$) under moderate block lengths, TurboAE approaches state-of-the-art performance under canonical channels; ($b$) moreover, TurboAE outperforms the state-of-the-art codes under non-canonical settings in terms of reliability. TurboAE shows that the development of channel coding design can be automated via deep learning, with near-optimal performance."}, "keywords": ["Turbo Autoencoder"], "citation_intent": "result"} {"citing_id": "2304.07519v1", "cited_id": "1505.04597", "section_title": "Effects Of Different Window Size W In Dsbe (Section Iii-C).", "citation": "Our experimental results on Pancreas dataset show that changing one base model to 3D U-Net #REFR results in similar results to those reported in Table III : 73.89\u00b14.42 vs. 
74.03\u00b14.00.", "text_before_citation": ["That is, the pseudo-labels generated by its peer networks will be exactly the same as the predictions made by the base model, and each base model independently learns from its own predictions in every iteration, which is equivalent to Entropy Minimization.", "The only difference between the degraded ComWin model and regular Entropy Minimization is that the former fits a categorical probability distribution while the latter learns from a continuous soft counterpart. This disparity is minor.", "As shown in Table III, switching to Entropy Minimization would incur a huge performance drop (from 74.03 to 44.55).", "This demonstrates the significance of initializing each base model differently. Base model architectures.", "This framework is structure agnostic and does not limit the choices of architecture configurations."], "text_after_citation": ["Thorough exploration regarding diversity and the combination of base model structures would be an interesting direction for future studies."], "citing_paper_content": {"title": "Compete To Win: Enhancing Pseudo Labels For Barely-Supervised Medical Image Segmentation", "abstract": "This study investigates barely-supervised medical image segmentation where only few labeled data, i.e., single-digit cases are available. We observe the key limitation of the existing state-of-the-art semi-supervised solution cross pseudo supervision is the unsatisfactory precision of foreground classes, leading to a degenerated result under barely-supervised learning. In this paper, we propose a novel Compete-to-Win method (ComWin) to enhance the pseudo label quality. In contrast to directly using one model's predictions as pseudo labels, our key idea is that high-quality pseudo labels should be generated by comparing multiple confidence maps produced by different networks to select the most confident one (a compete-to-win strategy).
To further refine pseudo labels at near-boundary areas, an enhanced version of ComWin, namely, ComWin+, is proposed by integrating a boundary-aware enhancement module. Experiments show that our method can achieve the best performance on three public medical image datasets for cardiac structure segmentation, pancreas segmentation and colon tumor segmentation, respectively. The source code is now available at https://github.com/Huiimin5/comwin."}, "cited_paper_content": {"title": "U-Net: Convolutional Networks For Biomedical Image Segmentation", "abstract": "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU.
The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ."}, "keywords": ["3D U-Net results"], "citation_intent": "result"} {"citing_id": "2303.07815v1", "cited_id": "1810.00736", "section_title": "Method", "citation": "This is in line with other results in the literature #REFR and is related to the trade-off between model depth and GPU compute units available.", "text_before_citation": ["In all cases, we use the authors' provided code, but with a modification to accept randomly generated video sequences of two different durations.", "The results can be seen in Table 4 and show that our MobileVOS models are significantly smaller, while retaining a constant low latency across both the short and long-duration videos.", "This result is attributed to the smaller backbones and the constant memory costs due to a finite memory queue length.", "Our largest model with ResNet18 has more than 3\u00d7 the FPS of RDE-VOS but with 8\u00d7 fewer parameters.", "Finally, we observe that the ResNet models perform better on the server-grade GPUs."], "text_after_citation": ["In the following section, we show that we obtain a different outcome for mobile GPUs. Table 4 . Performance evaluation on the same sets of hardware.", "The FPS metrics were evaluated on two randomly generated short and long video sequences with shape 480\u00d7910.", "The short videos consist of 50 frames, while the long videos consist of 500.", "In all cases, we use the authors' provided code and indicates that the models are exceeding the GPU memory limit."], "citing_paper_content": {"title": "Mobilevos: Real-Time Video Object Segmentation Contrastive Learning Meets Knowledge Distillation", "abstract": "This paper tackles the problem of semi-supervised video object segmentation on resource-constrained devices, such as mobile phones.
We formulate this problem as a distillation task, whereby we demonstrate that small spacetime-memory networks with finite memory can achieve competitive results with state of the art, but at a fraction of the computational cost (32 milliseconds per frame on a Samsung Galaxy S22). Specifically, we provide a theoretically grounded framework that unifies knowledge distillation with supervised contrastive representation learning. These models are able to jointly benefit from both pixel-wise contrastive learning and distillation from a pretrained teacher. We validate this loss by achieving competitive J&F to state of the art on both the standard DAVIS and YouTube benchmarks, despite running up to 5\u00d7 faster, and with 32\u00d7 fewer parameters."}, "cited_paper_content": {"title": "Benchmark Analysis Of Representative Deep Neural Network Architectures", "abstract": "This work presents an in-depth analysis of the majority of the deep neural networks (DNNs) proposed in the state of the art for image recognition. For each DNN multiple performance indices are observed, such as recognition accuracy, model complexity, computational complexity, memory usage, and inference time. The behavior of such performance indices and some combinations of them are analyzed and discussed. To measure the indices we experiment the use of DNNs on two different computer architectures, a workstation equipped with a NVIDIA Titan X Pascal and an embedded system based on a NVIDIA Jetson TX1 board. This experimentation allows a direct comparison between DNNs running on machines with very different computational capacity. This study is useful for researchers to have a complete view of what solutions have been explored so far and in which research directions are worth exploring in the future; and for practitioners to select the DNN architecture(s) that better fit the resource constraints of practical deployments and applications.
To complete this work, all the DNNs, as well as the software used for the analysis, are available online."}, "keywords": ["GPU compute units"], "citation_intent": "result"} {"citing_id": "2303.02862v1", "cited_id": "1704.07809", "section_title": "3D Joint Annotation", "citation": "Since our annotations have been checked manually, we do not perform a bootstrapping procedure as in #REFR .", "text_before_citation": ["Our dataset is of single hands and we apply the 21-keypoint scheme #OTHEREFR to annotate each hand.", "Inspired by Interhand2.6M #OTHEREFR and FreiHand #OTHEREFR , we use multi-view RGB images for 3D annotation and apply a two-stage process consisting of machine annotation and human annotation.", "First, we use Mediapipe [47] to detect 2D hand keypoints on all the RGB images and triangulate 2D keypoints to obtain 3D keypoints with the RANSAC method.", "Then we manually verify all the keypoints re-projected by 3D keypoints and select the unqualified views for human annotation."], "text_after_citation": ["Due to the cost of human annotation, manual annotation is only applied to fixed pose and random short sequences and machine annotation is then applied to random long sequences.", "For fast motion sequences, we cannot get accurate 3D joint annotations due to the severe motion blur in the captured images.", "To quantitatively evaluate our method, we manually annotate the 2D joints on the event sequences."], "citing_paper_content": {"title": "Evhandpose: Event-Based 3D Hand Pose Estimation With Sparse Supervision", "abstract": "Event camera shows great potential in 3D hand pose estimation, especially addressing the challenges of fast motion and high dynamic range in a low-power way. However, due to the asynchronous differential imaging mechanism, it is challenging to design event representation to encode hand motion information especially when the hands are not moving (causing motion ambiguity), and it is infeasible to fully annotate the temporally dense event stream.
In this paper, we propose EvHandPose with novel hand flow representations in Event-to-Pose module for accurate hand pose estimation and alleviating the motion ambiguity issue. To solve the problem under sparse annotation, we design contrast maximization and edge constraints in Pose-to-IWE (Image with Warped Events) module and formulate EvHandPose in a self-supervision framework. We further build EvRealHands, the first large-scale real-world event-based hand pose dataset on several challenging scenes to bridge the domain gap due to relying on synthetic data and facilitate future research. Experiments on EvRealHands demonstrate that EvHandPose outperforms previous event-based method under all evaluation scenes with 15 \u223c 20 mm lower MPJPE and achieves accurate and stable hand pose estimation in fast motion and strong light scenes compared with RGB-based methods. Furthermore, EvHandPose demonstrates 3D hand pose estimation at 120 fps or higher."}, "cited_paper_content": {"title": "Hand Keypoint Detection In Single Images Using Multiview Bootstrapping", "abstract": "We present an approach that uses a multi-camera system to train fine-grained detectors for keypoints that are prone to occlusion, such as the joints of a hand. We call this procedure multiview bootstrapping: first, an initial keypoint detector is used to produce noisy labels in multiple views of the hand. The noisy detections are then triangulated in 3D using multiview geometry or marked as outliers. Finally, the reprojected triangulations are used as new labeled training data to improve the detector. We repeat this process, generating more labeled data in each iteration. We derive a result analytically relating the minimum number of views to achieve target true and false positive rates for a given detector. The method is used to train a hand keypoint detector for single images. The resulting keypoint detector runs in realtime on RGB images and has accuracy comparable to methods that use depth sensors. 
The single view detector, triangulated over multiple views, enables 3D markerless hand motion capture with complex object interactions."}, "keywords": ["annotations"], "citation_intent": "method"} {"citing_id": "2303.06696v1", "cited_id": "1904.00071", "section_title": "V. Analysis & Results", "citation": "The shortcomings of standard RC algorithm on C-V2X has been explored in #REFR , which is highly suggested for readers interested in congestion control approaches in C-V2X.", "text_before_citation": ["Mean ITT for the tested scenarios is presented in figure 4 , which heavily correlates with the CBR at corresponding densities.", "At very low traffic (1veh/s), 35% CBR (figure 3) causes RC to remain dormant, resulting in VUEs transmitting BSMs with 100ms ITT (figure 4).", "But a slight increase of traffic flow (5veh/s) vastly increases the CBR to 90%.", "This change is not reflected comparably at ITT, which barely increases to 110ms.", "This low sensitivity of RC algorithm at low traffic flow appears from the algorithm originally being optimized for the older DSRC technology, which follows a different abstraction of physical layer bandwidth than C-V2X."], "text_after_citation": ["Further increase in density (10 veh/s) causes gradual increase in ITT.", "With higher ITT, larger volume of resources are available within a unit time-frame due to less frequent transmission by the VUEs.", "As a result, CBR comparably decreases and the now-accessible resources are utilized by the service application, making SCT at 10veh/s better than 5veh/s (figure 2).", "At high CBR, SPS resource allocation procedure is more aggressive in resource shortlisting and selection.", "When a VUE transmitting in a high density traffic scenario increases its SPS RSRP threshold while shortlisting a resource for its own transmission, one implication of this occurrence is that the VUE can likely select a resource with low RSRP for its upcoming transmission."], "citing_paper_content": {"title": "On 
Batching Acknowledgements In C-V2X Services", "abstract": "Cellular Vehicle-to-Everything (C-V2X) is a frontier in the evolution of distributed communication introduced in 3GPP release 14 to advanced use cases. While research efforts continue to optimize the accessible bandwidth for transportation ecosystem, a bottom up analysis from the application layer perspective is necessary prior to deployment, as it can expose potential issues that can emerge in a dynamic road environment. This emphasizes on assessing the network using applicationoriented metrics to evaluate its capacity of providing advanced vehicular services with stringent latency and throughput requirements. C-V2X enables advanced applications like autonomous driving and on-the-go transaction services where consecutive exchange of messages is required. For such services, the network level metrics fails to capture the edge case service quality as they express an average measure of performance. In this paper, we present an application-oriented analysis of a transaction service built on C-V2X protocol. We analyze different design choices that affects quality of service both from network-oriented and user-centric metrics and we highlight the issues regarding packet dissemination from infrastructures for vehicle-to-infrastructure (V2I) based service applications. We also present our study on the impact of batching in disseminating acknowledgement packets (ACK) and its consequence on both the service reliability and network congestion. 
Our results show that time-sensitive and mission-sensitive vehicular applications should aim for a balance between achieving the mission utility in shortest duration possible, while keeping minimal impact on the system-wide stability."}, "cited_paper_content": {"title": "Analysis Of Distributed Congestion Control In Cellular Vehicle-To-Everything Networks", "abstract": "Cellular Vehicle-to-everything (C-V2X) communication has been proposed in the 3rd Generation Partnership Project release 14 standard to address the latency and reliability requirements of cooperative safety applications. Such applications can involve highly congested vehicular scenarios where the network experiences high data loads. Thus, a sophisticated congestion control solution is vital in order to maintain the network performance required for safety-related applications. With the aid of our high-fidelity link-level network simulator, we investigate the feasibility of implementing the distributed congestion control algorithm specified in SAE J2945/1 standard on top of the C-V2X stack. We describe our implementation and evaluate the performance of transmission rate and range control mechanisms using relevant metrics. 
Additionally, we identify areas for potential design enhancements and further investigation."}, "keywords": ["congestion control approaches"], "citation_intent": "background"} {"citing_id": "2303.15230v1", "cited_id": "1706.03762", "section_title": "Cross-Modal Traction", "citation": "Specifically, the Cross-Modal Traction module is composed of a stack of N blocks, and in each block, we first consider a scaled dot product attention mechanism #REFR with the prompt representation attending to all patch tokens.", "text_before_citation": ["Given the same semantic concept, the static and monotonous prompt representation naturally fails to be commonly optimal for all input images that come from a plentiful distribution.", "This issue becomes more serious in the additional state and object branches, as the visual content of the same primitive changes considerably when paired with different primitives.", "Therefore, we further develop a Cross-Modal Traction module for Troika.", "The module adaptively shifts the prompt representation to accommodate the content diversity and diminish the cross-modal discrepancies.", "In this process, relevant patch features serve as the guidance to avoid noise from semantic-agnostic sub-regions interfering with the traction."], "text_after_citation": ["Given the input prompt representation t that comes from an arbitrary branch, we first acquire the patch tokens X_p \u2208 R^{N_p \u00d7 d} after projecting them with the linear layer g_proj.", "Then, the query, key and value can be derived as", "EQUATION", "where W_q, W_K, W_V \u2208 R^{d \u00d7 d_attn} are the parameter matrices, and d_attn is the dimension of the single-head attention.", "The dot product attention gives relevance weights from t to each patch token, which are used to aggregate the value-projected patch tokens as"], "citing_paper_content": {"title": "Troika: Multi-Path Cross-Modal Traction For Compositional Zero-Shot Learning", "abstract": "Recent compositional zero-shot learning
(CZSL) methods adapt pre-trained vision-language models (VLMs) by constructing trainable prompts only for composed state-object pairs. Relying on learning the joint representation of seen compositions, these methods ignore the explicit modeling of the state and object, thus limiting the exploitation of pre-trained knowledge and generalization to unseen compositions. With a particular focus on the universality of the solution, in this work, we propose a novel paradigm for CZSL models that establishes three identification branches (i.e., Multi-Path) to jointly model the state, object, and composition. The presented Troika is our implementation that aligns the branch-specific prompt representations with decomposed visual features. To calibrate the bias between semantically similar multi-modal representations, we further devise a Cross-Modal Traction module into Troika that shifts the prompt representation towards the current visual content. We conduct extensive experiments on three popular benchmarks, where our method significantly outperforms existing methods in both closed-world and open-world settings."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. 
On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["prompt representation", "attention mechanism"], "citation_intent": "method"} {"citing_id": "2303.07868v1", "cited_id": "1703.06870", "section_title": "Related Work", "citation": "To date, most of the top-performing IS methods still follow the Mask R-CNN meta-architecture #REFR .", "text_before_citation": ["Instance Segmentation."], "text_after_citation": ["These proposal-based approaches typically employ an object detector to localize each instance in bounding boxes.", "Then the instance-wise features are cropped and extracted from FPN features based on the detected bounding boxes by using RoI pooling/align #OTHEREFR .", "Finally, a compact segmentation head is deployed to obtain the desired object masks.", "Mask Scoring R-CNN #OTHEREFR aligns the mask quality and score by using a branch to explicitly learn the quality of predicted masks.", "BMask R-CNN #OTHEREFR leverages boundary details to improve mask localization ability."], "citing_paper_content": {"title": "Dynamask: Dynamic Mask Selection For Instance Segmentation", "abstract": "The representative instance segmentation methods mostly segment different object instances with a mask of the fixed resolution, e.g., 28 \u00d7 28 grid. However, a low-resolution mask loses rich details, while a high-resolution mask incurs quadratic computation overhead. It is a challenging task to predict the optimal binary mask for each instance. In this paper, we propose to dynamically select suitable masks for different object proposals.
First, a dual-level Feature Pyramid Network (FPN) with adaptive feature aggregation is developed to gradually increase the mask grid resolution, ensuring high-quality segmentation of objects. Specifically, an efficient region-level top-down path (r-FPN) is introduced to incorporate complementary contextual and detailed information from different stages of image-level FPN (i-FPN). Then, to alleviate the increase of computation and memory costs caused by using large masks, we develop a Mask Switch Module (MSM) with negligible computational cost to select the most suitable mask resolution for each instance, achieving high efficiency while maintaining high segmentation accuracy. Without bells and whistles, the proposed method, namely DynaMask, brings consistent and noticeable performance improvements over other state-of-the-arts at a moderate computation overhead. The source code: https://github.com/lslrh/DynaMask."}, "cited_paper_content": {"title": "Mask R-Cnn", "abstract": "We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners.
We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: https://github.com/facebookresearch/Detectron."}, "keywords": ["Mask R-CNN meta-architecture"], "citation_intent": "method"} {"citing_id": "2304.06019v1", "cited_id": "1803.02077", "section_title": "Learning Objectives", "citation": "The Contextual Loss (CX) #REFR is a viable choice as it treats features of images as a set and measures the similarity between images, ignoring the spatial positions of the features.", "text_before_citation": ["The training of AlignFormer requires an objective function that does not forcefully match each spatial position as the problem lacks exact spatially aligned supervision."], "text_after_citation": ["This property enables us to compare images that are spatially deformed.", "Given two images x and y, CX loss aims to minimize the summed distance of all matched feature pairs, formulated as", "EQUATION", "where \u03d5(x)_j and \u03d5(y)_i are the j-th point of \u03d5(x) and the i-th point of \u03d5(y), respectively.", "\u03d5(x) denotes feature maps of x extracted from the VGG network \u03d5, and D is some distance measure."], "citing_paper_content": {"title": "Generating Aligned Pseudo-Supervision From Non-Aligned Data For Image Restoration In Under-Display Camera", "abstract": "Due to the difficulty in collecting large-scale and perfectly aligned paired training data for Under-Display Camera (UDC) image restoration, previous methods resort to monitor-based image systems or simulation-based methods, sacrificing the realness of the data and introducing domain gaps. In this work, we revisit the classic stereo setup for training data collection: capturing two images of the same scene with one UDC and one standard camera. The key idea is to \"copy\" details from a high-quality reference image and \"paste\" them on the UDC image.
While being able to generate real training pairs, this setting is susceptible to spatial misalignment due to perspective and depth of field changes. The problem is further compounded by the large domain discrepancy between the UDC and normal images, which is unique to UDC restoration. In this paper, we mitigate the non-trivial domain discrepancy and spatial misalignment through a novel Transformer-based framework that generates well-aligned yet high-quality target data for the corresponding UDC input. This is made possible through two carefully designed components, namely, the Domain Alignment Module (DAM) and Geometric Alignment Module (GAM), which encourage robust and accurate discovery of correspondence between the UDC and normal views. Extensive experiments show that high-quality and well-aligned pseudo UDC training pairs are beneficial for training a robust restoration network. Code and the dataset are available at https://github.com/jnjaby/AlignFormer."}, "cited_paper_content": {"title": "The Contextual Loss For Image Transformation With Non-Aligned Data", "abstract": "Feed-forward CNNs trained for image transformation problems rely on loss functions that measure the similarity between the generated image and a target image. Most of the common loss functions assume that these images are spatially aligned and compare pixels at corresponding locations. However, for many tasks, aligned training pairs of images will not be available. We present an alternative loss function that does not require alignment, thus providing an effective and simple solution for a new space of problems. Our loss is based on both context and semantics -- it compares regions with similar semantic meaning, while considering the context of the entire image. Hence, for example, when transferring the style of one face to another, it will translate eyes-to-eyes and mouth-to-mouth.
Our code can be found at https://www.github.com/roimehrez/contextualLoss"}, "keywords": ["Contextual Loss"], "citation_intent": "background"} {"citing_id": "2303.14531v1", "cited_id": "1905.10887", "section_title": "Cifar", "citation": "While this is not the focus of this work, our finding suggests that synthetic samples can be helpful for accuracy if used properly, challenging previous beliefs #REFR .", "text_before_citation": ["On CIFAR-10 far-OOD detection, SIO does not outperform vanilla OE, which incorporates a large set of external OOD samples for training.", "We suspect that this is because far-OOD detection on CIFAR-10 is dominated by low-level statistics #OTHEREFR , and using synthetic ID samples may slightly push the model away from the real low-level statistics of the ID data, causing a shrunken difference between ID and OOD samples.", "Nonetheless, SIO improves far-OOD detection in all other cases and effectively closes the gap between non-OE methods and OE.", "Notably, LogitNorm + SIO yields a 97.09% AUROC without any OOD training data, which is on par with the 97.16% AUROC achieved by OE.", "Lastly, we observe that SIO can also benefit ID classification accuracy."], "text_after_citation": ["On the other hand, however, in our later experiments where we vary the hyperparameters, we find that SIO consistently boosts OOD detection performance even if it does not improve ID accuracy.", "More discussion on this can be found in Section 4.3.", "On CIFAR-100, we observe similar results to CIFAR-10, with the general observation that SIO can benefit OOD detection performance.", "With SIO, the average near-OOD / far-OOD AUROC is lifted from 76.63% / 80.29% to 77.20% / 83.65%, and the best numbers are improved from 80.22% / 87.54% to 80.34% / 89.37%."], "citing_paper_content": {"title": "Sio: Synthetic In-Distribution Data Benefits Out-Of-Distribution Detection", "abstract": "Building up reliable Out-of-Distribution (OOD) detectors is challenging, often requiring
the use of OOD data during training. In this work, we develop a data-driven approach which is distinct and complementary to existing works: Instead of using external OOD data, we fully exploit the internal in-distribution (ID) training set by utilizing generative models to produce additional synthetic ID images. The classifier is then trained using a novel objective that computes weighted loss on real and synthetic ID samples together. Our training framework, which is termed SIO, serves as a \"plug-and-play\" technique that is designed to be compatible with existing and future OOD detection algorithms, including the ones that leverage available OOD training data. Our experiments on CIFAR-10, CIFAR-100, and ImageNet variants demonstrate that SIO consistently improves the performance of nearly all state-of-the-art (SOTA) OOD detection algorithms. For instance, on the challenging CIFAR-10 vs. CIFAR-100 detection problem, SIO improves the average OOD detection AUROC of 18 existing methods from 86.25% to 89.04% and achieves a new SOTA of 92.94% according to the OpenOOD benchmark. Code is available at https://github.com/zjysteven/SIO."}, "cited_paper_content": {"title": "Classification Accuracy Score For Conditional Generative Models", "abstract": "Deep generative models (DGMs) of images are now sufficiently mature that they produce nearly photorealistic samples and obtain scores similar to the data distribution on heuristics such as Frechet Inception Distance (FID). These results, especially on large-scale datasets such as ImageNet, suggest that DGMs are learning the data distribution in a perceptually meaningful space and can be used in downstream tasks. To test this latter hypothesis, we use class-conditional generative models from a number of model classes\u2014variational autoencoders, autoregressive models, and generative adversarial networks (GANs)\u2014to infer the class labels of real data.
We perform this inference by training an image classifier using only synthetic data and using the classifier to predict labels on real data. The performance on this task, which we call Classification Accuracy Score (CAS), reveals some surprising results not identified by traditional metrics; these results constitute our contributions. First, when using a state-of-the-art GAN (BigGAN-deep), Top-1 and Top-5 accuracy decrease by 27.9% and 41.6%, respectively, compared to the original data; and conditional generative models from other model classes, such as Vector-Quantized Variational Autoencoder-2 (VQ-VAE-2) and Hierarchical Autoregressive Models (HAMs), substantially outperform GANs on this benchmark. Second, CAS automatically surfaces particular classes for which generative models failed to capture the data distribution and that were previously unknown in the literature. Third, we find traditional GAN metrics such as Inception Score (IS) and FID neither predictive of CAS nor useful when evaluating non-GAN models.
Furthermore, in order to facilitate better diagnoses of generative models, we open-source the proposed metric."}, "keywords": ["synthetic samples"], "citation_intent": "result"} {"citing_id": "2304.00086v2", "cited_id": "1903.10075", "section_title": "When Ml Is Used In Economics?", "citation": "Also, article #REFR points out that the methods developed in ML have been particularly successful in big data settings, where we observe information on a large number of units, many pieces of information on each unit, or both.", "text_before_citation": ["Lastly, large datasets allow for more flexible relationships than simple linear models can capture.", "ML techniques are handy in those cases due to their ability to model intricate and nonlinear relationships, potentially offering new insights.", "Similarly, article #OTHEREFR argues that ML not only provides new tools but also solves a different problem.", "They assert ML's success is largely due to its ability to discover the complex structure that was not specified in advance.", "They suggest applying ML to economics requires finding relevant tasks, for instance, where the focus is on increasing prediction accuracy or uncovering generalizable patterns from complex datasets."], "text_after_citation": ["The authors suggest that for using ML tools for economics research and analysis, researchers should clearly articulate their goals and why certain properties of ML algorithms may or may not be important."], "citing_paper_content": {"title": "Machine Learning For Economics Research: When What And How? *", "abstract": "This article provides a curated review of selected papers published in prominent economics journals that use machine learning (ML) tools for research and policy analysis. The review focuses on three key questions: (1) when ML is used in economics, (2) what ML models are commonly preferred, and (3) how they are used for economic applications.
The review highlights that ML is particularly used to process nontraditional and unstructured data, capture strong nonlinearity, and improve prediction accuracy. Deep learning models are suitable for nontraditional data, whereas ensemble learning models are preferred for traditional datasets. While traditional econometric models may suffice for analyzing low-complexity data, the increasing complexity of economic data due to rapid digitalization and the growing literature suggests that ML is becoming an essential addition to the econometrician's toolbox."}, "cited_paper_content": {"title": "Machine Learning Methods Economists Should Know About", "abstract": "We discuss the relevance of the recent Machine Learning (ML) literature for economics and econometrics. First we discuss the differences in goals, methods and settings between the ML literature and the traditional econometrics and statistics literatures. Then we discuss some specific methods from the machine learning literature that we view as important for empirical researchers in economics. These include supervised learning methods for regression and classification, unsupervised learning methods, as well as matrix completion methods. Finally, we highlight newly developed methods at the intersection of ML and econometrics, methods that typically perform better than either off-the-shelf ML or more traditional econometric methods when applied to particular classes of problems, problems that include causal inference for average treatment effects, optimal policy estimation, and estimation of the counterfactual effect of price changes in consumer choice models."}, "keywords": ["ML"], "citation_intent": "method"} {"citing_id": "2304.12830v1", "cited_id": "1903.07163", "section_title": "V. 
Evaluation", "citation": "If any iteration fails to converge in 512 integration steps, then the spin output s is such that Ts = 0 (where T is the transform matrix defined by #REFR ).", "text_before_citation": ["For the scenarios simulated in this paper, dt = 0.005 was found to be sufficient.", "The initial values x_i are i.i.d. N(0, 0.001) and e_i are initialized using a folded N(0, 0.001) distribution.", "We set p = 0.98, \u03b2 = 1, a = 2 and \u03b3 = 1000/(256 \u2022 0.01).", "These parameters were empirically selected, based on trial-and-error experiments, such that the system can achieve a steady state and attain good performance.", "Note that performance can be further improved by optimally selecting these parameters, and we plan to address this in our future work."], "text_after_citation": ["2) Evaluation Metrics: Our evaluation setup simulates an uplink N_t \u00d7 N_r MIMO system which has N_t users (with one transmit antenna each) and N_r receive antennas at the base station (N_r > N_t).", "We assume a slow-fading channel, and channel instances are assumed to follow the Rayleigh fading model.", "The BER is computed as the mean BER of all users.", "We compare our methods against MMSE-SIC with optimal ordering #OTHEREFR and the MMSE detector in both large and massive MIMO scenarios.", "Spectral efficiency computations are based on convolutional coding with code-rates #OTHEREFR 1/3, 1/2, 2/3, and an oracle Adaptive Modulation and Coding (AMC) module that selects the best modulation and code-rate based on SNR."], "citing_paper_content": {"title": "Uplink Mimo Detection Using Ising Machines: A Multi-Stage Ising Approach", "abstract": "Multiple-Input-Multiple-Output (MIMO) signal detection is central to every state-of-the-art communication system, and enhancements in error performance and computation complexity of MIMO detection would significantly enhance data rate and latency experienced by the users.
Theoretically, the optimal MIMO detector is the maximum-likelihood (ML) MIMO detector; however, due to its extremely high complexity, it is not feasible for large real-world communication systems. Over the past few years, algorithms based on physics-inspired Ising solvers, like Coherent Ising machines and Quantum Annealers, have shown significant performance improvements for the MIMO detection problem. However, the current state-of-the-art is limited to low-order modulations or systems with few users. In this paper, we propose an adaptive multi-stage Ising machine-based MIMO detector that extends the performance gains of physics-inspired computation to Large and Massive MIMO systems with a large number of users and very high modulation schemes (up to 256-QAM). We enhance our previously proposed delta Ising formulation and develop a heuristic that adaptively optimizes the performance and complexity of our proposed method. We perform extensive micro-benchmarking to optimize several free parameters of the system and evaluate our methods' BER and spectral efficiency for Large and Massive MIMO systems (up to 32 users and 256 QAM modulation)."}, "cited_paper_content": {"title": "Oim: Oscillator-Based Ising Machines For Solving Combinatorial Optimisation Problems", "abstract": "We present a new way to make Ising machines, i.e., using networks of coupled self-sustaining nonlinear oscillators. Our scheme is theoretically rooted in a novel result that establishes that the phase dynamics of coupled oscillator systems, under the influence of subharmonic injection locking, are governed by a Lyapunov function that is closely related to the Ising Hamiltonian of the coupling graph. As a result, the dynamics of such oscillator networks evolve naturally to local minima of the Lyapunov function. Two simple additional steps (i.e., adding noise, and turning subharmonic locking on and off smoothly) enable the network to find excellent solutions of Ising problems. 
We demonstrate our method on Ising versions of the MAX-CUT and graph colouring problems, showing that it improves on previously published results on several problems in the G benchmark set. Our scheme, which is amenable to realisation using many kinds of oscillators from different physical domains, is particularly well suited for CMOS IC implementation, offering significant practical advantages over previous techniques for making Ising machines. We present working hardware prototypes using CMOS electronic oscillators."}, "keywords": ["transform matrix", "iteration"], "citation_intent": "background"} {"citing_id": "2303.04456v1", "cited_id": "1806.01260", "section_title": "Implementation Details", "citation": "The same augmentations are performed on the training data as #REFR , namely 50% horizontal flips, random brightness, contrast, saturation, and hue jitter.", "text_before_citation": ["Particularly, the top two levels are not used in the depth encoder.", "For the motion network, the pose decoder is adopted from #OTHEREFR .", "The object motion decoder 4 uses 9 and 2 RMUs in level 4 and the remaining levels, respectively.", "RMUs are not shared across different levels in order to maximize filter diversity for different scales.", "Training Details. The whole system is implemented in TensorFlow [1]."], "text_after_citation": ["Following #OTHEREFR , the length of each image sequence is fixed to 3 frames.
The central frame is treated as the target view.", "The depth and motion networks are jointly trained using Adam #OTHEREFR with a batch size varying from 16 to 40 on multiple GPUs.", "To address the stationary pixels and the occlusion problem, the auto-masking and the per-pixel minimum reprojection loss #OTHEREFR are adopted.", "The depth map and motion field are regularized by an edge-aware smoothness loss #OTHEREFR while the proposed outlier-aware regularization loss is further imposed on the object motion field.", "The self-supervision #OTHEREFR is also adopted but no cropping is applied. Some parts of RM-Depth require pre-training #OTHEREFR ."], "citing_paper_content": {"title": "Rm-Depth: Unsupervised Learning Of Recurrent Monocular Depth In Dynamic Scenes *", "abstract": "Unsupervised methods have shown promising results on monocular depth estimation. However, the training data must be captured in scenes without moving objects. To push the envelope of accuracy, recent methods tend to increase their model parameters. In this paper, an unsupervised learning framework is proposed to jointly predict monocular depth and complete 3D motion including the motions of moving objects and camera. (1) Recurrent modulation units are used to adaptively and iteratively fuse encoder and decoder features. This not only improves the single-image depth inference but also does not overspend model parameters. (2) Instead of using a single set of filters for upsampling, multiple sets of filters are devised for the residual upsampling. This facilitates the learning of edge-preserving filters and leads to improved performance. (3) A warping-based network is used to estimate a motion field of moving objects without using semantic priors. This breaks down the requirement of scene rigidity and allows the use of general videos for the unsupervised learning. The motion field is further regularized by an outlier-aware training loss.
Although the depth model uses just a single image at test time and only 2.97M parameters, it achieves state-of-the-art results on the KITTI and Cityscapes benchmarks. * This research work is not for commercial use unless a prior arrangement has been made with the author. 1 The words ego-motion, camera motion, and pose are used interchangeably throughout the paper."}, "cited_paper_content": {"title": "Digging Into Self-Supervised Monocular Depth Estimation", "abstract": "Per-pixel ground-truth depth data is challenging to acquire at scale. To overcome this limitation, self-supervised learning has emerged as a promising alternative for training models to perform monocular depth estimation. In this paper, we propose a set of improvements, which together result in both quantitatively and qualitatively improved depth maps compared to competing self-supervised methods. Research on self-supervised monocular training usually explores increasingly complex architectures, loss functions, and image formation models, all of which have recently helped to close the gap with fully-supervised methods. We show that a surprisingly simple model, and associated design choices, lead to superior predictions. In particular, we propose (i) a minimum reprojection loss, designed to robustly handle occlusions, (ii) a full-resolution multi-scale sampling method that reduces visual artifacts, and (iii) an auto-masking loss to ignore training pixels that violate camera motion assumptions.
We demonstrate the effectiveness of each component in isolation, and show high quality, state-of-the-art results on the KITTI benchmark."}, "keywords": ["training data"], "citation_intent": "method"} {"citing_id": "2304.03932v1", "cited_id": "1903.10384", "section_title": "3D Gans Variants", "citation": "Using MeshGAN, the first intrinsic GAN architecture operating directly on 3D meshes, we can generate high-fidelity 3D faces with rich identities and expressions #REFR .", "text_before_citation": ["It samples images at various scales using a patch-based discriminator to produce radiance fields.", "Another strategy is to use an effective and efficient tri-plane-based 3D GAN framework #OTHEREFR .", "It uses dual discrimination to promote consistency from several viewpoints while the generator is conditioned on poses to faithfully model attribute distributions dependent on the pose in the real-world datasets.", "\u2022 Generating texture models or 3DMMs: In #OTHEREFR , the authors introduce a novel 3D Morphable Model (3DMM) in the form of a GAN Texture Model to provide excellent facial shape and texture reconstructions in arbitrary recording conditions from 2D images.", "They also show the results to be both photorealistic and identity preserving in both qualitative and quantitative experiments."], "text_after_citation": ["\u2022 Feedback learning: By considering the generator network as an encoder and decoder, the spatial output from multiple discriminators can be used to provide feedback to the generator so it can improve on its previous generations using Adaptive Spatial Transform #OTHEREFR .", "\u2022 Meshes: In industrial design, gaming, computer graphics and other digital art, automatically generating shapes based on meshes is necessary.", "Most of the current research is involved with voxel and point cloud generation, alienating itself from the design and graphics communities.", "MeshGAN, as mentioned above, is the first intrinsic GAN architecture operating
directly on 3D meshes and can generate high-fidelity 3D faces with rich identities and expressions.", "To automatically generate shapes based on meshes, we use the signed distance function representation to generate detail-preserving three-dimensional surface meshes #OTHEREFR ."], "citing_paper_content": {"title": "3D Gans And Latent Space: A Comprehensive Survey", "abstract": "Generative Adversarial Networks (GANs) have emerged as a significant player in generative modeling by mapping lower-dimensional random noise to higher-dimensional spaces. These networks have been used to generate high-resolution images and 3D objects. The efficient modeling of 3D objects and human faces is crucial in the development process of 3D graphical environments such as games or simulations. 3D GANs are a new type of generative model used for 3D reconstruction, point cloud reconstruction, and 3D semantic scene completion. The choice of distribution for noise is critical as it represents the latent space. Understanding a GAN's latent space is essential for fine-tuning the generated samples, as demonstrated by the morphing of semantically meaningful parts of images. In this work, we explore the latent space and 3D GANs, examine several GAN variants and training methods to gain insights into improving 3D GAN training, and suggest potential future directions for further research."}, "cited_paper_content": {"title": "Meshgan: Non-Linear 3D Morphable Models Of Faces", "abstract": "Generative Adversarial Networks (GANs) are currently the method of choice for generating visual data. Certain GAN architectures and training methods have demonstrated exceptional performance in generating realistic synthetic images (in particular, of human faces). However, for 3D objects, GANs still fall short of the success they have had with images. One reason is that, so far, GANs have been applied as 3D convolutional architectures to discrete volumetric representations of 3D objects.
In this paper, we propose the first intrinsic GAN architecture operating directly on 3D meshes (named MeshGAN). Both quantitative and qualitative results are provided to show that MeshGAN can be used to generate high-fidelity 3D faces with rich identities and expressions."}, "keywords": ["first intrinsic GAN", "MeshGAN"], "citation_intent": "method"} {"citing_id": "2304.14674v1", "cited_id": "1807.01697", "section_title": "Experiments", "citation": "Besides, we comprehensively test the robustness of SAM with 18 types of data corruption at 5 severity levels by following #REFR .", "text_before_citation": ["In this study, two widely-used surgical instrument segmentation datasets, i.e., EndoVis17 #OTHEREFR and EndoVis18 #OTHEREFR , have been adopted in our experiments. Our evaluation involves three categories.", "Firstly, we have provided both quantitative and qualitative assessments on the promptable segmentation performance of SAM, with bounding boxes and single points, for binary and instrument-wise segmentation."], "text_after_citation": ["Moreover, we also examine SAM on its automatic mask generation in unprompted settings for surgical scene segmentation."], "citing_paper_content": {"title": "Sam Meets Robotic Surgery: An Empirical Study In Robustness Perspective", "abstract": "Segment Anything Model (SAM) is a foundation model for semantic segmentation and shows excellent generalization capability with the prompts. In this empirical study, we investigate the robustness and zero-shot generalizability of the SAM in the domain of robotic surgery in various settings of (i) prompted vs. unprompted; (ii) bounding box vs. points-based prompt; (iii) generalization under corruptions and perturbations with five severity levels; and (iv) state-of-the-art supervised model vs. SAM. We conduct all the observations with two well-known robotic instrument segmentation datasets of MICCAI EndoVis 2017 and 2018 challenges.
Our extensive evaluation results reveal that although SAM shows remarkable zero-shot generalization ability with bounding box prompts, it struggles to segment the whole instrument with point-based prompts and unprompted settings. Furthermore, our qualitative figures demonstrate that the model either failed to predict the parts of the instrument mask (e.g., jaws, wrist) or predicted parts of the instrument as different classes in the scenario of overlapping instruments within the same bounding box or with the point-based prompt. In fact, it is unable to identify instruments in some complex surgical scenarios of blood, reflection, blur, and shade. Additionally, SAM is insufficiently robust to maintain high performance when subjected to various forms of data corruption. Therefore, we can argue that SAM is not ready for downstream surgical tasks without further domain-specific fine-tuning."}, "cited_paper_content": {"title": "Benchmarking Neural Network Robustness To Common Corruptions And Surface Variations", "abstract": "In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. 
Together our benchmarks may aid future work toward networks that robustly generalize."}, "keywords": ["robustness"], "citation_intent": "method"} {"citing_id": "2303.15027v2", "cited_id": "0908.3817", "section_title": "I.I.D. Datasets", "citation": "The ground-truth graph is a large network with 70 nodes and 123 edges, which is available in the bnlearn #REFR repository.", "text_before_citation": ["(1989)) designed to provide an alarm message for patients, and has an associated synthetic dataset.", "In particular, it implements a cautionary alarm message for patient monitoring.", "The ground-truth graph is a medium-sized network with 37 nodes and 46 edges. This dataset was used by #OTHEREFR to evaluate their approaches. The ground-truth network is available in this repository: https://www.bnlearn.com/bnrepository/.", "\u2022 HEPAR2: It is a probabilistic causal model for the diagnosis of liver disorders #OTHEREFR .", "This causal Bayesian network tries to capture the causal links among different risk factors, diseases, symptoms, and test results."], "text_after_citation": [], "citing_paper_content": {"title": "A Survey On Causal Discovery Methods For Temporal And Non-Temporal Data", "abstract": "Causal Discovery (CD) is the process of identifying the cause-effect relationships among the variables of a system from data. Over the years, several methods have been developed primarily based on the statistical properties of data to uncover the underlying causal mechanism. In this study, we present an extensive discussion on the methods designed to perform causal discovery from both independent and identically distributed (i.i.d.) data and time series data. For this purpose, we first introduce the common terminologies in causal discovery, and then provide a comprehensive discussion of the algorithms designed to identify the causal edges in different settings.
We further discuss some of the benchmark datasets available for evaluating the performance of the causal discovery methods, available tools or software packages to perform causal discovery readily, and the common metrics used to evaluate these methods. We also test some common causal discovery algorithms on different benchmark datasets, and compare their performances. Finally, we conclude by presenting the common challenges involved in causal discovery, and also, discuss the applications of causal discovery in multiple areas of interest."}, "cited_paper_content": {"title": "Learning Bayesian Networks With The Bnlearn R Package", "abstract": "bnlearn is an R package (R Development Core Team 2010) which includes several algorithms for learning the structure of Bayesian networks with either discrete or continuous variables. Both constraint-based and score-based algorithms are implemented, and can use the functionality provided by the snow package (Tierney et al. 2008) to improve their performance via parallel computing. Several network scores and conditional independence algorithms are available for both the learning algorithms and independent use. Advanced plotting options are provided by the Rgraphviz package (Gentry et al. 
2010)."}, "keywords": ["ground-truth graph", "bnlearn"], "citation_intent": "method"} {"citing_id": "2304.07522v1", "cited_id": "1412.0035", "section_title": "Id Descriptor Information & Inversion", "citation": "In a seminal work, in 2015, Mahendran and Vedaldi #REFR set out to analyse the visual information contained in both shallow (e.g.", "text_before_citation": [], "text_after_citation": ["HOG) and deep feature representations, to investigate the question: given an encoding of an image, to which extent is it possible to reconstruct the image itself.", "They propose an optimisation method to invert representations using gradient descent.", "Among their findings are that networks retain rich information even at deep levels and that a progressively more invariant and abstract notion of the image content is formed in the network.", "As face identity descriptors usually are made up of the final layer of a deep network, it would therefore be the most invariant and abstract representation.", "Few works so far have investigated the inversion of a face descriptor back to a face image. Genova et al."], "citing_paper_content": {"title": "Id2Image: Leakage Of Non-Id Information Into Face Descriptors And Inversion From Descriptors To Images", "abstract": "Embedding a face image to a descriptor vector using a deep CNN is a widely used technique in face recognition. Via several possible training strategies, such embeddings are supposed to capture only identity information. Information about the environment (such as background and lighting) or changeable aspects of the face (such as pose, expression, presence of glasses, hat etc.) should be discarded since they are not useful for recognition. In this paper, we present a surprising result that this is not the case. We show that non-ID attributes, as well as landmark positions and the image histogram can be recovered from the ID embedding of state-of-the-art face embedding networks (VGGFace2 and ArcFace). 
In fact, these non-ID attributes can be predicted from ID embeddings with similar accuracy to a prediction from the original image. Going further, we present an optimisation strategy that uses a generative model (specifically StyleGAN2 for faces) to recover images from an ID embedding. We show photorealistic inversion from ID embedding to face image in which not only is the ID realistically reconstructed but the pose, lighting and background/apparel to some extent as well."}, "cited_paper_content": {"title": "Understanding Deep Image Representations By Inverting Them", "abstract": "Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. 
Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance."}, "keywords": ["visual information"], "citation_intent": "background"} {"citing_id": "2304.11668v1", "cited_id": "2004.00288", "section_title": "Experiment Results", "citation": "For the identification task, CoReFace achieves competitive performance which is only 0.02% lower compared to the highest one, CurricularFace #REFR .", "text_before_citation": ["When there is a higher FAR bound (10^\u22124) or the evaluation dataset is larger, CoReFace still outperforms the competitors.", "Results on MegaFace.", "Finally, we demonstrate the efficacy of our method on the MegaFace Challenge.", "The gallery set of MegaFace contains 1M images of 690K subjects, and the probe set is FaceScrub, which contains 100K photos of 530 unique individuals.", "We follow #OTHEREFR to remove the face images with wrong labels and evaluate our method on the refined dataset. Table 3 shows the performance of different methods."], "text_after_citation": ["For the verification task, CoReFace outperforms all the other approaches with a clear margin.", "The Broad-Face #OTHEREFR also shows competitive performance by building a dynamic queue to gain extra training on the classification layer.", "Without complex structure reformation, CoReFace adds an image-image regularization to improve the feature distribution and boost the performance of open-set face recognition."], "citing_paper_content": {"title": "Coreface: Sample-Guided Contrastive Regularization For Deep Face Recognition", "abstract": "The discriminability of feature representation is the key to open-set face recognition. Previous methods rely on the learnable weights of the classification layer that represent the identities. However, the evaluation process learns no identity representation and drops the classifier from training.
This inconsistency could confuse the feature encoder in understanding the evaluation goal and hinder the effect of identity-based methods. To alleviate the above problem, we propose a novel approach namely Contrastive Regularization for Face recognition (CoReFace) to apply image-level regularization in feature representation learning. Specifically, we employ sample-guided contrastive learning to regularize the training with the image-image relationship directly, which is consistent with the evaluation process. To integrate contrastive learning into face recognition, we augment embeddings instead of images to avoid the image quality degradation. Then, we propose a novel contrastive loss for the representation distribution by incorporating an adaptive margin and a supervised contrastive mask to generate steady loss values and avoid the collision with the classification supervision signal. Finally, we discover and solve the semantically repetitive signal problem in contrastive learning by exploring new pair coupling protocols. Extensive experiments demonstrate the efficacy and efficiency of our CoReFace which is highly competitive with the state-of-the-art approaches."}, "cited_paper_content": {"title": "Curricularface: Adaptive Curriculum Learning Loss For Deep Face Recognition", "abstract": "As an emerging topic in face recognition, designing margin-based loss functions can increase the feature margin between different classes for enhanced discriminability. More recently, the idea of mining-based strategies is adopted to emphasize the misclassified samples, achieving promising results. However, during the entire training process, the prior methods either do not explicitly emphasize the sample based on its importance that renders the hard samples not fully exploited; or explicitly emphasize the effects of semi-hard/hard samples even at the early training stage that may lead to convergence issue. 
In this work, we propose a novel Adaptive Curriculum Learning loss (CurricularFace) that embeds the idea of curriculum learning into the loss function to achieve a novel training strategy for deep face recognition, which mainly addresses easy samples in the early training stage and hard ones in the later stage. Specifically, our CurricularFace adaptively adjusts the relative importance of easy and hard samples during different training stages. In each stage, different samples are assigned with different importance according to their corresponding difficultness. Extensive experimental results on popular benchmarks demonstrate the superiority of our CurricularFace over the state-of-the-art competitors."}, "keywords": ["CoReFace"], "citation_intent": "result"} {"citing_id": "2303.13900v1", "cited_id": "1802.05642", "section_title": "C. Perceptual Loss", "citation": "Our perceptual loss (L_perc in Algorithm 1) is defined as an l1 loss in feature space, see equation #REFR .", "text_before_citation": ["To ensure the perceptual fidelity of the SR, we propose to use a perceptual loss function (L_perc in Algorithm 1) to constrain the quality of SR results in feature space.", "The perceptual loss was first introduced in #OTHEREFR , where a pretrained VGG-19 network #OTHEREFR was used to extract features.", "However, our implementation differs in that we do not use the pretrained VGG-19 network, but an actively updating ResNet10 network.", "More details are explained in the \"Feature Matching\" section below."], "text_after_citation": ["Here I_{i,j,k} and \u00ce_{i,j,k} represent the HR and SR intensity values at the (i, j, k)-th voxel in all three image dimensions (W, H, D), and F(\u2022) represents the feature network.", "L_perc = (1 / (W \u00d7 H \u00d7 D)) \u2211_{i,j,k=1}^{W,H,D} |F(I_{i,j,k}) \u2212 F(\u00ce_{i,j,k})| (1)", "As shown in #OTHEREFR , perceptual loss functions generally outperform MSE loss functions, because they better preserve the detailed texture of the image."], "citing_paper_content":
{"title": "A Three-Player Gan For Super-Resolution In Magnetic Resonance Imaging", "abstract": "Learning based single image super resolution (SISR) task is well investigated in 2D images. However, SISR for 3D Magnetic Resonance Images (MRI) is more challenging compared to 2D, mainly due to the increased number of neural network parameters, the larger memory requirement and the limited amount of available training data. Current SISR methods for 3D volumetric images are based on Generative Adversarial Networks (GANs), especially Wasserstein GANs due to their training stability. Other common architectures in the 2D domain, e.g. transformer models, require large amounts of training data and are therefore not suitable for the limited 3D data. However, Wasserstein GANs can be problematic because they may not converge to a global optimum and thus produce blurry results. Here, we propose a new method for 3D SR based on the GAN framework. Specifically, we use instance noise to balance the GAN training. Furthermore, we use a relativistic GAN loss function and an updating feature extractor during the training process. We show that our method produces highly accurate results. We also show that we need very few training samples. In particular, we need less than 30 samples instead of thousands of training samples that are typically required in previous studies. Finally, we show improved out-of-sample results produced by our model."}, "cited_paper_content": {"title": "The Mechanics Of N-Player Differentiable Games", "abstract": "The cornerstone underpinning deep learning is the guarantee that gradient descent on an objective converges to local minima. Unfortunately, this guarantee fails in settings, such as generative adversarial nets, where there are multiple interacting losses. The behavior of gradient-based methods in games is not well understood -- and is becoming increasingly important as adversarial and multi-objective architectures proliferate.
In this paper, we develop new techniques to understand and control the dynamics in general games. The key result is to decompose the second-order dynamics into two components. The first is related to potential games, which reduce to gradient descent on an implicit function; the second relates to Hamiltonian games, a new class of games that obey a conservation law, akin to conservation laws in classical mechanical systems. The decomposition motivates Symplectic Gradient Adjustment (SGA), a new algorithm for finding stable fixed points in general games. Basic experiments show SGA is competitive with recently proposed algorithms for finding stable fixed points in GANs -- whilst at the same time being applicable to -- and having guarantees in -- much more general games."}, "keywords": ["perceptual loss (L_perc", "l1 loss"], "citation_intent": "background"} {"citing_id": "2303.10343v1", "cited_id": "1902.04103", "section_title": "Pascal Voc \u2192 Clipart1K", "citation": "We also see sub-optimal results for Union #REFR due to errors in the approximation of unweighted union, similar to object detection experiments.", "text_before_citation": ["a small \u03bb value on target domain image without any pseudo labels.", "This strategy is similar to our noise mixing during the warmup, but is used throughout the training.", "Note that this approach performs worse than our AT baseline.", "This is because although noise mixing could be helpful in general, as shown by both AFAN #OTHEREFR and our following ablation studies, heavily relying on it in the adaptation phase of Mean Teacher can lead to bias towards the source domain due to the fact that \"mixed-in\" target information is only limited to a tiny amount to act as a domain-aware augmentation.", "Indeed, we believe for cross-domain mean teacher, the pseudo labels are much stronger target signals and should be taken advantage of appropriately."], "text_after_citation": ["PASCAL VOC \u2192 Watercolor2k Next, we are interested
in answering the question whether or not the encouraging gains observed in PASCAL VOC \u2192 Clipart1k can be reproduced on a different dataset.", "To do this, we use Watercolor2k and evaluate the performance of PASCAL VOC \u2192 Watercolor2k adaptation.", "Note that after experimenting with Clipart1k, we narrowed down our set of hyperparameters to ones that work best for both Adaptive Teacher and our method for fair competition.", "For Watercolor2k, to test our method's robustness, we directly perform grid search on this small set of hyper-parameters without any further tuning or manual supervision.", "Nonetheless, even without exhaustive tuning, our results in Table 5 show that we can still outperform AT (mAP=57.7) with a +1.5 improvement and achieve mAP=59.3."], "citing_paper_content": {"title": "Lossmix: Simplify And Generalize Mixup For Object Detection And Beyond", "abstract": "The success of data mixing augmentations in image classification tasks has been well-received. However, these techniques cannot be readily applied to object detection due to challenges such as spatial misalignment, foreground/background distinction, and plurality of instances. To tackle these issues, we first introduce a novel conceptual framework called Supervision Interpolation, which offers a fresh perspective on interpolation-based augmentations by relaxing and generalizing Mixup. Building on this framework, we propose LossMix, a simple yet versatile and effective regularization that enhances the performance and robustness of object detectors and more. Our key insight is that we can effectively regularize the training on mixed data by interpolating their loss errors instead of ground truth labels. Empirical results on the PASCAL VOC and MS COCO datasets demonstrate that LossMix consistently outperforms currently popular mixing strategies.
Furthermore, we design a two-stage domain mixing method that leverages LossMix to surpass Adaptive Teacher (CVPR 2022) and set a new state of the art for unsupervised domain adaptation. * Work done during a residency with Mineral."}, "cited_paper_content": {"title": "Bag Of Freebies For Training Object Detection Neural Networks", "abstract": "Training heuristics greatly improve various image classification model accuracies~\\cite{he2018bag}. Object detection models, however, have more complex neural network structures and optimization targets. The training strategies and pipelines dramatically vary among different models. In this works, we explore training tweaks that apply to various models including Faster R-CNN and YOLOv3. These tweaks do not change the model architectures, therefore, the inference costs remain the same. Our empirical results demonstrate that, however, these freebies can improve up to 5% absolute precision compared to state-of-the-art baselines."}, "keywords": ["detection experiments"], "citation_intent": "result"} {"citing_id": "2303.07475v1", "cited_id": "1906.07413", "section_title": "Lemma 16. 
Under Assumption 3, For Any", "citation": "To complete the proof, we need to show that any optimal solution to (28) satisfies the dropped constraint \u03c8 * (Q) \u2264 0 and is therefore also a solution to the original convex program #REFR .", "text_before_citation": ["n i=1 g \u22121 (q k,i ) = 1 for all k \u2208 [K].", "We now consider the relaxed convex program Q \u2208 arg min", "Q\u2208R Kn 1 2 Q CXX CQ F (Q) (28) subject to \u2212 q k,i < 0 for all i \u2208 [n] and k \u2208 [K], and 1 \u2212 n i=1 g \u22121 q k,i \u2264 0 for all k \u2208 [K]", "that is obtained by dropping the constraint \u03c8 * (Q) \u2264 0 and relaxing the equality constraint 1\u2212", "n i=1 g \u22121 q k,i = 0 for all k \u2208 [K] to an inequality constraint, 1 \u2212 n i=1 g \u22121 q k,i \u2264 0."], "text_after_citation": ["(Note that by Lemma 15, the equality constraint 1 \u2212 n i=1 g \u22121 q k,i = 0 for all k \u2208 [K], also ensures that Q \u2208 dom \u03c8 * .) It is necessary and sufficient for any optimal solutionQ to the relaxed convex program #OTHEREFR to satisfy its KKT conditions, listed below:", "EQUATION", "EQUATION", "EQUATION", "First, we claim that any optimal solutionQ needs to satisfy 1 \u2212"], "citing_paper_content": {"title": "General Loss Functions Lead To (Approximate) Interpolation In High Dimensions", "abstract": "We provide a unified framework, applicable to a general family of convex losses and across binary and multiclass settings in the overparameterized regime, to approximately characterize the implicit bias of gradient descent in closed form. Specifically, we show that the implicit bias is approximated (but not exactly equal to) the minimum-norm interpolation in high dimensions, which arises from training on the squared loss. 
In contrast to prior work which was tailored to exponentially-tailed losses and used the intermediate support-vector-machine formulation, our framework directly builds on the primal-dual analysis of [29], allowing us to provide new approximate equivalences for general convex losses through a novel sensitivity analysis. Our framework also recovers existing exact equivalence results for exponentially-tailed losses across binary and multiclass settings. Finally, we provide evidence for the tightness of our techniques, which we use to demonstrate the effect of certain loss functions designed for out-of-distribution problems on the closed-form solution."}, "cited_paper_content": {"title": "Learning Imbalanced Datasets With Label-Distribution-Aware Margin Loss", "abstract": "Deep learning algorithms can fare poorly when the training dataset suffers from heavy class-imbalance but the testing criterion requires good generalization on less frequent classes. We design two novel methods to improve performance in such scenarios. First, we propose a theoretically-principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound. This loss replaces the standard cross-entropy objective during training and can be applied with prior strategies for training with class-imbalance such as re-weighting or re-sampling. Second, we propose a simple, yet effective, training schedule that defers re-weighting until after the initial stage, allowing the model to learn an initial representation while avoiding some of the complications associated with re-weighting or re-sampling. We test our methods on several benchmark vision tasks including the real-world imbalanced dataset iNaturalist 2018.
Our experiments show that either of these methods alone can already improve over existing techniques and their combination achieves even better performance gains."}, "keywords": ["original convex program", "dropped constraint \u03c8"], "citation_intent": "background"} {"citing_id": "2303.04341v1", "cited_id": "1911.11227", "section_title": "Experimental Protocol", "citation": "We first demonstrate the ability of our NVF to reconstruct non-watertight meshes by category-specific reconstruction in Sec. #REFR Table 1 . Quantitative evaluation on ShapeNet Cars.", "text_before_citation": ["Tasks.", "We evaluate the effectiveness of our framework on four tasks: 1) category-specific, 2) category-agnostic, 3) category-unseen and 4) cross-domain reconstruction."], "text_after_citation": ["We train and evaluate our method on the raw data of the ShapeNet \"Car\" category.", "Our method achieves better performance than the state-of-the-art UDF-based methods.", "We compare our NVF with existing methods on category-agnostic and category-unseen reconstruction in Sec. 4.3.", "We also test cross-domain reconstruction by reconstructing real scanned data without training or fine-tuning in Sec. 4.4. Implementations.", "We employ PointTransformer #OTHEREFR as our feature encoder and set k = 16 for the nearest points."], "citing_paper_content": {"title": "Neural Vector Fields: Implicit Representation By Explicit Learning", "abstract": "Deep neural networks (DNNs) are widely applied for nowadays 3D surface reconstruction tasks and such methods can be further divided into two categories, which respectively warp templates explicitly by moving vertices or represent 3D surfaces implicitly as signed or unsigned distance functions. Taking advantage of both advanced explicit learning process and powerful representation ability of implicit functions, we propose a novel 3D representation method, Neural Vector Fields (NVF).
It not only adopts the explicit learning process to manipulate meshes directly, but also leverages the implicit representation of unsigned distance functions (UDFs) to break the barriers in resolution and topology. Specifically, our method first predicts the displacements from queries towards the surface and models the shapes as Vector Fields. Rather than relying on network differentiation to obtain direction fields as most existing UDF-based methods, the produced vector fields encode the distance and direction fields both and mitigate the ambiguity at \"ridge\" points, such that the calculation of direction fields is straightforward and differentiation-free. The differentiation-free characteristic enables us to further learn a shape codebook via Vector Quantization, which encodes the cross-object priors, accelerates the training procedure, and boosts model generalization on cross-category reconstruction. The extensive experiments on surface reconstruction benchmarks indicate that our method outperforms those state-of-the-art methods in different evaluation scenarios including watertight vs non-watertight shapes, category-specific vs category-agnostic reconstruction, category-unseen reconstruction, and cross-domain reconstruction. Our code will be publicly released."}, "cited_paper_content": {"title": "Shape Reconstruction By Learning Differentiable Surface Representations", "abstract": "Generative models that produce point clouds have emerged as a powerful tool to represent 3D surfaces, and the best current ones rely on learning an ensemble of parametric representations. Unfortunately, they offer no control over the deformations of the surface patches that form the ensemble and thus fail to prevent them from either overlapping or collapsing into single points or lines. As a consequence, computing shape properties such as surface normals and curvatures becomes difficult and unreliable.
In this paper, we show that we can exploit the inherent differentiability of deep networks to leverage differential surface properties during training so as to prevent patch collapse and strongly reduce patch overlap. Furthermore, this lets us reliably compute quantities such as surface normals and curvatures. We will demonstrate on several tasks that this yields more accurate surface reconstructions than the state-of-the-art methods in terms of normals estimation and amount of collapsed and overlapped patches."}, "keywords": ["ShapeNet Cars"], "citation_intent": "method"} {"citing_id": "2304.12329v1", "cited_id": "1607.04606", "section_title": "Static Pre-Trained Models", "citation": "FastText #REFR conceives each word as a group of n-grams instead of a single string. It is trained to vectorize n-grams.", "text_before_citation": ["It employs a local context window, as a continuous bag-of-words (order-agnostic) or a continuous skip-gram (order-aware).", "The latter can link words that behave similarly in a sentence, but fails to utilize the statistics of a corpus.", "GloVe #OTHEREFR combines matrix factorization, i.e., the global cooccurrence counts, with a local context window, i.e., word analogy.", "It is trained on large corpora, such as Wikipedia, to provide pre-trained vectors for general use.", "Since it operates on a global dictionary, it identifies words with a specific writing and fails to detect slight modifications."], "text_after_citation": ["It then represents each word as the sum of its underlying n-grams."], "citing_paper_content": {"title": "Pre-Trained Embeddings For Entity Resolution: An Experimental Analysis [Experiment, Analysis & Benchmark]", "abstract": "Many recent works on Entity Resolution (ER) leverage Deep Learning techniques involving language models to improve effectiveness. This is applied to both main steps of ER, i.e., blocking and matching.
Several pre-trained embeddings have been tested, with the most popular ones being fastText and variants of the BERT model. However, there is no detailed analysis of their pros and cons. To cover this gap, we perform a thorough experimental analysis of 12 popular language models over 17 established benchmark datasets. First, we assess their vectorization overhead for converting all input entities into dense embeddings vectors. Second, we investigate their blocking performance, performing a detailed scalability analysis, and comparing them with the state-of-the-art deep learning-based blocking method. Third, we conclude with their relative performance for both supervised and unsupervised matching. Our experimental results provide novel insights into the strengths and weaknesses of the main language models, facilitating researchers and practitioners to select the most suitable ones in practice."}, "cited_paper_content": {"title": "Enriching Word Vectors With Subword Information", "abstract": "Continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. Popular models to learn such representations ignore the morphology of words, by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. A vector representation is associated to each character n-gram, words being represented as the sum of these representations. Our method is fast, allowing to train models on large corpora quickly and allows to compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, both on word similarity and analogy tasks. 
By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks."}, "keywords": ["FastText", "word"], "citation_intent": "background"} {"citing_id": "2304.12272v1", "cited_id": "1910.10683", "section_title": "Arxiv:2304.12272V1 [Cs.Cl] 24 Apr 2023", "citation": "Like all models derived from T5 models, #REFR , we pose AMR parsing as a text-to-text problem and train models to transfer a text to a linearized AMR graph with the task prefix amr generation.", "text_before_citation": ["AMR Parsing with FLAN-T5 Models: FLAN-T5 models (Chung et al., 2022) are obtained by instruction fine-tuning T5-LM adapted models on a collection of 1.8K instruction annotated tasks.", "They are prefix language models and achieve strong few-shot performance even compared to much larger models, such as PaLM 62B."], "text_after_citation": ["FLAN-T5 model size variants, all of which use a 32,128-token vocabulary, are shown in Table 1 ."], "citing_paper_content": {"title": "Amr Parsing With Instruction Fine-Tuned Pre-Trained Language Models", "abstract": "Instruction fine-tuned language models on a collection of instruction annotated datasets (FLAN) have shown highly effective to improve model performance and generalization to unseen tasks. However, a majority of standard parsing tasks including abstract meaning representation (AMR), universal dependency (UD), semantic role labeling (SRL) has been excluded from the FLAN collections for both model training and evaluations. In this paper, we take one of such instruction fine-tuned pre-trained language models, i.e. FLAN-T5, and fine-tune them for AMR parsing. Our extensive experiments on various AMR parsing tasks including AMR2.0, AMR3.0 and BioAMR indicate that FLAN-T5 fine-tuned models out-perform previous state-of-the-art models across all tasks.
In addition, full finetuning followed by the parameter efficient finetuning, LoRA, further improves the model performances, setting new state-of-the-arts in Smatch on AMR2.0 (86.4), AMR3.0 (84.9) and BioAMR (82.3)."}, "cited_paper_content": {"title": "Exploring The Limits Of Transfer Learning With A Unified Text-To-Text Transformer", "abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code."}, "keywords": ["train models", "text-totext problem"], "citation_intent": "method"} {"citing_id": "2304.13539v1", "cited_id": "1706.03762", "section_title": "1) Multi-Linear Attention:", "citation": "For the machine translation task, the baseline is a Transformer-big model on WMT 2014 English-German dataset #REFR . This dataset has around 4.5 million sentence pairs. The results are summarized in Table IX .", "text_before_citation": ["For comparison, each of the attention layers was replaced with Multi-linear attention in the Encoder of the Transformer. 
The results are summarized in Table VIII .", "Notice that tensorized transformers with Multi-linear attention achieve better performance with fewer parameters than the vanilla Transformer.", "2) Tensorized Embedding Layers: The proposed TTembedding layer was tested on two language modeling tasks (PTB and WikiText-103) and a machine translation task (WMT 2014 English-German).", "As shown in Table VII , Transformer-XL+TT stands for the transformers with TTembedding layers.", "Compared to the Transformer with Multilinear attention, Transformer-XL+TT can not achieve that high compression ratio."], "text_after_citation": ["Notice that the embedding layers can be compressed significantly at the cost of a small drop in the BLEU scores."], "citing_paper_content": {"title": "Tensor Decomposition For Model Reduction In Neural Networks: A Review", "abstract": "Modern neural networks have revolutionized the fields of computer vision (CV) and Natural Language Processing (NLP). They are widely used for solving complex CV tasks and NLP tasks such as image classification, image generation, and machine translation. Most state-of-the-art neural networks are over-parameterized and require a high computational cost. One straightforward solution is to replace the layers of the networks with their low-rank tensor approximations using different tensor decomposition methods. This paper reviews six tensor decomposition methods and illustrates their ability to compress model parameters of convolutional neural networks (CNNs), recurrent neural networks (RNNs) and Transformers. The accuracy of some compressed models can be higher than the original versions. 
Evaluations indicate that tensor decompositions can achieve significant reductions in model size, run-time and energy consumption, and are well suited for implementing neural networks on edge devices."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. 
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["Transformer-big model", "machine translation task"], "citation_intent": "method"} {"citing_id": "2304.10707v1", "cited_id": "1903.12370", "section_title": "Proposed Method", "citation": "These results not only confirm the previous findings about longrun stability for persistent training #REFR , but also reveal a new phenomenon that the learned energies from persistent training appear to be locally meaningful, but globally misaligned.", "text_before_citation": ["(ii) The long-run sample (MALA starting from real data) also resemble the training data.", "(iii) The post-training sample (MALA starting from standard Gaussian noises) shows two local modes at \u22122 and 2, but with proportions different from 3:1 as in the training data.", "(iv) The learned energy function exhibits two local modes about \u22122 and 2, but is globally misaligned.", "The estimated energy at the local mode \u22122 is substantially higher than at the other mode 2, and hence is also higher than (for example) at x = 1, an OOD point near no real data.", "More troubling is that the estimated energies near \u22122 and 2 may reverse the direction of relative magnitudes from different training runs, as shown in Appendix E.1."], "text_after_citation": ["Hence application of such learned energies to OOD detection is problematic.", "Motivated by the 1D example, we provide some new theoretical understanding of persistent training of EBMs with MCMC sampling which enables only local mixing for mul-timodal distributions with modes separated by low-density (i.e., high-energy) barriers.", "Our discussion is heuristic, but highlights the main ideas which can be exploited to develop formal analysis.", "In the limit of the persistent training process (assumed to exist), let\u03b8 be a limit value of the network parameter \u03b8, andq 
be a limit distribution for the synthetic data (which can be represented by the empirical distribution of the replay buffer).", "Then we expect that (\u03b8,q) satisfy the following stationarity conditions:"], "citing_paper_content": {"title": "Persistently Trained, Diffusion-Assisted Energy-Based Models", "abstract": "Maximum likelihood (ML) learning for energybased models (EBMs) is challenging, partly due to non-convergence of Markov chain Monte Carlo. Several variations of ML learning have been proposed, but existing methods all fail to achieve both post-training image generation and proper density estimation. We propose to introduce diffusion data and learn a joint EBM, called diffusion assisted-EBMs, through persistent training (i.e., using persistent contrastive divergence) with an enhanced sampling algorithm to properly sample from complex, multimodal distributions. We present results from a 2D illustrative experiment and image experiments and demonstrate that, for the first time for image data, persistently trained EBMs can simultaneously achieve long-run stability, post-training image generation, and superior out-of-distribution detection."}, "cited_paper_content": {"title": "On The Anatomy Of Mcmc-Based Maximum Likelihood Learning Of Energy-Based Models", "abstract": "This study investigates the effects of Markov Chain Monte Carlo (MCMC) sampling in unsupervised Maximum Likelihood (ML) learning. Our attention is restricted to the family of unnormalized probability densities for which the negative log density (or energy function) is a ConvNet. In general, we find that many of the techniques used to stabilize training in previous studies can have the opposite effect. Stable ML learning with a ConvNet potential can be achieved with only a few hyper-parameters and no regularization. Using this minimal framework, we identify a variety of ML learning outcomes that depend on the implementation of MCMC sampling. 
On one hand, we show that it is easy to train an energy-based model which can sample realistic images with short-run Langevin. ML can be effective and stable even when MCMC samples have much higher energy than true steady-state samples throughout training. Based on this insight, we introduce an ML method with purely noise-initialized MCMC, high-quality short-run synthesis, and the same budget as ML with informative MCMC initialization such as CD or PCD. Unlike previous models, our model can obtain realistic high-diversity samples from a noise signal after training with no auxiliary networks. On the other hand, ConvNet potentials learned with highly non-convergent MCMC do not have a valid steady-state and cannot be considered approximate unnormalized densities of the training data because long-run MCMC samples differ greatly from observed images. We show that it is much harder to train a ConvNet potential to learn a steady-state over realistic images. To our knowledge, long-run MCMC samples of all previous models lose the realism of short-run samples. With correct tuning of Langevin noise, we train the first ConvNet potentials for which long-run and steady-state MCMC samples are realistic images."}, "keywords": ["learned energies"], "citation_intent": "result"} {"citing_id": "2303.01234v1", "cited_id": "1910.01108", "section_title": "Victim Models", "citation": "The BERT classifier for AG's News is structured by the Distil-RoBERTa-base #REFR connected with two fully connected layers, and it is trained for 10 epochs with a learning rate of 0.0001.", "text_before_citation": ["We apply our attack algorithm to two popular and well-performed types of victim models.
The details of the models can be found below.", "BERT-based Classifiers To do convincing experiments, we choose three well-performed and popular BERT-based models, which we call BERT-C models, pre-trained by Huggingface 3 .", "Due to the different sizes of the datasets, the structures of BERT-based classifiers are adjusted accordingly."], "text_after_citation": ["For the Emotion dataset, its BERT-C adopts another version of BERT, Distil-BERT-base-uncased #OTHEREFR , and the training hyper-parameters remain the same as BERT-C for AG's News.", "Since the SST2 dataset is relatively small compared with the other two models, the corresponding BERT classifier utilizes a small-size version of BERT, BERT-base-uncased #OTHEREFR .", "The test accuracy of these BERT-based classifiers before they are under attack are listed in Table 1 which are publicly accessible 4 5 6 .", "TextCNN-based models The other type of victim model is TextCNN #OTHEREFR , structured with a 100-dimension embedding layer followed by a 128-unit long short-term memory layer.", "This classifier is trained 10 epochs by ADAM optimizer with parameters: learning rate lr = 0.005, the two coefficients used for computing running averages of gradient and its square are set to be 0.9 and 0.999 (\u03b2 1 = 0.9, \u03b2 2 = 0.999), the denominator to improve numerical stability \u03c3 = 10 \u22125 ."], "citing_paper_content": {"title": "Fraud'S Bargain Attack: Generating Adversarial Text Samples Via Word Manipulation Process", "abstract": "Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models. Existing techniques for generating adversarial examples are typically driven by deterministic heuristic rules that are agnostic to the optimal adversarial examples, a strategy that often results in attack failures. 
To this end, this research proposes Fraud's Bargain Attack (FBA) which utilizes a novel randomization mechanism to enlarge the search space and enables high-quality adversarial examples to be generated with high probabilities. FBA applies the Metropolis-Hasting sampler, a member of Markov Chain Monte Carlo samplers, to enhance the selection of adversarial examples from all candidates proposed by a customized stochastic process that we call Word Manipulation Process (WMP). WMP perturbs one word at a time via insertion, removal or substitution in a contextual-aware manner. Extensive experiments demonstrate that FBA outperforms the state-of-the-art methods in terms of both attack success rate and imperceptibility."}, "cited_paper_content": {"title": "Distilbert, A Distilled Version Of Bert: Smaller, Faster, Cheaper And Lighter", "abstract": "As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. 
Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study."}, "keywords": ["BERT classifier"], "citation_intent": "method"} {"citing_id": "2303.07002v1", "cited_id": "1206.0834", "section_title": "Implementation", "citation": "Also the columns of the bifiltration (with s fixed) have been studied earlier in #REFR .", "text_before_citation": ["By excision and the fact that all pairs are \"good pairs\" #OTHEREFR , we have isomorphisms #OTHEREFR", "H n (L s , L s \\ B o r (q)) \u223c = H n (L s \u2229 B r (q), L s \u2229 \u2202B r (q)) \u223c =Hn(Ls \u2229 B r (q)/(L s \u2229 \u2202B r (q)))", "withH denoting reduced homology.", "These isomorphisms imply that the relative localized persistence module corresponds point-wise to the space considered in Figure 2 (right) .", "Moreover, the excision isomorphism commutes with the inclusion maps L s \u2286 L s for s \u2264 s which implies that the rows of the localized relative persistence module (with fixed r) are isomorphic to the module (H n (L s \u2229 B r (q), L s \u2229 \u2202B r (q))) s\u22650 which was studied in #OTHEREFR , #OTHEREFR , #OTHEREFR ."], "text_after_citation": ["Hence, the localized relative bifiltration encodes both types of modules of local persistent homology studied in previous work (see Appendix D for a summary of basic notions).", "We also remark that the computation of the relative localized persistence module can easily be reduced to the case of absolute homology via the well-known coning construction [30, p.125] , yielding isomorphisms H n (X, A) \u223c =Hn(X \u222a \u03c9 * A) for pairs of topological spaces (X, A) with \u03c9 denoting a new vertex.", "These isomorphisms are functorial, yielding an isomorphism between the relative persistence module of a pair of bifiltrations and the absolute persistence module of a bifiltration (using reduced homology).", 
"Moreover, if the pair (X, A) is finite simplicial, so is X \u222a \u03c9 * A.", "Nerves of pairs."], "citing_paper_content": {"title": "The Localized Union-Of-Balls Bifiltration", "abstract": "We propose an extension of the classical union-of-balls filtration of persistent homology: fixing a point q, we focus our attention to a ball centered at q whose radius is controlled by a second scale parameter. We discuss an absolute variant, where the union is just restricted to the q-ball, and a relative variant where the homology of the q-ball relative to its boundary is considered. Interestingly, these natural constructions lead to bifiltered simplicial complexes which are not k-critical for any finite k. Nevertheless, we demonstrate that these bifiltrations can be computed exactly and efficiently, and we provide a prototypical implementation using the CGAL library. We also argue that some of the recent algorithmic advances for 2-parameter persistence (which usually assume k-criticality for some finite k) carry over to the \u221e-critical case."}, "cited_paper_content": {"title": "Approximating Local Homology From Samples", "abstract": "Recently, multi-scale notions of local homology (a variant of persistent homology) have been used to study the local structure of spaces around a given point from a point cloud sample. Current reconstruction guarantees rely on constructing embedded complexes which become difficult in high dimensions. We show that the persistence diagrams used for estimating local homology, can be approximated using families of Vietoris-Rips complexes, whose simple constructions are robust in any dimension. To the best of our knowledge, our results, for the first time, make applications based on local homology, such as stratification learning, feasible in high dimensions."}, "keywords": ["bifiltration", "columns"], "citation_intent": "background"} {"citing_id": "2303.11879v1", "cited_id": "2001.04830", "section_title": "I. 
Introduction", "citation": "Sequential recommendation systems are designed to capture the dynamic preferences of users based on their historical behaviors, with the goal of predicting the next item that they will be interested in #REFR .", "text_before_citation": [], "text_after_citation": ["The primary supervision signal utilized for learning the parameters of these models is typically derived from the sequential interactions of users with items.", "However, given the sparsity of user behavior data, sequential recommendation methods that rely solely on such data are susceptible to the problem of data sparsity, resulting in suboptimal performance.", "In practice, there exists a significant amount of multimodal information associated with items (e.g., images and text descriptions), which has been employed to alleviate the data sparsity problem in building conventional recommendation systems #OTHEREFR - #OTHEREFR .", "For example, #OTHEREFR , #OTHEREFR leverage item multimodal content as a regularization factor and integrate it with collaborative filtering frameworks.", "Recent studies #OTHEREFR - #OTHEREFR utilize graph neural networks to uncover the hidden links between different modalities and establish an in-depth understanding of users' preferences."], "citing_paper_content": {"title": "Multimodal Pre-Training Framework For Sequential Recommendation Via Contrastive Learning", "abstract": "Sequential recommendation systems utilize the sequential interactions of users with items as their main supervision signals in learning users' preferences. However, existing methods usually generate unsatisfactory results due to the sparsity of user behavior data. To address this issue, we propose a novel pre-training framework, named Multimodal Sequence Mixup for Sequential Recommendation (MSM4SR), which leverages both users' sequential behaviors and items' multimodal content (i.e., text and images) for effective recommendation.
Specifically, MSM4SR tokenizes each item image into multiple textual keywords and uses the pre-trained BERT model to obtain initial textual and visual features of items, for eliminating the discrepancy between the text and image modalities. A novel backbone network, i.e., Multimodal Mixup Sequence Encoder (M2SE), is proposed to bridge the gap between the item multimodal content and the user behavior, using a complementary sequence mixup strategy. In addition, two contrastive learning tasks are developed to assist M2SE in learning generalized multimodal representations of the user behavior sequence. Extensive experiments on real-world datasets demonstrate that MSM4SR outperforms state-of-the-art recommendation methods. Moreover, we further verify the effectiveness of MSM4SR on other challenging tasks including cold-start and cross-domain recommendation."}, "cited_paper_content": {"title": "Sequential Recommender Systems: Challenges, Progress And Prospects", "abstract": "The emerging topic of sequential recommender systems has attracted increasing attention in recent years. Different from the conventional recommender systems including collaborative filtering and content-based filtering, SRSs try to understand and model the sequential user behaviors, the interactions between users and items, and the evolution of users' preferences and item popularity over time.
SRSs involve the above aspects for more precise characterization of user contexts, intent and goals, and item consumption trend, leading to more accurate, customized and dynamic recommendations. In this paper, we provide a systematic review on SRSs. We first present the characteristics of SRSs, and then summarize and categorize the key challenges in this research area, followed by the corresponding research progress consisting of the most recent and representative developments on this topic. Finally, we discuss the important research directions in this vibrant area."}, "keywords": ["Sequential recommendation systems"], "citation_intent": "background"} {"citing_id": "2303.13794v1", "cited_id": "1712.07629", "section_title": "Matching Key-Points Crop (Mkpc)", "citation": "Before applying MKPC, we already have two images I 1 and I 2 , and matched key-points on both images with any model (or any combinations of models), which is defined as X #REFR 1 and X 1 2 , respectively.", "text_before_citation": ["The MKPC algorithm crops critical regions by clustering the matching key-points between two images outputted by arbitrary image matching models.
The workflow is shown in Figure 2 ."], "text_after_citation": ["X 1 i denotes the i th image in the first stage of the two-stage pipeline which will be proposed in the next subsection.", "It consists of the following three steps: (i) Clustering the matching key-points (X 1 1 and X #OTHEREFR 2 ) of two images (I 1 and I 2 ) with DBSCAN.", "(ii) Generate a bounding box by filtering and gathering those clusters (iii) Crop the area covered by the Bounding box.", "Algorithm 1 describes the MKPC algorithm flow in detail.", "With the input of two images (I 1 , I 2 ) and corresponding matched key-points in stage-one (X 1 1 and X 1 2 ), the MKPC outputs the respective cropped critical regions."], "citing_paper_content": {"title": "Efficient And Accurate Co-Visible Region Localization With Matching Key-Points Crop (Mkpc): A Two-Stage Pipeline For Enhancing Image Matching Performance", "abstract": "Image matching is a classic and fundamental task in computer vision. In this paper, under the hypothesis that the areas outside the co-visible regions carry little information, we propose a matching key-points crop (MKPC) algorithm. The MKPC locates, proposes and crops the critical regions, which are the co-visible areas with great efficiency and accuracy. Furthermore, building upon MKPC, we propose a general two-stage pipeline for image matching, which is compatible with any image matching models or combinations. We experimented with plugging SuperPoint + SuperGlue into the two-stage pipeline, whose results show that our method enhances the performance for outdoor pose estimations. What's more, in a fair comparative condition, our method outperforms the SOTA on Image Matching Challenge 2022 Benchmark, which represents the hardest outdoor benchmark of image matching currently. * denotes contributing equally to this work. Preprint.
Under review."}, "cited_paper_content": {"title": "Superpoint: Self-Supervised Interest Point Detection And Description", "abstract": "This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB."}, "keywords": ["two images", "key-points"], "citation_intent": "method"} {"citing_id": "2304.05741v1", "cited_id": "1412.6980", "section_title": "B. 
Training", "citation": "During the training phase, all our models were optimized with the Adam algorithm #REFR and a learning rate of lr = 0.001, for a maximum of 100 epoch with an early stopping mechanism activated when the validation loss stops improving after a duration of 5 epochs.", "text_before_citation": [], "text_after_citation": ["Additionally, every dropout is performed with r Dropout = 0.5, the batch normalization uses = 0.001 and \u03b3 = 0.99, and we use a batch size of 256 in every module apart from the fixation prediction performed with high-level features.", "\u2022 Fixation Prediction from High-Level Features During training, we estimate the best weight and bias parameters that minimize the loss between the predicted output\u0177 and the ground truth label y, with the cross entropy function computed for every fixation time step t for every sequence s of each mini-batch b:", "EQUATION", "where S corresponds to the batch size, T corresponds to the sequence length which is set to 6 (in addition to the initial fixation point at t = 0) and H \u00d7 W is the output size which is set to 160.", "We set with F = 5 filters, a kernel size of K = 4 and a stride of S = 2, and, during training, we varied the batch size between 32, 64, 128 and 256 and we conducted an ablation study over theses additional hyper-parameters and settings: -Fovea size: In this work we utilized the same real-time foveation system as in #OTHEREFR , and assessed the model performance when varying the fovea size, which defines the radius of the region with highest visual acuity with the values of 50, 75 and 100 pixels."], "citing_paper_content": {"title": "Learning To Search For And Detect Objects In Foveal Images Using Deep Learning", "abstract": "The human visual system processes images with varied degrees of resolution, with the fovea, a small portion of the retina, capturing the highest acuity region, which gradually declines toward the field of view's periphery. 
However, the majority of existing object localization methods rely on images acquired by image sensors with space-invariant resolution, ignoring biological attention mechanisms. As a region of interest pooling, this study employs a fixation prediction model that emulates human objective-guided attention of searching for a given class in an image. The foveated pictures at each fixation point are then classified to determine whether the target is present or absent in the scene. Throughout this two-stage pipeline method, we investigate the varying results obtained by utilizing high-level or panoptic features and provide a ground-truth label function for fixation sequences that is smoother, considering in a better way the spatial structure of the problem. Finally, we present a novel dual task model capable of performing fixation prediction and detection simultaneously, allowing knowledge transfer between the two tasks. We conclude that, due to the complementary nature of both tasks, the training process benefited from the sharing of knowledge, resulting in an improvement in performance when compared to the previous approach's baseline scores."}, "cited_paper_content": {"title": "Adam: A Method For Stochastic Optimization", "abstract": "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed.
We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm."}, "keywords": ["learning rate"], "citation_intent": "method"} {"citing_id": "2303.14218v1", "cited_id": "1911.07559", "section_title": "Related Work", "citation": "FFANet #REFR introduces feature attention (FA) blocks that leverage both channel and pixel attention to improve haze removal.", "text_before_citation": ["They focus on designing hand-crafted priors such as the dark channel prior #OTHEREFR and color attenuation prior #OTHEREFR .", "However, these priors may not be powerful enough to characterize complex scenes in practice.", "Early learning-based methods #OTHEREFR use deep neural networks to predict the transmission map and atmospheric light in the physics model to obtain a latent clear image.", "However, inaccuracies in the estimations may accumulate, hindering the reliable inference of the haze-free image.", "With the advent of large haze datasets #OTHEREFR , data-driven methods #OTHEREFR have been developed rapidly."], "text_after_citation": ["DeHamer #OTHEREFR combines CNN and Transformer for image dehazing, which can aggregate long-term attention in Transformer and local attention in CNN features.", "Note that these methods do not consider the physics of the hazing process. 
Further, Dong et al.", "propose a feature dehazing unit (FDU) #OTHEREFR derived based on the physics model.", "To the best of our knowledge, this work is the only one that considers the physics model in the feature space, avoiding the cumulative errors that occur in the raw space.", "However, FDU uses a shared structure to predict those unknown factors without considering their different physical characteristics."], "citing_paper_content": {"title": "Curricular Contrastive Regularization For Physics-Aware Single Image Dehazing", "abstract": "Considering the ill-posed nature, contrastive regularization has been developed for single image dehazing, introducing the information from negative images as a lower bound. However, the contrastive samples are nonconsensual, as the negatives are usually represented distantly from the clear (i.e., positive) image, leaving the solution space still under-constricted. Moreover, the interpretability of deep dehazing models is underexplored towards the physics of the hazing process. In this paper, we propose a novel curricular contrastive regularization targeted at a consensual contrastive space as opposed to a non-consensual one. Our negatives, which provide better lower-bound constraints, can be assembled from 1) the hazy image, and 2) corresponding restorations by other existing methods. Further, due to the different similarities between the embeddings of the clear image and negatives, the learning difficulty of the multiple components is intrinsically imbalanced. To tackle this issue, we customize a curriculum learning strategy to reweight the importance of different negatives. In addition, to improve the interpretability in the feature space, we build a physics-aware dual-branch unit according to the atmospheric scattering model. With the unit, as well as curricular contrastive regularization, we establish our dehazing network, named C 2 PNet. 
Extensive experiments demonstrate that our C2PNet significantly outperforms state-of-the-art methods, with extreme PSNR boosts of 3.94dB and 1.50dB, respectively, on SOTS-indoor and SOTS-outdoor datasets. Code is available at https://github.com/YuZheng9/C2PNet."}, "cited_paper_content": {"title": "Ffa-Net: Feature Fusion Attention Network For Single Image Dehazing", "abstract": "In this paper, we propose an end-to-end feature fusion attention network (FFA-Net) to directly restore the haze-free image. The FFA-Net architecture consists of three key components: 1) A novel Feature Attention (FA) module combines Channel Attention with Pixel Attention mechanism, considering that different channel-wise features contain totally different weighted information and haze distribution is uneven on the different image pixels. FA treats different features and pixels unequally, which provides additional flexibility in dealing with different types of information, expanding the representational ability of CNNs. 2) A basic block structure consists of Local Residual Learning and Feature Attention, Local Residual Learning allowing the less important information such as thin haze region or low-frequency to be bypassed through multiple local residual connections, let main network architecture focus on more effective information. 3) An Attention-based different levels Feature Fusion (FFA) structure, the feature weights are adaptively learned from the Feature Attention (FA) module, giving more weight to important features. This structure can also retain the information of shallow layers and pass it into deep layers. The experimental results demonstrate that our proposed FFA-Net surpasses previous state-of-the-art single image dehazing methods by a very large margin both quantitatively and qualitatively, boosting the best published PSNR metric from 30.23dB to 36.39dB on the SOTS indoor test dataset.
Code has been made available at GitHub."}, "keywords": ["haze removal"], "citation_intent": "background"} {"citing_id": "2303.01465v1", "cited_id": "1704.04861", "section_title": "Mobilenet V1", "citation": "We have opted MobileNet #REFR as a feature extractor to develop an FPAD model for devices with limited computational resources.", "text_before_citation": [], "text_after_citation": ["It is advantageous over state-of-the-art CNN architectures as it utilizes depth-wise separable convolution instead of standard convolution.", "A standard convolution operation takes input as X channels of size D x \u00d7 D x and produces D y \u00d7 D y \u00d7 Y feature maps by applying D k \u00d7 D k \u00d7 Y filters where the spatial height and width of the squared input image are denoted with D x .", "X denotes the number of input channels while D y is the spatial height and width.", "The number of output feature maps is denoted with Y .", "Equation 1 describes the calculation of the output feature map for standard convolution operation #OTHEREFR having stride one with padding."], "citing_paper_content": {"title": "Mosfpad: An End-To-End Ensemble Of Mobilenet And Support Vector Classifier For Fingerprint Presentation Attack Detection", "abstract": "Automatic fingerprint recognition systems are the most extensively used systems for person authentication although they are vulnerable to Presentation attacks. Artificial artifacts created with the help of various materials are used to deceive these systems causing a threat to the security of fingerprint-based applications. This paper proposes a novel end-to-end model to detect fingerprint Presentation attacks. The proposed model incorporates MobileNet as a feature extractor and a Support Vector Classifier as a classifier to detect presentation attacks in cross-material and cross-sensor paradigms. The feature extractor's parameters are learned with the loss generated by the support vector classifier.
The proposed model eliminates the need for intermediary data preparation procedures, unlike other static hybrid architectures. The performance of the proposed model has been validated on benchmark"}, "cited_paper_content": {"title": "Mobilenets: Efficient Convolutional Neural Networks For Mobile Vision Applications", "abstract": "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem. We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization."}, "keywords": ["feature extractor", "MobileNet"], "citation_intent": "method"} {"citing_id": "2305.01905v1", "cited_id": "1801.07698", "section_title": "Iii. Experiments A. 
Training Details", "citation": "Following the setup of #REFR , we set the scale s to 64 and the margin m to 0.5 for arcface loss.", "text_before_citation": ["We train networks using the standard data augmentation (i.e., flipping, translation, cropping), and mask augmentation using the tools introduced in the ICCV2021-MFR/Insightface track #OTHEREFR .", "We train the CASIA-Webface and MS1MV3 datasets #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR by employing SGD with a minibatch size of 512.", "Momentum and weight decay are set to 0.9 and 5e-4, respectively.", "We set initial learning rate to 0.2, and employ the polynomial learning rate decay scheduler #OTHEREFR , #OTHEREFR with 2 epochs of warm restart.", "We finish the training at 25 epochs and 34 epochs for MS1MV3 and CASIA-Webface datasets, respectively."], "text_after_citation": [], "citing_paper_content": {"title": "Localization Using Multi-Focal Spatial Attention For Masked Face Recognition", "abstract": "Since the beginning of worldwide COVID-19 pandemic, facial masks have been recommended to limit the spread of the disease. However, these masks hide certain facial attributes. Hence, it has become difficult for existing face recognition systems to perform identity verification on masked faces. In this context, it is necessary to develop masked Face Recognition (MFR) for contactless biometric recognition systems. Thus, in this paper, we propose Complementary Attention Learning and Multi-Focal Spatial Attention that precisely removes masked region by training complementary spatial attention to focus on two distinct regions: masked regions and backgrounds. In our method, standard spatial attention and networks focus on unmasked regions, and extract maskinvariant features while minimizing the loss of the conventional Face Recognition (FR) performance. For conventional FR, we evaluate the performance on the IJB-C, Age-DB, CALFW, and CPLFW datasets. 
We evaluate the MFR performance on the ICCV2021-MFR/Insightface track, and demonstrate the improved performance on both the MFR and FR datasets. Additionally, we empirically verify that spatial attention of the proposed method is more precisely activated in unmasked regions."}, "cited_paper_content": {"title": "Arcface: Additive Angular Margin Loss For Deep Face Recognition", "abstract": "One of the main challenges in feature learning using Deep Convolutional Neural Networks (DCNNs) for large-scale face recognition is the design of appropriate loss functions that can enhance the discriminative power. Centre loss penalises the distance between deep features and their corresponding class centres in the Euclidean space to achieve intra-class compactness. SphereFace assumes that the linear transformation matrix in the last fully connected layer can be used as a representation of the class centres in the angular space and therefore penalises the angles between deep features and their corresponding weights in a multiplicative way. Recently, a popular line of research is to incorporate margins in well-established loss functions in order to maximise face class separability. In this paper, we propose an Additive Angular Margin Loss (ArcFace) to obtain highly discriminative features for face recognition. The proposed ArcFace has a clear geometric interpretation due to its exact correspondence to geodesic distance on a hypersphere. We present arguably the most extensive experimental evaluation against all recent state-of-the-art face recognition methods on ten face recognition benchmarks which includes a new large-scale image database with trillions of pairs and a large-scale video dataset. We show that ArcFace consistently outperforms the state of the art and can be easily implemented with negligible computational overhead.
To facilitate future research, the code has been made available."}, "keywords": ["arcface loss"], "citation_intent": "method"} {"citing_id": "2304.10891v1", "cited_id": "1903.11027", "section_title": "E. Benchmark Of Transformer Models", "citation": "As shown in TableII, for 3D object detection task using Nuscenes dataset #REFR , both DETR3D and FUTR3D exhibit comparable performance due to their similar structures.", "text_before_citation": ["We benchmark major Transformer-based models on an NVIDIA GPU 3090 considering indicators such as input size, runtime, accuracy, and datasets."], "text_after_citation": ["BEVFormer outperforms DETR3D by generating BEV features and querying 3D objects from these features.", "PETR and CrossDTR transform 2D features into 3D features using CNN networks, accelerating the query process and yielding better performance than DETR3D.", "ResNet101's higher precision compared to ResNet50 can be attributed to its deformable convolution mechanism and increased convolution depth, although at the cost of slower runtime speed #OTHEREFR .", "On the other hand, Transformerbased road element detection research exhibits greater variation, with different models and evaluation criteria for tasks such as 2D lane (TuSimple), 3D lane (OpenLane), and local map (Nuscenes).", "Lane and local map Transformer queries are faster than object detection due to fewer key-point queries and smaller CNN backbones that utilize shallower layer features."], "citing_paper_content": {"title": "Transformer-Based Models And Hardware Acceleration Analysis In Autonomous Driving: A Survey", "abstract": "Transformer architectures have exhibited promising performance in various autonomous driving applications in recent years. On the other hand, its dedicated hardware acceleration on portable computational platforms has become the next critical step for practical deployment in real autonomous vehicles. 
This survey paper provides a comprehensive overview, benchmark, and analysis of Transformer-based models specifically tailored for autonomous driving tasks such as lane detection, segmentation, tracking, planning, and decision-making. We review different architectures for organizing Transformer inputs and outputs, such as encoder-decoder and encoder-only structures, and explore their respective advantages and disadvantages. Furthermore, we discuss Transformer-related operators and their hardware acceleration schemes in depth, taking into account key factors such as quantization and runtime. We specifically illustrate the operator level comparison between layers from convolutional neural network, Swin-Transformer, and Transformer with 4D encoder. The paper also highlights the challenges, trends, and current insights in Transformer-based models, addressing their hardware deployment and acceleration issues within the context of long-term autonomous driving applications."}, "cited_paper_content": {"title": "Nuscenes: A Multimodal Dataset For Autonomous Driving", "abstract": "Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first published dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes.
It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online at this http URL."}, "keywords": ["3D object detection"], "citation_intent": "result"} {"citing_id": "2304.11445v1", "cited_id": "1807.09441", "section_title": "Interpretability Analysis", "citation": "Less discrepancy between different distributions can be observed at later stages, the same pattern authors of IBNnet #REFR have observed.", "text_before_citation": ["Let F denote a feature map, and \u00b5, \u03c3 2 denote its mean and variance values, respectively.", "Then symmetric KL divergence between two distributions S and S is computed as", "EQUATION", "We use the average divergence of all channels for each image and report the mean value for the whole set.", "From Figure 2(a) , it can be seen that there is less feature divergence for the modified network, suggesting that learned representations are more generalizable."], "text_after_citation": ["Next, we visualize mean covariance and variance matrices (Figure 2(b) ) and observe that the covariance matrix computed from intermediate outputs of the modified model has more bright spots.", "This might result from suppressing sensitive covariances previously present in the baseline.", "There are also fewer activations in the variance matrix for H&E, which means that the distribution shift is causing less divergence for all channels on average.", "The same pattern can be observed for TRI and PAS variance matrices (Figure 10 in Appendix C), however, for the latter, some of the brighter spots appear darker compared to the baseline.", "t-SNE visualization (Figure 9 in Appendix C) of learned representations from stain normalized and original image versions of the first and last encoder stages show that they are 
closer when we integrate our method into training scheme."], "citing_paper_content": {"title": "Improving Stain Invariance Of Cnns For Segmentation By Fusing Channel Attention And Domain-Adversarial Training", "abstract": "Variability in staining protocols, such as different slide preparation techniques, chemicals, and scanner configurations, can result in a diverse set of whole slide images (WSIs). This distribution shift can negatively impact the performance of deep learning models on unseen samples, presenting a significant challenge for developing new computational pathology applications. In this study, we propose a method for improving the generalizability of convolutional neural networks (CNNs) to stain changes in a single-source setting for semantic segmentation. Recent studies indicate that style features mainly exist as covariances in earlier network layers. We design a channel attention mechanism based on these findings that detects stain-specific features and modify the previously proposed stain-invariant training scheme. We reweigh the outputs of earlier layers and pass them to the stain-adversarial training branch. We evaluate our method on multi-center, multi-stain datasets and demonstrate its effectiveness through interpretability analysis. Our approach achieves substantial improvements over baselines and competitive performance compared to other methods, as measured by various evaluation metrics. We also show that combining our method with stain augmentation leads to mutually beneficial results and outperforms other techniques. Overall, our study makes significant contributions to the field of computational pathology."}, "cited_paper_content": {"title": "Two At Once: Enhancing Learning And Generalization Capacities Via Ibn-Net", "abstract": "Convolutional neural networks (CNNs) have achieved great successes in many computer vision problems. 
Unlike existing works that designed CNN architectures to improve performance on a single task of a single domain and not generalizable, we present IBN-Net, a novel convolutional architecture, which remarkably enhances a CNN's modeling ability on one domain (e.g. Cityscapes) as well as its generalization capacity on another domain (e.g. GTA5) without finetuning. IBN-Net carefully integrates Instance Normalization (IN) and Batch Normalization (BN) as building blocks, and can be wrapped into many advanced deep networks to improve their performances. This work has three key contributions. (1) By delving into IN and BN, we disclose that IN learns features that are invariant to appearance changes, such as colors, styles, and virtuality/reality, while BN is essential for preserving content related information. (2) IBN-Net can be applied to many advanced deep architectures, such as DenseNet, ResNet, ResNeXt, and SENet, and consistently improve their performance without increasing computational cost. (3) When applying the trained networks to new domains, e.g. from GTA5 to Cityscapes, IBN-Net achieves comparable improvements as domain adaptation methods, even without using data from the target domain. 
With IBN-Net, we won the 1st place on the WAD 2018 Challenge Drivable Area track, with an mIoU of 86.18%."}, "keywords": ["different distributions", "IBNnet"], "citation_intent": "result"} {"citing_id": "2304.11319v1", "cited_id": "1703.10593", "section_title": "Implementation Details", "citation": "Horse \u2192 Zebra is provided in #REFR , which contains 1,067 and 1,334 training images for horse and zebra, respectively.", "text_before_citation": ["Datasets.", "SN-DCR is trained and evaluated on Horse \u2192 Zebra, Cat \u2192 Dog , Van Gogh \u2192 Photo and CityScapes datasets."], "text_after_citation": ["We use 120 horse images as the test images on Horse \u2192 Zebra.", "Cat \u2192 Dog is from #OTHEREFR , which consists of 5,153 and 4,739 training images for cat and dog, respectively.", "We used 500 images of cats as test images on Cat \u2192 Dog .", "Van Gogh \u2192 Photo is a dataset of 400 Van Gogh paintings and 6287 photos extracted from #OTHEREFR . We used 400 Van Gogh images as test images.", "Cityscapes contains street scenes from German cities, with 2,975 training images and 500 test images."], "citing_paper_content": {"title": "Spectral Normalized Dual Contrastive Regularization For Image-To-Image Translation", "abstract": "Existing image-to-image(I2I) translation methods achieve state-of-the-art performance by incorporating the patch-wise contrastive learning into Generative Adversarial Networks. However, patch-wise contrastive learning only focuses on the local content similarity but neglects the global structure constraint, which affects the quality of the generated images. In this paper, we propose a new unpaired I2I translation framework based on dual contrastive regularization and spectral normalization, namely SN-DCR. To maintain consistency of the global structure and texture, we design the dual contrastive regularization using different feature spaces respectively. 
In order to improve the global structure information of the generated images, we formulate a semantically contrastive loss to make the global semantic structure of the generated images similar to the real images from the target domain in the semantic feature space. We use Gram Matrices to extract the style of texture from images. Similarly, we design a style contrastive loss to improve the global texture information of the generated images. Moreover, to enhance the stability of the model, we employ the spectral normalized convolutional network in the design of our generator. We conduct comprehensive experiments to evaluate the effectiveness of SN-DCR, and the results prove that our method achieves SOTA in multiple tasks."}, "cited_paper_content": {"title": "Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks", "abstract": "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc.
Quantitative comparisons against several prior methods demonstrate the superiority of our approach."}, "keywords": ["1,334 training images"], "citation_intent": "background"} {"citing_id": "2305.02375v1", "cited_id": "1606.03556", "section_title": "Example Applications", "citation": "Motivated by #REFR , to understand whether visual question answering (VQA) models and humans focus on the same parts of an image to answer a question, a researcher may want to compare the saliency maps of a VQA model and human attention maps.", "text_before_citation": ["To ensure that a model is classifying the image based on the properties of the object, a developer can look at the number of mask pixels with high values within the bounding box of the object and compare that number to either the area of the object or the total number of mask pixels with high values.", "A developer could further query for the top-images where such ratios are highest or lowest.", "Concretely, a query over our toy example could count the number of pixels with values higher than 0.85 in the bounding box, which produces a count of two pixels, and compare against the area, which is 6 pixels, for a ratio of 0.33.", "The query could return all images with a ratio below some threshold or could return the images with the lowest ratios.", "Example 2: Comparing model saliency maps and human attention maps."], "text_after_citation": ["To achieve this, they can generate saliency maps for the VQA model and human attention maps for the same images, and then issue a query to retrieve images with the highest number of pixels with values above some threshold in the intersection of the two masks.", "The returned images are those for which the VQA model and humans focus on the same parts of the image to answer a question."], "citing_paper_content": {"title": "Masksearch: Querying Image Masks At Scale", "abstract": "Machine learning tasks over image databases often generate masks that annotate image content (e.g., saliency 
maps, segmentation maps) and enable a variety of applications (e.g., determine if a model is learning spurious correlations or if an image was maliciously modified to mislead a model). While queries that retrieve examples based on mask properties are valuable to practitioners, existing systems do not support such queries efficiently. In this paper, we formalize the problem and propose a system, MaskSearch, that focuses on accelerating queries over databases of image masks. MaskSearch leverages a novel indexing technique and an efficient filter-verification query execution framework. Experiments on realworld datasets with our prototype show that MaskSearch, using indexes approximately 5% the size of the data, accelerates individual queries by up to two orders of magnitude and consistently outperforms existing methods on various multi-query workloads that simulate dataset exploration and analysis processes."}, "cited_paper_content": {"title": "Human Attention In Visual Question Answering: Do Humans And Deep Networks Look At The Same Regions?", "abstract": "We conduct large-scale studies on \u2018human attention\u2019 in Visual Question Answering (VQA) to understand where humans choose to look to answer questions about images. We design and test multiple game-inspired novel attention-annotation interfaces that require the subject to sharpen regions of a blurred image to answer a question. Thus, we introduce the VQA-HAT (Human ATtention) dataset. We evaluate attention maps generated by state-of-the-art VQA models against human attention both qualitatively (via visualizations) and quantitatively (via rank-order correlation). Our experiments show that current attention models in VQA do not seem to be looking at the same regions as humans. 
Finally, we train VQA models with explicit attention supervision, and find that it improves VQA performance."}, "keywords": ["saliency maps", "human attention maps"], "citation_intent": "background"} {"citing_id": "2305.01668v1", "cited_id": "1901.00850", "section_title": "Synthetic Data: Trance", "citation": "Existing methods such as CLEVR and CLEVR-Change use text which has ambiguity issues making the evaluation unreliable, while CLEVR-Ref+ #REFR employs bounding boxes that are specific but require the additional ability of detection.", "text_before_citation": ["This is the major reason that we choose CLEVR to extend.", "Another reason is that images can be synthesized using Blender #OTHEREFR with small costs. Therefore, it is practicable to create millions of samples.", "CLEVR provides a good foundation on attributes and values, which are fundamental items of the atomic transformation triplet (o, a, v), as we introduced in Sec. 3.", "However, the distance to defining atomic transformations well still exists unless we proceed with several modifications or designs.", "The first problem is how to represent an object in the answer."], "text_after_citation": ["Therefore, we design to provide additional information, which is the attributes of the initial objects, including the index, color, material, and other attribute values.", "In this way, an object can be referred to with its index.", "Note machines still need to perform their own recognition to align objects in images with given attributes.", "The second problem is available values in size and material are too few, therefore we add medium size and glass material.", "The last problem is the available values of position transformation are infinite in the space of R 2 , which is not computational friendly."], "citing_paper_content": {"title": "Visual Reasoning: From State To Transformation", "abstract": "Most existing visual reasoning tasks, such as CLEVR in VQA, ignore an important factor, i.e. transformation. 
They are solely defined to test how well machines understand concepts and relations within static settings, like one image. Such state driven visual reasoning has limitations in reflecting the ability to infer the dynamics between different states, which has shown to be equally important for human cognition in Piaget's theory. To tackle this problem, we propose a novel transformation driven visual reasoning (TVR) task. Given both the initial and final states, the target becomes to infer the corresponding intermediate transformation. Following this definition, a new synthetic dataset namely TRANCE is first constructed on the basis of CLEVR, including three levels of settings, i.e. Basic (single-step transformation), Event (multi-step transformation), and View (multi-step transformation with variant views). Next, we build another real dataset called TRANCO based on COIN, to cover the loss of transformation diversity on TRANCE. Inspired by human reasoning, we propose a three-staged reasoning framework called TranNet, including observing, analyzing, and concluding, to test how recent advanced techniques perform on TVR. Experimental results show that the state-of-the-art visual reasoning models perform well on Basic, but are still far from human-level intelligence on Event, View, and TRANCO. We believe the proposed new paradigm will boost the development of machine visual reasoning. More advanced methods and new problems need to be investigated in this direction. The resource of TVR is available at https://hongxin2019.github.io/TVR/."}, "cited_paper_content": {"title": "Clevr-Ref+: Diagnosing Visual Reasoning With Referring Expressions", "abstract": "Referring object detection and referring image segmentation are important tasks that require joint understanding of visual information and natural language. 
Yet there has been evidence that current benchmark datasets suffer from bias, and current state-of-the-art models cannot be easily evaluated on their intermediate reasoning process. To address these issues and complement similar efforts in visual question answering, we build CLEVR-Ref+, a synthetic diagnostic dataset for referring expression comprehension. The precise locations and attributes of the objects are readily available, and the referring expressions are automatically associated with functional programs. The synthetic nature allows control over dataset bias (through sampling strategy), and the modular programs enable intermediate reasoning ground truth without human annotators. In addition to evaluating several state-of-the-art models on CLEVR-Ref+, we also propose IEP-Ref, a module network approach that significantly outperforms other models on our dataset. In particular, we present two interesting and important findings using IEP-Ref: (1) the module trained to transform feature maps into segmentation masks can be attached to any intermediate module to reveal the entire reasoning process step-by-step; (2) even if all training data has at least one object referred, IEP-Ref can correctly predict no-foreground when presented with false-premise referring expressions. To the best of our knowledge, this is the first direct and quantitative proof that neural modules behave in the way they are intended. 
We will release data and code for CLEVR-Ref+."}, "keywords": ["detection", "CLEVR-Ref+"], "citation_intent": "method"} {"citing_id": "2304.09402v1", "cited_id": "1910.10683", "section_title": "Implementation Details", "citation": "A pretrained T5 model #REFR is used to fill the blanks and generate an augmented sample.", "text_before_citation": ["As mentioned in Section 2.2, we augment the original prompt and generate both label-preserving and label-flipping prompts #OTHEREFR ) before applying the three-level Mixup.", "Both the input text and the templates are augmented separately.", "We generate label-preserving and label-flipping augmented text, but only label-preserving templates are generated.", "Following , we use a cloze pattern to combine both the input text (template) and the label into a single sequence, and then randomly mask a fixed percentage of the input tokens."], "text_after_citation": ["For model training, we adopt the experimental settings used in PET #OTHEREFR and conduct a grid search for the three-level Mixup, as indicated in Table 2 .", "To evaluate the effectiveness of our DA methods, we employ PET #OTHEREFR as the backbone and augment it with MIXPRO and other DA baselines.", "We utilize Albert-xxlarge-v2 #OTHEREFR as the PLM and measure the performance using the identical metrics in Table1, namely Acc., F1, F1 a , and EM.", "Since few-shot learning typically exhibits significant performance fluctuations (Dodge et al., 2020; #OTHEREFR , we use 5 independent seeds and report the average performance across these seeds for each model.", "All of our experiments were performed on a Linux platform equipped with NVIDIA A100 (40G)."], "citing_paper_content": {"title": "Mixpro: Simple Yet Effective Data Augmentation For Prompt-Based Learning", "abstract": "Prompt-based learning reformulates downstream tasks as cloze problems by combining the original input with a template. 
This technique is particularly useful in few-shot learning, where a model is trained on a limited amount of data. However, the limited templates and text used in few-shot prompt-based learning still leave significant room for performance improvement. Additionally, existing methods (Schick and Sch\u00fctze, 2021c) using model ensembles can constrain the model efficiency. To address these issues, we propose an augmentation method called MIXPRO, which augments both the vanilla input text and the templates through token-level, sentence-level, and epoch-level Mixup strategies. We conduct experiments on five few-shot datasets, and the results show that MIXPRO outperforms other augmentation baselines, improving model performance by an average of 5.08% compared to before augmentation."}, "cited_paper_content": {"title": "Exploring The Limits Of Transfer Learning With A Unified Text-To-Text Transformer", "abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. 
To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code."}, "keywords": ["augmented sample", "pretrained T5 model"], "citation_intent": "method"} {"citing_id": "2304.07504v1", "cited_id": "1507.02000", "section_title": "G.3 Proof Of Proposition 4.2", "citation": "For $x \in F_k$, by #REFR , if $k \in L_i$, then $u_i \in F_{k+1}$; otherwise, $u_i \in F_k$. This completes the proof.", "text_before_citation": ["Thus, if $x \in F_0$, $\mathrm{prox}_{\gamma r_1}(x) \in F_1$; if $x \in F_k$ for $k \ge 1$, $\mathrm{prox}_{\gamma r_1}(x) \in F_k$.", "(ii) For $i \neq 1$, we define $u_i := \mathrm{prox}_{\gamma r_i}(x)$ for simplicity. Then $u_i$ satisfies the following equation", "$\left(I + \frac{n\gamma}{c\gamma + 1} B_i B_i^\top\right)^{-1} x = x - \frac{B_i D_i B_i^\top x}{c\gamma + 1}$.", "Then we have", "$B_i D_i B_i^\top x = \sum_{l \in L_i} d_{i,l} b_l b_l^\top x$."], "text_after_citation": ["which implies our desired result."], "citing_paper_content": {"title": "Stochastic Distributed Optimization Under Average Second-Order Similarity: Algorithms And Analysis", "abstract": "We study finite-sum distributed optimization problems with $n$ clients under popular $\delta$-similarity condition and $\mu$-strong convexity. We propose two new algorithms: SVRS and AccSVRS motivated by previous works. The non-accelerated SVRS method combines the techniques of gradient-sliding and variance reduction, which achieves superior communication complexity $\tilde{O}(n + \sqrt{n}\delta/\mu)$ compared to existing non-accelerated algorithms. Applying the framework proposed in Katyusha X [6], we also build a direct accelerated practical version named AccSVRS with totally smoothness-free $\tilde{O}(n + n^{3/4}\delta/\mu)$ communication complexity that improves upon existing algorithms on ill-conditioning cases.
Furthermore, we show a nearly matched lower bound to verify the tightness of our AccSVRS method."}, "cited_paper_content": {"title": "An Optimal Randomized Incremental Gradient Method", "abstract": "In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the summation of $m$ ($\\ge 1$) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization problems using a primal-dual termination criterion. Our major contribution is to develop a randomized primal-dual gradient (RPDG) method, which needs to compute the gradient of only one randomly selected smooth component at each iteration, but can possibly achieve better complexity than PDG in terms of the total number of gradient evaluations. More specifically, we show that the total number of gradient evaluations performed by RPDG can be ${\\cal O} (\\sqrt{m})$ times smaller, both in expectation and with high probability, than those performed by deterministic optimal first-order methods under favorable situations. We also show that the complexity of the RPDG method is not improvable by developing a new lower complexity bound for a general class of randomized methods for solving large-scale finite-sum convex optimization problems. Moreover, through the development of PDG and RPDG, we introduce a novel game-theoretic interpretation for these optimal methods for convex optimization."}, "keywords": ["k \u2208 L", "k+1"], "citation_intent": "background"} {"citing_id": "2304.14886v1", "cited_id": "1910.09328", "section_title": "A. 
Linear Systems With Gaussian Uncertainties", "citation": "Some notes regarding the synthesis: (a) It is possible to derive the Hessian as well (see #REFR ).", "text_before_citation": ["However, when optimizing for cases where $p_{\phi}^{f} \ll 1$, we want smaller steps so we do not overshoot.", "We introduce $p_{\phi}^{f}$ in the gradient descent update as a normalizing factor so that the learning rate is easier to obtain and can remain relatively constant.", "We also introduce a sign variable where $v_{dir} = 1$ if we search for $p_{\phi}^{s}$ and $v_{dir} = -1$ if we search for $p_{\phi}^{f}$.", "The gradient descent process for some parameter $\gamma$ of the system is then:", "EQUATION"], "text_after_citation": ["A possible improvement to the gradient descent is by incorporating the second-degree derivative for more accuracy.", "(b) The lower the probability of failure is, the longer the computation (more nestings).", "Keep in mind that $p_{\phi}^{f} = 0$ is not possible due to the assumption that the uncertainty is an unbounded Gaussian.", "Therefore, it is up to the designer to specify the stopping criteria at the requisite level of performance.", "(c) STL-based ESS can also attempt to find the initial trajectory."], "citing_paper_content": {"title": "Ensuring Reliable Robot Task Performance Through Probabilistic Rare-Event Verification And Synthesis", "abstract": "Providing guarantees on the safe operation of robots against edge cases is challenging as testing methods such as traditional Monte-Carlo require too many samples to provide reasonable statistics. Built upon recent advancements in rare-event sampling, we present a model-based method to verify if a robotic system satisfies a Signal Temporal Logic (STL) specification in the face of environment variations and sensor/actuator noises. Our method is efficient and applicable to both linear and nonlinear and even black-box systems with arbitrary, but known, uncertainty distributions.
For linear systems with Gaussian uncertainties, we exploit a feature to find optimal parameters that minimize the probability of failure. We demonstrate illustrative examples on applying our approach to real-world autonomous robotic systems."}, "cited_paper_content": {"title": "Integrals Over Gaussians Under Linear Domain Constraints", "abstract": "Integrals of linearly constrained multivariate Gaussian densities are a frequent problem in machine learning and statistics, arising in tasks like generalized linear models and Bayesian optimization. Yet they are notoriously hard to compute, and to further complicate matters, the numerical values of such integrals may be very small. We present an efficient black-box algorithm that exploits geometry for the estimation of integrals over a small, truncated Gaussian volume, and to simulate therefrom. Our algorithm uses the Holmes-Diaconis-Ross (HDR) method combined with an analytic version of elliptical slice sampling (ESS). Adapted to the linear setting, ESS allows for efficient, rejection-free sampling, because intersections of ellipses and domain boundaries have closed-form solutions. The key idea of HDR is to decompose the integral into easier-to-compute conditional probabilities by using a sequence of nested domains. Remarkably, it allows for direct computation of the logarithm of the integral value and thus enables the computation of extremely small probability masses. We demonstrate the effectiveness of our tailored combination of HDR and ESS on high-dimensional integrals and on entropy search for Bayesian optimization."}, "keywords": ["synthesis", "Hessian"], "citation_intent": "background"} {"citing_id": "2303.14865v1", "cited_id": "1608.03983", "section_title": "B.2. 
Implementation Details", "citation": "We set the learning rate as 0.005, and decay it to 0 following the cosine strategy #REFR .", "text_before_citation": ["We set the maximum iterations number to 1,000, and determine the L2 regularization weights following DECLIP's hyperparameter sweeping strategy #OTHEREFR .", "We do not report the results on the ImageNet-1K dataset, due to the high computational cost of conducting hyperparameter sweeping on the dataset. Non-linear Probe Task.", "The downstream task head consists of a fully-connected layer with GELU activation and a fully-connected layer.", "The extracted FDT features of images and questions are concatenated and then fed to the downstream task head to predict the answers. The encoders and FDT are frozen during the training.", "The downstream head is optimized by the AdamW optimizer #OTHEREFR ."], "text_after_citation": [], "citing_paper_content": {"title": "Revisiting Multimodal Representation In Contrastive Learning: From Patch And Token Embeddings To Finite Discrete Tokens", "abstract": "Contrastive learning-based vision-language pretraining approaches, such as CLIP, have demonstrated great success in many vision-language tasks. These methods achieve cross-modal alignment by encoding a matched image-text pair with similar feature embeddings, which are generated by aggregating information from visual patches and language tokens. However, direct aligning cross-modal information using such representations is challenging, as visual patches and text tokens differ in semantic levels and granularities. To alleviate this issue, we propose a Finite Discrete Tokens (FDT) based multimodal representation. FDT is a set of learnable tokens representing certain visualsemantic concepts. Both images and texts are embedded using shared FDT by first grounding multimodal inputs to FDT space and then aggregating the activated FDT representations. 
The matched visual and semantic concepts are enforced to be represented by the same set of discrete tokens by a sparse activation constraint. As a result, the granularity gap between the two modalities is reduced. Through both quantitative and qualitative analyses, we demonstrate that using FDT representations in CLIP-style models improves cross-modal alignment and performance in visual recognition and vision-language downstream tasks. Furthermore, we show that our method can learn more comprehensive representations, and the learned FDT capture meaningful cross-modal correspondence, ranging from objects to actions and attributes."}, "cited_paper_content": {"title": "Sgdr: Stochastic Gradient Descent With Warm Restarts", "abstract": "Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR"}, "keywords": ["learning rate"], "citation_intent": "method"} {"citing_id": "2304.05379v1", "cited_id": "2001.04057", "section_title": "Viii.
Transmission Power Analysis", "citation": "To find the minimum power requirement for a 2-Group IC-NOMA transmission that makes $R_{IN-NOMA,(n,f)}$ #REFR at least as good as $R_{IC}$, consider (25) and (31).", "text_before_citation": ["Since $l_{IC} \ge l_m$, we have $P_{saving} > 0$ #OTHEREFR", "3) CASE III ($l_n > l_f > l_m$): The 3-Group NOMA transmission of the scheme saves power compared to conventional IC to achieve equal information rate as given by (27).", "Let $P^{(2)}_{(n,f)}$ denote the power per 2-Group NOMA transmission in this case. Then, (16) is modified as", "$R^{(2)}_{IN-NOMA,(n,f)} = \log_2 \left( \frac{(1 + P^{(2)}_{(n,f)} g_f)(1 + \alpha_1 P^{(2)}_{(n,f)} g_n)}{1 + \alpha_1 P^{(2)}_{(n,f)} g_f} \right)$ (31)"], "text_after_citation": ["EQUATION", "From (32), we have $\zeta_{(n,f)} > 0 \Rightarrow P_{IC} > P^{(2)}_{(n,f)}$, i.e., to achieve a rate at least as good as that of conventional IC, the power per transmission in the conventional IC system is greater than the power per 2-Group NOMA transmission in this case. Then,", "EQUATION", "Let $P_{IC_n}$ denote the power per IC transmission in this case, then (4) is modified as"], "citing_paper_content": {"title": "Design And Analysis Of Index Codes For 3-Group Noma In Vehicular Adhoc Networks", "abstract": "Index coding (IC) is a source coding technique employed to improve spectral utilisation, where the source node aims to satisfy users' demands by making minimum transmissions. Non-orthogonal multiple access (NOMA) is integral to the radio access technique used in 5G networks. Index-coded NOMA (IC-NOMA) transmission scheme in Vehicular Adhoc Networks (VANETs) involves applying NOMA principles on index-coded data to avoid network congestion and to improve spectral efficiency compared to conventional IC systems.
In this work, a spectral efficient transmission scheme called 3-Group IC-NOMA is proposed, and an innovative index code design that fits with NOMA decoding principles to obtain improved spectral efficiency is developed. Through exhaustive analytical studies, we demonstrate that the proposed transmission scheme always supports higher rates than the conventional IC systems and requires less power to achieve an information rate at least as good as conventional IC systems."}, "cited_paper_content": {"title": "Efficient 3D Road Map Data Exchange For Intelligent Vehicles In Vehicular Fog Networks", "abstract": "Through connecting intelligent vehicles as well as the roadside infrastructure, the perception range of vehicles can be significantly extended, and hidden objects at blind spots can be efficiently detected and avoided. To realize this, accurate road map data must be downloaded in real time to these intelligent vehicles for navigation and localization purposes. Besides, the cloud must be updated with dynamic changes that happened in the road network. These involve the transmissions of high-definition 3D road map data for accurately representing the physical environments. In this work, we propose solutions under the fog computing architecture in a heterogeneous vehicular network to optimize data exchange among intelligent vehicles, the roadside infrastructure, as well as regional databases. Specifically, the efficiency of 3D road map data dissemination at roadside fog nodes is achieved by exploiting index coding techniques to reduce the overall data load, while opportunistic scheduling of heterogeneous transmissions can be done to judiciously manage network resources and minimize operating cost. In addition, 3D point cloud coding and hashing techniques are applied to expedite the updates of various dynamic changes in the network. 
We empirically evaluate the proposed solutions based on real-world mobility traces of vehicles and 3D LIght Detection And Ranging (LIDAR) data of city streets. The proposed system is also implemented in a multi-robotic testbed for practical evaluation."}, "keywords": ["2-Group IC-NOMA transmission", "R IC"], "citation_intent": "background"} {"citing_id": "2303.09824v3", "cited_id": "1705.10528", "section_title": "B. Reinforcement Learning", "citation": "Constrained Policy Optimization (CPO) #REFR is a pioneering general-purpose policy exploit algorithm for constrained reinforcement learning with guarantees for near-constraint satisfaction at each iteration.", "text_before_citation": ["#OTHEREFR train a DQN agent combined with DNN which outputs two discrete actions.", "The safety and agility of the ego vehicle can be balanced on-the-go, indicating that the RL agent can learn an adaptive behavior. Furthermore, Ronecker et al.", "#OTHEREFR propose a safer navigating method for IVs in highway scenarios by combining Deep Q-Networks from control theory.", "The proposed network is trained in simulation for central decision-making by proposing targets for a trajectory planner, which shows that the value-based RL can produce efficient and safe driving behavior in highway traffic scenarios.", "The security of end-to-end autonomous driving also raises significant apprehension."], "text_after_citation": ["Building on this, #OTHEREFR and #OTHEREFR present the Safety Gym benchmark suite and validate several constrained deep RL algorithms under constrained conditions. Li et al.", "#OTHEREFR introduce a risk awareness algorithm into DRL frameworks to learn a risk-aware driving decision policy for lane-changing tasks with the minimum expected risk. Chow et al.", "#OTHEREFR propose safe policy optimization algorithms that employ a Lyapunov-based approach #OTHEREFR to address CMDP problems.
Furthermore, Yang et al.", "#OTHEREFR construct a model-free safe RL algorithm that integrates policy and neural barrier certificate learning in a stepwise state constraint scenario. Mo et al.", "#OTHEREFR leverage Monte Carlo Tree Search to reduce unsafe behaviors on overtaking subtasks at highway scenarios. Fig. 7."], "citing_paper_content": {"title": "Motion Planning For Autonomous Driving: The State Of The Art And Future Perspectives", "abstract": "Thanks to the augmented convenience, safety advantages, and potential commercial value, Intelligent vehicles (IVs) have attracted wide attention throughout the world. Although a few autonomous driving unicorns assert that IVs will be commercially deployable by 2025, their implementation is still restricted to small-scale validation due to various issues, among which precise computation of control commands or trajectories by planning methods remains a prerequisite for IVs. This paper aims to review state-of-the-art planning methods, including pipeline planning and end-to-end planning methods. In terms of pipeline methods, a survey of selecting algorithms is provided along with a discussion of the expansion and optimization mechanisms, whereas in end-to-end methods, the training approaches and verification scenarios of driving tasks are points of concern. Experimental platforms are reviewed to facilitate readers in selecting suitable training and validation methods. Finally, the current challenges and future directions are discussed. The side-by-side comparison presented in this survey not only helps to gain insights into the strengths and limitations of the reviewed methods but also assists with system-level design choices. Index Terms-Pipeline planning, end-to-end planning, imitation learning, reinforcement learning, parallel learning. I.
INTRODUCTION Intelligent vehicles (IVs) have gained considerable attention from government, industry, academia, and the"}, "cited_paper_content": {"title": "Constrained Policy Optimization", "abstract": "For many applications of reinforcement learning it can be more convenient to specify both a reward function and constraints, rather than trying to design behavior through the reward function. For example, systems that physically interact with or around humans should satisfy safety constraints. Recent advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015, Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in high-dimensional control, but do not consider the constrained setting. We propose Constrained Policy Optimization (CPO), the first general-purpose policy search algorithm for constrained reinforcement learning with guarantees for near-constraint satisfaction at each iteration. Our method allows us to train neural network policies for high-dimensional control while making guarantees about policy behavior all throughout training. Our guarantees are based on a new theoretical result, which is of independent interest: we prove a bound relating the expected returns of two policies to an average divergence between them.
We demonstrate the effectiveness of our approach on simulated robot locomotion tasks where the agent must satisfy constraints motivated by safety."}, "keywords": ["Constrained Policy Optimization"], "citation_intent": "background"} {"citing_id": "2304.04664v1", "cited_id": "1803.08494", "section_title": "Computational Blocks", "citation": "Various forms of normalisation differ along which dimensions they calculate the reference statistics (see #REFR for a comparison).", "text_before_citation": ["Residual connections preserve high resolution information, facilitate the propagation of gradient information to early encoding layers, and thus avoid the vanishing gradient problem and enabled the training of very deep neural networks #OTHEREFR .", "As a result, residual blocks are now used in essentially all modern DL architectures.", "Besides data normalisation, discussed in Section 2.3, dedicated normalisation modules between (or after) the projection and the non-linearity within a processing block may be applied #OTHEREFR .", "Empirically, it has been found that normalisation leads to faster and more stable convergence (for a theoretical perspective see e.g. 
#OTHEREFR ).", "Normalisation re-scales all intermediate activations within a layer to have zero mean and unit variance."], "text_after_citation": ["Among the reviewed models, P22, K22, and E21, and H22 utilise normalisation over the channel and spatial dimensions, whereas R21 use normalisation over the batch dimension and W21 omit internal normalisation entirely.", "Immediately after the normalisation, an affine transformation with learned parameters is often applied to allow the activation statistics to be adapted to the data.", "Recent work in #OTHEREFR and #OTHEREFR suggests that learned affine parameters may achieve the beneficial effects of normalisation by themselves.", "In E21, for example, the parameters of the affine transformation are predicted by a secondary neural network based on the targeted forecast lead-time, which enables maximally effective, layer-specific lead-time encodings.", "The same principle has been used in previous work #OTHEREFR to condition on various kinds of auxiliary information, an approach that could hold promise in DLWP for integrating other relevant data sources."], "citing_paper_content": {"title": "Inductive Biases In Deep Learning Models For Weather Prediction", "abstract": "Deep learning has recently gained immense popularity in the Earth sciences as it enables us to formulate purely data-driven models of complex Earth system processes. Deep learning-based weather prediction (DLWP) models have made significant progress in the last few years, achieving forecast skills comparable to established numerical weather prediction (NWP) models with comparatively lesser computational costs. In order to train accurate, reliable, and tractable DLWP models with several millions of parameters, the model design needs to incorporate suitable inductive biases that encode structural assumptions about the data and modelled processes. When chosen appropriately, these biases enable faster learning and better generalisation to unseen data. 
Although inductive biases play a crucial role in successful DLWP models, they are often not stated explicitly and how they contribute to model performance remains unclear. Here, we review and analyse the inductive biases of six state-of-the-art DLWP models, involving a deeper look at five key design elements: input data, forecasting objective, loss components, layered design of the deep learning architectures, and optimisation methods. We show how the design choices made in each of the five design elements relate to structural assumptions. Given recent developments in the broader DL community, we anticipate that the future of DLWP will likely see a wider use of foundation models-large models pre-trained on big databases with self-supervised learning-combined with explicit physics-informed inductive biases that allow the models to provide competitive forecasts even at the more challenging subseasonal-to-seasonal scales."}, "cited_paper_content": {"title": "Group Normalization", "abstract": "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation. This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. 
On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. GN can be easily implemented by a few lines of code in modern libraries."}, "keywords": ["normalisation"], "citation_intent": "background"} {"citing_id": "2304.11130v1", "cited_id": "1308.4941", "section_title": "Introduction", "citation": "Figure 2 highlights the aim of our work at large which is inline with MITRE's vision on the standardization of CVE records, an extremely valuable task to the community #REFR .", "text_before_citation": ["\u2022 Experimental results.", "We approached the task as a ranking problem, using text similarity SBERT #OTHEREFR , optimized for sentences (vs.", "the document level) and the recently released ranking T5 #OTHEREFR model on the dataset.", "Our results were compared to the more general-built models BERT #OTHEREFR and RoBERTa #OTHEREFR . BM25 #OTHEREFR was used as the baseline model.", "Moreover, we used T5 as Seq2Seq generation #OTHEREFR model as well as a ranker."], "text_after_citation": ["The rest of this paper is organized as follows: Section 2 provides a summary of related work.", "Section 3 briefly summarizes the MITRE weaknesses types and how they are used.", "Section 4 explains the rationale behind annotating and maintaining the released cyber-security AI dataset. Section 5 introduces our methodology. 
Section 6 describes our experimentation and discusses the results.", "Finally, Section 7 concludes and points to possible directions for future work."], "citing_paper_content": {"title": "Automated Mapping Of Cve Vulnerability Records To Mitre Cwe Weaknesses", "abstract": "In recent years, a proliferation of cyber-security threats and diversity has been on the rise culminating in an increase in their reporting and analysis. To counter that, many non-profit organizations have emerged in this domain, such as MITRE and OSWAP, which have been actively tracking vulnerabilities, and publishing defense recommendations in standardized formats. As producing data in such formats manually is very time-consuming, there have been some proposals to automate the process. Unfortunately, a major obstacle to adopting supervised machine learning for this problem has been the lack of publicly available specialized datasets. Here, we aim to bridge this gap. In particular, we focus on mapping CVE records into MITRE CWE Weaknesses, and we release to the research community a manually annotated dataset of 4,012 records for this task. With a human-in-the-loop framework in mind, we approach the problem as a ranking task and aim to incorporate reinforced learning to make use of the human feedback in future work. Our experimental results using fine-tuned deep learning models, namely Sentence-BERT and rankT5, show sizable performance gains over BM25, BERT, and RoBERTa, which demonstrates the need for an architecture capable of good semantic understanding for this task."}, "cited_paper_content": {"title": "Automatic Labeling For Entity Extraction In Cyber Security", "abstract": "Timely analysis of cyber-security information necessitates automated information extraction from unstructured text. 
While state-of-the-art extraction methods produce extremely accurate results, they require ample training data, which is generally unavailable for specialized applications, such as detecting security related entities; moreover, manual annotation of corpora is very costly and often not a viable solution. In response, we develop a very precise method to automatically label text from several data sources by leveraging related, domain-specific, structured data and provide public access to a corpus annotated with cyber-security entities. Next, we implement a Maximum Entropy Model trained with the average perceptron on a portion of our corpus (~750,000 words) and achieve near perfect precision, recall, and accuracy, with training times under 17 seconds."}, "keywords": ["CVE records"], "citation_intent": "background"} {"citing_id": "2304.08204v1", "cited_id": "1612.06890", "section_title": "Rq2. Effects On Geometric Primitives And Concepts: How Well Does Lbs Represent Geometric Primitives And Understand Their Conceptual Relationships?", "citation": "We validate whether LBS can reflect the local geometric information of each object in a synthetic photo dataset consisting of multiple objects, using the CLEVR dataset #REFR .", "text_before_citation": ["To evaluate whether LBS is suitable for representing geometric primitives and concepts, we perform a classification task on the Geoclidean dataset #OTHEREFR .", "The Geoclidean dataset consists of realized images from a concept of Euclidean geometry, e.g., black parallel lines on a white background.", "Geoclidean is divided into two categories: Geoclidean-Elements and Geoclidean-Constraints, visualized in Fig. 7 on Appendix B.", "By providing a limited training set consisting of only 10 images per concept, we evaluate whether LBS can effectively learn the high-level relationships between each primitive and generalize them across different examples by classifying the concept of the test image.
RQ3.", "Local geometric information and spatial reasoning: How effectively does LBS reflect local geometric information and extend it to spatial reasoning tasks?"], "text_after_citation": ["We train our model with very limited descriptions, where the label for the entire scene is provided as the rightmost object or without any descriptions at all.", "We validate the effectiveness of our representation by evaluating its ability to successfully classify attributes that are not provided as labels, such as determining the color of the leftmost object.", "Additionally, we test its ability to perform simple spatial reasoning, such as shifting the rightmost object and inferring the attribute of the current rightmost object. RQ4.", "Domain transfer: Can the geometric concepts of LBS trained in a specific domain be extended to other domains?", "To investigate whether the learned representation within a specific image domain provides meaningful geometric information across other domains, we evaluate the model by shifting the distribution from the STL-10 [11] dataset to CLEVR and vice versa."], "citing_paper_content": {"title": "Learning Geometry-Aware Representations By Sketching", "abstract": "Understanding geometric concepts, such as distance and shape, is essential for understanding the real world and also for many vision tasks. To incorporate such information into a visual representation of a scene, we propose learning to represent the scene by sketching, inspired by human behavior. Our method, coined Learning by Sketching (LBS), learns to convert an image into a set of colored strokes that explicitly incorporate the geometric information of the scene in a single inference step without requiring a sketch dataset. A sketch is then generated from the strokes where CLIP-based perceptual loss maintains a semantic similarity between the sketch and the image. 
We show theoretically that sketching is equivariant with respect to arbitrary affine transformations and thus provably preserves geometric information. Experimental results show that LBS substantially improves the performance of object attribute classification on the unlabeled CLEVR dataset, domain transfer between CLEVR and STL-10 datasets, and for diverse downstream tasks, confirming that LBS provides rich geometric information."}, "cited_paper_content": {"title": "Clevr: A Diagnostic Dataset For Compositional Language And Elementary Visual Reasoning", "abstract": "When building artificial intelligence systems that can reason and answer questions about visual data, we need diagnostic tests to analyze our progress and discover short-comings. Existing benchmarks for visual question answering can help, but have strong biases that models can exploit to correctly answer questions without reasoning. They also conflate multiple sources of error, making it hard to pinpoint model weaknesses. We present a diagnostic dataset that tests a range of visual reasoning abilities. It contains minimal biases and has detailed annotations describing the kind of reasoning each question requires. We use this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations."}, "keywords": ["local geometric information", "synthetic photo dataset"], "citation_intent": "method"} {"citing_id": "2304.14736v1", "cited_id": "1604.01685", "section_title": "Experiments", "citation": "Semantic segmentation results with different sensor resolutions on the Cityscapes dataset #REFR . 
For all resolutions, the learned rectangular layout performs best.", "text_before_citation": ["MNIST classification To illustrate the principle of our end-to-end sensor layout optimization, we start with a toy example and optimize the layout for hand-written digit recognition on MNIST #OTHEREFR with a sensor size of only 4 \u00d7 4 instead of the original 28 \u00d7 28 pixels.", "The digits in MNIST are always centered, so the hypothesis is that an optimized layout puts smaller pixels in the middle in order to capture the higher information density there.", "We apply the curvilinear layout \u03c6 curv whose parameters we initialize with 0, i.e., Table 2 ."], "text_after_citation": [], "citing_paper_content": {"title": "Differentiable Sensor Layouts For End-To-End Learning Of Task-Specific Camera Parameters", "abstract": "The success of deep learning is frequently described as the ability to train all parameters of a network on a specific application in an end-to-end fashion. Yet, several design choices on the camera level, including the pixel layout of the sensor, are considered as pre-defined and fixed, and high-resolution, regular pixel layouts are considered to be the most generic ones in computer vision and graphics, treating all regions of an image as equally important. While several works have considered non-uniform, e.g., hexagonal or foveated, pixel layouts in hardware and image processing, the layout has not been integrated into the end-to-end learning paradigm so far. In this work, we present the first truly end-to-end trained imaging pipeline that optimizes the size and distribution of pixels on the imaging sensor jointly with the parameters of a given neural network on a specific task. We derive an analytic, differentiable approach for the sensor layout parameterization that allows for task-specific, locally varying pixel resolutions.
We present two pixel layout parameterization functions: rectangular and curvilinear grid shapes that retain a regular topology. We provide a drop-in module that approximates sensor simulation given existing high-resolution images to directly connect our method with existing deep learning models. We show that network predictions benefit from learnable pixel layouts for two different downstream tasks, classification and semantic segmentation."}, "cited_paper_content": {"title": "The Cityscapes Dataset For Semantic Urban Scene Understanding", "abstract": "Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. 
Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark."}, "keywords": ["learned rectangular layout", "Semantic segmentation"], "citation_intent": "background"} {"citing_id": "2303.18223v7", "cited_id": "2001.08361", "section_title": "Background For Llms", "citation": "As discussed in previous parts, there exists an evident scaling effect in Transformer language models: larger model/data sizes and more training compute typically lead to an improved model capacity #REFR 34] .", "text_before_citation": ["Key Techniques for LLMs.", "It has been a long way that LLMs evolve into the current state: general and capable learners.", "In the development process, a number of important techniques are proposed, which largely improve the capacity of LLMs.", "Here, we briefly list several important techniques that (potentially) lead to the success of LLMs, as follows.", "\u2022 Scaling."], "text_after_citation": ["As two representative models, GPT-3 and PaLM explored the scaling limits by increasing the model size to 175B and 540B, respectively.", "Furthermore, since compute budget is usually limited, scaling laws can be employed to conduct a more compute-efficient allocation of the compute resources.", "For example, Chinchilla (with more training tokens) outperforms its counterpart model Gopher (with a larger model size) by increasing the data scale with the same compute budget [34] .", "While, it should be noted that data scaling should be with careful cleaning process, since the quality of pre-training data plays a key role in the model capacity.", "\u2022 Training."], "citing_paper_content": {"title": "A Survey Of Large Language Models", "abstract": "Ever since the Turing Test was proposed in the 1950s, humans have explored the mastering of language intelligence by machine. 
Language is essentially a complex, intricate system of human expressions governed by grammatical rules. It poses a significant challenge to develop capable artificial intelligence (AI) algorithms for comprehending and grasping a language. As a major approach, language modeling has been widely studied for language understanding and generation in the past two decades, evolving from statistical language models to neural language models. Recently, pre-trained language models (PLMs) have been proposed by pretraining Transformer models over large-scale corpora, showing strong capabilities in solving various natural language processing (NLP) tasks. Since researchers have found that model scaling can lead to an improved model capacity, they further investigate the scaling effect by increasing the parameter scale to an even larger size. Interestingly, when the parameter scale exceeds a certain level, these enlarged language models not only achieve a significant performance improvement, but also exhibit some special abilities (e.g., in-context learning) that are not present in small-scale language models (e.g., BERT). To distinguish language models of different parameter scales, the research community has coined the term large language models (LLM) for the PLMs of significant size (e.g., containing tens or hundreds of billions of parameters). Recently, the research on LLMs has been largely advanced by both academia and industry, and a remarkable milestone is the launch of ChatGPT (a powerful AI chatbot developed based on LLMs), which has attracted widespread attention from society. The technical evolution of LLMs has been making an important impact on the entire AI community, which would revolutionize the way we develop and use AI algorithms. Considering this rapid technical progress, in this survey, we review the recent advances of LLMs by introducing the background, key findings, and mainstream techniques.
In particular, we focus on four major aspects of LLMs, namely pre-training, adaptation tuning, utilization, and capacity evaluation. Besides, we also summarize the available resources for developing LLMs and discuss the remaining issues for future directions. This survey provides an up-to-date review of the literature on LLMs, which can be a useful resource for both researchers and engineers."}, "cited_paper_content": {"title": "Scaling Laws For Neural Language Models", "abstract": "We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence."}, "keywords": ["Transformer language models"], "citation_intent": "background"} {"citing_id": "2304.00759v2", "cited_id": "1905.08114", "section_title": "B. 
Heterogeneous Models", "citation": "The basic ideas of data-free KD are to optimize noise inputs to minimize the distance to prior knowledge #REFR , and Chen et al.", "text_before_citation": ["In RHFL #OTHEREFR , a server calculates the weights of clients by the symmetric cross-entropy loss function, and clients distilled knowledge from the unlabeled dataset.", "FCCL #OTHEREFR computed a cross-correlation matrix also based on the unlabeled public dataset.", "2) Data-free knowledge distillation: However, the former methods using KD in FL acquire a public dataset.", "The server may not collect sufficient data due to data availability and privacy concerns.", "In contrast to the aforementioned methods, data-free KD is a novel approach to complete the knowledge distillation process without the training data."], "text_after_citation": ["#OTHEREFR train The process for FedIN is described as follows.", "First, the extractor and the classifier are updated by the averaged weights w_e and w_c from the server.", "The second step of the clients is training their models from the local private dataset and completing the IN training for the feature inputs and outputs (s_in, s_out) from the server.", "Generative Adversarial Networks (GANs) #OTHEREFR to generate training data for the entire KD process, utilizing the knowledge distilled from the teacher model.", "To free the limitation from a public dataset, a few studies consider data-free KD in FL."], "citing_paper_content": {"title": "Fedin: Federated Intermediate Layers Learning For Model Heterogeneity", "abstract": "Federated learning (FL) facilitates edge devices to cooperatively train a global shared model while maintaining the training data locally and privately. However, a common but impractical assumption in FL is that the participating edge devices possess the same required resources and share an identical global model architecture.
In this study, we propose a novel FL method called Federated Intermediate Layers Learning (FedIN), supporting heterogeneous models without utilizing any public dataset. The training models in FedIN are divided into three parts, including an extractor, the intermediate layers, and a classifier. The model architectures of the extractor and classifier are the same in all devices to maintain the consistency of the intermediate layer features, while the architectures of the intermediate layers can vary for heterogeneous devices according to their resource capacities. To exploit the knowledge from features, we propose IN training, training the intermediate layers in line with the features from other clients. Additionally, we formulate and solve a convex optimization problem to mitigate the gradient divergence problem induced by the conflicts between the IN training and the local training. The experimental results show that FedIN achieves the best performance in the heterogeneous model environment compared with the state-of-the-art algorithms. Furthermore, our ablation study demonstrates the effectiveness of IN training and the solution to the convex optimization problem."}, "cited_paper_content": {"title": "Zero-Shot Knowledge Distillation In Deep Networks", "abstract": "Knowledge distillation deals with the problem of training a smaller model (Student) from a high-capacity source model (Teacher) so as to retain most of its performance. Existing approaches use either the training data or meta-data extracted from it in order to train the Student. However, accessing the dataset on which the Teacher has been trained may not always be feasible if the dataset is very large or it poses privacy or safety concerns (e.g., bio-metric or medical data). Hence, in this paper, we propose a novel data-free method to train the Student from the Teacher.
Without even using any meta-data, we synthesize the Data Impressions from the complex Teacher model and utilize these as surrogates for the original training data samples to transfer its learning to Student via knowledge distillation. We, therefore, dub our method \"Zero-Shot Knowledge Distillation\" and demonstrate that our framework results in competitive generalization performance as achieved by distillation using the actual training data samples on multiple benchmark datasets."}, "keywords": ["prior knowledge"], "citation_intent": "background"} {"citing_id": "2303.14770v1", "cited_id": "1311.1249", "section_title": "Corpora Coverage", "citation": "We used the SDSL library #REFR to implement our corpus indexer. We index each corpus separately.", "text_before_citation": ["(2016) and use the FM-Index #OTHEREFR that utilises the text compressibility vi the Burrows-Wheeler transformation (BWT) #OTHEREFR of the text. The BWT is defined as, BWT", "[i] = [SA[i] \u2212 1 mod |T |].", "Searching for a sequence in BWT is done in reverse order and requires O(|u| log |\u03c3|).", "For more details on BWT and reverse searching, refer to #OTHEREFR .", "The CSA is at the core of Koala's index and search backbone."], "text_after_citation": ["Once a corpus is indexed, its constructed index sits on disk and could be queried through the Koala web interface (introduced shortly).", "Each query is launched into the indexed collection of corpora and returns the hit counts of the query in the corresponding corpus.", "Table 1 reports the time and memory usage for construction of indexes."], "citing_paper_content": {"title": "Koala: An Index For Quantifying Overlaps With Pre-Training Corpora", "abstract": "In very recent years more attention has been placed on probing the role of pre-training data in Large Language Models (LLMs) downstream behaviour. Despite the importance, there is no public tool that supports such analysis of pre-training corpora at large scale. 
To help research in this space, we launch Koala, a searchable index over large pretraining corpora using compressed suffix arrays with highly efficient compression rate and search support. In its first release we index the public proportion of OPT 175B pretraining data. Koala provides a framework to do forensic analysis on the current and future benchmarks as well as to assess the degree of memorization in the output from the LLMs. Koala is available for public use at https://koala-index.erc.monash.edu/. We plan to extend our coverage of pre-training corpora."}, "cited_paper_content": {"title": "From Theory To Practice: Plug And Play With Succinct Data Structures", "abstract": "Engineering efficient implementations of compact and succinct structures is time-consuming and challenging, since there is no standard library of easy-to-use, highly optimized, and composable components. One consequence is that measuring the practical impact of new theoretical proposals is difficult, since older baseline implementations may not rely on the same basic components, and reimplementing from scratch can be time-consuming. In this paper we present a framework for experimentation with succinct data structures, providing a large set of configurable components, together with tests, benchmarks, and tools to analyze resource requirements. We demonstrate the functionality of the framework by recomposing two succinct solutions for top-k document retrieval which can operate on both character and integer alphabets."}, "keywords": ["corpus indexer"], "citation_intent": "method"} {"citing_id": "2304.02313v1", "cited_id": "1609.08144", "section_title": "Feature Extraction", "citation": "For the text modality, we utilize WordPiece #REFR to generate a vocabulary, and randomly initialize a matrix E \u2208 R n E \u00d7d E for word embeddings, where n E denotes the length of the vocabulary, d E represents the feature dimension.
E is optimized during training.", "text_before_citation": ["We extract features from multimodal information as well as MBTI personality information.", "For V, we use the same feature extraction method as HERO #OTHEREFR .", "That is, we employ pre-trained 2D ResNet-152 #OTHEREFR and 3D SlowFast #OTHEREFR to extract video feature expressions V 2D \u2208 R n V \u00d7d 2D and V 3D \u2208 R n V \u00d7d 3D .", "To acquire the final feature V \u2208 R n V \u00d7d V , we concatenate V 2D and V 3D and lower the dimension through a linear layer, where n V represents the length of the image frame sequence, and d V represents the feature dimension."], "text_after_citation": ["In particular, features obtained from various information are expressed as follows: D \u2208 R n D \u00d7l D \u00d7d E for dialogue information, B \u2208 R n B \u00d7l B \u00d7d E for behavior description, and A \u2208 R n A \u00d7l A \u00d7d E for multiple choice option.", "Here, n D denotes the number of dialogue utterances, n B expresses the number of behavior descriptions, n A represents the number of multiple choice options, l D , l B and l A signify the sentence length, respectively.", "To represent P, we concatenate the name tags and the MBTI personalities of characters into phrases and use E to retrieve the personality feature P \u2208 R n P \u00d7l P \u00d7d E .", "Here, n P denotes the number of relevant characters in VC, and l P represents the phrase length.", "To enhance the personality feature, we concatenate the initial personality feature with the feature through the Self-Attention module to obtain final P C by"], "citing_paper_content": {"title": "Personality-Aware Human-Centric Multimodal Reasoning: A New Task", "abstract": "Multimodal reasoning, an area of artificial intelligence that aims at making inferences from multimodal signals such as vision, language and speech, has drawn more and more attention in recent years.
People with different personalities may respond differently to the same situation. However, such individual personalities were ignored in the previous studies. In this work, we introduce a new Personality-aware Human-centric Multimodal Reasoning (Personality-aware HMR) task, and accordingly construct a new dataset based on The Big Bang Theory television shows, to predict the behavior of a specific person at a specific moment, given the multimodal information of its past and future moments. The Myers-Briggs Type Indicator (MBTI) was annotated and utilized in the task to represent individuals' personalities. We benchmark the task by proposing three baseline methods, two were adapted from the related tasks and one was newly proposed for our task. The experimental results demonstrate that personality can effectively improve the performance of human-centric multimodal reasoning. To further solve the lack of personality annotation in real-life scenes, we introduce an extended task called Personality-predicted HMR, and propose the corresponding methods, to predict the MBTI personality at first, and then use the predicted personality to help multimodal reasoning. The experimental results show that our method can accurately predict personality and achieves satisfactory multimodal reasoning performance without relying on personality annotations."}, "cited_paper_content": {"title": "Google'S Neural Machine Translation System: Bridging The Gap Between Human And Machine Translation", "abstract": "Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems. Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference. Also, most NMT systems have difficulty with rare words. 
These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential. In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers using attention and residual connections. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units (\"wordpieces\") for both input and output. This method provides a good balance between the flexibility of \"character\"-delimited models and the efficiency of \"word\"-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and uses a coverage penalty, which encourages generation of an output sentence that is most likely to cover all the words in the source sentence. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves competitive results to state-of-the-art. Using a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% compared to Google's phrase-based production system."}, "keywords": ["feature dimension", "word embeddings"], "citation_intent": "method"} {"citing_id": "2303.08594v2", "cited_id": "1604.01685", "section_title": "B. Additional Datasets B.1. 
Cityscapes", "citation": "Cityscapes #REFR is a high-resolution (1024\u00d72048 pixels) street-view dataset that contains 2975 training, 500 validation, and 1525 testing images.", "text_before_citation": [], "text_after_citation": ["We evaluate the performance of FastInst in terms of instance segmentation AP over eight semantic classes of the dataset. Training settings.", "We use a batch size of 16 and train the model for 90K iterations.", "We set the initial learning rate as 0.0001 and drop it by multiplying 0.1 at 0.9 and 0.95 fractions of the total number of training steps.", "During training, we randomly resize the image to a shorter edge from 800 to 1024 pixels with a step of 32 pixels, followed by a crop size of 512\u00d71024.", "During inference, we operate on the full image with a resolution of 1024\u00d72048. Results."], "citing_paper_content": {"title": "Fastinst: A Simple Query-Based Model For Real-Time Instance Segmentation", "abstract": "Recent attention in instance segmentation has focused on query-based models. Despite being non-maximum suppression (NMS)-free and end-to-end, the superiority of these models on high-accuracy real-time benchmarks has not been well demonstrated. In this paper, we show the strong potential of query-based models on efficient instance segmentation algorithm designs. We present FastInst, a simple, effective query-based framework for real-time instance segmentation. FastInst can execute at a real-time speed (i.e., 32.5 FPS) while yielding an AP of more than 40 (i.e., 40.5 AP) on COCO test-dev without bells and whistles. Specifically, FastInst follows the meta-architecture of recently introduced Mask2Former. Its key designs include instance activation-guided queries, dual-path update strategy, and ground truth mask-guided learning, which enable us to use lighter pixel decoders, fewer Transformer decoder layers, while achieving better performance. 
The experiments show that FastInst outperforms most state-of-the-art real-time counterparts, including strong fully convolutional baselines, in both speed and accuracy. Code can be found at https://github.com/junjiehe96/FastInst. Recently, with the success of DETR [4] in object detection, query-based single-stage instance segmentation methods [9, 10, 26, 44] have emerged. Instead of convolution, they exploit the versatile and powerful attention mechanism [40] combined with a sequence of learnable queries to infer the object class and segmentation mask. For example, Mask2Former [9] simplifies the workflow of instance segmentation by adding a pixel decoder and a masked-attention Transformer decoder on top of a backbone. Unlike previous methods [16, 43], Mask2Former does not require additional handcrafted components, such as training target assignment and NMS post-processing. While being simple, Mask2Former has its own issues: (1) it requires a large number of decoder layers to decode the object queries since its queries are learned static and need a lengthy process to refine; (2) it relies upon a heavy pixel decoder, e.g., multi-scale deformable attention Transformer (MSDeformAttn) [51], because its object segmentation mask straightforwardly depends on the output of the pixel decoder, which"}, "cited_paper_content": {"title": "The Cityscapes Dataset For Semantic Urban Scene Understanding", "abstract": "Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling.
Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark."}, "keywords": ["Cityscapes"], "citation_intent": "background"} {"citing_id": "2303.12417v2", "cited_id": "1903.11027", "section_title": "Conclusion", "citation": "The zero-shot transfer results on various indoor and outdoor benchmarks validate the ability of CLIP #REFR for 3D open-world understanding.", "text_before_citation": ["In this paper, we present a novel contrastive languageimage-point cloud pretraining framework, CLIP 2 , which consists of a triplet proxy collection scheme and a crossmodal contrastive learning mechanism.", "Based on the observation that realistic scenarios contain a massive amount of open-world objects, we innovatively propose to collect triplet proxies from realistic scenes as pretraining data.", "We then conduct cross-modal contrastive alignment across language, image and point cloud feature space to learn transferable 3D representation."], "text_after_citation": [], "citing_paper_content": {"title": "Clip 2 : Contrastive Language-Image-Point Pretraining From Real-World Point Cloud Data", "abstract": "Indoor Scene Outdoor Scene Figure 1. Illustration of our open-world recognition results. Benefiting from our CLIP 2 , the 3D representation is aligned to the openworld language representation, which enables flexible zero-shot transfer. 
Best viewed in colors."}, "cited_paper_content": {"title": "Nuscenes: A Multimodal Dataset For Autonomous Driving", "abstract": "Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first published dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. 
Data, development kit and more information are available online at this http URL.", "keywords": ["3D open-world understanding"], "citation_intent": "result"} {"citing_id": "2304.09433v2", "cited_id": "2003.10555", "section_title": "Closedie", "citation": "We study the effectiveness of span-extractor models, which are commonly used in QA systems to extract the information that is relevant to a user query from a provided document context #REFR .", "text_before_citation": [], "text_after_citation": ["Given the ground truth attributes, we evaluate the ability of these models to extract their values from the relevant paragraphs.", "We evaluate the DebertaV3 Large model fine-tuned on the Squad 2.0 dataset, which achieves 90.8 F1 on the Squad 2.0 dev set in Table 6 .", "We find our EVAPORATE function generation approach (Table 1 ) significantly outperforms this pre-trained QA model on ClosedIE in all settings, over text and HTML documents."], "citing_paper_content": {"title": "Language Models Enable Simple Systems For Generating Structured Views Of Heterogeneous Data Lakes", "abstract": "A long-standing goal of the data management community is to develop general, automated systems that ingest semi-structured documents and output queryable tables without human effort or domain-specific customization. Given the sheer variety of potential documents, state-of-the-art systems make simplifying assumptions and use domain-specific training. In this work, we ask whether we can maintain generality by using large language models (LLMs). LLMs, which are pretrained on broad data, can perform diverse downstream tasks simply conditioned on natural language task descriptions. We propose and evaluate EVAPORATE, a simple, prototype system powered by LLMs. We identify two fundamentally different strategies for implementing this system: prompt the LLM to directly extract values from documents or prompt the LLM to synthesize code that performs the extraction.
Our evaluations show a cost-quality tradeoff between these two approaches. Code synthesis is cheap, but far less accurate than directly processing each document with the LLM. To improve quality while maintaining low cost, we propose an extended code synthesis implementation, EVAPORATE-CODE+, which achieves better quality than direct extraction. Our key insight is to generate many candidate functions and ensemble their extractions using weak supervision. EVAPORATE-CODE+ not only outperforms the state-of-the-art systems, but does so using a sublinear pass over the documents with the LLM. This equates to a 110\u00d7 reduction in the number of tokens the LLM needs to process, averaged across 16 real-world evaluation settings of 10k documents each."}, "cited_paper_content": {"title": "Electra: Pre-Training Text Encoders As Discriminators Rather Than Generators", "abstract": "Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out.
As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute."}, "keywords": ["span-extractor models"], "citation_intent": "method"} {"citing_id": "2304.14509v1", "cited_id": "2003.08061", "section_title": "Xai In The Field Of Biometrics And Pad", "citation": "The authors in #REFR proposed a method that includes two sub-networks, a spatial network, and a temporal network.", "text_before_citation": ["#OTHEREFR proposed a novel deep learning framework for oneclass classification, called Deep SVDD (Support Vector Data Description), which learns a high-dimensional data representation using a deep neural network. 
Zee et al.", "#OTHEREFR presented in their work the interpretability potential of a Siamese CNN to assist humans in hard prediction tasks i.e.", "the authors used Class Activation Maps (CAM) primarily to know where to look in the subject images to correctly understand the decisions made by black box CNN models.", "In #OTHEREFR , the authors used the depth map and the Remote Photoplethysmography (rPPG) signal as the auxiliary supervision to enhance the effectiveness of the face anti-spoofing method using a Residual Neural Network.", "In recent years, authors have been trying to exploit the methodologies of explainability and interpretability of ML models in the field of biometrics, in particular face PAD, by making attempts such as depth map #OTHEREFR [39], #OTHEREFR , producing saliency maps for CNN models #OTHEREFR , #OTHEREFR or some relevant works on estimating the patterns that define a spoofed sample #OTHEREFR , #OTHEREFR ."], "text_after_citation": ["The spatial network processes the input image to extract spatial features, while the temporal network processes a sequence of images to capture temporal features.", "In #OTHEREFR , the authors proposed a Central Difference Convolutional Network (CDCN) architecture that is specifically designed for this task.", "The CDCN is a deep neural network that accepts a face image as input and outputs a probability score indicating whether the input image is genuine or spoofed.", "The authors in #OTHEREFR proposed an approach consisting of two stages: the meta-training stage and the meta-testing stage.", "In the meta-training stage, a meta-learner is trained to learn a good initialization for the feature extractor network that is used to extract features from face images."], "citing_paper_content": {"title": "An Efficient Ensemble Explainable Ai (Xai) Approach For Morphed Face Detection", "abstract": "The extensive utilization of biometric authentication systems has prompted attackers/imposters to forge user identity based on morphed images.
In this attack, a synthetic image is produced and merged with a genuine one. Next, the resultant image is used for authentication. Numerous deep neural convolutional architectures have been proposed in literature for face Morphing Attack Detection (MADs) to prevent such attacks and lessen the risks associated with them. Although deep learning models achieve optimal results in terms of performance, it is difficult to understand and analyse these networks since they are black box/opaque in nature. As a consequence, incorrect judgments may be made. There is, however, a dearth of literature that explains decision-making methods of black box deep learning models for biometric Presentation Attack Detection (PADs) or MADs that can aid the biometric community to have trust in deep learning-based biometric systems for identification and authentication in various security applications such as border control, criminal database establishment etc. In this work, we present a novel visual explanation approach named Ensemble XAI integrating Saliency maps, Class Activation Maps (CAM) and Gradient-CAM (Grad-CAM) to provide a more comprehensive visual explanation for a deep learning prognostic model (EfficientNet-B1) that we have employed to predict whether the input presented to a biometric authentication system is morphed or genuine. The experiments have been performed on three publicly available datasets namely Face Research Lab London Set, Wide Multi-Channel Presentation Attack (WMCA), and Makeup Induced Face Spoofing (MIFS). The experimental evaluations affirm that the resultant visual explanations highlight more fine-grained details of image features/areas focused by EfficientNet-B1 to reach decisions along with appropriate reasoning."}, "cited_paper_content": {"title": "Deep Spatial Gradient And Temporal Depth Learning For Face Anti-Spoofing", "abstract": "Face anti-spoofing is critical to the security of face recognition systems.
Depth supervised learning has been proven as one of the most effective methods for face anti-spoofing. Despite the great success, most previous works still formulate the problem as a single-frame multi-task one by simply augmenting the loss with depth, while neglecting the detailed fine-grained information and the interplay between facial depths and moving patterns. In contrast, we design a new approach to detect presentation attacks from multiple frames based on two insights: 1) detailed discriminative clues (e.g., spatial gradient magnitude) between living and spoofing face may be discarded through stacked vanilla convolutions, and 2) the dynamics of 3D moving faces provide important clues in detecting the spoofing faces. The proposed method is able to capture discriminative details via Residual Spatial Gradient Block (RSGB) and encode spatio-temporal information from Spatio-Temporal Propagation Module (STPM) efficiently. Moreover, a novel Contrastive Depth Loss is presented for more accurate depth supervision. To assess the efficacy of our method, we also collect a Double-modal Anti-spoofing Dataset (DMAD) which provides actual depth for each sample. The experiments demonstrate that the proposed approach achieves state-of-the-art results on five benchmark datasets including OULU-NPU, SiW, CASIA-MFSD, Replay-Attack, and the new DMAD. 
Codes will be available at https://github.com/clks-wzz/FAS-SGTD."}, "keywords": ["spatial network"], "citation_intent": "method"} {"citing_id": "2303.11040v1", "cited_id": "1807.01697", "section_title": "Discussion And Conclusion", "citation": "By conducting large-scale experiments on diverse 3D object detection models under corruptions, we draw some important findings, as summarized below: 1) In general, the corruption robustness of 3D object detection models is largely correlated with their clean performance, similar to the observation in #REFR .", "text_before_citation": ["In this paper, we systematically design 27 types of common corruptions in 3D object detection to benchmark corruption robustness of existing 3D object detectors.", "We establish three corruption robustness benchmarks-KITTI-C, nuScenes-C, and Waymo-C by synthesizing the corruptions on public datasets."], "text_after_citation": ["2) Among all corruption types, motion-level ones degrade the model performance most, which pose a significant threat to autonomous driving.", "Weather-level corruptions are also influential to models trained on normal weather.", "3) Among all 3D detectors, LiDAR-camera fusion models have better corruption robustness, especially under those that apply distortions to only one modality.", "However, they are also exposed to corruptions from both sensors, leading to degraded performance in this case.", "Besides, there is a trade-off between robustness under image corruptions and point cloud corruptions of fusion models."], "citing_paper_content": {"title": "Benchmarking Robustness Of 3D Object Detection To Common Corruptions In Autonomous Driving", "abstract": "3D object detection is an important task in autonomous driving to perceive the surroundings. 
Despite the excellent performance, the existing 3D detectors lack the robustness to real-world corruptions caused by adverse weather, sensor noise, etc., provoking concerns about the safety and reliability of autonomous driving systems. To comprehensively and rigorously benchmark the corruption robustness of 3D detectors, in this paper we design 27 types of common corruptions for both LiDAR and camera inputs considering real-world driving scenarios. By synthesizing these corruptions on public datasets, we establish three corruption robustness benchmarks-KITTI-C, nuScenes-C, and Waymo-C. Then, we conduct large-scale experiments on 24 diverse 3D object detection models to evaluate their corruption robustness. Based on the evaluation results, we draw several important findings, including: 1) motion-level corruptions are the most threatening ones that lead to significant performance drop of all models; 2) LiDAR-camera fusion models demonstrate better robustness; 3) camera-only models are extremely vulnerable to image corruptions, showing the indispensability of LiDAR point clouds. We release the benchmarks and codes at https://github.com/kkkcx/3D_Corruptions_AD. We hope that our benchmarks and findings can provide insights for future research on developing robust 3D object detection models."}, "cited_paper_content": {"title": "Benchmarking Neural Network Robustness To Common Corruptions And Surface Variations", "abstract": "In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations.
We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. Together our benchmarks may aid future work toward networks that robustly generalize."}, "keywords": ["3D object detection", "corruption robustness"], "citation_intent": "result"} {"citing_id": "2304.01401v1", "cited_id": "1505.04597", "section_title": "I. Introduction", "citation": "One popular backbone of the deep learning model for segmentation is U-Net #REFR , which is a general CNN model with an encoder and decoder structure ( Fig. 1(a) ).", "text_before_citation": ["Medical image segmentation aims to use machine learning models (e.g., Convolutional Neural Networks or CNNs for short) to automatically segment the target regions (organs or lesions) from the input medical images with different modalities #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR ."], "text_after_citation": ["The encoder path decomposes the input image from local to global deep features where the spatial size is gradually reduced by using the max-pooling operation.", "As the layer (Sheng He and Yangming Ou are the corresponding authors.) S. He, R. Bao, P. 
Grant and Y.", "Ou are with the Boston Children's Hospital and Harvard Medical School, Harvard University, 300 Longwood Ave., Boston, MA, USA.", "E-mail: heshengxgd@gmail.com; rina.bao@childrens.harvard.edu, ellen.grant@childrens.harvard.edu, yangming.ou@childrens.harvard.edu goes deeper, it extracts the high-level contextual information by discarding the detailed information on each local pixel to remove noise and irrelevant information #OTHEREFR .", "To recover the lost detailed spatial information, the decoder path hierarchically fuses global features from the output of the encoder and local features from the intermediate output of the encoder #OTHEREFR for computing the final segmentation map. In summary, as shown in Fig."], "citing_paper_content": {"title": "U-Netmer: U-Net Meets Transformer For Medical Image Segmentation", "abstract": "The combination of the U-Net based deep learning models and Transformer is a new trend for medical image segmentation. U-Net can extract the detailed local semantic and texture information and Transformer can learn the long-range dependencies among pixels in the input image. However, directly adapting the Transformer for segmentation has a \"token-flatten\" problem (flattens the local patches into 1D tokens which loses the interaction among pixels within local patches) and a \"scale-sensitivity\" problem (uses a fixed scale to split the input image into local patches). Compared to directly combining U-Net and Transformer, we propose a new global-local fashion combination of U-Net and Transformer, named U-Netmer, to solve the two problems. The proposed U-Netmer splits an input image into local patches. The global-context information among local patches is learnt by the self-attention mechanism in Transformer and U-Net segments each local patch instead of flattening into tokens to solve the \"token-flatten\" problem.
The U-Netmer can segment the input image with different patch sizes with the identical structure and the same parameter. Thus, the U-Netmer can be trained with different patch sizes to solve the \"scale-sensitivity\" problem. We conduct extensive experiments in 7 public datasets on 7 organs (brain, heart, breast, lung, polyp, pancreas and prostate) and 4 imaging modalities (MRI, CT, ultrasound, and endoscopy) to show that the proposed U-Netmer can be generally applied to improve accuracy of medical image segmentation. These experimental results show that U-Netmer provides state-of-the-art performance compared to baselines and other models. In addition, the discrepancy among the outputs of U-Netmer with different scales is linearly correlated to the segmentation accuracy which can be considered as a confidence score to rank test images by difficulty without ground-truth. The code will be available on GitHub."}, "cited_paper_content": {"title": "U-Net: Convolutional Networks For Biomedical Image Segmentation", "abstract": "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU.
The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ."}, "keywords": ["segmentation"], "citation_intent": "background"} {"citing_id": "2303.08029v1", "cited_id": "1606.00915", "section_title": "Introduction", "citation": "For multiscale feature aggregation, Deeplab #REFR uses various dilation convolutions to capture contextual information at multiple scales.", "text_before_citation": ["We selected pixel features of the train (olive points) and road (green points) for t-SNE visualization.", "the pixel feature representation #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR .", "The other is to obtain contextual information to enhance the pixel representation #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR .", "This paper investigates the same direction as the latter, with the aim of how to obtain richer contextual/semantic information to improve the performance of segmentation.", "The methods obtaining contextual information are broadly divided into two types, multi-scale feature aggregation and relational contextual aggregation."], "text_after_citation": ["PSPNet #OTHEREFR introduces pyramid spatial pooling to aggregate contextual information.", "For relational contextual aggregation, ACFNet #OTHEREFR and OCRNet #OTHEREFR divide pixels in an image into multiple regions and then increase the pixel representation by weighting the aggregated region representation; the weights are determined by the relationship between the pixels and the regions.", "Although the above methods are effective, they ignore the potential contextual information between the input images.", "In other words, there is no consideration of using class-level features beyond the image to enhance the pixel representations.", "In order to obtain class-level features beyond the input image, MCIBI #OTHEREFR proposes to use simulated annealing to find a semantic feature on each class."],
"citing_paper_content": {"title": "Class-Level Multiple Distributions Representation Are Necessary For Semantic Segmentation", "abstract": "Existing approaches focus on using class-level features to improve semantic segmentation performance. How to characterize the relationships of intra-class pixels and inter-class pixels is the key to extract the discriminative representative class-level features. In this paper, we introduce, for the first time, multiple distributions to describe intra-class variations. Then, multiple distributions representation learning (MDRL) is proposed to augment the pixel representations for semantic segmentation. Meanwhile, we design a class multiple distributions consistency strategy to construct discriminative multiple distribution representations of embedded pixels. Moreover, we put forward a multiple distribution semantic aggregation module to aggregate multiple distributions of the corresponding class to enhance pixel semantic information. Our approach can be seamlessly integrated into popular segmentation frameworks FCN/PSPNet/CCNet and achieve 5.61%/1.75%/0.75% mIoU improvements on ADE20K. Extensive experiments on the Cityscapes, ADE20K datasets have proved that our method can bring significant performance improvement."}, "cited_paper_content": {"title": "Deeplab: Semantic Image Segmentation With Deep Convolutional Nets, Atrous Convolution, And Fully Connected Crfs", "abstract": "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or \u2018atrous convolution\u2019, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks.
It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \u201cDeepLab\u201d system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online."}, "keywords": ["multiscale feature aggregation", "Deeplab"], "citation_intent": "method"} {"citing_id": "2304.13986v1", "cited_id": "1512.03385", "section_title": "Ablation Study", "citation": "Case (a) is our baseline which contains ResBlock #REFR with a similar number of parameters as OCT module.", "text_before_citation": ["In this part, we conduct ablation studies on Set11 dataset for our OCTUF whose iteration number is 10.", "Break-down Ablation.", "We first conduct a break-down ablation experiment in the case of CS ratio = 50% to investigate the effect of each component towards higher performance. The results are listed in Tab.
3."], "text_after_citation": ["When we successively apply our FFN and Dual-CA sub-modules respectively, namely Cases (b) and (c), the model achieves 0.71 dB and 2.91 dB improvements.", "And the model gains a substantial 3.09 dB improvement with little extra storage when both sub-modules are used together.", "We also discuss the effect of the LayerNorm (LN) function in Case (d), which shows that our OCTUF achieves better performance with the LayerNorm function.", "Note that \"without LayerNorm\" represents removing all LN from our OCTUF.", "What is more, we train our models with different learning rates as seen from Cases (e), (f), and (g). Table 4 ."], "citing_paper_content": {"title": "Optimization-Inspired Cross-Attention Transformer For Compressive Sensing", "abstract": "By integrating certain optimization solvers with deep neural networks, deep unfolding network (DUN) with good interpretability and high performance has attracted growing attention in compressive sensing (CS). However, existing DUNs often improve the visual quality at the price of a large number of parameters and have the problem of feature information loss during iteration. In this paper, we propose an Optimization-inspired Cross-attention Transformer (OCT) module as an iterative process, leading to a lightweight OCT-based Unfolding Framework (OCTUF) for image CS. Specifically, we design a novel Dual Cross Attention (Dual-CA) sub-module, which consists of an Inertia-Supplied Cross Attention (ISCA) block and a Projection-Guided Cross Attention (PGCA) block. ISCA block introduces multi-channel inertia forces and increases the memory effect by a cross attention mechanism between adjacent iterations. And, PGCA block achieves an enhanced information interaction, which introduces the inertia force into the gradient descent step through a cross attention block.
Extensive CS experiments manifest that our OCTUF achieves superior performance compared to state-of-the-art methods with lower training complexity. Codes are available at https://github.com/songjiechong/OCTUF."}, "cited_paper_content": {"title": "Deep Residual Learning For Image Recognition", "abstract": "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers\u20148\u00d7 deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset.
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."}, "keywords": ["ResBlock", "baseline"], "citation_intent": "method"} {"citing_id": "2304.09854v1", "cited_id": "1608.00272", "section_title": "Datasets And Metrics", "citation": "Ref-COCO #REFR 118k / 5k RIS mIoU Ref-COCO is a Reference Segmentation dataset based on the COCO.", "text_before_citation": ["#OTHEREFR 4,998 / 5,105 SS mIoU PASCAL Context dataset is an extension of PASCAL VOC containing 400+ classes (usually 59 most frequently).", "COCO #OTHEREFR 118k / 5k SS / IS / PS mIoU / mAP / PQ MS COCO dataset is a large-scale dataset with 80 thing categories and 91 stuff categories.", "ADE20k #OTHEREFR 20,210 / 2,000 SS / IS / PS mIoU / mAP / PQ ADE20k dataset is a large-scale dataset exhaustively annotated with pixel-level objects and object part labels.", "Cityscapes #OTHEREFR 2,975 / 500 SS / IS / PS mIoU / mAP / PQ Cityscapes dataset focuses on semantic understanding of urban street scenes, captured in 50 cities.", "Mapillary #OTHEREFR 18k / 2k SS / PS mIoU / PQ Mapillary dataset is a large-scale dataset with accurate high-resolution annotations."], "text_after_citation": ["VSPW #OTHEREFR 198k / 25k VSS mIoU VSPW is a large-scale high-resolution dataset with long videos focusing on VSS.", "Youtube-VIS-2019 #OTHEREFR 95k / 14k VIS AP Extending from Youtube-VOS, Youtube-VIS comes with exhaustive instance labels.", "VIP-Seg #OTHEREFR 67k / 8k VPS VPQ & STQ Extending from VSPW, VIP-Seg adds extra instance labels for VPS task.", "Cityscapes-VPS #OTHEREFR 2,400 / 300 VPS VPQ Cityscapes-VPS dataset extracts from the val split of Cityscapes dataset, adding temporal annotations.", "KITTI-STEP #OTHEREFR 5,027 / 2,981 VPS STQ KITTI-STEP focuses on the long videos in the urban scenes."], "citing_paper_content": {"title": "Transformer-Based Visual
Segmentation: A Survey", "abstract": "Visual segmentation seeks to partition images, video frames, or point clouds into multiple segments or groups. This technique has numerous real-world applications, such as autonomous driving, image editing, robot sensing, and medical analysis. Over the past decade, deep learning-based methods have made remarkable strides in this area. Recently, transformers, a type of neural network based on self-attention originally designed for natural language processing, have considerably surpassed previous convolutional or recurrent approaches in various vision processing tasks. Specifically, vision transformers offer robust, unified, and even simpler solutions for various segmentation tasks. This survey provides a thorough overview of transformer-based visual segmentation, summarizing recent advancements. We first review the background, encompassing problem definitions, datasets, and prior convolutional methods. Next, we summarize a meta-architecture that unifies all recent transformer-based approaches. Based on this meta-architecture, we examine various method designs, including modifications to the meta-architecture and associated applications. We also present several closely related settings, including 3D point cloud segmentation, foundation model tuning, domain-aware segmentation, efficient segmentation, and medical segmentation. Additionally, we compile and re-evaluate the reviewed methods on several well-established datasets. Finally, we identify open challenges in this field and propose directions for future research. The project page can be found at https://github.com/lxtGH/Awesome-Segmenation-With-Transformer. We will also continually monitor developments in this rapidly evolving field."}, "cited_paper_content": {"title": "Modeling Context In Referring Expressions", "abstract": "Humans refer to objects in their environments all the time, especially in dialogue with other people. 
We explore generating and comprehending natural language referring expressions for objects in images. In particular, we focus on incorporating better measures of visual context into referring expression models and find that visual comparison to other objects within an image helps improve performance significantly. We also develop methods to tie the language generation process together, so that we generate expressions for all objects of a particular category jointly. Evaluation on three recent datasets - RefCOCO, RefCOCO+, and RefCOCOg (Datasets and toolbox can be downloaded from https://github.com/lichengunc/refer), shows the advantages of our methods for both referring expression generation and comprehension."}, "keywords": ["Reference Segmentation dataset"], "citation_intent": "method"} {"citing_id": "2303.01911v1", "cited_id": "1910.10683", "section_title": "Related Work", "citation": "These results have since been confirmed for other monolingual LMs such as T5 #REFR and multilingual LMs such as XGLM , PALM (Chowdhery et al., 2022) , and ALEXATM (Soltan et al., 2022) .", "text_before_citation": ["Since the early attempts at using language models (LMs) as multi-task learners #OTHEREFR , MT has been a task of choice to gauge LMs' multilingual ability.", "Results for the zero- and few-shot ability of LMs were discussed for both GPT-2 and GPT-3 #OTHEREFR , which is especially intriguing as they were trained primarily on monolingual (English) data."], "text_after_citation": ["However, the focus has mainly been on global multi-task performance; often only a small part of the discussion is devoted to MT.", "Moreover, results are often only reported for a few well-resourced language pairs (e.g.", "English-French and English-German), and the scores reported (mostly BLEU), are hard to compare due to a non-systematic use of standardised evaluation protocols and metrics.", "There are however some in-depth analyses of MT performance of LLMs, each focusing on a specific LM's
performance in a true multilingual setting with respect to prompt design and number of few-shots. For instance, #OTHEREFR", "(2022) reevaluate the MT performance of the multilingual PALM (Chowdhery et al., 2022) , focusing notably on the selection of few-shot examples."], "citing_paper_content": {"title": "Investigating The Translation Performance Of A Large Multilingual Language Model: The Case Of Bloom", "abstract": "The NLP community recently saw the release of a new large open-access multilingual language model, BLOOM (BigScience et al., 2022) covering 46 languages. We focus on BLOOM's multilingual ability by evaluating its machine translation performance across several datasets (WMT, Flores-101 and DiaBLa) and language pairs (high-and low-resourced). Our results show that 0-shot performance suffers from overgeneration and generating in the wrong language, but this is greatly improved in the few-shot setting, with very good results for a number of language pairs. We study several aspects including prompt design, model sizes, cross-lingual transfer and the use of discursive context."}, "cited_paper_content": {"title": "Exploring The Limits Of Transfer Learning With A Unified Text-To-Text Transformer", "abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. 
By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code."}, "keywords": ["multilingual LMs", "monolingual LMs"], "citation_intent": "result"} {"citing_id": "2304.01636v1", "cited_id": "1409.0575", "section_title": "Implementation Details.", "citation": "The weights of both the teacher and the student are initialized by training the networks on the ImageNet #REFR .", "text_before_citation": ["We perform experiments on ENet #OTHEREFR , PSPNet #OTHEREFR and BiseNet #OTHEREFR to evaluate the efficiency of the proposed method as they are three typical segmentation models of different structures.", "ENet is a classical Encoder-Decoder model, PSP-Net is a model without a decoder and BiseNet utilizes a compound structure."], "text_after_citation": ["We adopt the Stochastic Gradient Descent (SGD) update rule #OTHEREFR with the aforementioned loss function in Equation 3 to optimize the parameters of the network. Loss coefficient is set to 0.5.", "We adopt the 'poly' policy widely used in a lot of segmentation models to set the learning rate for each iteration, where the learning rate in an iteration equals to initial learning rate multiplied by (1 \u2212 ) .", "The power is set to 0.9 and the initial learning rate is set to 0.025 for student network and 0.007 for teacher network.", "All of the teacher networks are trained by 50 epochs.", "The student networks are trained by 150 epochs for TuSimple and 100 epochs for CULane."], "citing_paper_content": {"title": "Label-Guided Attention Distillation For Lane Segmentation", "abstract": "Contemporary segmentation methods are usually based on deep fully convolutional networks (FCNs). 
However, the layer-by-layer convolutions with a growing receptive field are not good at capturing long-range contexts such as lane markers in the scene. In this paper, we address this issue by designing a distillation method that exploits label structure when training the segmentation network. The intuition is that the ground-truth lane annotations themselves exhibit internal structure. We broadcast the structure hints throughout a teacher network, i.e., we train a teacher network that consumes a lane label map as input and attempts to replicate it as output. Then, the attention maps of the teacher network are adopted as supervisors of the student segmentation network. The teacher network, with label structure information embedded, knows distinctly where the convolution layers should pay visual attention. The proposed method is named Label-guided Attention Distillation (LGAD). It turns out that the student network learns significantly better with LGAD than when learning alone. As the teacher network is discarded after training, our method does not increase the inference time. Note that LGAD can be easily incorporated into any lane segmentation network. To validate the effectiveness of the proposed LGAD method, extensive experiments have been conducted on two popular lane detection benchmarks: TuSimple and CULane. The results show consistent improvement across a variety of convolutional neural network architectures. Specifically, we demonstrate the accuracy boost of LGAD on the lightweight model ENet. It turns out that ENet-LGAD surpasses existing lane segmentation algorithms."}, "cited_paper_content": {"title": "Imagenet Large Scale Visual Recognition Challenge", "abstract": "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions.
This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements."}, "keywords": ["ImageNet"], "citation_intent": "method"} {"citing_id": "2303.05566v1", "cited_id": "1710.07009", "section_title": "A. Nonlinear Filtering For Discrete-Time Systems", "citation": "The above derivation converts the problem into a fully observed controlled Markov process (\u03a0, u) via an enlargement of the state space, where control policies and even optimal control policies can be synthesized accordingly for the (hypothetically) fully observed \u03a0 #REFR .", "text_before_citation": ["1 {F (\u03c0t,yt,ut)\u2208D} \u2022 n(dy t ), D \u2208 B(P(X )).", "We also use \u03a0 u to emphasize the marginal behavior of the process (\u03a0, u).", "Given the observations and the adaptively generated control signal, the optimal estimation of the conditional probability of satisfying any \u03c9-regular formula \u03a8 is given by", "EQUATION", "Note that it is difficult to obtain the full knowledge of Y , our goal is to generate control policies such that the optimal estimation P \u00b50,u [X \u03a8 | Y ] possesses certain confidence of satisfying the probabilistic requirement given any realization of observation."], "text_after_citation": ["The policy fulfilling the goal mentioned above is thereby decidable.", "The construction of the optimal filter process (or the function F in (27)) can be decomposed into a two-step differential equation 8 .", "The approximation of such a 
solution already suffers from the curse of dimensionality.", "Using formal abstractions to enlarge the partially observed processes into the filter processes with full observations, based on which control policies can be determined and utilized back to the partially observed cases, seems tedious and impractical.", "Besides the theoretical formal guarantee of a confidence of a satisfaction probability (i.e., a probabilistic requirement of the probabilistic specification), the abstraction essentially solves the continuous probability law of a continuous conditional expectation (or a random measure) upon some process with discrete labels using discrete inclusions."], "citing_paper_content": {"title": "Robustly Complete Finite-State Abstractions For Control Synthesis Of Stochastic Systems", "abstract": "The essential step of abstraction-based control synthesis for nonlinear systems to satisfy a given specification is to obtain a finite-state abstraction of the original systems. The complexity of the abstraction is usually the dominating factor that determines the efficiency of the algorithm. For the control synthesis of discrete-time nonlinear stochastic systems modelled by nonlinear stochastic difference equations, recent literature has demonstrated the soundness of abstractions in preserving robust probabilistic satisfaction of \u03c9-regular lineartime properties. However, unnecessary transitions exist within the abstractions, which are difficult to quantify, and the completeness of abstraction-based control synthesis in the stochastic setting remains an open theoretical question. In this paper, we address this fundamental question from the topological view of metrizable space of probability measures, and propose constructive finite-state abstractions for control synthesis of probabilistic linear temporal specifications. Such abstractions are both sound and approximately complete. 
That is, given a concrete discrete-time stochastic system and an arbitrarily small L 1-perturbation of this system, there exists a family of finite-state controlled Markov chains that both abstracts the concrete system and is abstracted by the slightly perturbed system. In other words, given an arbitrarily small prescribed precision, an abstraction always exists to decide whether a control strategy exists for the concrete system to satisfy the probabilistic specification."}, "cited_paper_content": {"title": "Asymptotic Optimality Of Finite Model Approximations For Partially Observed Markov Decision Processes With Discounted Cost", "abstract": "We consider finite model approximations of discrete-time partially observed Markov decision processes (POMDPs) under the discounted cost criterion. After converting the original partially observed stochastic control problem to a fully observed one on the belief space, the finite models are obtained through the uniform quantization of the state and action spaces of the belief space Markov decision process (MDP). Under mild assumptions on the components of the original model, it is established that the policies obtained from these finite models are nearly optimal for the belief space MDP, and so, for the original partially observed problem. The assumptions essentially require that the belief space MDP satisfies a mild weak continuity condition. We provide an example and introduce explicit approximation procedures for the quantization of the set of probability measures on the state space of POMDP (i.e., belief space)."}, "keywords": ["state space"], "citation_intent": "background"} {"citing_id": "2303.03922v1", "cited_id": "1907.11692", "section_title": "Training Details.", "citation": "Table 6 , we compare KGTransformer to recently proposed QA methods and report their accuracy on inhouse valid and test sets. 
All baselines use RoBERTa-Large #REFR as the language encoder.", "text_before_citation": ["We apply RoBERTa-large #OTHEREFR as the language encoder with the first 1024-dimensional hidden state in the sequence as .", "We transform into a 768-dimensional one through a transformation matrix.", "We regard P and a triple in G containing words in W as related during the construction of .", "We tune the model with batch size set to 128 and Adam #OTHEREFR with an initial learning rate of 0.00001.", "In the first 4 tuning epochs, RoBERTa-large is frozen."], "text_after_citation": [], "citing_paper_content": {"title": "Structure Pretraining And Prompt Tuning For Knowledge Graph Transfer", "abstract": "Knowledge graphs (KG) are essential background knowledge providers in many tasks. When designing models for KG-related tasks, one of the key tasks is to devise the Knowledge Representation and Fusion (KRF) module that learns the representation of elements from KGs and fuses them with task representations. However, due to the differences of KGs and the perspectives to be considered during fusion across tasks, duplicate and ad hoc KRF module designs are conducted among tasks. In this paper, we propose a novel knowledge graph pretraining model KGTransformer that could serve as a uniform KRF module in diverse KG-related tasks. We pretrain KGTransformer with three self-supervised tasks with sampled sub-graphs as input. For utilization, we propose a general prompt-tuning mechanism regarding task data as a triple prompt to allow flexible interactions between task KGs and task data. We evaluate pretrained KGTransformer on three tasks, triple classification, zero-shot image classification, and question answering. KGTransformer consistently achieves better results than specifically designed task models. Through experiments, we justify that the pretrained KG-Transformer could be used off the shelf as a general and effective KRF module across KG-related tasks.
The code and datasets are available at https://github.com/zjukg/KGTransformer. CCS CONCEPTS \u2022 Computing methodologies \u2192 Knowledge representation and reasoning."}, "cited_paper_content": {"title": "Roberta: A Robustly Optimized Bert Pretraining Approach", "abstract": "Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. 
We release our models and code."}, "keywords": ["language encoder"], "citation_intent": "method"} {"citing_id": "2304.06403v1", "cited_id": "2003.14266", "section_title": "Related Work", "citation": "To alleviate the need for large annotated datasets, weakly supervised techniques for video segmentation involve using transcripts (ordered list of the actions occurring in the video), visual similarities, and audio information to generate pseudo-labels for training #REFR .", "text_before_citation": ["Encoder-Decoder Temporal Convolutional Networks (ED-TCNs) #OTHEREFR use a hierarchy of temporal convolutions to perform fine-grained action segmentation, but they can act solely on low-temporal resolution videos.", "Instead, Multi-Stage Temporal Convolutional Network (MS-TCN and its improved version MS-TCN++) can act on the full temporal resolution of the videos and achieves increased performance #OTHEREFR .", "Spatio-temporal convolutional layers #OTHEREFR have shown promising results in capturing temporal dependencies while being easier to train than previous methods.", "The main drawback of traditional supervised approaches to action segmentation is the requirement of a large amount of quality labelled data for training, which limits their applicability to large-scale domains outside of existing presegmented datasets #OTHEREFR .", "Weakly and semi-supervised approaches."], "text_after_citation": ["In #OTHEREFR , a Gaussian Mixture Models + Convolutional Neural Networks (GMM+CNN) is first initialized and used to infer the segments of a video given a transcription of it.", "The new segmentation is used to re-estimate and update the model parameters until convergence.", "In #OTHEREFR , a recurrent neural network is used to model a discriminative representation of subactions, and a coarse probabilistic model to allow for temporal alignment and inference over long sequences.", "Some approaches use machine learning models to infer the segments of the video #OTHEREFR .", 
"Other approaches, such as those based on frame-to-frame visual similarities #OTHEREFR , self-attentions mechanism #OTHEREFR or iterative soft boundary assignment #OTHEREFR , enforce consistency between the video and labels without the need for temporal supervision."], "citing_paper_content": {"title": "Leveraging Triplet Loss For Unsupervised Action Segmentation", "abstract": "In this paper, we propose a novel fully unsupervised framework that learns action representations suitable for the action segmentation task from the single input video itself, without requiring any training data. Our method is a deep metric learning approach rooted in a shallow network with a triplet loss operating on similarity distributions and a novel triplet selection strategy that effectively models temporal and semantic priors to discover actions in the new representational space. Under these circumstances, we successfully recover temporal boundaries in the learned action representations with higher quality compared with existing unsupervised approaches. The proposed method is evaluated on two widely used benchmark datasets for the action segmentation task and it achieves competitive performance by applying a generic clustering algorithm on the learned representations. 1 * Work done during an internship at the IRI."}, "cited_paper_content": {"title": "Sct: Set Constrained Temporal Transformer For Set Supervised Action Segmentation", "abstract": "Temporal action segmentation is a topic of increasing interest, however, annotating each frame in a video is cumbersome and costly. Weakly supervised approaches therefore aim at learning temporal action segmentation from videos that are only weakly labeled. In this work, we assume that for each training video only the list of actions is given that occur in the video, but not when, how often, and in which order they occur. In order to address this task, we propose an approach that can be trained end-to-end on such data. 
The approach divides the video into smaller temporal regions and predicts for each region the action label and its length. In addition, the network estimates the action labels for each frame. By measuring how consistent the frame-wise predictions are with respect to the temporal regions and the annotated action labels, the network learns to divide a video into class-consistent regions. We evaluate our approach on three datasets where the approach achieves state-of-the-art results."}, "keywords": ["video segmentation"], "citation_intent": "method"} {"citing_id": "2304.14557v1", "cited_id": "1712.08147", "section_title": "Related Work", "citation": "Interestingly, our reduction of k-cycles essentially mirrors the construction in the proof of Theorem 3.1 in #REFR .", "text_before_citation": ["Fine-Grained Complexity The study of fine-grained complexity aims to show (conditional) hardness of easy problems.", "Recent years have witnessed a bloom of development into this fascinating subject, resulting in many tight lower bounds which match exactly, or up to poly log factors, the running time of best-known algorithms #OTHEREFR 24, #OTHEREFR .", "Among many others, popular hardness assumptions include the Strong Exponential Time Hypothesis (SETH), Boolean Matrix Multiplication (BMM), and All-Pairs Shortest Paths (APSP).", "Our work can be seen as a particular instance under this framework, i.e.", "using Boolean or Min-Weight k-Clique Conjecture to show conditional lower bounds for BCQs."], "text_after_citation": ["Functional Aggregate Queries (FAQ) FAQ #OTHEREFR provides a Sum-of-Product framework to define the semantics of conjunctive queries over arbitrary semirings.", "The semiring point-ofview originated from the seminal paper #OTHEREFR .", "We show that the embedding from a k-clique into a hypergraph holds for arbitrary semirings, which enables one to transfer the hardness of k-clique to FAQ independent of the underlying semiring.", "To the best of our knowledge, this 
is the first semiring-oblivious reduction.", "Enumeration and Preprocessing #OTHEREFR characterized for which self-join-free conjunctive queries a linear or constant delay and linear preprocessing algorithm is possible."], "citing_paper_content": {"title": "The Fine-Grained Complexity Of Boolean Conjunctive Queries And Sum-Product Problems", "abstract": "We study the fine-grained complexity of evaluating Boolean Conjunctive Queries and their generalization to sum-of-product problems over an arbitrary semiring. For these problems, we present a general semiring-oblivious reduction from the k-clique problem to any query structure (hypergraph). Our reduction uses the notion of embedding a graph to a hypergraph, first introduced by Marx [18]. As a consequence of our reduction, we can show tight conditional lower bounds for many classes of hypergraphs, including cycles, Loomis-Whitney joins, some bipartite graphs, and chordal graphs. These lower bounds have a dependence on what we call the clique embedding power of a hypergraph H, which we believe is a quantity of independent interest. We show that the clique embedding power is always less than the submodular width of the hypergraph, and present a decidable algorithm for computing it. We conclude with many open problems for future research. 2012 ACM Subject Classification Database theory; Complexity theory and logic Keywords and phrases Fine-grained complexity, conjunctive queries, semiring-oblivious reduction Digital Object Identifier 10.4230/LIPIcs.CVIT.2016.23 1 Technically, the PANDA algorithm works for Boolean or full CQs. 
2 Informally speaking, this requires the algorithm does not leverage the fast matrix multiplication."}, "cited_paper_content": {"title": "Tight Hardness For Shortest Cycles And Paths In Sparse Graphs", "abstract": "Fine-grained reductions have established equivalences between many core problems with $\\tilde{O}(n^3)$-time algorithms on $n$-node weighted graphs, such as Shortest Cycle, All-Pairs Shortest Paths (APSP), Radius, Replacement Paths, Second Shortest Paths, and so on. These problems also have $\\tilde{O}(mn)$-time algorithms on $m$-edge $n$-node weighted graphs, and such algorithms have wider applicability. Are these $mn$ bounds optimal when $m \\ll n^2$? Starting from the hypothesis that the minimum weight $(2\\ell+1)$-Clique problem in edge weighted graphs requires $n^{2\\ell+1-o(1)}$ time, we prove that for all sparsities of the form $m = \\Theta(n^{1+1/\\ell})$, there is no $O(n^2 + mn^{1-\\epsilon})$ time algorithm for $\\epsilon>0$ for \\emph{any} of the below problems: Minimum Weight $(2\\ell+1)$-Cycle in a directed weighted graph, Shortest Cycle in a directed weighted graph, APSP in a directed or undirected weighted graph, Radius (or Eccentricities) in a directed or undirected weighted graph, Wiener index of a directed or undirected weighted graph, Replacement Paths in a directed weighted graph, Second Shortest Path in a directed weighted graph, Betweenness Centrality of a given node in a directed weighted graph. That is, we prove hardness for a variety of sparse graph problems from the hardness of a dense graph problem. 
Our results also lead to new conditional lower bounds from several related hypothesis for unweighted sparse graph problems including $k$-cycle, shortest cycle, Radius, Wiener index and APSP."}, "keywords": ["k-cycles"], "citation_intent": "background"} {"citing_id": "2303.04298v3", "cited_id": "1510.06750", "section_title": "Introduction", "citation": "More than a decade later, Fefferman and Kimmel #REFR proved a second black-box separation using a distributional in-place oracle, which is a non-standard type of oracles.", "text_before_citation": ["The first (but often false) thought is that phases and magnitudes are continuous, and a piece of quantum information may be able to store exponentially or infinitely more information than classical ones; which is always not true 1 .", "Since classical and quantum information present distinct and unique natures, the community studies their differences under different contexts and directions, including advice-aided quantum computation [ One way to understand their differences is by studying one-way communication complexity: i.e., Alice and Bob want to jointly compute a function with their private inputs, but only one-time quantum/classical communication from Alice to Bob is allowed.", "Among many works, Bar-Yossef, Jayram, and Kerenidis #OTHEREFR showed an exponential separation between quantum and classical one-way communication complexity, for the so-called hidden matching problem.", "The other approach is by looking at QMA v.s. 
QCMA.", "In 2007, Aaronson and Kuperberg #OTHEREFR showed a black-box separation with respect to a black-box quantum unitary and left the same separation with respect to a classical oracle as an open question."], "text_after_citation": ["Recently, Natarajan and Nirkhe #OTHEREFR moved a step closer to the goal by presenting a black-box separation with respect to a distributional oracle 2 .", "Therefore, we would like to further investigate the difference between quantum and classical proofs, i.e., the separation between QMA v.s. QCMA.", "In the work, we address the question by demonstrating a separation relative to classically accessible classical oracle.", "Definition 1.1 (QMA).", "A language L is said to be in QMA if there exists a quantum polynomialtime machine V together with a polynomial p(\u2022) such that,"], "citing_paper_content": {"title": "Classical Vs Quantum Advice And Proofs Under Classically-Accessible Oracle", "abstract": "It is a long-standing open question to construct a classical oracle relative to which BQP/qpoly = BQP/poly or QMA = QCMA. In this paper, we construct classically-accessible classical oracles relative to which BQP/qpoly = BQP/poly and QMA = QCMA. Here, classically-accessible classical oracles are oracles that can be accessed only classically even for quantum algorithms. Based on a similar technique, we also show an alternative proof for the separation of QMA and QCMA relative to a distributional quantumly-accessible classical oracle, which was recently shown by Natarajan and Nirkhe."}, "cited_paper_content": {"title": "Quantum Vs Classical Proofs And Subset Verification", "abstract": "We study the ability of efficient quantum verifiers to decide properties of exponentially large subsets given either a classical or quantum witness. We develop a general framework that can be used to prove that QCMA machines, with only classical witnesses, cannot verify certain properties of subsets given implicitly via an oracle. 
We use this framework to prove an oracle separation between QCMA and QMA using an ``in-place'' permutation oracle, making the first progress on this question since Aaronson and Kuperberg in 2007. We also use the framework to prove a particularly simple standard oracle separation between QCMA and AM."}, "keywords": ["oracles"], "citation_intent": "background"} {"citing_id": "2303.06484v2", "cited_id": "1805.09298", "section_title": "L Derivation Of Ce'S Lower Bound", "citation": "Combining the two lower bounds above, we can have that #REFR where the first two terms encourage larger inter-class hyperspherical uniformity, and the last term promotes smaller intra-class hyperspherical uniformity.", "text_before_citation": ["where \u03bb can be chosen such that both Q 1 (w) and Q 2 (w) become convex functions with respect to w.", "Taking advantage of the convexity, we can separately set the gradient of Q 1 (w) and Q 2 (w) with respect to w as 0 and compute their minima. Specifically, we end up with", "EQUATION", "EQUATION", "where l ic = exp( wc,wi ) j exp( wj ,xi ) denotes the softmax confidence."], "text_after_citation": ["L CE \u2265 Q 1 (w * Q1 ) + Q 2 (w * Q2 ) = n i=1 log C c=1 exp 1 \u03bbn n j=1 l jc x i , x j \u2212 n 2\u03bb C c=1 1 n n i=1 l ic x i 2 \u2212 1 2\u03bbn n i=1 j\u2208Ay i x i , x j"], "citing_paper_content": {"title": "", "abstract": "The neural collapse (NC) phenomenon describes an underlying geometric symmetry for deep neural networks, where both deeply learned features and classifiers converge to a simplex equiangular tight frame. It has been shown that both crossentropy loss and mean square error can provably lead to NC. We remove NC's key assumption on the feature dimension and the number of classes, and then present a generalized neural collapse (GNC) hypothesis that effectively subsumes the original NC. 
Inspired by how NC characterizes the training target of neural networks, we decouple GNC into two objectives: minimal intra-class variability and maximal inter-class separability. We then use hyperspherical uniformity (which characterizes the degree of uniformity on the unit hypersphere) as a unified framework to quantify these two objectives. Finally, we propose a general objective-hyperspherical uniformity gap (HUG), which is defined by the difference between inter-class and intra-class hyperspherical uniformity. HUG not only provably converges to GNC, but also decouples GNC into two separate objectives. Unlike cross-entropy loss that couples intra-class compactness and inter-class separability, HUG enjoys more flexibility and serves as a good alternative loss function. Empirical results show that HUG works well in terms of generalization and robustness."}, "cited_paper_content": {"title": "Learning Towards Minimum Hyperspherical Energy", "abstract": "Neural networks are a powerful class of nonlinear functions that can be trained end-to-end on various applications. While the over-parametrization nature in many neural networks renders the ability to fit complex functions and the strong representation power to handle challenging tasks, it also leads to highly correlated neurons that can hurt the generalization ability and incur unnecessary computation cost. As a result, how to regularize the network to avoid undesired representation redundancy becomes an important issue. To this end, we draw inspiration from a well-known problem in physics -- Thomson problem, where one seeks to find a state that distributes N electrons on a unit sphere as evenly as possible with minimum potential energy. In light of this intuition, we reduce the redundancy regularization problem to generic energy minimization, and propose a minimum hyperspherical energy (MHE) objective as generic regularization for neural networks. 
We also propose a few novel variants of MHE, and provide some insights from a theoretical point of view. Finally, we apply neural networks with MHE regularization to several challenging tasks. Extensive experiments demonstrate the effectiveness of our intuition, by showing the superior performance with MHE regularization."}, "keywords": ["larger inter-class hyperspherical"], "citation_intent": "background"} {"citing_id": "2305.01095v1", "cited_id": "1412.6980", "section_title": "B. Determination Of Lstm Predictor", "citation": "The Adam optimizer is utilized to adjust the learning rate #REFR . The learning rate is set to 0.0001.", "text_before_citation": ["To determine the optimal configuration of the LSTM predictor, its performance on the validation dataset and training time is assessed.", "The flow chart of the proposed network layers is shown in Fig.2 . It consists of eight layers, each with 200 neurons.", "A sequence input layer inputs sequence data to a network.", "A fully connected layer multiplies the input by a weight matrix and then adds a bias vector. The ReLU layer performs a threshold operation on each element of the input, where any value less than zero is set to zero. An LSTM layer learns long-term dependencies between time steps in time series and sequence data. The layer performs additive interactions, which can help improve gradient flow over long sequences during training, and the regression layer computes the half-mean-squared-error loss for regression tasks."], "text_after_citation": ["The training process is stopped when the validation accuracy does not show improvement over five consecutive iterations/epochs to prevent overfitting.", "The performance of the vehicle acceleration prediction is evaluated in this section based on root mean square error (RMSE) as an indicator of the model prediction accuracy, which is formulated as equations 1 and 2 #OTHEREFR :", "ȳ = (1/N) Σ_{t=1}^{N} y_t (1), RMSE = √((1/N) Σ_{t=1}^{N} (y_t − ŷ_t)²) (2)", "where ȳ is the mean of the measured acceleration, y_t is the measured acceleration, ŷ_t is the predicted acceleration, and N is the number of elements in the output. Fig.", "3 shows the RMSE values during the LSTM training process. As depicted in Fig."], "citing_paper_content": {"title": "Lstm-Based Preceding Vehicle Behaviour Prediction During Aggressive Lane Change For Acc Application", "abstract": "The development of Adaptive Cruise Control (ACC) systems aims to enhance the safety and comfort of vehicles by automatically regulating the speed of the vehicle to ensure a safe gap from the preceding vehicle. However, conventional ACC systems are unable to adapt themselves to changing driving conditions and drivers' behavior. To address this limitation, we propose a Long Short-Term Memory (LSTM)-based ACC system that can learn from past driving experiences and adapt and predict new situations in real time. The model is constructed based on the real-world highD dataset, acquired from German highways with the assistance of camera-equipped drones. We evaluated the ACC system under aggressive lane changes when the side-lane preceding vehicle cut off, forcing the targeted driver to reduce speed. To this end, the proposed system was assessed on a simulated driving environment and compared with a feedforward Artificial Neural Network (ANN) model and a Model Predictive Control (MPC) model. The results show that the LSTM-based system is 19.25 % more accurate than the ANN model and 5.9 % more accurate than the MPC model in terms of predicting future values of subject vehicle acceleration. The simulation is done in Matlab/Simulink environment.
"}, "cited_paper_content": {"title": "Adam: A Method For Stochastic Optimization", "abstract": "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods.
Finally, we discuss AdaMax, a variant of Adam based on the infinity norm."}, "keywords": ["learning rate", "Adam optimizer"], "citation_intent": "method"} {"citing_id": "2304.07460v1", "cited_id": "1712.07557", "section_title": "Convergence-Optimized Power Control Under Dp Guarantee", "citation": "Then, we substitute the objective function and the DP constraint in P1 with the convergence upper bound #REFR w.r.t.", "text_before_citation": ["Under Assumption 1, given a local update \u2206_i^t \u2208 R^d and rand_k sparsification with random projection matrix A^t , we have", "EQUATION", "Proof: Taking the expectation on the rand_k , we have:", "E_{A^t} \u2016A^t \u2206_i^t\u2016_2^2 = \u2211_{j=1}^{d} (k/d) [\u2206_i^t]_j^2 (35a) = (k/d) \u2016\u03b8_i^{t,\u03c4} \u2212 \u03b8^t\u2016_2^2 (35b) = (k/d) \u03b7^2 \u2016\u2211_{s=1}^{\u03c4} g_i^{t,s\u22121}\u2016_2^2 (35c) \u2264 (k/d) \u03b7^2 \u03c4^2 C_1^2 (35d)", "where (35c) holds due to #OTHEREFR , (35d) follows from Assumption 1 and the Cauchy-Schwarz inequality."], "text_after_citation": ["{\u03b2_t}_{t\u2208[T\u22121]} in Theorem 4 and the client-level DP result #OTHEREFR in Theorem 3, respectively. Therefore, we can approximate Problem P1 as follows:", "EQUATION", "Note that P2 can be readily solved as shown in the following result:", "Theorem 5. The optimal solution to Problem P2 is given by:", "EQUATION"], "citing_paper_content": {"title": "Communication And Energy Efficient Wireless Federated Learning With Intrinsic Privacy", "abstract": "Federated Learning (FL) is a collaborative learning framework that enables edge devices to collaboratively learn a global model while keeping raw data locally. Although FL avoids leaking direct information from local datasets, sensitive information can still be inferred from the shared models. To address the privacy issue in FL, differential privacy (DP) mechanisms are leveraged to provide formal privacy guarantee. However, when deploying FL at the wireless edge with over-the-air computation, ensuring client-level DP faces significant challenges.
In this paper, we propose a novel wireless FL scheme called private federated edge learning with sparsification (PFELS) to provide client-level DP guarantee with intrinsic channel noise while reducing communication and energy overhead and improving model accuracy. The key idea of PFELS is for each device to first compress its model update and then adaptively design the transmit power of the compressed model update according to the wireless channel status without any artificial noise addition. We provide a privacy analysis for PFELS and prove the convergence of PFELS under general non-convex and non-IID settings. Experimental results show that compared with prior work, PFELS can improve the accuracy with the same DP guarantee and save communication and energy costs simultaneously."}, "cited_paper_content": {"title": "Differentially Private Federated Learning: A Client Level Perspective", "abstract": "Federated learning is a recent advance in privacy protection. In this context, a trusted curator aggregates parameters optimized in decentralized fashion by multiple clients. The resulting model is then distributed back to all clients, ultimately converging to a joint representative model without explicitly having to share the data. However, the protocol is vulnerable to differential attacks, which could originate from any party contributing during federated optimization. In such an attack, a client's contribution during training and information about their data set is revealed through analyzing the distributed model. We tackle this problem and propose an algorithm for client sided differential privacy preserving federated optimization. The aim is to hide clients' contributions during training, balancing the trade-off between privacy loss and model performance. 
Empirical studies suggest that given a sufficiently large number of participating clients, our proposed procedure can maintain client-level differential privacy at only a minor cost in model performance."}, "keywords": ["DP constraint"], "citation_intent": "method"} {"citing_id": "2305.01072v1", "cited_id": "1907.10121", "section_title": "A. Shortest Path Problem", "citation": "Using an optimized implementation of Dijkstra's algorithm (e.g., the one provided by scipy #REFR ), the search for a shortest path is also very fast.", "text_before_citation": ["As noted above, this curve is safe because each of its line segments is contained in at least one safe box.", "This shortest path step determines whether or not our path planning problem is feasible.", "If there is no path in the augmented line graph between the vertices associated with p_init and p_term , i.e., the distance between them is \u221e, then the path planning problem (4) is infeasible.", "Conversely, if there is a path between these two vertices, the original path planning problem is feasible, since a feasible trajectory can be constructed as in \u00a7II-C.", "The problem of identifying the safe boxes that contain the initial and terminal points is known as the stabbing problem and, given the precomputations done to construct the line graph, this takes negligible time #OTHEREFR ."], "text_after_citation": [], "citing_paper_content": {"title": "Fast Path Planning Through Large Collections Of Safe Boxes", "abstract": "We present a fast algorithm for the design of smooth paths (or trajectories) that are constrained to lie in a collection of axis-aligned boxes. We consider the case where the number of these safe boxes is large, and basic preprocessing of them (such as finding their intersections) can be done offline. At runtime we quickly generate a smooth path between given initial and terminal positions.
Our algorithm designs trajectories that are guaranteed to be safe at all times, and it detects infeasibility whenever such a trajectory does not exist. Our algorithm is based on two subproblems that we can solve very efficiently: finding a shortest path in a weighted graph, and solving (multiple) convex optimal control problems. We demonstrate the proposed path planner on large-scale numerical examples, and we provide an efficient open-source software implementation, fastpathplanning."}, "cited_paper_content": {"title": "Scipy 1.0: Fundamental Algorithms For Scientific Computing In Python", "abstract": "SciPy is an open-source scientific computing library for the Python programming language. Since its initial release in 2001, SciPy has become a de facto standard for leveraging scientific algorithms in Python, with over 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories and millions of downloads per year. In this work, we provide an overview of the capabilities and development practices of SciPy 1.0 and highlight some recent technical developments. 
This Perspective describes the development and capabilities of SciPy 1.0, an open source scientific computing library for the Python programming language."}, "keywords": ["shortest path", "optimized implementation"], "citation_intent": "method"} {"citing_id": "2303.12936v1", "cited_id": "1905.06316", "section_title": "Comparing Bert And Distilbert", "citation": "The fairly comparable scores of ELMo and the traditional baselines in the null context support the observation of #REFR , that is, when it comes to contextual embeddings, there is only a small improvement in learning semantics over traditional ML methods.", "text_before_citation": ["If the models were also pretrained from scratch on the same corpus, it would be ensured that they utilize the same knowledge to learn the context.", "And this would enable a fairer comparison.", "Recently, it was shown that ELMo and BERT make no significant difference in semantic analysis #OTHEREFR .", "Here it is observed that although they are close by in the null context, DistilBERT is more robust than ELMo in the cross-context in text classification.", "The findings of this study are in line with prior work."], "text_after_citation": ["DistilBERT is on par with or exceeds ELMo on a binary text classification task #OTHEREFR .", "DistilBERT, as a transformer-based model, is better at capturing long-term dependencies in an input sequence #OTHEREFR .", "DistilBERT is lighter than ELMo and has a shorter training time #OTHEREFR .", "Here it should be noted that the experimental settings of the previous work and of this study differ.", "In this study, ELMo and DistilBERT are compared on their fine-tuning performance on two binary text classification tasks.", "The main focus was to see how much these models can benefit in a practical way without any modification to the pretraining outputs."], "citing_paper_content": {"title": "", "abstract": "I am grateful to my family for their unconditional love and patience.
I am grateful to Arzucan \u00d6zg\u00fcr, for being such an inspiring figure by her selfless devotion to research in the most righteous way, with the passion to contribute to the community. I am grateful to Ali H\u00fcrriyetoglu, for being such a role model, who could somehow always find a way to turn the mist of research questions into a structured path to create practical solutions by combining creativity and technique. I cannot thank enough my dear friends who put up with my whims throughout this journey. I thank fellows from TabiLAB for inspiring me with their brilliance, invaluable insights and recommendations. I thank the Ko\u00e7 University EMW research team for their generosity in sharing the data, which was created with blood, sweat and tears. I feel lucky that I got to meet fellows in the EMW project engineering team who invested their precious time and energy to support me in this study from the very beginning. Lastly, I owe the deepest gratitude to our professors and staff members in our department who taught us how to form such a great community and made it feel like the dearest home from day one."}, "cited_paper_content": {"title": "What Do You Learn From Context? Probing For Sentence Structure In Contextualized Word Representations", "abstract": "Contextualized representation models such as ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2018) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks. Building on recent token-level probing work, we introduce a novel edge probing task design and construct a broad suite of sub-sentence tasks derived from the traditional structured NLP pipeline. We probe word-level contextual representations from four recent models and investigate how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena.
We find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but only offer comparably small improvements on semantic tasks over a non-contextual baseline."}, "keywords": ["contextual embeddings"], "citation_intent": "result"} {"citing_id": "2304.00193v1", "cited_id": "1902.00279", "section_title": "I. Introduction", "citation": "A distance-based formation control algorithm for a team of quadrotors transporting a heavy object is presented in #REFR , which measures and resists the acceleration due to disturbances and rope tension using incremental nonlinear dynamic inversion control.", "text_before_citation": ["Although the precise attitude and position control of the payload can be realized, the dynamic information of the payload is required for real-time feedback control, which is hard to obtain in engineering practice.", "In contrast, in formation-based design, only the state information of the aerial vehicles is needed.", "When the vehicle group reaches its destination, the payload is also supposed to reach the target area.", "The validity and feasibility of such an approach has been established via simulation #OTHEREFR and experiment #OTHEREFR , but the cable forces on the quadrotors are ignored.", "To implement formation-based robust collaborative transportation, several control algorithms have been developed."], "text_after_citation": ["In #OTHEREFR and #OTHEREFR , a passivity-based formation control strategy is proposed with adaptive compensation terms to eliminate the wind disturbance and the cable tension.", "The energy passivity property of the quadrotors-payload system is established in #OTHEREFR , where an adaptive damping term is used to dissipate the energy injected by the sudden perturbations.", "The studies mentioned above are all designed based on the rigid formation.", "As a matter of fact, maintaining a fixed formation for payload transportation is not necessary and it is better to
employ a flexible formation, which can adapt the vehicles to the complex and uncertain environment and tasks #OTHEREFR .", "Force control-based approaches have been explored for collaborative payload transportation with flexible formation, e.g., force amplification #OTHEREFR and contact force regulation #OTHEREFR ."], "citing_paper_content": {"title": "Force-Coordination Control For Aerial Collaborative Transportation Based On Lumped Disturbance Separation And Estimation", "abstract": "This article studies the collaborative transportation of a cable-suspended pipe by two quadrotors. A force-coordination control scheme is proposed, where a force-consensus term is introduced to average the load distribution between the quadrotors. Since thrust uncertainty and cable force are coupled together in the acceleration channel, disturbance observer can only obtain the lumped disturbance estimate. Under the quasi-static condition, a disturbance separation strategy is developed to remove the thrust uncertainty estimate for precise cable force estimation. The stability of the overall system is analyzed using Lyapunov theory. Both numerical simulations and indoor experiments using heterogeneous quadrotors validate the effectiveness of thrust uncertainty separation and force-consensus algorithm."}, "cited_paper_content": {"title": "Flexible Collaborative Transportation By A Team Of Rotorcraft", "abstract": "We propose a combined method for the collaborative transportation of a suspended payload by a team of rotorcraft. A recent distance-based formation-motion control algorithm based on assigning distance disagreements among robots generates the acceleration signals to be tracked by the vehicles. In particular, the proposed method does not need global positions nor tracking prescribed trajectories for the motion of the members of the team. 
The acceleration signals are followed accurately by an Incremental Nonlinear Dynamic Inversion controller designed for rotorcraft that measures and resists the tensions from the payload. Our approach allows us to analyze the involved accelerations and forces in the system so that we can calculate the worst case conditions explicitly to guarantee a nominal performance, provided that the payload starts at rest in the 2D centroid of the formation, and it is not under significant disturbances. For example, we can calculate the maximum safe deformation of the team with respect to its desired shape. We demonstrate our method with a team of four rotorcraft carrying a suspended object two times heavier than the maximum payload for an individual. Last but not least, our proposed algorithm is available for the community in the open-source autopilot Paparazzi."}, "keywords": ["quadrotors"], "citation_intent": "method"} {"citing_id": "2305.01506v1", "cited_id": "1905.00414", "section_title": "Vi. How Does The Pre-Training Escalate Target Task Performance?", "citation": "To analyze the representations in neural networks consisting of different weights, we employed Centered Kernel Alignment (CKA) #REFR as an index for measuring similarity between two representations from different neural networks.", "text_before_citation": ["Throughout experimental analyses in Sections IV and V-B, we discovered pre-training methods concretely contribute to the escalated target task performance in every image recognition task and problem setting.", "Based on the aforementioned discoveries, we aim to scrutinize an underlying reason behind the effectiveness of pre-training by analyzing learned knowledge in the neural networks."], "text_after_citation": ["Suppose we provide image samples into two neural networks (N_1, N_2) trained from different pre-trained weights.", "Then, we can extract a pair of representation vectors from any layers of N_1 and N_2, denoted as (R_1, R_2).", "Given these
representation vectors R_1 and R_2, CKA effectively measures the similarity between layers in the same neural networks with different weights and across entirely different architectures.", "The CKA yields a similarity metric lying between 0 and 1, where 0 implies less similarity and 1 indicates high similarity.", "Due to its convenience and effectiveness in measuring similarity between two representations, we utilized it."], "citing_paper_content": {"title": "Discovering The Effectiveness Of Pre-Training In A Large-Scale Car-Sharing Platform", "abstract": "Recent progress of deep learning has empowered various intelligent transportation applications, especially in car-sharing platforms. While the traditional operations of the car-sharing service highly relied on human engagements in fleet management, modern car-sharing platforms let users upload car images before and after their use to inspect the cars without a physical visit. To automate the aforementioned inspection task, prior approaches utilized deep neural networks. They commonly employed pre-training, a de-facto technique to establish an effective model under the limited number of labeled datasets. As practitioners who deal with car images would presumably suffer from the lack of a labeled dataset, a sophisticated analysis of the effectiveness of pre-training is important. However, prior studies shed little light on the effectiveness of pre-training. Motivated by the aforementioned lack of analysis, our study proposes a series of analyses to unveil the effectiveness of various pre-training methods in image recognition tasks at the car-sharing platform. We set two real-world image recognition tasks in the car-sharing platform in a live service, established them under the many-shot and few-shot problem settings, and scrutinized which pre-training method accomplishes the most effective performance in which setting.
Furthermore, we analyzed how pre-training and fine-tuning convey different knowledge to the neural networks for a precise understanding."}, "cited_paper_content": {"title": "Similarity Of Neural Network Representations Revisited", "abstract": "Recent work has sought to understand the behavior of neural networks by comparing representations between layers and between different trained models. We examine methods for comparing neural network representations based on canonical correlation analysis (CCA). We show that CCA belongs to a family of statistics for measuring multivariate similarity, but that neither CCA nor any other statistic that is invariant to invertible linear transformation can measure meaningful similarities between representations of higher dimension than the number of data points. We introduce a similarity index that measures the relationship between representational similarity matrices and does not suffer from this limitation. This similarity index is equivalent to centered kernel alignment (CKA) and is also closely connected to CCA.
Unlike CCA, CKA can reliably identify correspondences between representations in networks trained from different initializations."}, "keywords": ["neural networks", "Centered Kernel Alignment"], "citation_intent": "method"} {"citing_id": "2304.05963v1", "cited_id": "1804.05650", "section_title": "Introduction", "citation": "It has been shown, both in practice and in theory #REFR , to provide an advantage over fixed parameter strategies.", "text_before_citation": ["It is surprising that an evolutionary algorithm on bit vectors similar to CMA-ES is missing.", "One may object to this observation that PBIL is a natural candidate.", "However, its parameters are all continuous and, in the form of a probability vector, do not belong to the search space in the same way as, in CMA-ES, the mean of the search distribution does.", "Furthermore, it is expected that such a speculative evolutionary algorithm will be able to control the mutation rate, just as CMA-ES provides an optimal control of correlated mutations.", "Adaptive parameter control in evolutionary algorithms has been the subject of sustained efforts for decades #OTHEREFR ."], "text_after_citation": ["However, update rules are often introduced as heuristics and do not derive from first principles.", "A famous update rule is the so-called one-fifth success rule [25], which adjusts the mutation rate so as to keep the success rate of mutations equal or close to one fifth (a mutation is successful if it produces an offspring of increased fitness).", "If the success rate is larger (smaller) than one fifth, then the mutation rate should be increased (decreased).", "The one-fifth constant can be derived from theoretical considerations about the (1 + 1) evolution strategy applied to the sphere problem in Euclidean spaces.", "It has been proved that the (1 + 1) evolutionary algorithm equipped with a similar success-based update rule achieves the same performance on LeadingOnes as the (1+1) EA with an optimal fitness-dependent
mutation rate #OTHEREFR ."], "citing_paper_content": {"title": "An Information-Theoretic Evolutionary Algorithm", "abstract": "We propose a novel evolutionary algorithm on bit vectors which derives from the principles of information theory. The information-theoretic evolutionary algorithm (it-EA) iteratively updates a search distribution with two parameters: the center, that is, the bit vector at which standard bit mutation is applied, and the mutation rate. The mutation rate is updated by means of information-geometric optimization and the center is updated by means of a maximum likelihood principle. Standard elitist and non-elitist updates of the center are also considered. Experiments illustrate the dynamics of the mutation rate and the influence of hyperparameters. In an empirical runtime analysis, on OneMax and LeadingOnes, the elitist and non-elitist it-EAs obtain promising results."}, "cited_paper_content": {"title": "Theory Of Parameter Control For Discrete Black-Box Optimization: Provable Performance Gains Through Dynamic Parameter Choices", "abstract": "Parameter control is aimed at realizing performance gains through a dynamic choice of the parameters which determine the behavior of the underlying optimization algorithm. In the context of evolutionary algorithms, this research line has for a long time been dominated by empirical approaches. With the significant advances in running-time analysis achieved in the last ten years, the parameter control question has become accessible to theoretical investigations. A number of running-time results for a broad range of different parameter control mechanisms have been obtained in recent years.
This chapter surveys these results, and puts them into context by proposing an updated classification scheme for parameter control."}, "keywords": ["fixed parameter strategies"], "citation_intent": "background"} {"citing_id": "2303.02667v1", "cited_id": "1607.00376", "section_title": "Discussion And Conclusion", "citation": "From a gender point of view, our results converge with previous research that reported that male scholars have a greater propensity to self-cite than female counterparts #REFR .", "text_before_citation": ["These 'mechanical' factors affecting self-citations suggest that researchers from different cohorts cannot be held to the same standards in terms of self-citations.", "Third, at the individual researcher level, our results show that direct self-citations have, on average, little effect on the h-index, but can have a considerable impact for researchers that have high rates of self-citations.", "This suggests that, while self-citations are, in most cases, not affecting the reliability of citation analysis for assessing research impact, some researchers with unusually high levels of self-citations do.", "Fourth, it provides evidence that self-citations are correlated with external citations.", "That is, researchers who have high levels of self-citations are also highly cited by their colleagues. 
Women are also self-citing their work less than men."], "text_after_citation": ["These results are congruent with studies that show that women are relatively less prone to engage in self-promoting behavior at work #OTHEREFR , and that women tend to face relatively greater social penalties for self-promotion than men #OTHEREFR .", "In line with those, our results suggest that women require a greater threshold of relevancy in order to self-cite.", "This \"higher bar\" results in lower self-citation rates for women vis-\u00e0-vis men.", "Finally, our results show that self-citations are more highly related to the citing paper than external citations, which suggests that, in most cases, self-citations are a normal feature of knowledge accumulation.", "However, such a finding can be nuanced by the inverse relationship observed between text similarity and the extent of researchers' self-referencing practices: the higher researchers' percentage of self-references, the lower the similarity between citing and cited articles."], "citing_paper_content": {"title": "Are Self-Citations A Normal Feature Of Knowledge Accumulation?", "abstract": "Science is a cumulative activity, which can manifest itself through the act of citing. Citations are also central to research evaluation, thus creating incentives for researchers to cite their own work. Using a dataset containing more than 63 million articles and 51 million disambiguated authors, this paper examines the relative importance of self-citations and self-references in the scholarly communication landscape, their relationship with the age and gender of authors, as well as their effects on various research evaluation indicators. Results show that self-citations and self-references evolve in different directions throughout researchers' careers, and that men and older researchers are more likely to self-cite.
Although self-citations have, on average, a small to moderate effect on authors' citation rates, they highly inflate citations for a subset of researchers. Comparison of the abstracts of cited and citing papers to assess the relatedness of different types of citations shows that self-citations are more similar to each other than other types of citations, and therefore more relevant. However, researchers that self-reference more tend to include less relevant citations. The paper concludes with a discussion of the role of self-citations in scholarly communication. One-Sentence Summary: This study provides evidence of career and gender effects in self-citations, and of a higher similarity of citing and cited papers in the case of self-citations than external citations. Main Text: Citation analysis has been used in research evaluation for almost five decades (1-3). What started as a tool to help researchers and librarians find relevant literature more efficiently (4-6) slowly became, after the creation of the Science Citation Index in 1963, a means to assess research at various levels, from individuals to institutions and countries (7). In this context, citation analysis has come under scrutiny from researchers across all disciplines. Several authors criticized bibliometrics and citation analysis for their limitations (8). Those limitations can be divided into those that relate to the coverage of the database (9-11), accuracy of citation data (12), adverse effects (13-16), overabundance of indicators (17), and citations being questionable indicators of research impact (18-19). Literature on citation analysis has highlighted the diversity of roles of citations in scholarly papers. Based on papers published in high-energy physics, the classic study by Moravcsik and Murugesan (20) provides a typology of functions of citations, based on four non-exclusive dichotomies.
Citations can be conceptual, related to theories or concepts contained in the cited"}, "cited_paper_content": {"title": "Men Set Their Own Cites High: Gender And Self-Citation Across Fields And Over Time", "abstract": "How common is self-citation in scholarly publication, and does the practice vary by gender? Using novel methods and a data set of 1.5 million research papers in the scholarly database JSTOR published between 1779 and 2011, the authors find that nearly 10 percent of references are self-citations by a paper\u2019s authors. The findings also show that between 1779 and 2011, men cited their own papers 56 percent more than did women. In the last two decades of data, men self-cited 70 percent more than women. Women are also more than 10 percentage points more likely than men to not cite their own previous work at all. While these patterns could result from differences in the number of papers that men and women authors have published rather than gender-specific patterns of self-citation behavior, this gender gap in self-citation rates has remained stable over the last 50 years, despite increased representation of women in academia. 
The authors break down self-citation patterns by academic field and number of authors ..."}, "keywords": ["self-cite"], "citation_intent": "result"} {"citing_id": "2304.02916v1", "cited_id": "1904.09751", "section_title": "Guiding Text", "citation": "In order to avoid the \"unreliable tail\" of the distribution we use Nucleus Sampling (topp) #REFR .", "text_before_citation": ["PaSST has recently achieved state-of-the-art performance in audio classification tasks #OTHEREFR .", "Using a pre-trained PaSST model, we infer AudioSet class labels from the input audio.", "Each word in the label is embedded in the input space using trainable embeddings and concatenated with the extracted patches.", "Similarly to #OTHEREFR , in order to make our system more robust to PaSST's prediction errors, we sample each label from the output distribution."], "text_after_citation": ["During inference, we select the most probable output label instead of sampling.", "Since we add word-level information to our model, we want the input label to be semantically similar to the ground truth caption, functioning as a guiding text.", "We observe that PaSST tends to output labels that capture the general, high-level information in the audio and not labels that are more infrequent and specific.", "Such labels are more likely to be semantically similar or even be present verbatim in the ground truth captions.", "For example, an audio clip with the caption \"A short distance away, a group of people engage in indistinguishable chatter.\" is classified as Speech when in fact the AudioSet class Chatter would have higher semantic accuracy."], "citing_paper_content": {"title": "Efficient Audio Captioning Transformer With Patchout And Text Guidance", "abstract": "Automated audio captioning is multi-modal translation task that aim to generate textual descriptions for a given audio clip. 
In this paper we propose a full Transformer architecture that utilizes Patchout as proposed in [1], significantly reducing the computational complexity and avoiding overfitting. The caption generation is partly conditioned on textual AudioSet tags extracted by a pre-trained classification model which is fine-tuned to maximize the semantic similarity between AudioSet labels and ground truth captions. To mitigate the data scarcity problem of Automated Audio Captioning, we introduce transfer learning from an upstream audio-related task and an enlarged in-domain dataset. Moreover, we propose a method to apply Mixup augmentation for AAC. Ablation studies are carried out to investigate how Patchout and text guidance contribute to the final performance. The results show that the proposed techniques improve the performance of our system while reducing the computational complexity. Our proposed method received the Judges' Award at Task6A of the DCASE Challenge 2022."}, "cited_paper_content": {"title": "The Curious Case Of Neural Text Degeneration", "abstract": "Despite considerable advances in neural language modeling, it remains an open question what the best decoding strategy is for text generation from a language model (e.g. to generate a story). The counter-intuitive empirical observation is that even though the use of likelihood as training objective leads to high quality models for a broad range of language understanding tasks, maximization-based decoding methods such as beam search lead to degeneration \u2014 output text that is bland, incoherent, or gets stuck in repetitive loops. To address this we propose Nucleus Sampling, a simple but effective method to draw considerably higher quality text out of neural language models. Our approach avoids text degeneration by truncating the unreliable tail of the probability distribution, sampling from the dynamic nucleus of tokens containing the vast majority of the probability mass.
To properly examine current maximization-based and stochastic decoding methods, we compare generations from each of these methods to the distribution of human text along several axes such as likelihood, diversity, and repetition. Our results show that (1) maximization is an inappropriate decoding objective for open-ended text generation, (2) the probability distributions of the best current language models have an unreliable tail which needs to be truncated during generation and (3) Nucleus Sampling is the best decoding strategy for generating long-form text that is both high-quality \u2014 as measured by human evaluation \u2014 and as diverse as human-written text."}, "keywords": ["Nucleus Sampling"], "citation_intent": "method"} {"citing_id": "2304.13099v1", "cited_id": "1812.00676", "section_title": "", "citation": "These methods are based on the clever solution approximations that result in a time-stepping scheme requiring only a small number of previous solution states for the next state evaluation. With some exceptions ( e.g. 
#REFR ), these methods are also O(h^p).", "text_before_citation": ["The methods from the first class are sequential in nature and have algebraic convergence order that typically does not exceed 2, even for the multi-step methods #OTHEREFR , because of the intrinsic fractional-kernel singularity #OTHEREFR .", "In addition, at each time-step these methods need to query the entire solution history in order to evaluate \u2202^\u03b1_t or J^\u03b1 numerically.", "As a consequence, they are computationally costly and memory constrained.", "Nonetheless, the methods from this class are popular due to their simplicity, numerical stability #OTHEREFR and the ability to handle non-smooth initial data #OTHEREFR .", "The second class of numerical methods is represented by the works #OTHEREFR , to name a few."], "text_after_citation": ["Spectral methods from #OTHEREFR deserve a separate mention.", "Although formally belonging to the second class, they make use of the exponentially convergent contour-based propagator approximation, which permits evaluating the transient component of the solution to the linear problem without time-stepping.", "Authors of these works, however, do not apply it to (1.1), (1.4) directly.", "Instead they consider a special proxy problem \u2202_t u + I^{1\u2212\u03b1} Au = g, where I^\u03b1 is a non-local operator equal to \u2202^\u03b1_t if \u03b1 < 1, or to J^\u03b1 otherwise.", "It was shown in #OTHEREFR that the existing methodology for parabolic problems #OTHEREFR can be transferred to the mild solution of such a proxy problem with all important numerical features of the solution algorithms preserved, including uniform exponential convergence for t \u2208 [0, T] and the capacity for multi-level parallelism."], "citing_paper_content": {"title": "Exponentially Convergent Numerical Method For Abstract Cauchy Problem With Fractional Derivative Of Caputo Type", "abstract": "We present an exponentially convergent numerical method to approximate
the solution of the Cauchy problem for the inhomogeneous fractional differential equation with an unbounded operator coefficient and Caputo fractional derivative in time. The numerical method is based on the newly obtained solution formula that consolidates the mild solution representations of sub-parabolic, parabolic and sub-hyperbolic equations with sectorial operator coefficient A and non-zero initial data. The involved integral operators are approximated using the sinc-quadrature formulas that are tailored to the spectral parameters of A, fractional order \u03b1 and the smoothness of the first initial condition, as well as to the properties of the equation's right-hand side f(t). The resulting method possesses exponential convergence for positive sectorial A, any finite t, including t = 0, and the whole range \u03b1 \u2208 (0, 2). It is suitable for the practically important case when no knowledge of f(t) is available outside the considered interval t \u2208 [0, T]. The algorithm of the method is capable of multi-level parallelism. We provide numerical examples that confirm the theoretical error estimates."}, "cited_paper_content": {"title": "Efficient Multistep Methods For Tempered Fractional Calculus: Algorithms And Simulations", "abstract": "In this work, we extend the fractional linear multistep methods in [C. Lubich, SIAM J. Math. Anal., 17 (1986), pp.704--719] to the tempered fractional integral and derivative operators in the sense that the tempered fractional derivative operator is interpreted in terms of the Hadamard finite-part integral. We develop two fast methods, Fast Method I and Fast Method II, with linear complexity to calculate the discrete convolution for the approximation of the (tempered) fractional operator. Fast Method I is based on a local approximation for the contour integral that represents the convolution weight. Fast Method II is based on a globally uniform approximation of the trapezoidal rule for the integral on the real line.
Both methods are efficient, but numerical experimentation reveals that Fast Method II outperforms Fast Method I in terms of accuracy, efficiency, and coding simplicity. The memory requirement and computational cost of Fast Method II are $O(Q)$ and $O(Qn_T)$, respectively, where $n_T$ is the number of time steps and $Q$ is the number of quadrature points used in the trapezoidal rule. The effectiveness of the fast methods is verified through a series of numerical examples for long-time integration, including a numerical study of a fractional reaction-diffusion model."}, "keywords": ["clever solution approximations"], "citation_intent": "method"} {"citing_id": "2303.01625v1", "cited_id": "1801.08967", "section_title": "Definition 3.4 (Qszk).", "citation": "In particular, Menda and Watrous #REFR showed that for an oracle A, a problem L^A is in QSZK^A if there exists a reduction from L^A to QSD^A. Theorem 3.6 ([44, Theorem 1]).", "text_before_citation": ["The completeness follows from the fact that the states' trace distance is negligibly close to one, and this implies there exists a measurement that perfectly distinguishes the states.", "For the soundness, since the states are negligibly close in trace distance, the prover does not succeed with non-negligible advantage over random guessing.", "To show the protocol is zero-knowledge, the quantum simulator applies the verifier's quantum operation first.", "After receiving the response b\u2032 from the prover, it sets b\u2032 = b.", "The QSZK-completeness of QSD relativizes."], "text_after_citation": ["For alphabets \u03a3, \u0393, let L \u2286 \u0393* be a language and A \u2286 \u03a3* be an oracle.", "The language L^A is contained in QSZK^A if and only if there exists a polynomial-time uniform family of pairs of relativized quantum circuits (Q^A_0, Q^A_1) with the following properties:", "\u2022 If x \u2208 L^A, then (Q^A_0, Q^A_1) \u2208 QSD^A_1.", "\u2022 If x \u2209 L^A, then (Q^A_0, Q^A_1) \u2208
QSD^A_0.", "Ben-David and Kothari #OTHEREFR independently studied the so-called QSZK complexity of a function f, denoted QSZK(f), which is defined as the minimum number k of queries made by a pair of query algorithms A, B given oracle access to x such that (i) if f(x) = 1, then \u2016A^x \u2212 B^x\u2016_tr \u2265 2/3, and (ii) if f(x) = 0, then \u2016A^x \u2212 B^x\u2016_tr \u2264 1/3."], "citing_paper_content": {"title": "Certified Randomness From Quantum Supremacy", "abstract": "We propose an application for near-term quantum devices: namely, generating cryptographically certified random bits, to use (for example) in proof-of-stake cryptocurrencies. Our protocol repurposes the existing \"quantum supremacy\" experiments, based on random circuit sampling, that Google and USTC have successfully carried out starting in 2019. We show that, whenever the outputs of these experiments pass the now-standard Linear Cross-Entropy Benchmark (LXEB), under plausible hardness assumptions they necessarily contain \u2126(n) min-entropy, where n is the number of qubits. To achieve a net gain in randomness, we use a small random seed to produce pseudorandom challenge circuits. In response to the challenge circuits, the quantum computer generates output strings that, after verification, can then be fed into a randomness extractor to produce certified nearly uniform bits, thereby \"bootstrapping\" from pseudorandomness to genuine randomness. We prove our protocol sound in two senses: (i) under a hardness assumption called Long List Quantum Supremacy Verification, which we justify in the random oracle model, and (ii) unconditionally in the random oracle model against an eavesdropper who could share arbitrary entanglement with the device. (Note that our protocol's output is unpredictable even to a computationally unbounded adversary who can see the random oracle.)
Currently, the central drawback of our protocol is the exponential cost of verification, which in practice will limit its implementation to at most n \u223c 60 qubits, a regime where attacks are expensive but not impossible. Modulo that drawback, our protocol appears to be the only practical application of quantum computing that both requires a QC and is physically realizable today."}, "cited_paper_content": {"title": "Oracle Separations For Quantum Statistical Zero-Knowledge", "abstract": "This paper investigates the power of quantum statistical zero knowledge interactive proof systems in the relativized setting. We prove the existence of an oracle relative to which quantum statistical zero-knowledge does not contain UP intersect coUP, and we prove that quantum statistical zero knowledge does not contain UP relative to a random oracle with probability 1. Our proofs of these statements rely on a bound on output state discrimination for relativized quantum circuits based on the quantum adversary method of Ambainis, following a technique similar to one used by Ben-David and Kothari to prove limitations on a query complexity variant of quantum statistical zero-knowledge."}, "keywords": ["QSD", "oracle"], "citation_intent": "background"} {"citing_id": "2303.13794v1", "cited_id": "1712.07629", "section_title": "Two-Stage Pipeline Of Image Matching", "citation": "The concatenated key-points of stage-one X #REFR 1 and X 1 2 are then fed .After the loop in 1, the green, yellow, and red cluster will be gathered, while the pink one and the blue one will be rejected.", "text_before_citation": ["As shown in Figure 4 , The two-stages pipeline support plugging any image matching models or recipe of models into both stages, such that the models set for stage 1 is", "M 1 = {m 1 1 , m 1 2 , \u2022 \u2022 \u2022 , m 1", "n1 }, and the models set for stage 2 is", "M 2 = {m 2 1 , m 2 2 , \u2022 \u2022 \u2022 , m 2 n2 }.", "Each of the models in stage-one produce a set of key-points 
for both images."], "text_after_citation": ["If the pink one is included, more areas without useful information will be included, so rejecting the pink cluster makes sense.", "into the MKPC algorithm.", "Based on the two images I_1 and I_2 and the key-points of stage-one X^1_1 and X #OTHEREFR 2 , the MKPC generates the cropped critical regions for both images (I_1 and I_2).", "With the cropped regions of both images, each of the models in stage-two outputs the matched key-points for stage-two (X^2_1 and X^2_2).", "Afterwards, the models in stage-two match on the cropped area, producing the key-points for the second stage X^2_1 and X^2_2. Concatenating the key-points of both stages, i.e."], "citing_paper_content": {"title": "Efficient And Accurate Co-Visible Region Localization With Matching Key-Points Crop (Mkpc): A Two-Stage Pipeline For Enhancing Image Matching Performance", "abstract": "Image matching is a classic and fundamental task in computer vision. In this paper, under the hypothesis that the areas outside the co-visible regions carry little information, we propose a matching key-points crop (MKPC) algorithm. The MKPC locates, proposes, and crops the critical regions, which are the co-visible areas, with great efficiency and accuracy. Furthermore, building upon MKPC, we propose a general two-stage pipeline for image matching, which is compatible with any image matching models or combinations. We experimented with plugging SuperPoint + SuperGlue into the two-stage pipeline, whose results show that our method enhances the performance for outdoor pose estimations. What's more, in a fair comparative condition, our method outperforms the SOTA on the Image Matching Challenge 2022 Benchmark, which represents the hardest outdoor benchmark of image matching currently. * denotes contributing equally to this work. Preprint.
Under review."}, "cited_paper_content": {"title": "Superpoint: Self-Supervised Interest Point Detection And Description", "abstract": "This paper presents a self-supervised framework for training interest point detectors and descriptors suitable for a large number of multiple-view geometry problems in computer vision. As opposed to patch-based neural networks, our fully-convolutional model operates on full-sized images and jointly computes pixel-level interest point locations and associated descriptors in one forward pass. We introduce Homographic Adaptation, a multi-scale, multi-homography approach for boosting interest point detection repeatability and performing cross-domain adaptation (e.g., synthetic-to-real). Our model, when trained on the MS-COCO generic image dataset using Homographic Adaptation, is able to repeatedly detect a much richer set of interest points than the initial pre-adapted deep model and any other traditional corner detector. The final system gives rise to state-of-the-art homography estimation results on HPatches when compared to LIFT, SIFT and ORB."}, "keywords": ["concatenated key-points"], "citation_intent": "method"} {"citing_id": "2304.05336v1", "cited_id": "1910.10683", "section_title": "Lemmatization", "citation": "Models based on the T5 #REFR model architecture have achieved state-of-the-art results in various natural language processing challenges and can be fine-tuned for specific tasks.", "text_before_citation": [], "text_after_citation": ["One of the applications of T5 can be lemmatization, the process of reducing a word or phrase to its basic form (lemma).", "In Slavic languages such as Polish, Czech and Russian, lemmatization is particularly important due to the complex inflection of these languages.", "We approached the lemmatization task as a textto-text problem.", "The input to the model is an inflected phrase or named entity, which can consist of several word forms.", "For example, it can consist of nouns in singular or 
plural form, or verbs in different tenses."], "citing_paper_content": {"title": "Exploring The Use Of Foundation Models For Named Entity Recognition And Lemmatization Tasks In Slavic Languages", "abstract": "This paper describes Adam Mickiewicz University's (AMU) solution for the 4th Shared Task on SlavNER. The task involves the identification, categorization, and lemmatization of named entities in Slavic languages. Our approach involved exploring the use of foundation models for these tasks. In particular, we used models based on the popular BERT and T5 model architectures. Additionally, we used external datasets to further improve the quality of our models. Our solution obtained promising results, achieving high metric scores in both tasks. We describe our approach and the results of our experiments in detail, showing that the method is effective for NER and lemmatization in Slavic languages."}, "cited_paper_content": {"title": "Exploring The Limits Of Transfer Learning With A Unified Text-To-Text Transformer", "abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code."}, "keywords": ["various natural language"], "citation_intent": "background"} {"citing_id": "2304.06831v1", "cited_id": "1704.01212", "section_title": "Dgnn Booster V2", "citation": "However, these designs suffer from high energy consumption and low computation resource utilization because of temporal data dependencies. #REFR Lack of parallelism between GNN and RNN.", "text_before_citation": ["Integrated DGNN GCRN-M2 #OTHEREFR , GC-LSTM #OTHEREFR LRGCN #OTHEREFR , RE-Net #OTHEREFR \u2022 Data dependencies between GNN and RNN in adjacent time steps.", "\u2022 Dependent GNN at different time steps.", "Weights-evolved DGNN EvolveGCN #OTHEREFR \u2022 Weights for GNN are evolved by RNN.", "\u2022 Independent GNN at different time steps.", "However, there still remain some challenges on DGNN hardware deployment. #OTHEREFR High energy consumption and low computation resource utilization. Previous works primarily focus on deploying DGNNs on GPUs."], "text_after_citation": ["Previous research focuses on treating GNN and RNN as separate parts, which limits parallelism.", "#OTHEREFR Lack of integrating GNN and RNN optimizations together into a single system.", "Previous research usually optimizes GNN and RNN individually, which limits achieving optimal hardware efficiency."], "citing_paper_content": {"title": "Dgnn-Booster: A Generic Fpga Accelerator Framework For Dynamic Graph Neural Network Inference", "abstract": "Dynamic Graph Neural Networks (DGNNs) are becoming increasingly popular due to their effectiveness in analyzing and predicting the evolution of complex interconnected graph-based systems. However, hardware deployment of DGNNs still remains a challenge. First, DGNNs do not fully utilize hardware resources because temporal data dependencies cause low hardware parallelism. 
Additionally, there is currently a lack of generic DGNN hardware accelerator frameworks, and existing GNN accelerator frameworks have limited ability to handle dynamic graphs with changing topologies and node features. To address the aforementioned challenges, in this paper, we propose DGNN-Booster, which is a novel Field-Programmable Gate Array (FPGA) accelerator framework for real-time DGNN inference using High-Level Synthesis (HLS). It includes two different FPGA accelerator designs with different dataflows that can support the most widely used DGNNs. We showcase the effectiveness of our designs by implementing and evaluating two representative DGNN models on the ZCU102 board and measuring the end-to-end performance. The experiment results demonstrate that DGNN-Booster can achieve a speedup of up to 5.6\u00d7 compared to the CPU baseline (6226R), 8.4\u00d7 compared to the GPU baseline (A6000) and 2.1\u00d7 compared to the FPGA baseline without applying the optimizations proposed in this paper. Moreover, DGNN-Booster can achieve over 100\u00d7 and over 1000\u00d7 higher runtime energy efficiency than the CPU and GPU baselines, respectively. Our implementation code and on-board measurements are publicly"}, "cited_paper_content": {"title": "Neural Message Passing For Quantum Chemistry", "abstract": "Supervised learning on molecules has incredible potential to be useful in chemistry, drug discovery, and materials science. Luckily, several promising and closely related neural network models invariant to molecular symmetries have already been described in the literature. These models learn a message passing algorithm and aggregation procedure to compute a function of their entire input graph. At this point, the next step is to find a particularly effective variant of this general approach and apply it to chemical prediction benchmarks until we either solve them or reach the limits of the approach.
In this paper, we reformulate existing models into a single common framework we call Message Passing Neural Networks (MPNNs) and explore additional novel variations within this framework. Using MPNNs we demonstrate state-of-the-art results on an important molecular property prediction benchmark; these results are strong enough that we believe future work should focus on datasets with larger molecules or more accurate ground truth labels."}, "keywords": ["RNN"], "citation_intent": "background"} {"citing_id": "2303.13520v1", "cited_id": "1310.2963", "section_title": "Shareability Network", "citation": "In this study, we extend the shareability network approach #REFR to quantify the potential ridesharing efficiency.", "text_before_citation": [], "text_after_citation": ["The shareability network transforms the spatiotemporal distribution of trips into a theoretic graph G = (V, E), where V is the set of trips and E the set of edges that indicates the shareability between trips.", "Following the original work, we assume that at most two trips can be shared.", "Two trips are shareable if serving them together with one vehicle can (1) generate total travel duration savings and (2) only incur an acceptable delay for the involved passengers.", "Each trip x \u2208 V can be represented as a vector (o_x, d_x, t^o_x, t^d_x)."], "citing_paper_content": {"title": "Quantifying The Uneven Efficiency Benefits Of Ridesharing Market Integration", "abstract": "Ridesharing is recognized as one of the key pathways to sustainable urban mobility. With the emergence of Transportation Network Companies (TNCs) such as Uber and Lyft, the ridesharing market has become increasingly fragmented in many cities around the world, leading to efficiency loss and increased traffic congestion.
While an integrated ridesharing market (allowing sharing across TNCs) can improve the overall efficiency, how such benefits may vary across TNCs based on actual market characteristics is still not well understood. In this study, we extend a shareability network framework to quantify and explain the efficiency benefits of ridesharing market integration using available TNC trip records. Through a case study in Manhattan, New York City, the proposed framework is applied to analyze a real-world ridesharing market with 3 TNCs: Uber, Lyft, and Via. It is estimated that a perfectly integrated market in Manhattan would improve ridesharing efficiency by 13.3%, or 5% of daily TNC vehicle hours traveled. Further analysis reveals that (1) the efficiency improvement is negatively correlated with the overall demand density and inter-TNC spatiotemporal unevenness (measured by network modularity), (2) market integration would generate a larger efficiency improvement in a competitive market, and (3) the TNC with a higher intra-TNC demand concentration (measured by clustering coefficient) would benefit less from market integration. As the uneven benefits may deter TNCs from collaboration, we also illustrate how to quantify each TNC's marginal contribution based on the Shapley value, which can be used to ensure equitable profit allocation. These results can help market regulators and business alliances to evaluate and monitor market efficiency and dynamically adjust their strategies, incentives, and profit allocation schemes to promote market integration and collaboration."}, "cited_paper_content": {"title": "Quantifying The Benefits Of Vehicle Pooling With Shareability Networks", "abstract": "Taxi services are a vital part of urban transportation, and a considerable contributor to traffic congestion and air pollution causing substantial adverse effects on human health.
Sharing taxi trips is a possible way of reducing the negative impact of taxi services on cities, but this comes at the expense of passenger discomfort quantifiable in terms of a longer travel time. Due to computational challenges, taxi sharing has traditionally been approached on small scales, such as within airport perimeters, or with dynamical ad hoc heuristics. However, a mathematical framework for the systematic understanding of the tradeoff between collective benefits of sharing and individual passenger discomfort is lacking. Here we introduce the notion of shareability network, which allows us to model the collective benefits of sharing as a function of passenger inconvenience, and to efficiently compute optimal sharing strategies on massive datasets. We apply this framework to a dataset of millions of taxi trips taken in New York City, showing that with increasing but still relatively low passenger discomfort, cumulative trip length can be cut by 40% or more. This benefit comes with reductions in service cost, emissions, and with split fares, hinting toward a wide passenger acceptance of such a shared service. Simulation of a realistic online system demonstrates the feasibility of a shareable taxi service in New York City. 
Shareability as a function of trip density saturates fast, suggesting effectiveness of the taxi sharing system also in cities with much sparser taxi fleets or when willingness to share is low."}, "keywords": ["potential ridesharing efficiency"], "citation_intent": "method"} {"citing_id": "2304.01238v2", "cited_id": "1808.06226", "section_title": "Large Language Models", "citation": "We used the HuggingFace implementation of the Flan-T5 model (google/flan-t5-base) with the SentencePiece tokenizer #REFR .", "text_before_citation": ["The distance between the resulting embeddings is measured using the cosine similarity.", "Flan-T5.", "Flan-T5 (2022) is a family of models based on T5 (2019) #OTHEREFR , an encoder-decoder transformer architecture trained on multiple language tasks.", "The Flan-T5 models have undergone instruction-finetuning on over 1,800 language tasks, leading to a significant enhancement in their reasoning skills and promptability.", "However, it is worth noting that the Flan-T5 models were not trained to perform spam detection tasks."], "text_after_citation": ["Our experimentation included the small version of the Flan-T5 model (80M parameters), but it demonstrated limited generalization capabilities, which is why it was excluded from this study.", "The Flan-T5 model is a Seq2Seq model that is capable of generating textual outputs, as opposed to binary labels or probabilities.", "To leverage the capabilities of this model for spam detection, we fine-tuned it as a new task, introducing a dedicated prefix of \"classify as ham or spam:\" to every sample.", "As a result, the model was trained to correctly output either \"ham\" or \"spam\" based on the input text.", "To obtain numerical values for classification metrics, a postprocessing step was utilized to map the textual labels to 0 and 1."], "citing_paper_content": {"title": "Spam-T5: Benchmarking Large Language Models For Few-Shot Email Spam Detection", "abstract": "This paper investigates the
effectiveness of large language models (LLMs) in email spam detection by comparing prominent models from three distinct families: BERT-like, Sentence Transformers, and Seq2Seq. Additionally, we examine well-established machine learning techniques for spam detection, such as Na\u00efve Bayes and LightGBM, as baseline methods. We assess the performance of these models across four public datasets, utilizing different numbers of training samples (full training set and few-shot settings). Our findings reveal that, in the majority of cases, LLMs surpass the performance of the popular baseline techniques, particularly in few-shot scenarios. This adaptability renders LLMs uniquely suited to spam detection tasks, where labeled samples are limited in number and models require frequent updates. Additionally, we introduce Spam-T5, a Flan-T5 model that has been specifically adapted and fine-tuned for the purpose of detecting email spam. Our results demonstrate that Spam-T5 surpasses baseline models and other LLMs in the majority of scenarios, particularly when there are a limited number of training samples available. Our code is publicly available at https://github.com/jpmorganchase/emailspamdetection."}, "cited_paper_content": {"title": "Sentencepiece: A Simple And Language Independent Subword Tokenizer And Detokenizer For Neural Text Processing", "abstract": "This paper describes SentencePiece, a language-independent subword tokenizer and detokenizer designed for Neural-based text processing, including Neural Machine Translation. It provides open-source C++ and Python implementations for subword units. While existing subword segmentation tools assume that the input is pre-tokenized into word sequences, SentencePiece can train subword models directly from raw sentences, which allows us to make a purely end-to-end and language independent system. 
We perform a validation experiment of NMT on English-Japanese machine translation, and find that it is possible to achieve comparable accuracy to direct subword training from raw sentences. We also compare the performance of subword training and segmentation with various configurations. SentencePiece is available under the Apache 2 license at https://github.com/google/sentencepiece."}, "keywords": ["SentencePiece tokenizer"], "citation_intent": "method"} {"citing_id": "2303.09824v3", "cited_id": "1801.06503", "section_title": "A. Imitation Learning", "citation": "This is an active method based on the Follow-the-Leader algorithm #REFR , each validation iteration is an online learning example.", "text_before_citation": ["2) Direct Policy Learning: Direct Policy Learning (DPL), a training method based on BC, evaluates the current policy and then obtains more suitable training data for selfoptimization.", "Compared with BC, the main advantage of DPL leverages expert trajectories to instruct the agent how to recover from current errors #OTHEREFR .", "In this way, DPL alleviates the limitation of BC due to insufficient data.", "In this section, we summarize a series of DPL methods. Ross et al.", "#OTHEREFR construct a classical online IL method named Dataset Aggregation (DAgger) method."], "text_after_citation": ["The method modifies the main classifier or regressor on all state\u2212action pairs experienced by the agent.", "DAgger is a novel solution for sequential prediction problems, however, its learning efficiency might be suppressed by the far distance between policy space and learning space. 
In reply, He et al.", "#OTHEREFR propose a DAgger by coaching algorithm which employs a coach to demonstrate easy-to-learn policies for the learner and the demonstrated policies gradually converge to label.", "To better instruct the agent, the coach establishes a compromised policy which is not much worse than a ground truth control signal and much better than novice predicted action. As shown in Fig.", "3 , \u03c0 is the predicted command, \u03c0 * shows the expert trajectory, and \u03c0 presents the compromised trajectory."], "citing_paper_content": {"title": "Motion Planning For Autonomous Driving: The State Of The Art And Future Perspectives", "abstract": "Thanks to the augmented convenience, safety advantages, and potential commercial value, Intelligent vehicles (IVs) have attracted wide attention throughout the world. Although a few autonomous driving unicorns assert that IVs will be commercially deployable by 2025, their implementation is still restricted to small-scale validation due to various issues, among which precise computation of control commands or trajectories by planning methods remains a prerequisite for IVs. This paper aims to review state-of-the-art planning methods, including pipeline planning and end-to-end planning methods. In terms of pipeline methods, a survey of selecting algorithms is provided along with a discussion of the expansion and optimization mechanisms, whereas in end-to-end methods, the training approaches and verification scenarios of driving tasks are points of concern. Experimental platforms are reviewed to facilitate readers in selecting suitable training and validation methods. Finally, the current challenges and future directions are discussed. The sideby-side comparison presented in this survey not only helps to gain insights into the strengths and limitations of the reviewed methods but also assists with system-level design choices. 
Index Terms-Pipeline planning, end-to-end planning, imitation learning, reinforcement learning, parallel learning. I. INTRODUCTION INTELLIGENT vehicles (IVs) have gained considerable attention from government, industry, academia, and the"}, "cited_paper_content": {"title": "Global Overview Of Imitation Learning", "abstract": "Imitation Learning is a sequential task where the learner tries to mimic an expert's action in order to achieve the best performance. Several algorithms have been proposed recently for this task. In this project, we aim at proposing a wide review of these algorithms, presenting their main features and comparing them on their performance and their regret bounds."}, "keywords": ["online learning example", "Follow-the-Leader algorithm"], "citation_intent": "method"} {"citing_id": "2304.14749v1", "cited_id": "1810.03993", "section_title": "4.2.3", "citation": "Yet organisation-focused tools to provide information on points in the AI lifecycle (such as [40, #REFR ) are of limited help where information about interconnections between actors is needed.", "text_before_citation": ["Expanding the accountability horizon.
The accountability horizon thus poses major problems for accountability.", "Interventions are needed to help expand the accountability horizon and better place actors to (i) know more about their own supply chains, and (ii) support others in knowing more about theirs."], "text_after_citation": ["Alternatively, tracking data flow between actors could help understand interconnections beyond the first few steps #OTHEREFR , as could legal and institutional mechanisms requiring information about arrangements.", "'Know your customer' requirements around customer on-boarding for AI services (common in financial services) could help providers understand customers' purposes and intentions [26] (though these may only give some visibility over one or two steps in the chain).", "Moreover, recent CJEU data protection jurisprudence regarding transparency rights confirms that data subjects have the right to know the identity of any recipients of their personal data [21], which may help understand data flows.", "However, where the data controller does not know the recipients' identity-which may be common due to the accountability horizon-data subjects can instead be told about the categories of recipients of the data [21] (significantly less useful information).", "A particular difficulty, however, is that accountability is contextual #OTHEREFR ."], "citing_paper_content": {"title": "Understanding Accountability In Algorithmic Supply Chains", "abstract": "Academic and policy proposals on algorithmic accountability often seek to understand algorithmic systems in their socio-technical context, recognising that they are produced by 'many hands'. Increasingly, however, algorithmic systems are also produced, deployed, and used within a supply chain comprising multiple actors tied together by flows of data between them. 
In such cases, it is the working together of an algorithmic supply chain of different actors who contribute to the production, deployment, use, and functionality that drives systems and produces particular outcomes. We argue that algorithmic accountability discussions must consider supply chains and the difficult implications they raise for the governance and accountability of algorithmic systems. In doing so, we explore algorithmic supply chains, locating them in their broader technical and political economic context and identifying some key features that should be understood in future work on algorithmic governance and accountability (particularly regarding general purpose AI services). To highlight ways forward and areas warranting attention, we further discuss some implications raised by supply chains: challenges for allocating accountability stemming from distributed responsibility for systems between actors, limited visibility due to the accountability horizon, service models of use and liability, and cross-border supply chains and regulatory arbitrage. CCS CONCEPTS \u2022 Social and professional topics \u2192 Computing / technology policy; Socio-technical systems."}, "cited_paper_content": {"title": "Model Cards For Model Reporting", "abstract": "Trained machine learning models are increasingly used to perform high-impact tasks in areas such as law enforcement, medicine, education, and employment. In order to clarify the intended use cases of machine learning models and minimize their usage in contexts for which they are not well suited, we recommend that released models be accompanied by documentation detailing their performance characteristics. In this paper, we propose a framework that we call model cards, to encourage such transparent model reporting. 
Model cards are short documents accompanying trained machine learning models that provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type [15]) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains. Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information. While we focus primarily on human-centered machine learning models in the application fields of computer vision and natural language processing, this framework can be used to document any trained machine learning model. To solidify the concept, we provide cards for two supervised models: One trained to detect smiling faces in images, and one trained to detect toxic comments in text. We propose model cards as a step towards the responsible democratization of machine learning and related artificial intelligence technology, increasing transparency into how well artificial intelligence technology works. 
We hope this work encourages those releasing trained machine learning models to accompany model releases with similar detailed evaluation numbers and other relevant documentation."}, "keywords": ["AI lifecycle"], "citation_intent": "background"} {"citing_id": "2303.04683v2", "cited_id": "1204.3818", "section_title": "Transforming Problem P 1 Into Parametric Convex Optimization Problems", "citation": "With ( * , * , * ) denoting a globally optimal solution to Problem P 2 (and hence ( * , * ) denoting a globally optimal solution to Problem P 1 ), clearly setting ( , ) as ( * , * ) of #REFR and (18) satisfies( , ) = 0.", "text_before_citation": ["With Lemma 5.1 presented above, we now describe how to solve Problem P 2 using P 3 ( , ).", "Let [ # ( , ), # ( , )] denote a globally optimal solution to P 3 ( , ), where", "EQUATION", "1 ( ,", "EQUATION"], "text_after_citation": ["(23) Based on the above, solving Problem P 2 and hence P 1 can be transformed into solving (23) to obtain P 3 ( * , * ), and then setting", "[ * , * ] as [ # ( * , * ), # ( * , * )]", ", a globally optimal solution to P 3 ( * , * ), according to Lemma 5.1.", "Based on the above idea, we present Algorithm 1 next, where it will become clear that solving P 1 becomes solving a series of parametric convex optimization P 3 ( ( ) , ( ) ), with denoting the iteration index.", "(24) Readers may notice that our Lemma 5.1 provides just a necessary condition for a global optimum of Problem P 2 ."], "citing_paper_content": {"title": "Optimizing Utility-Energy Efficiency For The Metaverse Over Wireless Networks Under Physical Layer Security", "abstract": "The Metaverse, an emerging digital space, is expected to offer various services mirroring the real world. 
Wireless communications for mobile Metaverse users should be tailored to meet the following user characteristics: 1) emphasizing application-specific perceptual utility instead of simply the transmission rate, 2) concerned with energy efficiency due to the limited device battery and energy intensiveness of some applications, and 3) caring about security as the applications may involve sensitive personal data. To this end, this paper incorporates application-specific utility, energy efficiency, and physical-layer security (PLS) into the studied optimization in a wireless network for the Metaverse. Specifically, after introducing utility-energy efficiency (UEE) to represent each Metaverse user's application-specific objective under PLS, we formulate an optimization to maximize the network's weighted sum-UEE by deciding users' transmission powers and communication bandwidths. The formulated problem belongs to the sum-of-ratios optimization, for which prior studies have demonstrated its difficulty. Nevertheless, our proposed algorithm 1) obtains the global optimum for the weighted sum-UEE optimization, via a transform to parametric convex optimization problems, 2) applies to any utility function which is concave, increasing, and twice differentiable, and 3) achieves a linear time complexity in the number of users (the optimal complexity in the order sense). Simulations confirm the superiority of our algorithm over other approaches. We explain that our technique for solving the sum-of-ratios optimization is applicable to other optimization problems in wireless networks and mobile computing."}, "cited_paper_content": {"title": "Throughput Optimal Policies For Energy Harvesting Wireless Transmitters With Non-Ideal Circuit Power", "abstract": "Characterizing the fundamental tradeoffs for maximizing energy efficiency (EE) versus spectrum efficiency (SE) is a key problem in wireless communication. 
In this paper, we address this problem for a point-to-point additive white Gaussian noise (AWGN) channel with the transmitter powered solely via energy harvesting from the environment. In addition, we assume a practical on-off transmitter model with non-ideal circuit power, i.e., when the transmitter is on, its consumed power is the sum of the transmit power and a constant circuit power. Under this setup, we study the optimal transmit power allocation to maximize the average throughput over a finite horizon, subject to the time-varying energy constraint and the non-ideal circuit power consumption. First, we consider the off-line optimization under the assumption that the energy arrival time and amount are a priori known at the transmitter. Although this problem is non-convex due to the non-ideal circuit power, we show an efficient optimal solution that in general corresponds to a two-phase transmission: the first phase with an EE-maximizing on-off power allocation, and the second phase with a SE-maximizing power allocation that is non-decreasing over time, thus revealing an interesting result that both the EE and SE optimizations are unified in an energy harvesting communication system. We then extend the optimal off-line algorithm to the case with multiple parallel AWGN channels, based on the principle of nested optimization. 
Finally, inspired by the off-line optimal solution, we propose a new online algorithm under the practical setup with only the past and present energy state information (ESI) known at the transmitter."}, "keywords": ["globally optimal solution"], "citation_intent": "background"} {"citing_id": "2303.05546v1", "cited_id": "1505.04474", "section_title": "Human-Object Interaction Detection", "citation": "The problem of detecting interactions between humans and objects was originally introduced in #REFR and has drawn immense attention in the computer vision community since then.", "text_before_citation": [], "text_after_citation": ["Most of the research efforts on this topic #OTHEREFR use a two-stage solution in which human/object locations are extracted along with their semantic labels by an off-the-shelf object detector first, and an interaction classification model is learnt on pairwise human-object features.", "Apart from human/object appearances, there exist models that make use of contextual features #OTHEREFR , spatial layouts #OTHEREFR and human pose estimations #OTHEREFR .", "Inspired by one-stage object detection efforts, researchers lately try to formulate end-to-end HOI detection approaches where human/instance localization and interaction classification are performed in parallel #OTHEREFR . These methods are analogous to CNN-based (e.g. YOLO #OTHEREFR ) and Transformer-based (e.g. 
DETR #OTHEREFR ) end-to-end object detectors.", "PPDM #OTHEREFR takes a step forward and drops the need for heuristically created \"anchors\", formulating HOI detection as a point matching problem between human and object locations.", "Regardless of being one-stage or two-stage, these methods rely on strong supervision which is costly to acquire."], "citing_paper_content": {"title": "Weakly-Supervised Hoi Detection From Interaction Labels Only And Language/Vision-Language Priors", "abstract": "Human-object interaction (HOI) detection aims to extract interacting human-object pairs and their interaction categories from a given natural image. Even though the labeling effort required for building HOI detection datasets is inherently more extensive than for many other computer vision tasks, weakly-supervised directions in this area have not been sufficiently explored due to the difficulty of learning human-object interactions with weak supervision, rooted in the combinatorial nature of interactions over the object and predicate space. In this paper, we tackle HOI detection with the weakest supervision setting in the literature, using only image-level interaction labels, with the help of a pretrained vision-language model (VLM) and a large language model (LLM). We first propose an approach to prune non-interacting human and object proposals to increase the quality of positive pairs within the bag, exploiting the grounding capability of the vision-language model. Second, we use a large language model to query which interactions are possible between a human and a given object category, in order to force the model not to put emphasis on unlikely interactions. Lastly, we use an auxiliary weaklysupervised preposition prediction task to make our model explicitly reason about space. 
Extensive experiments and ablations show that all of our contributions increase HOI detection performance."}, "cited_paper_content": {"title": "Visual Semantic Role Labeling", "abstract": "In this paper we introduce the problem of Visual Semantic Role Labeling: given an image we want to detect people doing actions and localize the objects of interaction. Classical approaches to action recognition either study the task of action classification at the image or video clip level or at best produce a bounding box around the person doing the action. We believe such an output is inadequate and a complete understanding can only come when we are able to associate objects in the scene to the different semantic roles of the action. To enable progress towards this goal, we annotate a dataset of 16K people instances in 10K images with actions they are doing and associate objects in the scene with different semantic roles for each action. Finally, we provide a set of baseline algorithms for this task and analyze error modes providing directions for future work."}, "keywords": ["objects", "computer vision community"], "citation_intent": "background"} {"citing_id": "2304.03980v1", "cited_id": "1911.11236", "section_title": "Methodology", "citation": "We considered RandLA-Net #REFR , one of the most famous point-based architectures as a reference, in order to frame the problem in a perspective different from images (i.e., using MLPs in place of convolutional setting).", "text_before_citation": [], "text_after_citation": ["RandLA-Net is an efficient point-based lightweight network composed of an MLP based encoder-decoder structure that achieves remarkably high efficiency in terms of memory and computation.", "In addition, we evaluate on Cylin-der3D #OTHEREFR voxel based architecture for comparison.", "Nonetheless, the framework can be applied on top of any architecture for point cloud semantic segmentation.", "SemanticKITTI #OTHEREFR has been chosen as a reference dataset, since it 
is one of the most popular benchmarks for LiDAR semantic segmentation in autonomous driving.", "SemanticKITTI consists of 43,552 densely annotated LiDAR scans, 19,130 for training, and 4,071 for validating (which we used for testing, as done by all competing works, since the test labels are not publicly available)."], "citing_paper_content": {"title": "Continual Learning For Lidar Semantic Segmentation: Class-Incremental And Coarse-To-Fine Strategies On Sparse Data", "abstract": "During the last few years, Continual Learning (CL) strategies for image classification and segmentation have been widely investigated, designing innovative solutions to tackle catastrophic forgetting, like knowledge distillation and self-inpainting. However, the application of continual learning paradigms to point clouds is still unexplored and investigation is required, especially using architectures that capture the sparsity and uneven distribution of LiDAR data. The current paper analyzes the problem of class incremental learning applied to point cloud semantic segmentation, comparing approaches and state-of-the-art architectures. To the best of our knowledge, this is the first example of class-incremental continual learning for LiDAR point cloud semantic segmentation. Different CL strategies were adapted to LiDAR point clouds and tested, tackling both classic finetuning scenarios and the Coarse-to-Fine learning paradigm. The framework has been evaluated through two different architectures on SemanticKITTI [2, 16], obtaining results in line with state-of-the-art CL strategies and standard offline learning."}, "cited_paper_content": {"title": "Randla-Net: Efficient Semantic Segmentation Of Large-Scale Point Clouds", "abstract": "We study the problem of efficient semantic segmentation for large-scale 3D point clouds.
By relying on expensive sampling techniques or computationally heavy pre/post-processing steps, most existing approaches are only able to be trained and operate over small-scale point clouds. In this paper, we introduce RandLA-Net, an efficient and lightweight neural architecture to directly infer per-point semantics for large-scale point clouds. The key to our approach is to use random point sampling instead of more complex point selection approaches. Although remarkably computation and memory efficient, random sampling can discard key features by chance. To overcome this, we introduce a novel local feature aggregation module to progressively increase the receptive field for each 3D point, thereby effectively preserving geometric details. Extensive experiments show that our RandLA-Net can process 1 million points in a single pass with up to 200X faster than existing approaches. Moreover, our RandLA-Net clearly surpasses state-of-the-art approaches for semantic segmentation on two large-scale benchmarks Semantic3D and SemanticKITTI."}, "keywords": ["famous point-based architectures", "RandLA-Net"], "citation_intent": "method"} {"citing_id": "2304.14474v1", "cited_id": "1411.2635", "section_title": "Examples", "citation": "We have the following result for the Gaussian RKHS, which parallels Maurer's result in #REFR Section 3.2] if we impose the same assumption that T is a projection of another RKHS function class onto a set of samples.", "text_before_citation": ["so our improvement consists in removing the logarithmic factor in front of the second term on the right-hand side.", "Our second example involves functions in a Reproducing Kernel Hilbert Space (RKHS).", "Due to space limitations, we can only give a brief sketch; the reader is invited to consult [3, Chaps. 
2 and 4] for the background.", "Let X = B^k_2(\u221a k R) for some R > 0.", "Let (H_K, \u27e8\u2022, \u2022\u27e9_K) be an RKHS associated with a Mercer kernel K : X \u00d7 X \u2192 R; then we consider F = I_K(B_K(\u033a)), where B_K(\u033a) = {f \u2208 H_K : \u2016f\u2016_K \u2264 \u033a} is the zero-centered closed ball of radius \u033a in H_K and I_K is the embedding map from H_K into the space C(X) of continuous real-valued functions on X equipped with the uniform norm \u2016\u2022\u2016_X.", "text_after_citation": ["Proposition 3.", "Consider the Gaussian kernel K(x, y) = exp(\u22121/(2\u03c3^2) \u2016x \u2212 y\u2016_2^2), where \u03c3^2 > 0 is the kernel bandwidth. Then, for any T", "EQUATION", "Remark 4.", "See the Remark following Theorem 2 for the motivation behind our choices of X and T ."], "citing_paper_content": {"title": "A Chain Rule For The Expected Suprema Of Bernoulli Processes *", "abstract": "We obtain an upper bound on the expected supremum of a Bernoulli process indexed by the image of an index set under a uniformly Lipschitz function class in terms of properties of the index set and the function class, extending an earlier result of Maurer for Gaussian processes. The proof makes essential use of recent results of Bednorz and Lata\u0142a on the boundedness of Bernoulli processes. * This research was supported in part by DARPA under the Learning with Less Labels (LwLL) program."}, "cited_paper_content": {"title": "A Chain Rule For The Expected Suprema Of Gaussian Processes", "abstract": "The expected supremum of a Gaussian process indexed by the image of an index set under a function class is bounded in terms of separate properties of the index set and the function class.
The bound is relevant to the estimation of nonlinear transformations or the analysis of learning algorithms whenever hypotheses are chosen from composite classes, as is the case for multi-layer models."}, "keywords": ["Gaussian RKHS"], "citation_intent": "result"} {"citing_id": "2304.00295v1", "cited_id": "1810.03292", "section_title": "Results And Discussion", "citation": "To visualize the effect of feature decomposition, we adopt deep neural network interpretability methods in #REFR .", "text_before_citation": ["Results on CelebA.", "To illustrate each model's ability for vision tasks, we choose smiling, wavy hair, and attractive to form three binary classification tasks.", "As shown in Figure 5, Fair-CDA achieves SOTA performance followed by two mixup methods.", "It is worth mentioning that the DP and EO gap of these methods on the smiling recognition task is smaller compared with other tasks, which is a relatively fair scenario, but Fair-CDA can still improve the fairness.", "Also, Fair-CDA is the only method that achieves considerable accuracy given high fairness requirements on both tasks."], "text_after_citation": ["We draw the saliency map on the wavy hair recognition task, as shown in Figure 4 .", "Sensitive features are those strongly related to gender, while non-sensitive features are those strongly related to wavy hair.", "It can be seen that the saliency maps of sensitive features focus more on the whole face, while those of non-sensitive features focus more on the hair of a man/woman.", "Additionally, we evaluate Fair-CDA on more sensitive features on CelebA dataset.", "We implement Fair-CDA on the same task as that in #OTHEREFR"], "citing_paper_content": {"title": "Fair-Cda: Continuous And Directional Augmentation For Group Fairness", "abstract": "In this work, we propose Fair-CDA, a fine-grained data augmentation strategy for imposing fairness constraints. 
We use a feature disentanglement method to extract the features highly related to the sensitive attributes. Then we show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups. By adjusting the perturbation strength in the direction of the paths, our proposed augmentation is controllable and auditable. To alleviate the accuracy degradation caused by fairness constraints, we further introduce a calibrated model to impute labels for the augmented data. Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness. Experimental results show that Fair-CDA consistently outperforms state-of-the-art methods on widely-used benchmarks, e.g., Adult, CelebA and MovieLens. Especially, Fair-CDA obtains an 86.3% relative improvement for fairness while maintaining the accuracy on the Adult dataset. Moreover, we evaluate Fair-CDA in an online recommendation system to demonstrate the effectiveness of our method in terms of accuracy and fairness."}, "cited_paper_content": {"title": "Sanity Checks For Saliency Maps", "abstract": "Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Several saliency methods have been proposed, often guided by visual appeal on image data. In this work, we propose an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. We find that reliance, solely, on visual assessment can be misleading. Through extensive experiments we show that some existing saliency methods are independent both of the model and of the data generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either data or model, such as, finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, and debugging the model.
We interpret our findings through an analogy with edge detection in images, a technique that requires neither training data nor model. Theory in the case of a linear model and a single-layer convolutional neural network supports our experimental findings."}, "keywords": ["feature decomposition", "interpretability methods"], "citation_intent": "method"} {"citing_id": "2303.15113v1", "cited_id": "1905.12389", "section_title": "Introduction", "citation": "This pattern library extends the initial pattern catalog of 10-15 patterns originally identified by #REFR both quantitatively with additional patterns observed during the SMS and qualitatively, by offering the patterns in a machine-actionable rather than graphical representation.", "text_before_citation": ["While a number of papers about systems that learn and reason were collected as a basis for the analysis described in #OTHEREFR , these were not offered as a corpus of annotated papers to the community.", "We addressed both challenges by conducting a large-scale Systematic Mapping Study (SMS #OTHEREFR ) on SWeMLS #OTHEREFR , through which we (i) proposed a set of characteristics for describing SWeMLS and (ii) systematically collected, selected and extracted data from nearly 500 papers describing such systems.", "This led to the following artifacts which together are offered as one resource:", "the SWeMLS ontology that describes the main aspects of SWeMLS including their internal workflow in terms of boxology patterns as shown in Fig. 
1.", "The ontology schema (i.e., capturing important SWeMLS characteristics, e.g., StatisticalModel) and relevant instances (e.g., DeepLearningModel) were derived systematically during the scoping and analysis phases of the SMS, the SWeMLS-KG: a knowledge graph containing the machine-actionable description of almost 500 systems in terms of the SWeMLS ontology, and the SWeMLS Pattern Library containing the machine-actionable description of 45 SWeMLS patterns and their associated SHACL-based validation constraints."], "text_after_citation": ["This resource is timely considering the recent trend in the SW community (and beyond) to create systems that leverage both SW and ML components.", "To the best of our knowledge, it is also novel by (i) providing the first ontology (and associated pattern library) for describing SWeMLS in a machine-actionable way and (ii) a methodologically collected corpus of SWeMLS and their semantic description.", "The resource is of immediate benefit for (SW) researchers that aim to explore trends in the SWeMLS field by analysing the data in the SWeMLS-KG and as such promises to have an impact on the understanding of the status-quo in this emerging field.", "Furthermore, the resource provides a semantic framework for describing SWeMLS and their internal details, thus potentially strongly influencing this field in terms of being well-documented, data-driven, and transparent.", "We continue by discussing the impact of this resource (Sect."], "citing_paper_content": {"title": "Describing And Organizing Semantic Web And Machine Learning Systems In The Swemls-Kg", "abstract": "The overall AI trend of creating neuro-symbolic systems is reflected in the Semantic Web community with an increased interest in the development of systems that rely on both Semantic Web resources and Machine Learning components (SWeMLS, for short).
However, understanding trends and best practices in this rapidly growing field is hampered by a lack of standardized descriptions of these systems and an annotated corpus of such systems. To address these gaps, we leverage the results of a large-scale systematic mapping study collecting information about 470 SWeMLS papers and formalize these into one resource containing: (i) the SWeMLS ontology, (ii) the SWeMLS pattern library containing machine-actionable descriptions of 45 frequently occurring SWeMLS workflows, and (iii) SWEMLS-KG, a knowledge graph including machine-actionable metadata of the papers in terms of the SWeMLS ontology. This resource provides the first framework for semantically describing and organizing SWeMLS thus making a key impact in (1) understanding the status quo of the field based on the published paper corpus and (2) enticing the uptake of machine-processable system documentation in the SWeMLS area."}, "cited_paper_content": {"title": "A Boxology Of Design Patterns For Hybrid Learning And Reasoning Systems", "abstract": "We propose a set of compositional design patterns to describe a large variety of systems that combine statistical techniques from machine learning with symbolic techniques from knowledge representation. As in other areas of computer science (knowledge engineering, software engineering, ontology engineering, process mining and others), such design patterns help to systematize the literature, clarify which combinations of techniques serve which purposes, and encourage re-use of software components. We have validated our set of compositional design patterns against a large body of recent literature."}, "keywords": ["machine-actionable rather", "additional patterns"], "citation_intent": "background"} {"citing_id": "2304.11465v1", "cited_id": "1706.03762", "section_title": "A. 
Pointr-C: 3D Shape Completion Network", "citation": "The features are used as tokens to a transformer #REFR which captures the long-range relations among them and predicts the centers for the missing point cloud.", "text_before_citation": ["Given the current set of observations v o \u2208 V, we predict the complete volume using a learning-based predictor g, i.e., V = g(v o ).", "To obtain V, we use PoinTr #OTHEREFR , a transformer-based architecture that uses 3D point clouds as the input and output. PoinTr works in multiple steps for shape completion.", "First, a k-nearest neighbor (kNN) algorithm is applied to the partial point cloud to find the cluster centers that represent geometric relationships at a low resolution.", "A DGCNN #OTHEREFR adds the local features around these center points."], "text_after_citation": ["Lastly, FoldingNet #OTHEREFR performs a coarse-to-fine transformation over the predicted centers to predict the missing point cloud.", "This model was trained on the ShapeNet #OTHEREFR dataset and outperforms the previous methods on a range of objects.", "However, PoinTr was trained with implicit knowledge of the center of the object.", "Moving the partially observed point cloud to its center results in incorrect prediction from PoinTr.", "To improve the predictions, we fine-tune PoinTr using the curriculum framework, which dictates training the network over easy to hard tasks by increasing the difficulty in steps during learning #OTHEREFR ."], "citing_paper_content": {"title": "Pred-Nbv: Prediction-Guided Next-Best-View Planning For 3D Object Reconstruction", "abstract": "Prediction-based active perception has shown the potential to improve the navigation efficiency and safety of the robot by anticipating the uncertainty in the unknown environment.
The existing works for 3D shape prediction make an implicit assumption about the partial observations and therefore cannot be used for real-world planning and do not consider the control effort for next-best-view planning. We present Pred-NBV, a realistic object shape reconstruction method consisting of PoinTr-C, an enhanced 3D prediction model trained on the ShapeNet dataset, and an information and control effort-based next-best-view method to address these issues. Pred-NBV shows an improvement of 25.46% in object coverage over the traditional methods in the AirSim simulator, and performs better shape completion than PoinTr, the state-of-the-art shape completion model, even on real data obtained from a Velodyne 3D LiDAR mounted on DJI M600 Pro."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["missing point cloud", "transformer"], "citation_intent": "method"} {"citing_id": "2305.00426v1", "cited_id": "1912.12055", "section_title": "Setup", "citation": "We used CQT transform as a spectrogram function for all experiments. We used nnAudio library #REFR for spectrogram calculation.", "text_before_citation": ["Split for training, validation and testing was datasetwise, which means that data from the training set for MAPS and the training set for GuitarSet were present in the training set for SynthesizedInstruments.", "We are using only randomly chosen fixed-size sequences of each composition for training, so it should not make this model overfit to traits specific to datasets distributions.", "Achieved recordings contain the clean version of each instrument, which is often not desired in analyzing noisy real-world data recorded by modern microphones.", "Each recording generated by the software synthesizer was later sampled with a 16 kHz sample rate and transformed into the CQT spectrogram.", "Data processing Each experiment was focused on training on a specific dataset."], "text_after_citation": ["To avoid recalculating CQT transform each time, we saved data on disk once it was calculated and loaded it in the subsequent experiments to not perform it again.", "Experimental protocol Datasets were split into training, validation, and testing sets.", "We checked the model using the validation set after every ten learning epochs.", "After training, all datasets were tested using corresponding testing sets for all available datasets.", "In the discussion of results, only datasets containing real-world recordings were considered (MAPS and GuitarSet)."], "citing_paper_content": {"title": "Transfer Of Knowledge Among Instruments In Automatic Music Transcription", "abstract": "Automatic 
music transcription (AMT) is one of the most challenging tasks in the music information retrieval domain. It is the process of converting an audio recording of music into a symbolic representation containing information about the notes, chords, and rhythm. Current research in this domain focuses on developing new models based on transformer architecture or using methods to perform semi-supervised training, which gives outstanding results, but the computational cost of training such models is enormous. This work shows how to employ easily generated synthesized audio data produced by software synthesizers to train a universal model. It is a good base for further transfer learning to quickly adapt the transcription model for other instruments. Achieved results prove that using synthesized data for training may be a good base for pretraining general-purpose models, where the task of transcription is not focused on one instrument."}, "cited_paper_content": {"title": "Nnaudio: An On-The-Fly Gpu Audio To Spectrogram Conversion Toolbox Using 1D Convolution Neural Networks", "abstract": "Converting time domain waveforms to frequency domain spectrograms is typically considered to be a preprocessing step done before model training. This approach, however, has several drawbacks. First, it takes a lot of hard disk space to store different frequency domain representations. This is especially true during the model development and tuning process, when exploring various types of spectrograms for optimal performance. Second, if another dataset is used, one must process all the audio clips again before the network can be retrained. In this paper, we integrate the time domain to frequency domain conversion as part of the model structure, and propose a neural network based toolbox, nnAudio, which leverages 1D convolutional neural networks to perform time domain to frequency domain conversion during feed-forward.
It allows on-the-fly spectrogram generation without the need to store any spectrograms on the disk. This approach also allows back-propagation on the waveforms-to-spectrograms transformation layer, which implies that this transformation process can be made trainable, and hence further optimized by gradient descent. nnAudio reduces the waveforms-to-spectrograms conversion time for 1,770 waveforms (from the MAPS dataset) from $10.64$ seconds with librosa to only $0.001$ seconds for Short-Time Fourier Transform (STFT), $18.3$ seconds to $0.015$ seconds for Mel spectrogram, $103.4$ seconds to $0.258$ for constant-Q transform (CQT), when using GPU on our DGX work station with CPU: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz Tesla v100 32Gb GPUs. (Only 1 GPU is being used for all the experiments.) We also further optimize the existing CQT algorithm, so that the CQT spectrogram can be obtained without aliasing in a much faster computation time (from $0.258$ seconds to only $0.001$ seconds)."}, "keywords": ["CQT transform", "spectrogram function"], "citation_intent": "method"} {"citing_id": "2303.04450v1", "cited_id": "1705.00722", "section_title": "Numerical Results", "citation": "These results corroborate the previous observation in #REFR that, parametric filters offer a more robust alternative to the particle filter in case of parameter uncertainty.", "text_before_citation": ["It can be seen that overall the EKF approximation is poor and has the worst performance.", "UKF and ENKF improve upon this, where UKF is more robust. However PF outperforms all three of them, as expected.", "As for EFKF, for low and high values of \u03b1 the results are comparable to PF where EFKF performs better in some cases and PF better in the others. 
The same also applies to SKF.", "On the other hand, for mid range of \u03b1, EFKF is significantly better than all the other filters, including PF.", "The best value is highlighted in boldface, which occurs at \u03b1 = 0.7."], "text_after_citation": ["Interestingly, unlike the previous case, we see that EFKF also provides better performance even when there is no parameter uncertainty.", "For this \"Match\" we have the values in the rightmost column.", "Firstly, we can see that the PF outperforms all competitors including SKF and MKF, except for EFKF.", "For the case of \u03b1 = 0.5 and \u03b1 = 0.7, EFKF gives significantly lower RMSE.", "The best is once again highlighted in boldface and occurs at \u03b1 = 0.7."], "citing_paper_content": {"title": "Nonlinear Kalman Filtering With Reparametrization Gradients", "abstract": "We introduce a novel nonlinear Kalman filter that utilizes reparametrization gradients. The widely used parametric approximation is based on a jointly Gaussian assumption of the state-space model, which is in turn equivalent to minimizing an approximation to the Kullback-Leibler divergence. It is possible to obtain better approximations using the alpha divergence, but the resulting problem is substantially more complex. In this paper, we introduce an alternate formulation based on an energy function, which can be optimized instead of the alpha divergence. The optimization can be carried out using reparametrization gradients, a technique that has recently been utilized in a number of deep learning models."}, "cited_paper_content": {"title": "Nonlinear Kalman Filtering With Divergence Minimization", "abstract": "We consider the nonlinear Kalman filtering problem using Kullback\u2013Leibler (KL) and $\\alpha$ -divergence measures as optimization criteria. Unlike linear Kalman filters, nonlinear Kalman filters do not have closed form Gaussian posteriors because of a lack of conjugacy due to the nonlinearity in the likelihood.
In this paper, we propose novel algorithms to approximate this posterior by optimizing the forward and reverse forms of the KL divergence, as well as the $\\alpha$ -divergence that contains these two as limiting cases. Unlike previous approaches, our algorithms do not make approximations to the divergences being optimized, but use Monte Carlo techniques to derive unbiased algorithms for direct optimization. We assess performance on radar and sensor tracking, and options pricing, showing general improvement over the extended, unscented, and ensemble Kalman filters, as well as competitive performance with particle filtering."}, "keywords": ["particle filter"], "citation_intent": "result"} {"citing_id": "2303.13272v1", "cited_id": "1908.07919", "section_title": "Multi-Scale Network", "citation": "To address the issue, we introduce the multi-scale network, which was first proposed in computer vision tasks #REFR .", "text_before_citation": ["If only a certain part of an IPT is considered, it tends to misjudge one IPT as another.", "For example, a portamento note will be misjudged as a normal pluck note if its inflection point in the spectrogram is out of the receptive field of the model.", "As solutions, the receptive field is usually enlarged by directly stacking multiple convolution layers or using large-size convolution kernels.", "But these methods both result in an excess of parameters.", "Simply increasing the receptive field will also lead to a loss in details for the subtle change of short IPTs."], "text_after_citation": ["As shown in Fig. 2(a), our proposed model is composed of three horizontal branches for different scales in the time axis.", "The resolution of the feature in the branches from top to bottom is from high to low.", "The middle branch with the medium resolution is used as a transition for the fusing between high-resolution features and long-range features.", "By downsampling/upsampling the feature to different scales, long-range features can be
fused with high-resolution features repeatedly.", "To convert the spectral information into the channel domain, we first process the CQT input (1, F, T) into a sequence with the shape of (88, T, 1) by reshaping and batch normalization."], "citing_paper_content": {"title": "Frame-Level Multi-Label Playing Technique Detection Using Multi-Scale Network And Self-Attention Mechanism", "abstract": "Instrument playing technique (IPT) is a key element of musical presentation. However, most of the existing works for IPT detection only concern monophonic music signals, yet little has been done to detect IPTs in polyphonic instrumental solo pieces with overlapping IPTs or mixed IPTs. In this paper, we formulate it as a frame-level multi-label classification problem and apply it to Guzheng, a Chinese plucked string instrument. We create a new dataset, Guzheng Tech99, containing Guzheng recordings and onset, offset, pitch, IPT annotations of each note. Because different IPTs vary a lot in their lengths, we propose a new method to solve this problem using multi-scale network and self-attention. The multi-scale network extracts features from different scales, and the self-attention mechanism applied to the feature maps at the coarsest scale further enhances the long-range feature extraction. Our approach outperforms existing works by a large margin, indicating its effectiveness in IPT detection."}, "cited_paper_content": {"title": "Deep High-Resolution Representation Learning For Visual Recognition", "abstract": "High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection.
Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \\emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \\emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\\url{this https URL}}."}, "keywords": ["multi-scale network"], "citation_intent": "method"} {"citing_id": "2304.03718v1", "cited_id": "1905.10083", "section_title": "Introduction", "citation": "Edge-AI optimizes the response time and accelerates AI computation when data is generated and processed for inference on-device #REFR .", "text_before_citation": ["In the past, AI approaches including machine learning, computer vision, and robotics along with data mining and image processing techniques are utilized to address the SHM challenges #OTHEREFR .", "Especially deep learning approaches are broadly utilized in the past for detecting and segmenting bridge components #OTHEREFR .", "Results obtained through these computational approaches have shown immense potential to support health monitoring tasks such as crack detection, segmentation of bridge components, UAV inspection of bridges, and damage detection for bridge structure #OTHEREFR .", "However, real-time inferences in 
SHM, especially crack detection is challenging based on several factors like a wide range of various complex backgrounds and crack-like features.", "Edge-AI allows a process for artificial intelligence inference and computation on-device rather than on cloud servers or network connections."], "text_after_citation": ["Zhou et al 2019 #OTHEREFR suggests that this ability of edge-AI platforms has advantages as follows- In this research study, drawing from artificial intelligence (AI), neural network process acceleration, and edge computing literature, the study aims to investigate edge-AI integration in structural health monitoring tasks.", "The objective is to develop a lightweight neural network model and to optimize neural network response time performance in real time.", "To accomplish this, we are utilizing Kneron KL520 platform which includes a neural processing unit (NPU), an AI system-on-chip(SoC) hardware.", "To address the inherent challenges for real-time inference an edge-AI framework for SHM domain is proposed.", "This paper introduces a novel framework to generate lightweight optimized models on the edge within the SHM domain. Figure 1 describes the edge-AI-SHM framework."], "citing_paper_content": {"title": "Integrating Edge-Ai In Structural Health Monitoring Domain *", "abstract": "Structural health monitoring (SHM) tasks like damage detection are crucial for decision-making regarding maintenance and deterioration. For example, crack detection in SHM is crucial for bridge maintenance as crack progression can lead to structural instability. However, most AI/ML models in the literature have low latency and late inference time issues while performing in real-time environments. This study aims to explore the integration of edge-AI in SHM domain for real-time bridge inspections. 
Based on edge-AI literature, its capabilities will be valuable integration for a real-time decision support system in SHM tasks such that real-time inferences can be performed on physical sites. This study will utilize commercial edge-AI platforms, such as Google Coral Dev Board or Kneron KL520, to develop and analyze the effectiveness of edge-AI devices. Thus, this study proposes an edge AI framework for the structural health monitoring domain. An edge-AI-compatible deep learning model is developed to validate the framework to perform real-time crack classification. The effectiveness of this model will be evaluated based on its accuracy, the confusion matrix generated, and the inference time observed in a real-time setting."}, "cited_paper_content": {"title": "Edge Intelligence: Paving The Last Mile Of Artificial Intelligence With Edge Computing", "abstract": "With the breakthroughs in deep learning, the recent years have witnessed a booming of artificial intelligence (AI) applications and services, spanning from personal assistant to recommendation systems to video/audio surveillance. More recently, with the proliferation of mobile computing and Internet of Things (IoT), billions of mobile and IoT devices are connected to the Internet, generating zillions bytes of data at the network edge. Driven by this trend, there is an urgent need to push the AI frontiers to the network edge so as to fully unleash the potential of the edge big data. To meet this demand, edge computing, an emerging paradigm that pushes computing tasks and services from the network core to the network edge, has been widely recognized as a promising solution. The resulted new interdiscipline, edge AI or edge intelligence (EI), is beginning to receive a tremendous amount of interest. However, research on EI is still in its infancy stage, and a dedicated venue for exchanging the recent advances of EI is highly desired by both the computer system and AI communities.
To this end, we conduct a comprehensive survey of the recent research efforts on EI. Specifically, we first review the background and motivation for AI running at the network edge. We then provide an overview of the overarching architectures, frameworks, and emerging key technologies for deep learning model toward training/inference at the network edge. Finally, we discuss future research opportunities on EI. We believe that this survey will elicit escalating attentions, stimulate fruitful discussions, and inspire further research ideas on EI."}, "keywords": ["Edge-AI"], "citation_intent": "background"} {"citing_id": "2305.01778v1", "cited_id": "1508.07909", "section_title": "Setup", "citation": "We learn a joint vocabulary for glosses and texts via byte pair encoding (BPE) #REFR .", "text_before_citation": ["Datasets We work on three SLT datasets: PHOENIX-2014T, CSL-Daily, and DGS3-T.", "PHOENIX-2014T and DGS3-T focus on German Sign Language, CSL-Daily on Chinese Sign Language.", "All three datasets provide triplet samples, each consisting of a sign language video, a sentence-level gloss annotation and their corresponding text translation. 
Detailed statistics are listed in Table 1.", "We employ MuST-C English-German (En-De, 229K samples) and English-Chinese (En-Zh, 185K samples) #OTHEREFR as the augmented MT data for PHOENIX-2014T/DGS3-T and CSL-Daily, respectively."], "text_after_citation": ["We employ 1K BPE operations when MT data is not used, and increase it to 8K/8K/10K for PHOENIX-2014T/DGS3-T/CSL-Daily otherwise.", "Model Settings We experiment with Transformer and start our analysis with a Baseline system optimized on Sign2Text alone with the following configurations: encoder and decoder layers of N_enc^S = 2, N_enc^P = 0 and N_dec = 2 respectively, model dimension of d = 512, feed-forward dimension of d_ff = 2048, attention head of h = 8, and no CTC regularization.", "We adopt the SMKD model (Hao et al., 2021) 1 to extract sign embeddings, and pretrain the model on each benchmark separately on the Sign2Gloss task considering the large difference of sign videos across benchmarks.", "More details about datasets and model settings are given in Appendix A.1.", "Evaluation We report results mainly on the SLT task."], "citing_paper_content": {"title": "Sltunet: A Simple Unified Model For Sign Language Translation", "abstract": "Despite recent successes with neural models for sign language translation (SLT), translation quality still lags behind spoken languages because of the data scarcity and modality gap between sign video and text. To address both problems, we investigate strategies for cross-modality representation sharing for SLT. We propose SLTUNET, a simple unified neural model designed to support multiple SLT-related tasks jointly, such as sign-to-gloss, gloss-to-text and sign-to-text translation. Jointly modeling different tasks endows SLTUNET with the capability to explore the cross-task relatedness that could help narrow the modality gap.
In addition, this allows us to leverage the knowledge from external resources, such as abundant parallel data used for spoken-language machine translation (MT). We show in experiments that SLTUNET achieves competitive and even state-of-the-art performance on PHOENIX-2014T and CSL-Daily when augmented with MT data and equipped with a set of optimization techniques. We further use the DGS Corpus for end-to-end SLT for the first time. It covers broader domains with a significantly larger vocabulary, which is more challenging and which we consider to allow for a more realistic assessment of the current state of SLT than the former two. Still, SLTUNET obtains improved results on the DGS Corpus. Code is available at https://github.com/bzhangGo/sltunet."}, "cited_paper_content": {"title": "Neural Machine Translation Of Rare Words With Subword Units", "abstract": "Neural machine translation (NMT) models typically operate with a fixed vocabulary, but translation is an open-vocabulary problem. Previous work addresses the translation of out-of-vocabulary words by backing off to a dictionary. In this paper, we introduce a simpler and more effective approach, making the NMT model capable of open-vocabulary translation by encoding rare and unknown words as sequences of subword units. This is based on the intuition that various word classes are translatable via smaller units than words, for instance names (via character copying or transliteration), compounds (via compositional translation), and cognates and loanwords (via phonological and morphological transformations).
We discuss the suitability of different word segmentation techniques, including simple character n-gram models and a segmentation based on the byte pair encoding compression algorithm, and empirically show that subword models improve over a back-off dictionary baseline for the WMT 15 translation tasks English→German and English→Russian by up to 1.1 and 1.3 BLEU, respectively."}, "keywords": ["joint vocabulary"], "citation_intent": "method"} {"citing_id": "2304.02175v1", "cited_id": "1903.09890", "section_title": "I. Introduction", "citation": "This method takes as input surface elevation data including buildings and other structures and uses the fluid-inspired method presented in #REFR to generate layers of air corridors.", "text_before_citation": ["Air networks have two uses.", "First, they provide [closely-spaced] smooth paths that offer appropriate clearance from buildings, terrain, other structures, and neighboring UAS.", "The consideration of environmental maps and safe separation distances significantly simplifies UAS flight planning and dynamic rerouting.", "Second, they define safely separated candidate paths that can assure collision avoidance so long as each UAS tracks its planned trajectory to within expected error bounds.", "The authors previously presented a method to automatically generate a dense air network for any urban environment that wraps structures #OTHEREFR ."], "text_after_citation": ["Each layer is a fixed-altitude plane with air corridors oriented in a fixed nominal direction that wrap around buildings and other structures.", "This paper presents several advancements to the authors' previous work #OTHEREFR which increase its scalability and usability.", "More specifically, this paper offers the following distinct and novel contributions:", "1) The acquisition method for surface elevation data has been fully automated for US cities.", "In our previous work #OTHEREFR , three-dimensional models for buildings were manually created using Google Maps
data which is not scalable."], "citing_paper_content": {"title": "Can A Laplace Pde Define Air Corridors Through Low-Altitude Airspace?", "abstract": "Urban Uncrewed Aircraft System (UAS) flight will require new regulations that assure safety and accommodate unprecedented traffic density levels. Multi-UAS coordination is essential to both objectives. This paper models UAS coordination as an ideal fluid flow with a stream field governed by the Laplace partial differential equation. Streamlines spatially define closely-spaced deconflicted routes through the airspace and define air corridors that safely wrap buildings and other structures so UAS can avoid collision even when flying among low-altitude vertical obstacles and near mountainous terrain. We divide a city into zones, with each zone having its own sub-network, to allow for modularity and assure computation time for route generation is linear as a function of total area. We demonstrate the strength of our proposed approach by computing air corridors through low altitude airspace of select cities with tall buildings. For US cities, we use open LiDAR elevation data to determine surface elevation maps. We select non-US cities with existing high-fidelity three-dimensional landscape models."}, "cited_paper_content": {"title": "Physics-Based Freely Scalable Continuum Deformation For Uas Traffic Coordination", "abstract": "This paper develops a novel physics-inspired traffic coordination approach and applies it to Unmanned Aircraft System (UAS) traffic management. We extend available physics-inspired approaches previously applied to 1-D traffic flow on highways and urban streets to support models of traffic coordination in higher dimension airspace for cases where no predefined paths exist. The paper considers airspace as a finite control volume while UAS coordination, treated as continuum deformation, is controlled at the airspace boundaries. 
By partitioning airspace into planned and unplanned spaces, the paper models nominal coordination in the planned airspace as the solution of a partial differential equation with spatiotemporal parameters. This paper also improves resilience to vehicle failures with a resilient boundary control algorithm to update the geometry of the planned space when UAS problems threaten safe coordination in existing navigable airspace channels. To support UAS coordination at the microscopic level, we propose clustering vehicles based on vehicle performance limits. UAS clusters, with each UAS treated as a particle of a virtual rigid body, use leader-follower containment to acquire the macroscopic desired trajectory."}, "keywords": ["air corridors"], "citation_intent": "method"} {"citing_id": "2305.02968v1", "cited_id": "1802.09477", "section_title": "Representations Of Mtm", "citation": "In the Walk task, we note it actually improves over the asymptotic performance of the base TD3 #REFR algorithm within 10% of training budget.", "text_before_citation": ["We additionally test state-action representations of MTM by using the latent representation of the state and action encoded jointly with MTM.", "We allow end to end finetuning of the representations during training.", "We compare training TD3 on raw states to training TD3 with (a) state representations from the MTM model, and (b) state-action representations from the MTM model with the offline RL loss (i.e. 
TD3 objective).", "Figure 7 depicts the learning curves for the aforementioned experiment.", "In all cases we see significant improvement in training efficiency by using MTM representations, both with state and state-action representations."], "text_after_citation": ["Additionally, we find that the state-action representation from MTM can provide significant benefits, as in the case of the Walk task.", "Here, finetuning state-action representation from MTM leads to better asymptotic performance compared to state-only representation or learning from scratch.", "We provide additional plots of MTM frozen representations in Appendix E.3"], "citing_paper_content": {"title": "Masked Trajectory Models For Prediction, Representation, And Control", "abstract": "We introduce Masked Trajectory Models (MTM) as a generic abstraction for sequential decision making. MTM takes a trajectory, such as a state-action sequence, and aims to reconstruct the trajectory conditioned on random subsets of the same trajectory. By training with a highly randomized masking pattern, MTM learns versatile networks that can take on different roles or capabilities, by simply choosing appropriate masks at inference time. For example, the same MTM network can be used as a forward dynamics model, inverse dynamics model, or even an offline RL agent. Through extensive experiments in several continuous control tasks, we show that the same MTM network (i.e. same weights) can match or outperform specialized networks trained for the aforementioned capabilities. Additionally, we find that state representations learned by MTM can significantly accelerate the learning speed of traditional RL algorithms.
Finally, in offline RL benchmarks, we find that MTM is competitive with specialized offline RL algorithms, despite MTM being a generic self-supervised learning method without any explicit RL components."}, "cited_paper_content": {"title": "Addressing Function Approximation Error In Actor-Critic Methods", "abstract": "In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and critic. Our algorithm takes the minimum value between a pair of critics to restrict overestimation and delays policy updates to reduce per-update error. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested."}, "keywords": ["Walk task", "training budget"], "citation_intent": "background"} {"citing_id": "2304.06645v1", "cited_id": "1903.05186", "section_title": "Ahmad", "citation": "These issues are addressed by the authors of #REFR , who introduced an arithmetic and geometric mean (AGM) robustness measure for STL.", "text_before_citation": ["Ahmad, Roberto Tron, and Calin Belta ({ahmadgh,tron,cbelta}@bu.edu) are with the Division of System Engineering, Boston University, Boston, MA 02215, USA.", "#OTHEREFR Cristian Vasile (cvr519@lehigh.edu) is with Mechanical Engineering and Mechanics Department at Lehigh University, Bethlehem, PA, 18015, USA robustly satisfy the specifications #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR .", "The work in #OTHEREFR considers planning for syntactically cosafe LTL using RRT * , in addition to the task specifications, other spatial requirements are expressed using fragment-STL where its robustness is used as the optimality criterion for RRT * .", "In #OTHEREFR , the authors synthesize controllers for time-critical systems for which 
they quantify a temporal robustness measure that needs to be optimized.", "The traditional robustness metric is not differentiable and it is mostly determined by one value of the signal, i.e., it \"masks\" most of the signal."], "text_after_citation": ["TWTL has several advantages over STL, MTL, and other concrete-time TLs.", "First, its syntax and semantics can express serial tasks in an efficient and explicit way.", "This is important in many applications, especially in robotics #OTHEREFR . Second, TWTL formulae can be efficiently translated into automata.", "The complexity of the translation algorithm is independent of the formula time bounds #OTHEREFR .", "This makes this logic suitable for automata-based synthesis and planning problems (see #OTHEREFR for a planning application)."], "citing_paper_content": {"title": "Robustness Measures And Monitors For Time Window Temporal Logic", "abstract": "Temporal logics (TLs) have been widely used to formalize interpretable tasks for cyber-physical systems. Time Window Temporal Logic (TWTL) has been recently proposed as a specification language for dynamical systems. In particular, it can easily express robotic tasks, and it allows for efficient, automata-based verification and synthesis of control policies for such systems. In this paper, we define two quantitative semantics for this logic, and two corresponding monitoring algorithms, which allow for real-time quantification of satisfaction of formulas by trajectories of discrete-time systems. We demonstrate the new semantics and their runtime monitors on numerical examples."}, "cited_paper_content": {"title": "Arithmetic-Geometric Mean Robustness For Control From Signal Temporal Logic Specifications", "abstract": "We present a new average-based robustness for Signal Temporal Logic (STL) and a framework for optimal control of a dynamical system under STL constraints. 
By averaging the scores of different specifications or subformulae at different time points, our definition highlights the frequency of satisfaction as well as how robustly each specification is satisfied. Its usefulness in control synthesis problems is illustrated through case studies."}, "keywords": ["robustness measure"], "citation_intent": "background"} {"citing_id": "2303.11844v1", "cited_id": "1810.02733", "section_title": "Estimation From Independent Samples", "citation": "We follow a strategy similar to that used for the sample complexity of EOT #REFR . In Thm.", "text_before_citation": ["Let τ, λ > 0, define d′ = 2⌈d/2⌉ and assume that c ∈ C^{1+d′/2}(X × X).", "Let μ̂_{λ,τ} be the empirical barycenter and μ*_{λ,τ} the population barycenter.", "Then there is C > 0 independent of (ν_k)_k such that", "E[H(μ̂_{λ,τ} | μ*_{λ,τ})] ≤ Cτ^{−1}(1 + λ^{−d′/2})n^{−1/2} .", "Proof."], "text_after_citation": ["4.1, one can replace the homogeneous Sobolev norm Ḣ^{−p} by the (larger) inhomogeneous norm H^{−p} .", "For p = 1 + d′/2, it is known that H^p is a Reproducing Kernel Hilbert space norm, and by standard empirical process theory results #OTHEREFR one has", "E‖ν̂_k − ν_k‖_{H^{−p}} ≤ Cn^{−1/2} ."], "citing_paper_content": {"title": "Doubly Regularized Entropic Wasserstein Barycenters", "abstract": "We study a general formulation of regularized Wasserstein barycenters that enjoys favorable regularity, approximation, stability and (grid-free) optimization properties. This barycenter is defined as the unique probability measure that minimizes the sum of entropic optimal transport (EOT) costs with respect to a family of given probability measures, plus an entropy term. We denote it (\u03bb, \u03c4)-barycenter, where \u03bb is the inner regularization strength and \u03c4 the outer one.
This formulation recovers several previously proposed EOT barycenters for various choices of \u03bb, \u03c4 \u2265 0 and generalizes them. First, in spite of (and in fact owing to) being doubly regularized, we show that our formulation is debiased for \u03c4 = \u03bb/2: the suboptimality in the (unregularized) Wasserstein barycenter objective is, for smooth densities, of the order of the strength λ² of entropic regularization, instead of max{\u03bb, \u03c4} in general. We discuss this phenomenon for isotropic Gaussians where all (\u03bb, \u03c4)-barycenters have closed form. Second, we show that for \u03bb, \u03c4 > 0, the barycenter has a smooth density and is strongly stable under perturbation of the marginals. In particular, it can be estimated efficiently: given n samples from each of the probability measures, it converges in relative entropy to the population barycenter at a rate n^{−1/2}. And finally, this formulation lends itself naturally to a grid-free optimization algorithm: we propose a simple noisy particle gradient descent which, in the mean-field limit, converges globally at an exponential rate to the barycenter."}, "cited_paper_content": {"title": "Sample Complexity Of Sinkhorn Divergences", "abstract": "Optimal transport (OT) and maximum mean discrepancies (MMD) are now routinely used in machine learning to compare probability measures. We focus in this paper on \emph{Sinkhorn divergences} (SDs), a regularized variant of OT distances which can interpolate, depending on the regularization strength $\varepsilon$, between OT ($\varepsilon=0$) and MMD ($\varepsilon=\infty$).
Although the tradeoff induced by that regularization is now well understood computationally (OT, SDs and MMD require respectively $O(n^3\log n)$, $O(n^2)$ and $n^2$ operations given a sample size $n$), much less is known in terms of their \emph{sample complexity}, namely the gap between these quantities, when evaluated using finite samples \emph{vs.} their respective densities. Indeed, while the sample complexity of OT and MMD stand at two extremes, $1/n^{1/d}$ for OT in dimension $d$ and $1/\sqrt{n}$ for MMD, that for SDs has only been studied empirically. In this paper, we \emph{(i)} derive a bound on the approximation error made with SDs when approximating OT as a function of the regularizer $\varepsilon$, \emph{(ii)} prove that the optimizers of regularized OT are bounded in a Sobolev (RKHS) ball independent of the two measures and \emph{(iii)} provide the first sample complexity bound for SDs, obtained, by reformulating SDs as a maximization problem in a RKHS. We thus obtain a scaling in $1/\sqrt{n}$ (as in MMD), with a constant that depends however on $\varepsilon$, making the bridge between OT and MMD complete."}, "keywords": ["sample complexity"], "citation_intent": "method"} {"citing_id": "2303.11774v1", "cited_id": "1803.05350", "section_title": "Background And Related Work", "citation": "As recently #REFR showed, Rademacher random projections are asymptotically dimension-optimal with exact constant ; this result improves upon a previous suboptimal bound of Kane and Nelson [26] .", "text_before_citation": ["Yet the quantitative analysis of the property #OTHEREFR has remained a difficult challenge, resulting in complex proofs simplified many times #OTHEREFR , crude statistical bounds (for example, sparse variants have an exponential gap with respect to the sharp no-go results #OTHEREFR ), and a lack of finite-dimensional insights (bounds are input-oblivious which widens the gap between theory predictions and empirical performance #OTHEREFR )",
"This work addresses the aforementioned gap by revisiting the most promising construction of Rademacher random projections, which uses the following matrix", "EQUATION", "More specifically, this paper solves the following problem:", "Give a precise, non-asymptotic, non-oblivious analysis of random projections #OTHEREFR ."], "text_after_citation": ["The statistical performance of Rademacher projections is superior to the sparse ones, as demonstrated empirically in 1.", "Furthermore, the theoretical bounds for Rademacher random projections are much better than those available for sparse analogues #OTHEREFR .", "The best, prior to this paper, analysis of (2) is given by Achlioptas in #OTHEREFR .", "It is worth noting that Rademacher projections are also superior to their Gaussian counterparts; indeed, we know that they are dominated by the gaussian-based projections #OTHEREFR .", "The relation of statistical performance and input structure has not been understood in-depth yet; as for conceptually similar research, we note that recent results show that for sparse data one can improve the sparsity of random projections, gaining in computing time #OTHEREFR ."], "citing_paper_content": {"title": "Exact Non-Oblivious Performance Of Rademacher Random Embeddings", "abstract": "This paper revisits the performance of Rademacher random projections, establishing novel statistical guarantees that are numerically sharp and non-oblivious with respect to the input data. More specifically, the central result is the Schur-concavity property of Rademacher random projections with respect to the inputs. This offers a novel geometric perspective on the performance of random projections, while improving quantitatively on bounds from previous works. As a corollary of this broader result, we obtained the improved performance on data which is sparse or is distributed with small spread. 
This non-oblivious analysis is a novelty compared to techniques from previous work, and bridges the frequently observed gap between theory and practice. The main result uses an algebraic framework for proving Schur-concavity properties, which is a contribution of independent interest and an elegant alternative to derivative-based criteria."}, "cited_paper_content": {"title": "Optimal Bounds For Johnson-Lindenstrauss Transformations", "abstract": "In 1984, Johnson and Lindenstrauss proved that any finite set of data in a high-dimensional space can be projected to a lower-dimensional space while preserving the pairwise Euclidean distance between points up to a bounded relative error. If the desired dimension of the image is too small, however, Kane, Meka, and Nelson (2011) and Jayram and Woodruff (2013) independently proved that such a projection does not exist. In this paper, we provide a precise asymptotic threshold for the dimension of the image, above which, there exists a projection preserving the Euclidean distance, but, below which, there does not exist such a projection."}, "keywords": ["Rademacher random projections"], "citation_intent": "result"} {"citing_id": "2303.06872v3", "cited_id": "1706.03762", "section_title": "B. Camera-Lidar Fusion For Relocalization With Multi-Head Self-Attention", "citation": "Like the Transformer encoder in #REFR , a normalization layer is applied before MHSA, and a residual connection is attached after MHSA.", "text_before_citation": ["W_p [f_{Att_1}^T , . . . , f_{Att_j}^T , . . . , f_{Att_{N_h}}^T]^T ,", "where f_{Att_j} is the output of the j-th scaled dot-product attention, N_h is the number of the attention heads, and", "[a_1^T , . . . , a_n^T]", "is the concatenation of {a_i^T}_{i=1}^n .", "In this operation, each attention is scaled by a scaling factor N_h so that its output has the same dimension as the input."], "text_after_citation": ["We employ batch normalization (BN, #OTHEREFR ) instead of layer normalization (LN, #OTHEREFR ) different from #OTHEREFR .", "It was demonstrated in #OTHEREFR that LN is more effective than BN for recurrent networks.", "However, we find out from experiments that BN is more effective than LN in this work.", "Also, we do not use the positional encoding, another input of the Transformer encoder, because the order of elements in the fusion feature is not important in this task, unlike a sequence.", "This MHSA block with identical architecture repeats N_l times as in #OTHEREFR ."], "citing_paper_content": {"title": "Fusionloc: Camera-2D Lidar Fusion Using Multi-Head Self-Attention For End-To-End Serving Robot Relocalization", "abstract": "As technology advances in autonomous mobile robots, mobile service robots have been actively used more and more for various purposes. Especially, serving robots have been not surprising products anymore since the COVID-19 pandemic. One of the practical problems in operating serving a robot is that it often fails to estimate its pose on a map that it moves around. Whenever the failure happens, servers should bring the serving robot to its initial location and reboot it manually. In this paper, we focus on end-to-end relocalization of serving robots to address the problem. It is to predict robot pose directly from only the onboard sensor data using neural networks. In particular, we propose a deep neural network architecture for the relocalization based on camera-2D LiDAR sensor fusion. We call the proposed method FusionLoc. In the proposed method, the multi-head self-attention complements different types of information captured by the two sensors to regress the robot pose.
Our experiments on a dataset collected by a commercial serving robot demonstrate that FusionLoc can provide better performances than previous end-to-end relocalization methods taking only a single image or a 2D LiDAR point cloud as well as a straightforward fusion method concatenating their features."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["normalization layer", "Transformer encoder"], "citation_intent": "method"} {"citing_id": "2303.10894v1", "cited_id": "1912.05074", "section_title": "", "citation": "UNet++ #REFR uses nested and dense skip connections to reduce the semantic gap between the feature maps of encoder and decoder, as shown in Fig.
1 (c) .", "text_before_citation": ["sions.", "There are three general challenges in accurate segmentation: Firstly, U-shape structures #OTHEREFR , #OTHEREFR have received considerable attention due to their abilities of utilizing multi-level information to reconstruct high-resolution feature maps.", "In UNet #OTHEREFR , the up-sampled feature maps are concatenated with feature maps skipped from the encoder and convolutions and non-linearities are added between up-sampling steps, as shown in Fig. 1 (a) .", "Subsequent UNet-based methods design diverse feature enhancement modules via attention mechanism #OTHEREFR , #OTHEREFR , gate mechanism #OTHEREFR , #OTHEREFR , transformer technique #OTHEREFR , #OTHEREFR , as shown in Fig. 1 (b) ."], "text_after_citation": ["Generally speaking, different level features in encoder have different characteristics.", "High-level ones have more semantic information which helps localize the objects, while low-level ones have more detailed information which can capture the subtle boundaries of objects.", "The decoder leverages the level-specific and cross-level characteristics to generate the final high-resolution prediction.", "Nevertheless, the aforementioned methods directly use an element-wise addition or concatenation to fuse any two level features from the encoder and transmit them to the decoder.", "These simple operations do not pay more attention to differential information between different levels."], "citing_paper_content": {"title": "M²Snet: Multi-Scale In Multi-Scale Subtraction Network For Medical Image Segmentation", "abstract": "Accurate medical image segmentation is critical for early medical diagnosis. Most existing methods are based on U-shape structure and use element-wise addition or concatenation to fuse different level features progressively in decoder.
However, both the two operations easily generate plenty of redundant information, which will weaken the complementarity between different level features, resulting in inaccurate localization and blurred edges of lesions. To address this challenge, we propose a general multi-scale in multi-scale subtraction network (M²SNet) to finish diverse segmentation from medical image. Specifically, we first design a basic subtraction unit (SU) to produce the difference features between adjacent levels in encoder. Next, we expand the single-scale SU to the intra-layer multi-scale SU, which can provide the decoder with both pixel-level and structure-level difference information. Then, we pyramidally equip the multi-scale SUs at different levels with varying receptive fields, thereby achieving the inter-layer multi-scale feature aggregation and obtaining rich multi-scale difference information. In addition, we build a training-free network "LossNet" to comprehensively supervise the task-aware features from bottom layer to top layer, which drives our multi-scale subtraction network to capture the detailed and structural cues simultaneously. Without bells and whistles, our method performs favorably against most state-of-the-art methods under different evaluation metrics on eleven datasets of four different medical image segmentation tasks of diverse image modalities, including color colonoscopy imaging, ultrasound imaging, computed tomography (CT), and optical coherence tomography (OCT). The source code can be available at https://github.com/Xiaoqi-Zhao-DLUT/MSNet."}, "cited_paper_content": {"title": "Unet++: Redesigning Skip Connections To Exploit Multiscale Features In Image Segmentation", "abstract": "The state-of-the-art models for medical image segmentation are variants of U-Net and fully convolutional networks (FCN).
Despite their success, these models have two limitations: (1) their optimal depth is apriori unknown, requiring extensive architecture search or inefficient ensemble of models of varying depths; and (2) their skip connections impose an unnecessarily restrictive fusion scheme, forcing aggregation only at the same-scale feature maps of the encoder and decoder sub-networks. To overcome these two limitations, we propose UNet++, a new neural architecture for semantic and instance segmentation, by (1) alleviating the unknown network depth with an efficient ensemble of U-Nets of varying depths, which partially share an encoder and co-learn simultaneously using deep supervision; (2) redesigning skip connections to aggregate features of varying semantic scales at the decoder sub-networks, leading to a highly flexible feature fusion scheme; and (3) devising a pruning scheme to accelerate the inference speed of UNet++. We have evaluated UNet++ using six different medical image segmentation datasets, covering multiple imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and electron microscopy (EM), and demonstrating that (1) UNet++ consistently outperforms the baseline models for the task of semantic segmentation across different datasets and backbone architectures; (2) UNet++ enhances segmentation quality of varying-size objects -- an improvement over the fixed-depth U-Net; (3) Mask RCNN++ (Mask R-CNN with UNet++ design) outperforms the original Mask R-CNN for the task of instance segmentation; and (4) pruned UNet++ models achieve significant speedup while showing only modest performance degradation. Our implementation and pre-trained models are available at this https URL."}, "keywords": ["encoder", "dense skip connections"], "citation_intent": "method"} {"citing_id": "2304.01592v1", "cited_id": "2001.00106", "section_title": "A. 
Pac-Based Safety Guarantees", "citation": "The approach taken in #REFR to place PAC guarantees applies this concept by attempting to estimate the VC-dimension of the classification algorithm.", "text_before_citation": ["There are a number of papers that address the specific topic using PAC-based guarantees to generalize error bounds within CPSs.", "Notable investigations in this area include #OTHEREFR , #OTHEREFR , #OTHEREFR and #OTHEREFR .", "Similar to the objective of this study, the PAC-based guarantees in the aforementioned works correlate the size of the training data to the failure rate with a particular level of confidence.", "The error bounds for learning described in #OTHEREFR and #OTHEREFR use a generalized term correlated to the size of the hypothesis space to describe the target concept sample complexity.", "This is further generalized in #OTHEREFR as a bound that is dependent on the VC-dimension of the model being used."], "text_after_citation": ["In contrast to this, #OTHEREFR proposes the formulation of PACbased error bounds through the formulation of the problem as an optimization problem with the objective of minimizing the constraint violation probability.", "One of the main contributions of #OTHEREFR is that stochastic perturbations within the input layer, with an underlying probability distribution, are factored into the derived error bounds.", "Because the problem investigated in this study can be framed similarly, a similar approach to #OTHEREFR is utilized when deriving the error bounds.", "However, because this study attempts to approximate safety constraints using conformal prediction #OTHEREFR the guarantee placed on the constraints being accurate are incorporated into the PAC-based generalized error bounds for the entire system.", "To the best of our knowledge, aside from this paper, there are no existing studies on combining multiple types of guarantees when bounding the failure rate of an entire system."], "citing_paper_content": 
{"title": "Pac-Based Formal Verification For Out-Of-Distribution Data Detection", "abstract": "Cyber-physical systems (CPS) like autonomous vehicles, that utilize learning components, are often sensitive to noise and out-of-distribution (OOD) instances encountered during runtime. As such, safety critical tasks depend upon OOD detection subsystems in order to restore the CPS to a known state or interrupt execution to prevent safety from being compromised. However, it is difficult to guarantee the performance of OOD detectors as it is difficult to characterize the OOD aspect of an instance, especially in high-dimensional unstructured data. To distinguish between OOD data and data known to the learning component through the training process, an emerging technique is to incorporate variational autoencoders (VAE) within systems and apply classification or anomaly detection techniques on their latent spaces. The rationale for doing so is the reduction of the data domain size through the encoding process, which benefits real-time systems through decreased processing requirements, facilitates feature analysis for unstructured data and allows more explainable techniques to be implemented. This study places probably approximately correct (PAC) based guarantees on OOD detection using the encoding process within VAEs to quantify image features and apply conformal constraints over them. This is used to bound the detection error on unfamiliar instances, ε, with user-defined confidence, 1 − δ. The approach used in this study is to empirically establish these bounds by sampling the latent probability distribution and evaluating the error with respect to the constraint violations that are encountered.
The guarantee is then verified using data generated from CARLA, an open-source driving simulator."}, "cited_paper_content": {"title": "Pac Confidence Sets For Deep Neural Networks Via Calibrated Prediction", "abstract": "We propose an algorithm combining calibrated prediction and generalization bounds from learning theory to construct confidence sets for deep neural networks with PAC guarantees---i.e., the confidence set for a given input contains the true label with high probability. We demonstrate how our approach can be used to construct PAC confidence sets on ResNet for ImageNet, and on a dynamics model the half-cheetah reinforcement learning problem."}, "keywords": ["classification algorithm", "PAC guarantees"], "citation_intent": "method"} {"citing_id": "2304.11130v1", "cited_id": "1908.10084", "section_title": "Sbert", "citation": "The model is based on sentence-BERT (SBERT) similarity measures #REFR , which specifically targets the STS task.", "text_before_citation": [], "text_after_citation": ["As demonstrated in previous research #OTHEREFR , document-level models suffer from the loss of details, which affects accuracy.", "SBERT, however, has been optimized to treat text at the sentence-level, and not document-level, yielding better results #OTHEREFR than BERT.", "CWE input is segmented into sentences and the model computes the cosine similarity between two sentence embeddings.", "It was observed that CVE records, in the released dataset, have on average 3.69 sentences.", "Alternatively, the 25 CWE inputs is a collate of the name, the description, and the extended description with an average of 8.2 sentences."], "citing_paper_content": {"title": "Automated Mapping Of Cve Vulnerability Records To Mitre Cwe Weaknesses", "abstract": "In recent years, a proliferation of cyber-security threats and diversity has been on the rise culminating in an increase in their reporting and analysis. 
To counter that, many non-profit organizations have emerged in this domain, such as MITRE and OWASP, which have been actively tracking vulnerabilities, and publishing defense recommendations in standardized formats. As producing data in such formats manually is very time-consuming, there have been some proposals to automate the process. Unfortunately, a major obstacle to adopting supervised machine learning for this problem has been the lack of publicly available specialized datasets. Here, we aim to bridge this gap. In particular, we focus on mapping CVE records into MITRE CWE Weaknesses, and we release to the research community a manually annotated dataset of 4,012 records for this task. With a human-in-the-loop framework in mind, we approach the problem as a ranking task and aim to incorporate reinforced learning to make use of the human feedback in future work. Our experimental results using fine-tuned deep learning models, namely Sentence-BERT and rankT5, show sizable performance gains over BM25, BERT, and RoBERTa, which demonstrates the need for an architecture capable of good semantic understanding for this task."}, "cited_paper_content": {"title": "Sentence-Bert: Sentence Embeddings Using Siamese Bert-Networks", "abstract": "BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) has set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, it requires that both sentences are fed into the network, which causes a massive computational overhead: Finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT. The construction of BERT makes it unsuitable for semantic similarity search as well as for unsupervised tasks like clustering.
In this publication, we present Sentence-BERT (SBERT), a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy from BERT. We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning tasks, where it outperforms other state-of-the-art sentence embeddings methods."}, "keywords": ["sentence-BERT (SBERT) similarity"], "citation_intent": "method"} {"citing_id": "2304.06551v1", "cited_id": "1602.05629", "section_title": "Introduction", "citation": "Federated learning (FL), recently developed and proposed by Google as an emerging distributed machine learning technology, will provide further new technology to support the intelligence of drones #REFR .", "text_before_citation": ["However, traditional machine learning techniques require uploading all data to a cloud-based server for training and processing, which represents a considerable challenge for drone swarms #OTHEREFR .", "In a first consideration, the data generated by drones may be sensitive, and could be intercepted while uploading the data to the cloud, leading to a privacy breach.", "Secondly, drones' large numbers of data can result in impractical delays when uploading, thus creating a time lag for swarms of drones that prevents them from conducting real-time monitoring.", "Finally, drones can consume a great amount of energy when training models, meaning there may be related challenges to doing so in terms of energy constraints #OTHEREFR .", "Distributed machine learning techniques represent a new solution to address these issues and challenges, whereby drones train machine learning models without sharing raw data."], "text_after_citation": ["The concept of federated learning is allowing each drone to
train its learning model based on its data.", "The parameters of each drone's trained model are then sent to a parameter server to update the model for a new round of training, without sending the raw data to the cloud.", "This training model allows for reasonable data security, latency and energy consumption.", "However, the highly mobile nature of drones means conventional FL is not well-suited, given their complex working environment.", "If the parameter server does not work properly, it will impact the training effectiveness of the whole UAV network #OTHEREFR ."], "citing_paper_content": {"title": "Decentralized Federated Learning Methods For Reducing Communication Cost And Energy Consumption In Uav Networks", "abstract": "Unmanned aerial vehicles (UAV) or drones play many roles in a modern smart city such as the delivery of goods, mapping real-time road traffic and monitoring pollution. The ability of drones to perform these functions often requires the support of machine learning technology. However, traditional machine learning models for drones encounter data privacy problems, communication costs and energy limitations. Federated Learning, an emerging distributed machine learning approach, is an excellent solution to address these issues. Federated learning (FL) allows drones to train local models without transmitting raw data. However, existing FL requires a central server to aggregate the trained model parameters of the UAV. A failure of the central server can significantly impact the overall training. In this paper, we propose two aggregation methods: Commutative FL and Alternate FL, based on the existing architecture of decentralised Federated Learning for UAV Networks (DFL-UN) by adding a unique aggregation method of decentralised FL. Those two methods can effectively control energy consumption and communication cost by controlling the number of local training epochs, local communication, and global communication. 
The simulation results of the proposed training methods are also presented to verify the feasibility and efficiency of the architecture compared with two benchmark methods (e.g. standard machine learning training and standard single aggregation server training). The simulation results show that the proposed methods outperform the benchmark methods in terms of operational stability, energy consumption and communication cost."}, "cited_paper_content": {"title": "Communication-Efficient Learning Of Deep Networks From Decentralized Data", "abstract": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent."}, "keywords": ["Federated learning"], "citation_intent": "background"} {"citing_id": "2303.07585v1", "cited_id": "1905.10650", "section_title": "Iv. 
Related Work", "citation": "A similar result was obtained in #REFR , where they argued that a reasonable amount of attention heads could be removed during test time without significant performance loss.", "text_before_citation": ["There has been substantial recent research on examining the attention mechanism.", "Layer-based attention distribution analysis for 128-token-long inputs was conducted in #OTHEREFR to measure the syntactic ability of attention heads.", "One of the findings of #OTHEREFR is that the self-attention heads within the same layer have the similar attention distribution."], "text_after_citation": ["According to #OTHEREFR , BERT's initial layers are crucial for capturing word-order information.", "In contrast, middle layers are essential for syntactic information #OTHEREFR and the final layer representations are prominent for taskspecific adaptation #OTHEREFR .", "However, the relationship between attention weights and model outputs is ambiguous.", "For example, #OTHEREFR finds that the attention values have weak correlation with feature importance measures using gradient or feature erasure methods.", "They also demonstrate that different sets of attention values learned using adversarial training can result in the same prediction, therefore attention values should not be utilised as an explanation of the model's predictions."], "citing_paper_content": {"title": "Input-Length-Shortening And Text Generation Via Attention Values", "abstract": "Identifying words that impact a task's performance more than others is a challenge in natural language processing. Transformers models have recently addressed this issue by incorporating an attention mechanism that assigns greater attention (i.e., relevance) scores to some words than others. Because of the attention mechanism's high computational cost, transformer models usually have an input-length limitation caused by hardware constraints. 
This limitation applies to many transformers, including the well-known bidirectional encoder representations of the transformer (BERT) model. In this paper, we examined BERT's attention assignment mechanism, focusing on two questions: (1) How can attention be employed to reduce input length? (2) How can attention be used as a control mechanism for conditional text generation? We investigated these questions in the context of a text classification task. We discovered that BERT's early layers assign more critical attention scores for text classification tasks compared to later layers. We demonstrated that the first layer's attention sums could be used to filter tokens in a given sequence, considerably decreasing the input length while maintaining good test accuracy. We also applied filtering, which uses a compute-efficient semantic similarities algorithm, and discovered that retaining approximately 6% of the original sequence is sufficient to obtain 86.5% accuracy. Finally, we showed that we could generate data in a stable manner and indistinguishable from the original one by only using a small percentage (10%) of the tokens with high attention scores according to BERT's first layer."}, "cited_paper_content": {"title": "Are Sixteen Heads Really Better Than One?", "abstract": "Multi-headed attention is a driving force behind recent state-of-the-art NLP models. By applying multiple attention mechanisms in parallel, it can express sophisticated functions beyond the simple weighted average. However we observe that, in practice, a large proportion of attention heads can be removed at test time without significantly impacting performance, and that some layers can even be reduced to a single head.
Further analysis on machine translation models reveals that the self-attention layers can be significantly pruned, while the encoder-decoder layers are more dependent on multi-headedness."}, "keywords": ["attention heads"], "citation_intent": "result"} {"citing_id": "2304.00910v1", "cited_id": "1905.05833", "section_title": "B. Multiview-Activated Scvp Network Architecture", "citation": "We follow the setting in NBVNet #REFR to construct the shape size of our input and output.", "text_before_citation": ["The function of multiview-activated (MA-)SCVP network is a classic multilabel classification [71] function: #OTHEREFR This function takes a 32 \u00d732 \u00d732 occupancy grid and a vector of 32 bits as input, and predicts a vector of 32 bits so that the V * cover can be obtained."], "text_after_citation": ["Since convolution is easier to operate on data with equal dimensions, we extract a 32 \u00d7 32 \u00d7 32 cubic bounding box from our OctoMap M.", "We adopt the dynamic resolution of M so that the shape size of 32 \u00d7 32 \u00d7 32 is sufficient to generalize to different object sizes.", "In the real world, we assume o size is not greater than 15 cm, which can be predicted by solving the minimum bounding sphere in the point clouds to obtain the M.", "The view state vector V state is the same size as our candidate view space.", "A bit in our network output is bound to a certain candidate view because the SCOP is solved in such a fixed candidate view space."], "citing_paper_content": {"title": "One-Shot View Planning For Fast And Complete Unknown Object Reconstruction", "abstract": "Fig. 1: Comparison of surface details, views, and paths to reconstruct an untrained object: reconstructed 3D models (red point clouds), major missing surface details (gray voxels in enlarged areas), local paths (cyan), global paths (purple), views (red-green-blue), and the same initial view (black circle). (a) Results of an iterative learning-based NBV network. 
(b) Results of the one-shot SCVP network. (c) Results of our novel combined pipeline that selects four NBVs before activating the SCVP network. (d) Results of our novel combined pipeline that selects one NBV before activating the proposed MA-SCVP network. To ensure that sufficient surface details can be illuminated (black arrow), 4-NBV+SCVP requires more views and paths than MA-SCVP in this example (14 views with 4 local paths vs. 13 views with 1 local path)."}, "cited_paper_content": {"title": "Supervised Learning Of The Next-Best-View For 3D Object Reconstruction", "abstract": "Motivated by the advances in 3D sensing technology and the spreading of low-cost robotic platforms, 3D object reconstruction has become a common task in many areas. Nevertheless, the selection of the optimal sensor pose that maximizes the reconstructed surface is a problem that remains open. It is known in the literature as the next-best-view planning problem. In this paper, we propose a novel next-best-view planning scheme based on supervised deep learning. The scheme contains an algorithm for automatic generation of datasets and an original three-dimensional convolutional neural network (3D-CNN) used to learn the next-best-view. Unlike previous work where the problem is addressed as a search, the trained 3D-CNN directly predicts the sensor pose. 
We present a comparison of the proposed network against a similar net, and we present several experiments of the reconstruction of unknown objects validating the effectiveness of the proposed scheme."}, "keywords": ["NBVNet"], "citation_intent": "method"} {"citing_id": "2304.07918v1", "cited_id": "1906.01618", "section_title": "Experiments 4.1 Datasets", "citation": "We use the images rendered by #REFR and follow its split to separate the training and testing sets.", "text_before_citation": ["To evaluate the proposed NeRF-LEBM framework and the learning algorithms, we conduct experiments on three datasets.", "The Carla dataset is rendered by #OTHEREFR using the Carla Driving Simulator #OTHEREFR .", "It contains 10k cars of different shapes, colors and textures.", "Each car has one 2D image rendered from one random camera pose.", "Another dataset is the ShapeNet #OTHEREFR Car dataset, which contains 2.1k different cars for training and 700 cars for testing."], "text_after_citation": ["Each car in the training set has 250 views and we only use 50 views of them for training. Each car in the testing set has 251 views. Each image is associated with its camera pose information."], "citing_paper_content": {"title": "Likelihood-Based Generative Radiance Field With Latent Space Energy-Based Model For 3D-Aware Disentangled Image Representation", "abstract": "We propose the NeRF-LEBM, a likelihood-based top-down 3D-aware 2D image generative model that incorporates 3D representation via Neural Radiance Fields (NeRF) and 2D imaging process via differentiable volume rendering. The model represents an image as a rendering process from 3D object to 2D image and is conditioned on some latent variables that account for object characteristics and are assumed to follow informative trainable energy-based prior models.
We propose two likelihood-based learning frameworks to train the NeRF-LEBM: (i) maximum likelihood estimation with Markov chain Monte Carlo-based inference and (ii) variational inference with the reparameterization trick. We study our models in the scenarios with both known and unknown camera poses. Experiments on several benchmark datasets demonstrate that the NeRF-LEBM can infer 3D object structures from 2D images, generate 2D images with novel views and objects, learn from incomplete 2D images, and learn from 2D images with known or unknown camera poses."}, "cited_paper_content": {"title": "Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations", "abstract": "Unsupervised learning with generative models has the potential of discovering rich representations of 3D scenes. While geometric deep learning has explored 3D-structure-aware representations of scene geometry, these models typically require explicit 3D supervision. Emerging neural scene representations can be trained only with posed 2D images, but existing methods ignore the three-dimensional structure of scenes. We propose Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance. SRNs represent scenes as continuous functions that map world coordinates to a feature representation of local scene properties. By formulating the image formation as a differentiable ray-marching algorithm, SRNs can be trained end-to-end from only 2D images and their camera poses, without access to depth or shape. This formulation naturally generalizes across scenes, learning powerful geometry and appearance priors in the process. 
We demonstrate the potential of SRNs by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model."}, "keywords": ["images"], "citation_intent": "method"} {"citing_id": "2303.10771v2", "cited_id": "1803.02602", "section_title": "Randomized Linear Algebra", "citation": "In practice, the matrix Q can be obtained via (sparse) Cholesky factorization of R U , but it can also be any rectangular matrix such that Q T Q = R U , as pointed out in #REFR Remark 2.7] .", "text_before_citation": ["It is for example the case for the partial subsampled randomized Hadamard transform (P-SRHT) described in #OTHEREFR .", "Sufficient conditions on k are also available but they are very conservative.", "However, numerical experiments from #OTHEREFR showed performances similar to the Gaussian embedding for a given dimension k.", "A good practice is to use composed embeddings, for example \u0398 = \u0398 2 \u0398 1 with \u0398 1 a moderate sized P-SRHT embedding and \u0398 2 a small sized Gaussian embedding.", "Remark 4."], "text_after_citation": ["Such matrix may be obtained by Cholesky factorizations of small matrices, which are easy to compute. This is especially important for large scale problems."], "citing_paper_content": {"title": "Dictionary-Based Model Reduction For State Estimation", "abstract": "We consider the problem of state estimation from m linear measurements, where the state u to recover is an element of the manifold M of solutions of a parameter-dependent equation. The state is estimated using a prior knowledge on M coming from model order reduction. Variational approaches based on linear approximation of M, such as PBDW, yields a recovery error limited by the Kolmogorov m-width of M. 
To overcome this issue, piecewise-affine approximations of M have also been considered, which consist in using a library of linear spaces among which one is selected by minimizing some distance to M. In this paper, we propose a state estimation method relying on dictionary-based model reduction, where a space is selected from a library generated by a dictionary of snapshots, using a distance to the manifold. The selection is performed among a set of candidate spaces obtained from the path of an l1-regularized least-squares problem. Then, in the framework of parameter-dependent operator equations (or PDEs) with affine parameterizations, we provide an efficient offline-online decomposition based on randomized linear algebra, that ensures efficient and stable computations while preserving theoretical guarantees."}, "cited_paper_content": {"title": "Randomized Linear Algebra For Model Reduction. Part I: Galerkin Methods And Error Estimation", "abstract": "We propose a probabilistic way for reducing the cost of classical projection-based model order reduction methods for parameter-dependent linear equations. A reduced order model is here approximated from its random sketch, which is a set of low-dimensional random projections of the reduced approximation space and the spaces of associated residuals. This approach exploits the fact that the residuals associated with approximations in low-dimensional spaces are also contained in low-dimensional spaces. We provide conditions on the dimension of the random sketch for the resulting reduced order model to be quasi-optimal with high probability. Our approach can be used for reducing both complexity and memory requirements. The provided algorithms are well suited for any modern computational environment. Major operations, except solving linear systems of equations, are embarrassingly parallel.
Our version of proper orthogonal decomposition can be computed on multiple workstations with a communication cost independent of the dimension of the full order model. The reduced order model can even be constructed in a so-called streaming environment, i.e., under extreme memory constraints. In addition, we provide an efficient way for estimating the error of the reduced order model, which is not only more efficient than the classical approach but is also less sensitive to round-off errors. Finally, the methodology is validated on benchmark problems."}, "keywords": ["Cholesky factorization"], "citation_intent": "background"} {"citing_id": "2303.16102v1", "cited_id": "1803.08494", "section_title": "A. Network Training", "citation": "To generalize the network Group Norm #REFR with group size 32 is used after each linear layer.", "text_before_citation": ["During each epoch we generate 160 point clouds for each component. Thus each training epoch consists of 232000 point clouds.", "The network is trained with a batch size of 14, 7 on each GPU, using the Adam optimizer #OTHEREFR , with an initial learning rate of 0.0001.", "We use a step scheduler with a step size of 20 and the gamma parameter set to 0.7.", "The loss is calculated with cross entropy using a 0.2/0.8 split for segmentation and keypoint loss.", "For the keypoint loss only points belonging to the object is used."], "text_after_citation": ["Group Norm is used as opposed to Batch Norm as a result of the small batch size.", "Dropout is used for the object features, as the network should not overfit to a specific part of the object, and is used after the to concurrent linear layers.", "The dropout is set to 40 %, used after the last two linear layers of the object feature and the first two of the combined feature.", "Additionally, up to 0.75 % Gaussian noise is applied to the object and scene point clouds, and 10 % position shift is applied to the object point cloud.", "The network was trained on a PC 
environment with two NVIDIA GeForce RTX 2080 GPUs."], "citing_paper_content": {"title": "Gp3D: Generalized Pose Estimation In 3D Point Clouds: A Case Study On Bin Picking", "abstract": "In this paper, we present GP3D, a novel network for generalized pose estimation in 3D point clouds. The method generalizes to new objects by using both the scene point cloud and the object point cloud with keypoint indexes as input. The network is trained to match the object keypoints to scene points. To address the pose estimation of novel objects we also present a new approach for training pose estimation. The typical solution is a single model trained for pose estimation of a specific object in any scenario. This has several drawbacks: training a model for each object is time-consuming, energy consuming, and by excluding the scenario information the task becomes more difficult. In this paper, we present the opposite solution; a scenario-specific pose estimation method for novel objects that do not require retraining. The network is trained on 1500 objects and is able to learn a generalized solution. We demonstrate that the network is able to correctly predict novel objects, and demonstrate the ability of the network to perform outside of the trained class. We believe that the demonstrated method is a valuable solution for many real-world scenarios. Code and trained network will be made available after publication. Index Terms: pose estimation, point cloud, deep learning. [Figure labels: Pose Estimation; Scene Point Cloud (colored for visualization); Object Point Cloud with Keypoint Indices]"}, "cited_paper_content": {"title": "Group Normalization", "abstract": "Batch Normalization (BN) is a milestone technique in the development of deep learning, enabling various networks to train. However, normalizing along the batch dimension introduces problems --- BN's error increases rapidly when the batch size becomes smaller, caused by inaccurate batch statistics estimation.
This limits BN's usage for training larger models and transferring features to computer vision tasks including detection, segmentation, and video, which require small batches constrained by memory consumption. In this paper, we present Group Normalization (GN) as a simple alternative to BN. GN divides the channels into groups and computes within each group the mean and variance for normalization. GN's computation is independent of batch sizes, and its accuracy is stable in a wide range of batch sizes. On ResNet-50 trained in ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2; when using typical batch sizes, GN is comparably good with BN and outperforms other normalization variants. Moreover, GN can be naturally transferred from pre-training to fine-tuning. GN can outperform its BN-based counterparts for object detection and segmentation in COCO, and for video classification in Kinetics, showing that GN can effectively replace the powerful BN in a variety of tasks. 
GN can be easily implemented by a few lines of code in modern libraries."}, "keywords": ["network"], "citation_intent": "method"} {"citing_id": "2304.05097v1", "cited_id": "1801.03924", "section_title": "Metrics", "citation": "LPIPS #REFR calculates the cosine distances between the network features of the two images layer by layer and averages them to estimate the perceived distance of the generated image from the ground truth image.", "text_before_citation": ["PSNR is numerically related to the mean squared error (MSE) between the ground truth and the reconstructed image, it is used to measure the image reconstruction quality.", "SSIM measures the structural similarity between patches of the input images.", "As a result, it is more robust to changes in the global illumination than PSNR."], "text_after_citation": ["CSIM #OTHEREFR To evaluate the effectiveness of identity preservation, we compute the cosine similarity using embedded vectors created by the pre-trained face recognition model.", "AUCON is used to calculate the ratio of the same facial action unit values between the generated images and the driving images."], "citing_paper_content": {"title": "One-Shot High-Fidelity Talking-Head Synthesis With Deformable Neural Radiance Field", "abstract": "Figure 1. Representative results of our method. The first three columns exhibit the source, driving, generated images, respectively. The rest columns show the exploration of the generated images to different yaw angles."}, "cited_paper_content": {"title": "The Unreasonable Effectiveness Of Deep Features As A Perceptual Metric", "abstract": "While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. 
Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task has been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called\"perceptual losses\"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations."}, "keywords": ["generated image"], "citation_intent": "method"} {"citing_id": "2303.14552v1", "cited_id": "1512.00567", "section_title": "Fr\u00e9chet Inception Distance", "citation": "The FID passes images from both distributions through the pre-trained inception network #REFR obtaining embeddings for every image, e.g. 2048 dimensional embedding vectors for Inception V3.", "text_before_citation": ["The Fr\u00e9chet Inception Distance (FID) #OTHEREFR is a method for comparing similarity of two image distributions to each other.", "It is commonly used between real images and generated images by a GAN as a measure of GAN performance."], "text_after_citation": ["For both distribution, the estimated mean embedding vector and the covariance matrix is calculated. 
The final distance is defined as following:", "EQUATION", "3)", "where", "(m 1 , C 1 )"], "citing_paper_content": {"title": "Spatial Latent Representations In Generative Adversarial Networks For Image Generation Master'S Thesis In Computer Science", "abstract": "Generative Adversarial Networks (GANs) are currently state-of-the-art methods in image generation tasks. They generate new images by transforming a latent space into an image data distribution. In the vast majority of GAN architectures, the latent space is defined as a set of vectors of given dimensionality. Such representations are not easily interpretable and do not capture spatial information of image content directly. In this work, we define a family of spatial latent spaces for StyleGAN2, capable of capturing more details and representing images that are out-of-sample in terms of the number and arrangement of object parts, such as an image of multiple faces or a face with more than two eyes. We propose a method for encoding images into our spaces, together with an attribute model capable of performing attribute editing in these spaces. We show that our spaces are effective for image manipulation purposes and encode semantic information well. Our approach can be used on pre-trained generator models, and attribute edition can be done using pre-generated direction vectors making the barrier to entry for experimentation and use extremely low. We propose a regularization method for optimizing latent representations, which equalizes distributions of parts of latent spaces, making representations much closer to generated ones. We use it for encoding images into spatial spaces to obtain significant improvement in quality while keeping semantics and ability to use our attribute model for edition purposes. In total, using our methods gives encoding quality boost even as high as 30% in terms of LPIPS score comparing to standard methods, while keeping semantics. 
Additionally, we propose a StyleGAN2 training procedure on our spatial latent spaces, together with a custom spatial latent representation distribution to make spatially closer elements in the representation more dependent on each other than farther elements. Such approach improves the FID score by 29% on SpaceNet, and is able to generate consistent images of arbitrary sizes on spatially homogeneous datasets, like satellite imagery."}, "cited_paper_content": {"title": "Rethinking The Inception Architecture For Computer Vision", "abstract": "Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters.
With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set."}, "keywords": ["pre-trained inception network"], "citation_intent": "method"} {"citing_id": "2303.11630v1", "cited_id": "1903.06874", "section_title": "Related Work", "citation": "Curve GCN #REFR regarded the initial contour as a graph and used a graph convolutional network to predict vertex-wise offsets.", "text_before_citation": ["For instance, Polygon RNN [9, 1] employed a CNN-RNN architecture to sequentially trace object boundaries in a given image patch.", "Two-stage Deep Snake #OTHEREFR created initial octagon contours using a detector and then iteratively deformed them through a circular convolution network.", "Based on that, DANCE #OTHEREFR utilized a segmentwise matching scheme and attentive contour deformation to facilitate learning.", "PolyTransform #OTHEREFR generated masks for each object using an off-the-shelf mask-based segmentation pipeline and converted the resulting mask contours into a set of vertices.", "Subsequently, the Transformer #OTHEREFR wrapped these vertices to fit the object silhouette better."], "text_after_citation": ["It employed a differentiable rendering loss to ensure that masks rendered from the predicted points agreed with the ground-truth masks.", "BoundaryFormer #OTHEREFR , on the other hand, applied a differentiable rasterization method to generate masks from polygons, achieving stunning results that are almost comparable to its mask-based counterparts on standard benchmarks.", "PolarMask #OTHEREFR and its followups #OTHEREFR adopted a set of rays in the polar coordinate system to represent object contours, which enables an efficient calculation of Intersection-over-Union.", "Furthermore, E2EC #OTHEREFR took the vertices and rays to define contours simultaneously.", "However, the deep learning-based methods mentioned above require expensive ground-truth
masks or polygons, which hinders their practical applicability and extension."], "citing_paper_content": {"title": "Boxsnake: Polygonal Instance Segmentation With Box Supervision", "abstract": "Box-supervised instance segmentation has gained much attention as it requires only simple box annotations instead of costly mask or polygon annotations. However, existing box-supervised instance segmentation models mainly focus on mask-based frameworks. We propose a new endto-end training technique, termed BoxSnake, to achieve effective polygonal instance segmentation using only box annotations for the first time. Our method consists of two loss functions: (1) a point-based unary loss that constrains the bounding box of predicted polygons to achieve coarsegrained segmentation; and (2) a distance-aware pairwise loss that encourages the predicted polygons to fit the object boundaries. Compared with the mask-based weaklysupervised methods, BoxSnake further reduces the performance gap between the predicted segmentation and the bounding box, and shows significant superiority on the Cityscapes dataset. The source code will be available at https://github.com/Yangr116/BoxSnake."}, "cited_paper_content": {"title": "Fast Interactive Object Annotation With Curve-Gcn", "abstract": "Manually labeling objects by tracing their boundaries is a laborious process. In Polygon-RNN++, the authors proposed Polygon-RNN that produces polygonal annotations in a recurrent manner using a CNN-RNN architecture, allowing interactive correction via humans-in-the-loop. We propose a new framework that alleviates the sequential nature of Polygon-RNN, by predicting all vertices simultaneously using a Graph Convolutional Network (GCN). Our model is trained end-to-end, and runs in real time. It supports object annotation by either polygons or splines, facilitating labeling efficiency for both line-based and curved objects. 
We show that Curve-GCN outperforms all existing approaches in automatic mode, including the powerful DeepLab, and is significantly more efficient in interactive mode than Polygon-RNN++. Our model runs at 29.3ms in automatic, and 2.6ms in interactive mode, making it 10x and 100x faster than Polygon-RNN++."}, "keywords": ["graph convolutional network"], "citation_intent": "method"} {"citing_id": "2304.03147v1", "cited_id": "1606.00061", "section_title": "Results And Analysis.", "citation": "Ultimately, based on the results in Table 4 , we conclude that HieCoAtt #REFR is the most robust VQA model.", "text_before_citation": ["Because the plots are monotonously decreasing in accuracy, or, equivalently, monotonously increasing in accuracy decrement, the ranking is effective.", "In this figure, \"First top 3\" represents the first partition, \"Second top 3\" represents the second partition and so on. models are more robust than non-attention-based ones.", "However, when we examine MU and MUA in Table 4 ( 2 ), the non-attention-based model (MU) is more robust than the attention-based model (MUA).", "It is worth noting that the only difference between MU and MUA is the attention mechanism.", "Meanwhile, in Table 4 ( 1 ), MUA is more robust than MU, indicating that the diversity of BQ candidates affects the robustness of attention-based VQA models in some cases."], "text_after_citation": ["The HieCoAtt model employs a co-attention mechanism that repeatedly exploits the text and image information to guide Fig. 7 . 
Visual Question Answering by Basic Questions (VQABQ) pipeline.", "Note that in Module 1 all of the training and validation questions are only encoded by Skip-Thought Question Encoder once for generating the Basic Question Matrix.", "That is, the next input of Skip-Thought Question Encoder is only a new main question.", "Module 2 is a VQA model which we want to test, and it is the HieCoAtt VQA model in our case.", "Regarding the input question of the HieCoAtt model, it is the direct concatenation of a given main question with the corresponding selected basic questions based on the Threshold-based Criterion. \"\u2295\" denotes the direct concatenation of basic questions."], "citing_paper_content": {"title": "Improving Visual Question Answering Models Through Robustness Analysis And In-Context Learning With A Chain Of Basic Questions", "abstract": "Deep neural networks have been critical in the task of Visual Question Answering (VQA), with research traditionally focused on improving model accuracy. Recently, however, there has been a trend towards evaluating the robustness of these models against adversarial attacks. This involves assessing the accuracy of VQA models under increasing levels of noise in the input, which can target either the image or the proposed query question, dubbed the main question. However, there is currently a lack of proper analysis of this aspect of VQA. This work proposes a new method that utilizes semantically related questions, referred to as basic questions, acting as noise to evaluate the robustness of VQA models. It is hypothesized that as the similarity of a basic question to the main question decreases, the level of noise increases. To generate a reasonable noise level for a given main question, a pool of basic questions is ranked based on their similarity to the main question, and this ranking problem is cast as a optimization problem. 
Additionally, this work proposes a novel robustness measure and two basic question datasets to standardize the analysis of VQA model robustness. The experimental results demonstrate that the proposed evaluation method effectively analyzes the robustness of VQA models. Moreover, the experiments show that in-context learning with a chain of basic questions can enhance model accuracy."}, "cited_paper_content": {"title": "Hierarchical Question-Image Co-Attention For Visual Question Answering", "abstract": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling \"where to look\" or visual attention, it is equally important to model \"what words to listen to\" or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset.
By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA."}, "keywords": ["robust VQA model"], "citation_intent": "result"} {"citing_id": "2304.10643v1", "cited_id": "1703.09370", "section_title": "Performance Metrics", "citation": "As demonstrated in #REFR , activities in the Opportunity dataset were less structured and closer to real-life scenarios, hence is the most difficult dataset among the three, which resulted in the lowest performance of M S on D S .", "text_before_citation": ["The scores were then averaged across the classes to obtain the final precision, recall and F1 scores.", "For each of the three dataset, we evaluated the performance metrics with three experiments:", "\u2022 Table 1 shows the accuracy, precision, recall and F1 score for each of these three models for all the three datasets.", "The supervised training of M S on D S yields the highest scores for the PAMAP2 dataset.", "Lower performance on the MHEALTH dataset could be partially attributed to the smaller volume of training data and fewer channels of data being used (as stated in Section 5 we only used three accelerometer channels for MHEALTH dataset as opposed to nine IMU channels for the other two datasets)."], "text_after_citation": ["The same performance trend across the three datasets is also observed in the other two experiments, M S on D ST and M T on D ST .", "When comparing the model performance on the target domain, the target domain model M T trained on D ST performs significantly better in all the three datasets than the source domain model being directly applied to the target domain i.e., M S on D T .", "This shows that the embedding extractor E T , when trained on D ST , is able to learn similar discriminative information as in the source domain embedding e S for constructing the target domain embeddings e T .", "Thus, fine-tuning an existing activity classification model, as part of our unsupervised method, could be potentially sufficient 
for performing the classification task at a new body location.", "However, as the performance of M T on D ST across the three datasets are in the same order as of M S on D S , the potential performance increase from such fine-tuning could be partially limited by the performance of the original supervised model from the source location. Fig."], "citing_paper_content": {"title": "Activity Classification Using Unsupervised Domain Transfer From Body Worn Sensors", "abstract": "Activity classification has become a vital feature of wearable health tracking devices. As innovation in this field grows, wearable devices worn on different parts of the body are emerging. To perform activity classification on a new body location, labeled data corresponding to the new locations are generally required, but this is expensive to acquire. In this work, we present an innovative method to leverage an existing activity classifier, trained on Inertial Measurement Unit (IMU) data from a reference body location (the source domain), in order to perform activity classification on a new body location (the target domain) in an unsupervised way, i.e. without the need for classification labels at the new location. Specifically, given an IMU embedding model trained to perform activity classification at the source domain, we train an embedding model to perform activity classification at the target domain by replicating the embeddings at the source domain. This is achieved using simultaneous IMU measurements at the source and target domains. The replicated embeddings at the target domain are used by a classification model that has previously been trained on the source domain to perform activity classification at the target domain. 
We have evaluated the proposed methods on three activity classification datasets PAMAP2, MHealth, and Opportunity, yielding high F1 scores of 67.19%, 70.40% and 68.34%, respectively when the source domain is the wrist and the target domain is the torso."}, "cited_paper_content": {"title": "Ensembles Of Deep Lstm Learners For Activity Recognition Using Wearables", "abstract": "Recently, deep learning (DL) methods have been introduced very successfully into human activity recognition (HAR) scenarios in ubiquitous and wearable computing. Especially the prospect of overcoming the need for manual feature design combined with superior classification capabilities render deep neural networks very attractive for real-life HAR applications. Even though DL-based approaches now outperform the state-of-the-art in a number of recognition tasks, still substantial challenges remain. Most prominently, issues with real-life datasets, typically including imbalanced datasets and problematic data quality, still limit the effectiveness of activity recognition using wearables. In this paper we tackle such challenges through Ensembles of deep Long Short Term Memory (LSTM) networks. LSTM networks currently represent the state-of-the-art with superior classification performance on relevant HAR benchmark datasets. We have developed modified training procedures for LSTM networks and combine sets of diverse LSTM learners into classifier collectives. We demonstrate that Ensembles of deep LSTM learners outperform individual LSTM networks and thus push the state-of-the-art in human activity recognition using wearables. 
Through an extensive experimental evaluation on three standard benchmarks (Opportunity, PAMAP2, Skoda) we demonstrate the excellent recognition capabilities of our approach and its potential for real-life applications of human activity recognition."}, "keywords": ["difficult dataset"], "citation_intent": "result"} {"citing_id": "2303.02186v1", "cited_id": "1606.03203", "section_title": "Non-Matching Assumptions", "citation": "Namely, RESIT leans a causal structure assuming noise model structural equations, while Lattimore et al. #REFR actually assumes a non-parametric causal graph.", "text_before_citation": ["In our example in fig.", "5 , we have not addressed the fact that the used structure learner actually yields a graph on a different parametric level than what is assumed by the reasoning task."], "text_after_citation": ["Luckily, because of the composition of both the structural and parametric scale, each stricter assumption is subsumed by a lower level assumption.", "As such, from left to right, we can always relax the assumptions (such as allowing non-parametric reasoning based on a noise-model causal structure), but not the other way around.", "If, for example, our structure learner yielded a plausible but non-parametric causal structure, we are not guaranteed optimal regret from Lattimore et al. #OTHEREFR .", "On our map, that would become clear as we would move from a less strict assumption (plausible causality) to a strict assumption (full causality)."], "citing_paper_content": {"title": "Causal Deep Learning", "abstract": "Causality has the potential to truly transform the way we solve a large number of realworld problems. Yet, so far, its potential remains largely unlocked since most work so far requires strict assumptions which do not hold true in practice. To address this challenge and make progress in solving real-world problems, we propose a new way of thinking about causality-we call this causal deep learning. 
The framework which we propose for causal deep learning spans three dimensions: (1) a structural dimension, which allows incomplete causal knowledge rather than assuming either full or no causal knowledge; (2) a parametric dimension, which encompasses parametric forms which are typically ignored; and finally, (3) a temporal dimension, which explicitly allows for situations which capture exposure times or temporal structure. Together, these dimensions allow us to make progress on a variety of real-world problems by leveraging (sometimes incomplete) causal knowledge and/or combining diverse causal deep learning methods. This new framework also enables researchers to compare systematically across existing works as well as identify promising research areas which can lead to real-world impact."}, "cited_paper_content": {"title": "Causal Bandits: Learning Good Interventions Via Causal Inference", "abstract": "We study the problem of using causal models to improve the rate at which good interventions can be learned online in a stochastic environment. Our formalism combines multi-arm bandits and causal inference to model a novel type of bandit feedback that is not exploited by existing approaches. We propose a new algorithm that exploits the causal feedback and prove a bound on its simple regret that is strictly better (in all quantities) than algorithms that do not use the additional causal information."}, "keywords": ["causal structure"], "citation_intent": "background"} {"citing_id": "2304.03507v1", "cited_id": "1910.12933", "section_title": "Iv. 
Experiments", "citation": "However, it is argued in works such as #REFR that for certain datasets such as Airport and Disease, one should consider embedding nodes in hyperbolic spaces and perform feature aggregation in the tangent spaces of hyperbolic spaces.", "text_before_citation": ["In each row, best performers are highlighted in blue and red for our approach and benchmarks, respectively.", "An underlined entry means no noticeable performance improvement over the base model is observed.", "In general, we see that our proposed regularized models improve upon their respective base models with significant performance gain in many cases.", "Moreover, our method can match up with or even outperform many benchmarks.", "2) Hyperbolic models: Base models considered in Section IV-A1 generate embedding of nodes in Euclidean spaces."], "text_after_citation": ["Such a consideration is plausible as certain graphs are inherently hyperbolic (measured by \u03b4-hyperbolicity, see #OTHEREFR ).", "In this subsection for Airport and Disease datasets, we use hyperbolic versions of their Euclidean counterparts HGCN #OTHEREFR , and HGAT #OTHEREFR as base models.", "We also consider the interactive model GIL that combines both Euclidean and hyperbolic approaches #OTHEREFR . The comparison results are shown in Table II .", "Again, we see a general improvement by using the proposed regularization, which yields performance comparable with benchmarks.", "3) Inductive learning models: In contrast with transductive learning, inductive learning requires one to deal with unseen data outside the training set."], "citing_paper_content": {"title": "Distributional Signals For Node Classification In Graph Neural Networks", "abstract": "In graph neural networks (GNNs), both node features and labels are examples of graph signals, a key notion in graph signal processing (GSP). 
While it is common in GSP to impose signal smoothness constraints in learning and estimation tasks, it is unclear how this can be done for discrete node labels. We bridge this gap by introducing the concept of distributional graph signals. In our framework, we work with the distributions of node labels instead of their values and propose notions of smoothness and non-uniformity of such distributional graph signals. We then propose a general regularization method for GNNs that allows us to encode distributional smoothness and non-uniformity of the model output in semi-supervised node classification tasks. Numerical experiments demonstrate that our method can significantly improve the performance of most base GNN models in different problem settings."}, "cited_paper_content": {"title": "Hyperbolic Graph Convolutional Neural Networks", "abstract": "Graph convolutional neural networks (GCNs) embed nodes in a graph into Euclidean space, which has been shown to incur a large distortion when embedding real-world graphs with scale-free or hierarchical structure. Hyperbolic geometry offers an exciting alternative, as it enables embeddings with much smaller distortion. However, extending GCNs to hyperbolic geometry presents several unique challenges because it is not clear how to define neural network operations, such as feature transformation and aggregation, in hyperbolic space. Furthermore, since input features are often Euclidean, it is unclear how to transform the features into hyperbolic embeddings with the right amount of curvature. Here we propose Hyperbolic Graph Convolutional Neural Network (HGCN), the first inductive hyperbolic GCN that leverages both the expressiveness of GCNs and hyperbolic geometry to learn inductive node representations for hierarchical and scale-free graphs. 
We derive GCNs operations in the hyperboloid model of hyperbolic space and map Euclidean input features to embeddings in hyperbolic spaces with different trainable curvature at each layer. Experiments demonstrate that HGCN learns embeddings that preserve hierarchical structure, and leads to improved performance when compared to Euclidean analogs, even with very low dimensional embeddings: compared to state-of-the-art GCNs, HGCN achieves an error reduction of up to 63.1% in ROC AUC for link prediction and of up to 47.5% in F1 score for node classification, also improving state-of-the-art on the Pubmed dataset."}, "keywords": ["nodes", "hyperbolic spaces"], "citation_intent": "background"} {"citing_id": "2303.15127v1", "cited_id": "1706.06083", "section_title": "Related Work", "citation": "Effective methods to gain adversarial robustness usually involve adversarial training #REFR , which leverages adversarial examples to train models.", "text_before_citation": ["Adversarial examples and adversarial training.", "Adversarial examples deceive machine learning models by adding adversarial perturbations, often imperceptible to human, to source images, leading to incorrect classification results #OTHEREFR .", "White-box adversarial attacks #OTHEREFR maximize the loss of a source image with gradient descent on the defending model to add adversarial perturbations onto an image to maximize its loss on the model."], "text_after_citation": ["Adversarial training algorithms thus solve the min-max problem of minimizing the loss function for most adversarial examples within a perturbation budget, typically bounded in p .", "Recent years have thus observed an arms race between adversarial attack strategies and defense mechanisms #OTHEREFR .", "Data poisoning.", "Data poisoning attacks manipulate the training of a deep learning model by injecting malicious and poisoned examples into its training set #OTHEREFR .", "Data poisoning methods #OTHEREFR achieve their malicious objectives
by stealthily replacing a portion of training data, and successful attacks can be triggered with specially-crafted prescribed inputs."], "citing_paper_content": {"title": "Learning The Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks", "abstract": "Unlearnable example attacks are data poisoning techniques that can be used to safeguard public data against unauthorized training of deep learning models. These methods add stealthy perturbations to the original image, thereby making it difficult for deep learning models to learn from these training data effectively. Current research suggests that adversarial training can, to a certain degree, mitigate the impact of unlearnable example attacks, while common data augmentation methods are not effective against such poisons. Adversarial training, however, demands considerable computational resources and can result in non-trivial accuracy loss. In this paper, we introduce the UEraser method, which outperforms current defenses against different types of state-of-the-art unlearnable example attacks through a combination of effective data augmentation policies and loss-maximizing adversarial augmentations. In stark contrast to the current SOTA adversarial training methods, UEraser uses adversarial augmentations, which extends beyond the confines of p perturbation budget assumed by current unlearning attacks and defenses. It also helps to improve the model's generalization ability, thus protecting against accuracy loss. UEraser wipes out the unlearning effect with error-maximizing data augmentations, thus restoring trained model accuracies. Interestingly, UEraser-Lite, a fast variant without adversarial augmentations, is also highly effective in preserving clean accuracies. On challenging unlearnable CIFAR-10, CIFAR-100, SVHN, and ImageNet-subset datasets produced with various attacks, it achieves results that are comparable to those obtained during clean training. 
We also demonstrate the efficacy of UEraser against possible adaptive attacks. Our code is open source and available to the deep learning community 1 ."}, "cited_paper_content": {"title": "Towards Deep Learning Models Resistant To Adversarial Attacks", "abstract": "Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models."}, "keywords": ["adversarial training", "adversarial robustness"], "citation_intent": "method"} {"citing_id": "2303.06273v1", "cited_id": "1906.02243", "section_title": "Discussion", "citation": "For instance, training a BERT-base model without hyperparameter tuning, which is 1590 times smaller than ChatGPT, requires a CO2 emission of 650kg, which is comparable to flying from New York to San Francisco for one passenger #REFR .", "text_before_citation": ["Question: Do both sentences mean the same thing? 
Answer:\" PERTURBED OUTPUT: \"No, the two sentences do not mean the same thing.", "The first sentence says that he asked for six pizzas and soda, while the second sentence says that he asked for six pepperoni pizzas and two six-packs of soft drinks.", "The type of pizza and the specific drinks requested are different in the two sentences.\" TASK: SNLI, Semantic-BECEL ORIGINAL QUESTION: \"An enthusiastic boy pushes a shopping card containing a smiling boy. Question: A boy is pushing a shopping cart.", "True, False or Neither? Answer:\" ORIGINAL OUTPUT: \"True\" PERTURBED QUESTION: \"An enthusiastic boy pushes a shopping card containing a smiling boy. Question: A young man is pushing a shopping cart.", "True, False or Neither? Answer:\" PERTURBED OUTPUT: \"Neither (the description does not mention the age or gender of the person pushing the shopping cart, only that the person inside the cart is a smiling boy.) \" et al., 2021)."], "text_after_citation": ["A simple expectation of CO2 emission for re-training ChatGPT is 1033t, while a human is responsible for 5t CO2 emission per year.", "Therefore, it is desirable to enlarge our viewpoint beyond LLMs to implement sustainable remedies that can fundamentally solve the inconsistency problem, particularly in a modern society facing the global climate crisis."], "citing_paper_content": {"title": "Consistency Analysis Of Chatgpt", "abstract": "ChatGPT, a question-and-answer dialogue system based on a large language model, has gained huge popularity since its introduction. Its positive aspects have been reported through many media platforms, and some analyses even showed that ChatGPT achieved a decent grade in professional exams, including the law, medical, and finance domains, adding extra support to the claim that AI now can assist and, even, replace humans in industrial fields. Others, however, doubt its reliability and trustworthiness. 
In this paper, we investigate ChatGPT's trustworthiness regarding logically consistent behaviours. Our findings suggest that, although ChatGPT seems to achieve an improved language understanding ability, it still fails to generate logically correct predictions frequently. Hence, while it is true that ChatGPT is an impressive and promising new technique, we conclude that its usage in real-world applications without thorough human inspection requires further consideration, especially for risk-sensitive areas."}, "cited_paper_content": {"title": "Energy And Policy Considerations For Deep Learning In Nlp", "abstract": "Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice."}, "keywords": ["ChatGPT", "BERT-base model"], "citation_intent": "background"} {"citing_id": "2304.10334v1", "cited_id": "1805.02724", "section_title": "Conclusions And Open Questions", "citation": "We introduced least fixed formulae that use recursion on second-order function symbols and provided logical characterizations of SpanL and TotP, answering an open question of #REFR .
Furthermore, we determined logics that capture SpanPSPACE and FPSPACE.", "text_before_citation": ["Inspired by the two-step semantics developed in the context of weighted logics, we introduced two-step semantics that enriches the existing framework of quantitative logics, i.e. logics for expressing counting problems."], "text_after_citation": ["Some of the quantitative logics we introduced in this work naturally capture classes of functions that count different valid outputs of space-restricted transducers.", "The logic that captures TotP over finite ordered structures was defined in a more complicated way that is related to the properties of TotP problems: recursion of the logic expresses self-reducibility and the restricted form of the recursion captures the easy-decision property.", "It would be interesting to investigate whether TotP is captured by a simpler, more elegant logic.", "The two-step semantics that we propose in this work is noteworthy for reasons beyond its primary objective.", "It can be generalized to map formulae to elements of any structure S equipped with operations \u222a and \u2022, instead of solely sets of strings. Conversely, it can also be specialized."], "citing_paper_content": {"title": "Counting Computations With Formulae: Logical Characterisations Of Counting Complexity Classes", "abstract": "We present quantitative logics with two-step semantics based on the framework of quantitative logics introduced by Arenas et al. (2020) and the two-step semantics defined in the context of weighted logics by Gastin & Monmege (2018). We show that some of the fragments of our logics augmented with a least fixed point operator capture interesting classes of counting problems. Specifically, we answer an open question in the area of descriptive complexity of counting problems by providing logical characterizations of two subclasses of #P, namely SpanL and TotP, that play a significant role in the study of approximable counting problems. 
Moreover, we define logics that capture FPSPACE and SpanPSPACE, which are counting versions of PSPACE."}, "cited_paper_content": {"title": "Descriptive Complexity For Counting Complexity Classes", "abstract": "Descriptive Complexity has been very successful in characterizing complexity classes of decision problems in terms of the properties definable in some logics. However, descriptive complexity for counting complexity classes, such as FP and #P, has not been systematically studied, and it is not as developed as its decision counterpart. In this paper, we propose a framework based on Weighted Logics to address this issue. Specifically, by focusing on the natural numbers we obtain a logic called Quantitative Second Order Logics (QSO), and show how some of its fragments can be used to capture fundamental counting complexity classes such as FP, #P and FPSPACE, among others. We also use QSO to define a hierarchy inside #P, identifying counting complexity classes with good closure and approximation properties, and which admit natural complete problems. Finally, we add recursion to QSO, and show how this extension naturally captures lower counting complexity classes such as #L."}, "keywords": ["logical characterizations"], "citation_intent": "background"} {"citing_id": "2305.00956v1", "cited_id": "2001.00611", "section_title": "I. 
Introduction", "citation": "However, these works focus on channel models such as binary input additive white Gaussian noise (BIAWGN) that do not match the ET-QKD channel #REFR .", "text_before_citation": ["Hence, baseline NB-LDPC codes with large field sizes are not favorable in QKD applications requiring low latency, such as in #OTHEREFR , #OTHEREFR .", "In addition to the above latency vs.", "key rate trade-off, the LDPC codes used previously in the IR step of ET-QKD protocols have not fully utilized the properties of the ET-QKD channel.", "For example, #OTHEREFR used a standard LDPC ensemble without optimization.", "Similarly, spatially-coupled (SC) LDPC codes, irregular repeat accumulate (IRA) codes, SC-IRA codes, and multi-edge-type (MET) codes have been discussed for the continuous-variable (CV) QKD #OTHEREFR , #OTHEREFR ."], "text_after_citation": ["A unique property of the ET-QKD problem considered in this paper is that the key rate of the system is closely dependent on both the rate of the code and the frame error rate (FER) performance. Fig.", "2 shows the FER and key rates obtained by a random LDPC code for different values of rate.", "From this graph, we see that increasing the code rate can improve the key rate even at the cost of higher FER, a phenomenon we see in both binary and non-binary LDPC codes.", "Additionally, the maximum in the key rate occurs for a relatively large value of FER (\u223c 5%).", "While the conventional code design approach is to minimize the FER to a very small value for a given rate, in this case, the goal is to jointly optimize both the rate and the FER to achieve the largest key rate."], "citing_paper_content": {"title": "Non-Binary Ldpc Code Design For Energy-Time Entanglement Quantum Key Distribution", "abstract": "In energy-time entanglement Quantum Key Distribution (QKD), two users extract a shared secret key from the arrival times (discretized as symbols) of entangled photon pairs. In prior work, Zhou et al. 
proposed a multi-level coding (MLC) scheme that splits the observed symbols into bit layers and utilizes binary Low-Density Parity-Check (LDPC) codes for reconciliation of the symbols. While binary LDPC codes offer low latency for key generation, splitting the symbols into bits results in a loss of key generation rate due to error propagation. Additionally, existing LDPC codes do not fully utilize the properties of the QKD channel to optimize the key rates. In this paper, we mitigate the above issues by first generalizing the MLC scheme to a non-binary (NB) MLC scheme that has layers with non-binary symbols and utilizes NB-LDPC codes. We show the NB-MLC scheme offers flexibility in system design. Additionally, we show that the NB-MLC scheme with a small symbol size per layer offers the best trade-off between latency and key rate. We then propose a framework to jointly optimize the rate and degree profile of the NB-LDPC codes that is tailored towards the QKD channel, resulting in higher key rates than prior work."}, "cited_paper_content": {"title": "Efficient Information Reconciliation For Energy-Time Entanglement Quantum Key Distribution", "abstract": "Graph-based codes such as low density parity check (LDPC) codes have been shown promising for the information reconciliation phase in quantum key distribution (QKD). However, existing graph coding schemes have not fully utilized the properties of the QKD channel. In this work, we first investigate the channel statistics for discrete variable (DV) QKD based on energy-time entangled photons. We then establish a so-called balanced modulation scheme that is promising for this channel.
Based on the modulation, we propose a joint local-global graph coding scheme that is expected to achieve good error-correction performance."}, "keywords": ["ET-QKD channel"], "citation_intent": "background"} {"citing_id": "2304.07781v1", "cited_id": "1705.00648", "section_title": "Additional Experiments", "citation": "Table 3 presents the results obtained by the different machine and deep learning algorithms on the LIAR dataset #REFR .", "text_before_citation": ["For our experiments, we used the dataset as it was initially released, with 6 labels #OTHEREFR (Table 3) , and by balancing the dataset's labels (Table 4) as proposed in #OTHEREFR .", "To balance the labels, we created binary labels, i.e., all the texts that are not labeled with true are considered false.", "Using the same experimental configurations as presented in Section 4.2, we obtained results that are aligned with our original observations on the proposed dataset.", "Further, we obtained results similar to state-of-the-art results for the multi-label dataset, e.g., Wang #OTHEREFR and Alhindi et al. 
#OTHEREFR obtained an accuracy of \u223c20%.", "For the binary classification, we obtained results that go beyond the state of the art, e.g., Upadhayay and Behzadan #OTHEREFR obtained an accuracy of 70% while we obtained an accuracy of 83.99% with the LSTM model that employs the document embeddings constructed with GLOVE."], "text_after_citation": ["The dataset contains approximately 12.8K human-annotated short statements collected using POLITIFACT.COM's API.", "In this set of experiments, we used all the 6 labels of LIAR, i.e., pants-fire, false, barely-true, half-true, mostly-true, and true, to build our classification models.", "The dataset is highly imbalanced, as there are more news articles labeled with true than news articles labeled with the other five classes combined.", "Due to this high degree of imbalance, the models performed poorly.", "We observe that the best-performing models employ document embeddings constructed with BART."], "citing_paper_content": {"title": "It'S All In The Embedding! Fake News Detection Using Document Embeddings", "abstract": "With the current shift in the mass media landscape from journalistic rigor to social media, personalized social media is becoming the new norm. Although the digitalization progress of the media brings many advantages, it also increases the risk of spreading disinformation, misinformation, and malinformation through the use of fake news. The emergence of this harmful phenomenon has managed to polarize society and manipulate public opinion on particular topics, e.g., elections, vaccinations, etc. Such information propagated on social media can distort public perceptions and generate social unrest while lacking the rigor of traditional journalism. Natural Language Processing and Machine Learning techniques are essential for developing efficient tools that can detect fake news.
Models that use the context of textual data are essential for resolving the fake news detection problem, as they manage to encode linguistic features within the vector representation of words. In this paper, we propose a new approach that uses document embeddings to build multiple models that accurately label news articles as reliable or fake. We also present a benchmark on different architectures that detect fake news using binary or multi-labeled classification. We evaluated the models on five large news corpora using accuracy, precision, and recall. We obtained better results than more complex state-of-the-art Deep Neural Network models. We observe that the most important factor for obtaining high accuracy is the document encoding, not the classification model's complexity."}, "cited_paper_content": {"title": "\"Liar, Liar Pants On Fire\": A New Benchmark Dataset For Fake News Detection", "abstract": "Automatic fake news detection is a challenging problem in deception detection, and it has tremendous real-world political and social impacts. However, statistical approaches to combating fake news has been dramatically limited by the lack of labeled benchmark datasets. In this paper, we present liar: a new, publicly available dataset for fake news detection. We collected a decade-long, 12.8K manually labeled short statements in various contexts from PolitiFact.com, which provides detailed analysis report and links to source documents for each case. This dataset can be used for fact-checking research as well. Notably, this new dataset is an order of magnitude larger than previously largest public fake news datasets of similar type. Empirically, we investigate automatic fake news detection based on surface-level linguistic patterns. We have designed a novel, hybrid convolutional neural network to integrate meta-data with text. 
We show that this hybrid approach can improve a text-only deep learning model."}, "keywords": ["LIAR dataset"], "citation_intent": "method"} {"citing_id": "2305.00873v2", "cited_id": "1602.05629", "section_title": "Introduction", "citation": "However, the model performance degradation is still significant compared with FL methods without considering privacy, such as FedAvg #REFR .", "text_before_citation": ["adopted for ensuring strong client-level DP.", "However, this method includes two operations: clipping the l 2 norm of local updates to a sensitivity threshold C and adding random noise proportional to the model size, whose standard deviation (STD) is also decided by C.", "These steps may cause severe performance degradation #OTHEREFR , #OTHEREFR , especially on large-scale complex models #OTHEREFR , such as ResNet-18 #OTHEREFR , or with heterogeneous data.", "The reasons behind this issue are two-fold: (i) The useful information contained in the local updates is dropped due to the clipping operation, especially with small C values; (ii) The model inconsistency among local models is exacerbated as the addition of random noise severely damages local updates and leads to large variances between local models, especially with large C values #OTHEREFR .", "Existing works try to overcome these issues via restricting the norm of local updates #OTHEREFR and leveraging the local update sparsification technique #OTHEREFR , #OTHEREFR to reduce the adverse impacts of clipping and adding random noise."], "citing_paper_content": {"title": "Towards The Flatter Landscape And Better Generalization In Federated Learning Under Client-Level Differential Privacy", "abstract": "To defend against inference attacks and mitigate the sensitive information leakages in Federated Learning (FL), client-level Differentially Private FL (DPFL) is the de-facto standard for privacy protection by clipping local updates and adding random noise.
However, existing DPFL methods tend to make a sharp loss landscape and have poor weight perturbation robustness, resulting in severe performance degradation. To alleviate these issues, we propose a novel DPFL algorithm named DP-FedSAM, which leverages gradient perturbation to mitigate the negative impact of DP. Specifically, DP-FedSAM integrates Sharpness Aware Minimization (SAM) optimizer to generate local flatness models with improved stability and weight perturbation robustness, which results in the small norm of local updates and robustness to DP noise, thereby improving the performance. To further reduce the magnitude of random noise while achieving better performance, we propose DP-FedSAM-top k by adopting the local update sparsification technique. From the theoretical perspective, we present the convergence analysis to investigate how our algorithms mitigate the performance degradation induced by DP. Meanwhile, we give rigorous privacy guarantees with R\u00e9nyi DP, the sensitivity analysis of local updates, and generalization analysis. At last, we empirically confirm that our algorithms achieve state-of-the-art (SOTA) performance compared with existing SOTA baselines in DPFL."}, "cited_paper_content": {"title": "Communication-Efficient Learning Of Deep Networks From Decentralized Data", "abstract": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. 
We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent."}, "keywords": ["privacy"], "citation_intent": "result"} {"citing_id": "2304.12053v1", "cited_id": "1603.09382", "section_title": "Implementation Details", "citation": "Additionally, a drop path #REFR rate of 0.1 is employed to prevent overfitting, which randomly drops entire paths (i.e., sequences of layers) in the model during training.", "text_before_citation": ["While during testing, we only resize images to 224 \u00d7 224.", "For our detector, we use a ResNet-50 #OTHEREFR pretrained on Ima-geNet #OTHEREFR .", "The model is trained using the AdamW optimizer #OTHEREFR .", "The learning rate was equal to 10 \u22123 , and a step scheduler with 5 epochs was used.", "Weight decay is also applied with a factor of 5 \u2022 10 \u22125 ."], "text_after_citation": ["All training and evaluation processes were carried out on a server with one NVIDIA GeForce RTX 3060 GPU."], "citing_paper_content": {"title": "Improving Synthetically Generated Image Detection In Cross-Concept Settings", "abstract": "New advancements for the detection of synthetic images are critical for fighting disinformation, as the capabilities of generative AI models continuously evolve and can lead to hyper-realistic synthetic imagery at unprecedented scale and speed. 
In this paper, we focus on the challenge of generalizing across different concept classes, e.g., when training a detector on human faces and testing on synthetic animal images, highlighting the ineffectiveness of existing approaches that randomly sample generated images to train their models. By contrast, we propose an approach based on the premise that the robustness of the detector can be enhanced by training it on realistic synthetic images that are selected based on their quality scores according to a probabilistic quality estimation model. We demonstrate the effectiveness of the proposed approach by conducting experiments with generated images from two seminal architectures, StyleGAN2 and Latent Diffusion, and using three different concepts for each, so as to measure the cross-concept generalization ability. Our results show that our quality-based sampling method leads to higher detection performance for nearly all concepts, improving the overall effectiveness of the synthetic image detectors."}, "cited_paper_content": {"title": "Deep Networks With Stochastic Depth", "abstract": "Very deep convolutional networks with hundreds of layers have led to significant reductions in error on competitive benchmarks. Although the unmatched expressiveness of the many layers can be highly desirable at test time, training very deep networks comes with its own set of challenges. The gradients can vanish, the forward flow often diminishes, and the training time can be painfully slow. To address these problems, we propose stochastic depth, a training procedure that enables the seemingly contradictory setup to train short networks and use deep networks at test time. We start with very deep networks but during training, for each mini-batch, randomly drop a subset of layers and bypass them with the identity function. This simple approach complements the recent success of residual networks.
It reduces training time substantially and improves the test error significantly on almost all data sets that we used for evaluation. With stochastic depth we can increase the depth of residual networks even beyond 1200 layers and still yield meaningful improvements in test error (4.91% on CIFAR-10)."}, "keywords": ["overfitting", "layers"], "citation_intent": "method"} {"citing_id": "2304.12825v1", "cited_id": "1802.04364", "section_title": "2D Topology Prior Encoding", "citation": "To encode the 2D topology in R 2 into a prior distribution, we adopt the junction tree encoder architecture from JT-VAE #REFR .", "text_before_citation": [], "text_after_citation": ["The whole procedure is illustrated in Figure 3 and detailed next."], "citing_paper_content": {"title": "Graphvf: Controllable Protein-Specific 3D Molecule Generation With Variational Flow", "abstract": "Designing molecules that bind to specific target proteins is a fundamental task in drug discovery. Recent models leverage geometric constraints to generate ligand molecules that bind cohesively with specific protein pockets. However, these models cannot effectively generate 3D molecules with 2D skeletal curtailments and property constraints, which are pivotal to drug potency and development. To tackle this challenge, we propose GraphVF, a variational flow-based framework that combines 2D topology and 3D geometry, for controllable generation of binding 3D molecules. Empirically, our method achieves state-of-the-art binding affinity and realistic sub-structural layouts for protein-specific generation. In particular, GraphVF represents the first controllable geometry-aware, protein-specific molecule generation method, which can generate binding 3D molecules with tailored sub-structures and physio-chemical properties.
Our code is available at https://github.com/Franco-Solis/GraphVF-code."}, "cited_paper_content": {"title": "Junction Tree Variational Autoencoder For Molecular Graph Generation", "abstract": "We seek to automate the design of molecules based on specific chemical properties. In computational terms, this task involves continuous embedding and generation of molecular graphs. Our primary contribution is the direct realization of molecular graphs, a task previously approached by generating linear SMILES strings instead of graphs. Our junction tree variational autoencoder generates molecular graphs in two phases, by first generating a tree-structured scaffold over chemical substructures, and then combining them into a molecule with a graph message passing network. This approach allows us to incrementally expand molecules while maintaining chemical validity at every step. We evaluate our model on multiple tasks ranging from molecular generation to optimization. Across these tasks, our model outperforms previous state-of-the-art baselines by a significant margin."}, "keywords": ["2D topology", "junction tree encoder"], "citation_intent": "method"} {"citing_id": "2305.01032v1", "cited_id": "1704.03647", "section_title": "Vi. 
Numerical Results", "citation": "By comparing the results with one of the most comprehensive papers on component-based algorithms #REFR , we see that our group-based DiCA returns more accurate solutions in a much smaller number of iterations.", "text_before_citation": ["For Algorithm 2, we need to set four parameters \u03c1 v , \u03c1 \u03b8 , \u03c1 p and \u03c1 q .", "For simplicity and based on numerical results, we control all of them with a single scalar \u03c1 as", "EQUATION", "Table III shows the results of running the DiCA algorithm for some problem instances.", "The value for \u03c1 is chosen as the one that yields the minimum number of iterations among some multiples of 50."], "text_after_citation": ["Some of the common cases reported in #OTHEREFR are given in Table IV for comparison.", "Figure 3 shows the plots of the minimum residual of the regions versus the number of iterations for different values of \u03c1.", "As can be seen, \u03c1 can significantly change the number of iterations.", "As also reported in #OTHEREFR , the oscillations in the progress of the residual slow down the convergence of the distributed algorithms. Figure 4 shows the number of iterations versus \u03c1.", "For all the tested problems, DiCA does not converge if \u03c1 is smaller than a certain limit, for example, in Figure 3 for the case300 problem, when \u03c1 = 200."], "citing_paper_content": {"title": "Distributed Optimization For Power Systems With Radial Partitioning", "abstract": "This paper proposes group-based distributed optimization algorithms on top of intelligent partitioning for the optimal power flow (OPF) problem. Radial partitioning of the graph of a network is introduced as a systematic way to split a large-scale problem into more tractable sub-problems, which can potentially be solved efficiently with methods such as convex relaxations.
The simple implementation of a Distributed Consensus Algorithm (DiCA) with very few parameters makes it viable for different parameter selection methods, which are crucial for the fast convergence of the distributed algorithms. The DiCA algorithm returns more accurate solutions to the tested problems with fewer iterations than component-based algorithms. Our numerical results show the performance of the algorithms for different power network instances and the effect of parameter selection. A software package DiCARP is created, which is implemented in Python using the Pyomo optimization package."}, "cited_paper_content": {"title": "A Component-Based Dual Decomposition Method For The Opf Problem", "abstract": "This paper proposes a component-based dual decomposition of the nonconvex AC optimal power flow (OPF) problem, where the modified dual function is solved in a distributed fashion. The main contribution of this work is that is demonstrates that a distributed method with carefully tuned parameters can converge to globally optimal solutions despite the inherent nonconvexity of the problem and the absence of theoretical guarantees of convergence. This paper is the first to conduct extensive numerical analysis resulting in the identification and tabulation of the algorithmic parameter settings that are crucial for the convergence of the method on 72 AC OPF test instances. Moreover, this work provides a deeper insight into the geometry of the modified Lagrange dual function of the OPF problem and highlights the conditions that make this function differentiable. 
This numerical demonstration of convergence coupled with the scalability and the privacy preserving nature of the proposed method makes it well suited for smart grid applications such as multi-period OPF with demand response (DR) and security constrained unit commitment (SCUC) with contingency constraints and multiple transmission system operators (TSOs)."}, "keywords": ["group-based DiCA", "component-based algorithms"], "citation_intent": "result"} {"citing_id": "2303.14939v1", "cited_id": "1707.06766", "section_title": "Related Work", "citation": "Among the different prediction tasks in the Predictive Process Monitoring state-of-the-art, an important group of papers focuses, as this work, on predicting outcomes #REFR .", "text_before_citation": ["To the best of our knowledge, no other works exist on the accuracy improvement exploiting frequent explanations, except for our previous work #OTHEREFR .", "We hence first position our work with respect to the works in Predictive Process Monitoring, the works focused on explanations in Predictive Process Monitoring, and the works improving the predictive model's performance employing (user) feedback; we finally address the specific comparison with our previous work #OTHEREFR ."], "text_after_citation": ["For instance, several approaches deal with predicting the fulfilment (or the violation) of a boolean predicate in a running case #OTHEREFR , #OTHEREFR .", "Initial approaches and encodings for outcome-oriented Predictive Process Monitoring have been enhanced in terms of performance by introducing pre-processing steps based on clustering and bucketing #OTHEREFR , #OTHEREFR , as well as in terms of prediction accuracy by introducing different types of encodings #OTHEREFR , #OTHEREFR .", "Moreover, more recently, deep learning approaches have also been investigated for predicting outcomes #OTHEREFR , #OTHEREFR .", "Differently from these works, the focus of this paper is not on proposing a specific outcome prediction 
method, but rather on explaining why predictive models are wrong and on leveraging these explanations for eventually improving the performance of the predictive model.", "A number of works have recently focused on providing explanations in Predictive Process Monitoring #OTHEREFR ."], "citing_paper_content": {"title": "Explain, Adapt And Retrain: How To Improve The Accuracy Of A Ppm Classifier Through Different Explanation Styles", "abstract": "Recent papers have introduced a novel approach to explain why a Predictive Process Monitoring (PPM) model for outcome-oriented predictions provides wrong predictions. Moreover, they have shown how to exploit the explanations, obtained using state-of-the art post-hoc explainers, to identify the most common features that induce a predictor to make mistakes in a semi-automated way, and, in turn, to reduce the impact of those features and increase the accuracy of the predictive model. This work starts from the assumption that frequent control flow patterns in event logs may represent important features that characterize, and therefore explain, a certain prediction. 
Therefore, in this paper, we (i) employ a novel encoding able to leverage DECLARE constraints in Predictive Process Monitoring and compare the effectiveness of this encoding with Predictive Process Monitoring state-of-the-art encodings, in particular for the task of outcome-oriented predictions; (ii) introduce a completely automated pipeline for the identification of the most common features inducing a predictor to make mistakes; and (iii) show the effectiveness of the proposed pipeline in increasing the accuracy of the predictive model by validating it on different real-life datasets."}, "cited_paper_content": {"title": "Outcome-Oriented Predictive Process Monitoring: Review And Benchmark", "abstract": "Predictive business process monitoring refers to the act of making predictions about the future state of ongoing cases of a business process, based on their incomplete execution traces and logs of historical (completed) traces. Motivated by the increasingly pervasive availability of fine-grained event data about business process executions, the problem of predictive process monitoring has received substantial attention in the past years. In particular, a considerable number of methods have been put forward to address the problem of outcome-oriented predictive process monitoring, which refers to classifying each ongoing case of a process according to a given set of possible outcomes - e.g. Will the customer complain or not? Will an order be delivered, cancelled or withdrawn? Unfortunately, different authors have used different datasets, experimental settings, evaluation measures and baselines to assess their proposals, resulting in poor comparability and an unclear picture of the relative merits and applicability of different methods.
To address this gap, this article presents a systematic review and taxonomy of outcome-oriented predictive process monitoring methods, and a comparative experimental evaluation of eleven representative methods using a benchmark covering twelve predictive process monitoring tasks based on four real-life event logs."}, "keywords": ["Predictive Process Monitoring"], "citation_intent": "background"} {"citing_id": "2304.04275v1", "cited_id": "1810.11975", "section_title": "Sparse Self-Attention", "citation": "Next, a thresholding step is done, where the probabilities lower than a dynamic threshold \u03c4 are truncated to zero, while redistributing the remaining probabilities. For more details refer to the paper #REFR .", "text_before_citation": ["The key idea here is to turn these non-consequential pairwise connections to zero attention weights, while bumping the attention score of the important ones, i.e., a sparse attention distribution.", "In particular, the Sparsegen-lin activation projects the attention scores a \u2208 R n onto a probability simplex p \u2208 R n , along with a regularization coefficient \u03bb < 1:", "EQUATION", "with \u2206 n\u22121 = {p \u2208 R n | \u2211 i p i = 1, p i \u2265 0} enforcing the constraints that the probabilities sum to one and are non-negative.", "Note that the L2 norm with negative \u03bb regularization acts to actually assign larger probability values in p, as the objective is to minimize the cost function above."], "text_after_citation": ["We can use this Sparse Self-Attention by replacing the softmax with the Sparsegen function:", "EQUATION", "Note that sparse (self-)attention and its variants have been used in NLP works with impressive results #OTHEREFR as well as time series forecasting #OTHEREFR ."], "citing_paper_content": {"title": "Filling Out The Missing Gaps: Time Series Imputation With Semi-Supervised Learning", "abstract": "Missing data in time series is a challenging issue affecting time series analysis.
Missing data occurs due to problems like data drops or sensor malfunctioning. Imputation methods are used to fill in these values, with the quality of imputation having a significant impact on downstream tasks like classification. In this work, we propose a semi-supervised imputation method, ST-Impute, that uses unlabeled data along with the downstream task's labeled data. ST-Impute is based on sparse self-attention and trains on tasks that mimic the imputation process. Our results indicate that the proposed method outperforms the existing supervised and unsupervised time series imputation methods measured on the imputation quality as well as on the downstream tasks ingesting imputed time series."}, "cited_paper_content": {"title": "On Controllable Sparse Alternatives To Softmax", "abstract": "Converting an n-dimensional vector to a probability distribution over n objects is a commonly used component in many machine learning tasks like multiclass classification, multilabel classification, attention mechanisms etc. For this, several probability mapping functions have been proposed and employed in the literature such as softmax, sum-normalization, spherical softmax, and sparsemax, but there is very little understanding in terms of how they relate with each other. Further, none of the above formulations offer an explicit control over the degree of sparsity. To address this, we develop a unified framework that encompasses all these formulations as special cases. This framework ensures simple closed-form solutions and existence of sub-gradients suitable for learning via backpropagation. Within this framework, we propose two novel sparse formulations, sparsegen-lin and sparsehourglass, that seek to provide a control over the degree of desired sparsity. We further develop novel convex loss functions that help induce the behavior of aforementioned formulations in the multilabel classification setting, showing improved performance.
We also demonstrate empirically that the proposed formulations, when used to compute attention weights, achieve better or comparable performance on standard seq2seq tasks like neural machine translation and abstractive summarization."}, "keywords": ["details"], "citation_intent": "method"} {"citing_id": "2304.13722v1", "cited_id": "2004.00049", "section_title": "G.1 Experiments On Openimages500K", "citation": "Note that (Chai et al., 2021) can only generate one deterministic output given the input collage (first row), whereas #REFR generates images with little diversity (second row).", "text_before_citation": ["When training with random crops, the boxes are obtained by sampling from the ground-truth box distribution, which means that the bounding box distribution at inference time is in-distribution, which could explain the lower image FID when compared to Mask-RCNN or OLN.", "However, the latter methods obtain a better object FID, which could be the result of the bounding boxes being more likely to contain actual objects than the random crops baseline.", "Moreover, in Figure 17 , we show generated images using out-of-distribution object combinations in a collage and observe that training with Mask-RCNN and OLN bounding boxes results in better object generations than when training with random crops, as already seen quantitatively in Table 12 , as well as visually comparable image quality and diversity to the model trained with ground-truth bounding boxes.", "Overall, these experiments showcase that M&Ms can be trained without ground-truth bounding boxes to generate reasonable scenes, see Figure 17 , at the expense of worse image and object FID metrics for the in-distribution setting, compared to the model trained with ground-truth boxes.", "Sample obtained with (Chai et al., 2021) Collage Samples obtained with in-domain image editing #OTHEREFR Samples obtained with M&Ms (a) Qualitative comparison with image editing methods (Chai et al., 2021; #OTHEREFR that admit a
collage as input."], "text_after_citation": ["In contrast, M&Ms offers diverse outputs given the same collage (third row).", "Note that neither of the image editing methods supports moving nor resizing the collage elements.", "Sample obtained with (Chai et al., 2021) Collage Samples obtained with in-domain image editing #OTHEREFR Samples obtained with M&Ms (a) Qualitative comparison with image editing methods (Chai et al., 2021; #OTHEREFR that admit a collage as input.", "Comparisons are made with collages that include the novel class cacti.", "Note that (Chai et al., 2021) can only generate one deterministic output given the input collage (first row), whereas #OTHEREFR generates images with little diversity (second row)."], "citing_paper_content": {"title": "Controllable Image Generation Via Collage Representations", "abstract": "Recent advances in conditional generative image models have enabled impressive results. On the one hand, text-based conditional models have achieved remarkable generation quality, by leveraging large-scale datasets of image-text pairs. To enable fine-grained controllability, however, text-based models require long prompts, whose details may be ignored by the model. On the other hand, layout-based conditional models have also witnessed significant advances. These models rely on bounding boxes or segmentation maps for precise spatial conditioning in combination with coarse semantic labels. The semantic labels, however, cannot be used to express detailed appearance characteristics. In this paper, we approach fine-grained scene controllability through image collages which allow a rich visual description of the desired scene as well as the appearance and location of the objects therein, without the need of class nor attribute labels.
We introduce \"mixing and matching scenes\" (M&Ms), an approach that consists of an adversarially trained generative image model which is conditioned on appearance features and spatial positions of the different elements in a collage, and integrates these into a coherent image. We train our model on the OpenImages (OI) dataset and evaluate it on collages derived from OI and MS-COCO datasets. Our experiments on the OI dataset show that M&Ms outperforms baselines in terms of fine-grained scene controllability while being very competitive in terms of image quality and sample diversity. On the MS-COCO dataset, we highlight the generalization ability of our model by outperforming DALL-E in terms of the zero-shot FID metric, despite using two magnitudes fewer parameters and data. Collage based generative models have the potential to advance content creation in an efficient and effective way as they are intuitive to use and yield high quality generations."}, "cited_paper_content": {"title": "In-Domain Gan Inversion For Real Image Editing", "abstract": "Recent work has shown that a variety of controllable semantics emerges in the latent space of the Generative Adversarial Networks (GANs) when being trained to synthesize images. However, it is difficult to use these learned semantics for real image editing. A common practice of feeding a real image to a trained GAN generator is to invert it back to a latent code. However, we find that existing inversion methods typically focus on reconstructing the target image by pixel values yet fail to land the inverted code in the semantic domain of the original latent space. As a result, the reconstructed image cannot well support semantic editing through varying the latent code. To solve this problem, we propose an in-domain GAN inversion approach, which not only faithfully reconstructs the input image but also ensures the inverted code to be semantically meaningful for editing. 
We first learn a novel domain-guided encoder to project any given image to the native latent space of GANs. We then propose a domain-regularized optimization by involving the encoder as a regularizer to fine-tune the code produced by the encoder, which better recovers the target image. Extensive experiments suggest that our inversion method achieves satisfying real image reconstruction and more importantly facilitates various image editing tasks, such as image interpolation and semantic manipulation, significantly outperforming the state of the art."}, "keywords": ["input collage", "images"], "citation_intent": "background"} {"citing_id": "2303.07911v1", "cited_id": "1207.5726", "section_title": "Appendix H: First Benchmarking Sdp 1", "citation": "In this appendix, we detail the derivation of our first benchmarking SDP 1 , based on the SDP for fidelity #REFR . Let and be bipartite states.", "text_before_citation": [], "text_after_citation": ["The SDP for the root fidelity \u221a ( , ), which makes use of Uhlmann's theorem #OTHEREFR , is as follows:", "EQUATION", "where L (H ) is the set of all linear operators acting on the Hilbert space H .", "Note that there is no semidefinite constraint that directly corresponds to optimizing over the set of separable states #OTHEREFR .", "Instead, we can constrain to have a positive partial transpose (PPT) #OTHEREFR and be -extendible #OTHEREFR , since all separable states satisfy these constraints."], "citing_paper_content": {"title": "Quantum Steering Algorithm For Estimating Fidelity Of Separability", "abstract": "Quantifying entanglement is an important task by which the resourcefulness of a state can be measured. Here we develop a quantum algorithm that tests for and quantifies the separability of a general bipartite state, by making use of the quantum steering effect.
Our first separability test consists of a distributed quantum computation involving two parties: a computationally limited client, who prepares a purification of the state of interest, and a computationally unbounded server, who tries to steer the reduced systems to a probabilistic ensemble of pure product states. To design a practical algorithm, we replace the role of the server by a combination of parameterized unitary circuits and classical optimization techniques to perform the necessary computation. The result is a variational quantum steering algorithm (VQSA), which is our second separability test that is better suited for the capabilities of quantum computers available today. This VQSA has an additional interpretation as a distributed variational quantum algorithm (VQA) that can be executed over a quantum network, in which each node is equipped with classical and quantum computers capable of executing VQA. We then simulate our VQSA on noisy quantum simulators and find favorable convergence properties on the examples tested. We also develop semidefinite programs, executable on classical computers, that benchmark the results obtained from our VQSA. Our findings here thus provide a meaningful connection between steering, entanglement, quantum algorithms, and quantum computational complexity theory. They also demonstrate the value of a parameterized mid-circuit measurement in a VQSA and represent a first-of-its-kind application for a distributed VQA. Finally, the whole framework generalizes to the case of multipartite states and entanglement."}, "cited_paper_content": {"title": "Simpler Semidefinite Programs For Completely Bounded Norms", "abstract": "The completely bounded trace and spectral norms, for finite-dimensional spaces, are known to be efficiently expressible by semidefinite programs (J. Watrous, Theory of Computing 5: 11, 2009). 
This paper presents two new, and arguably much simpler, semidefinite programming formulations of these norms."}, "keywords": ["bipartite states", "first benchmarking SDP"], "citation_intent": "background"} {"citing_id": "2304.01955v1", "cited_id": "1803.00418", "section_title": "Dynamic Modeling Of Pipe Flow", "citation": "To solve for dynamics of mass flows and pressures across the system we use the staggered-grid approach of #REFR which is an explicit, conservative, second order, finite difference scheme, stable given a CFL condition is satisfied.", "text_before_citation": ["EQUATION", "\u03c6", "EQUATION", "Where S ij is the cross-section of the pipe.", "Initial conditions for density and mass-flux in the system are constructed based on actual operational data."], "text_after_citation": ["We remind that, as of now, the Israel system does not contain compressors."], "citing_paper_content": {"title": "Control Of Line Pack In Natural Gas System: Balancing Limited Resources Under Uncertainty", "abstract": "We build and experiment with a realistic but reduced natural gas model of Israel. The system is unusual because (a) it is controlled from a limited number of points which are at, or close to, the gas extraction sites offshore of Israel's Mediterranean coast; (b) control specifies average flux at inlet, not pressure; (c) there are no inland compressors to regulate pressure; (d) power system is the main consumer of gas (70% of Israel's power is generated at gas-fired power plants distributed across the country). Nature of the system suggests that a special attention should be given to understanding dynamics driven by fast transients in gas consumption meeting intraday variations in the electricity demand, and accounting for increasing role of uncertain renewable generation (mainly solar). 
Based on all of the above we pose and resolve a sequence of dynamic and control challenges, such as: How to time ramping up- and down-injection of gas to guarantee a healthy intra-day line-pack which meets both pressure constraints and gas-extraction patterns? We report simulation results and utilize monotonicity properties of the natural gas flows which render our conclusions robust to the uncertainties of the edge withdrawals of gas."}, "cited_paper_content": {"title": "An Explicit Staggered-Grid Method For Numerical Simulation Of Large-Scale Natural Gas Pipeline Networks", "abstract": "We present an explicit second order staggered finite difference (FD) discretization scheme for forward simulation of natural gas transport in pipeline networks. By construction, this discretization approach guarantees that the conservation of mass condition is satisfied exactly. The mathematical model is formulated in terms of density, pressure, and mass flux variables, and as a result permits the use of a general equation of state to define the relation between the gas density and pressure for a given temperature. In a single pipe, the model represents the dynamics of the density by propagation of a non-linear wave according to a variable wave speed. We derive compatibility conditions for linking domain boundary values to enable efficient, explicit simulation of gas flows propagating through a network with pressure changes created by gas compressors. We compare our staggered grid method with an explicit operator splitting method and a lumped element scheme, and perform numerical experiments to validate the convergence order of the new discretization approach.
In addition, we perform several computations to investigate the influence of non-ideal equation of state models and temperature effects on pipeline simulations with boundary conditions on various time and space scales."}, "keywords": ["staggered-grid approach", "finite difference scheme"], "citation_intent": "method"} {"citing_id": "2303.07014v1", "cited_id": "1806.03589", "section_title": "Iii. Method", "citation": "In order to force the training generator to produce more realistic outputs, we employ a global discriminator and three local discriminators against the generator. We use SN-PatchGAN #REFR as our global discriminator.", "text_before_citation": ["The generator adopts an encoder-decoder architecture with skip connections #OTHEREFR , as illustrated in Fig. 3(a) .", "Specifically, the encoder is composed of several successive gated convolutional layers #OTHEREFR with stride 2.", "The decoder is composed of several Half-AdaIN with upsampling operations.", "In addition, two CWSI are inserted into the decoder at the resolution of 64 \u00d7 64 and 128 \u00d7 128.", "For the inpainting task, we generally believe that the encoder can transfer the available information from the known pixels to missing pixels by gradually increasing the receptive field, and the decoder is responsible for the reconstruction of details #OTHEREFR , so we only place the control modules in the decoder."], "text_after_citation": ["The local discriminators are focused on specific sub-regions, including two eyes and mouths, which helps the generator synthesize high-frequency textures in these regions. The discriminators are omitted in Fig. 3 ."], "citing_paper_content": {"title": "Reference-Guided Large-Scale Face Inpainting With Identity And Texture Control", "abstract": "Face inpainting aims at plausibly predicting missing pixels of face images within a corrupted region. 
Most existing methods rely on generative models learning a face image distribution from a big dataset, which produces uncontrollable results, especially with large-scale missing regions. To introduce strong control for face inpainting, we propose a novel reference-guided face inpainting method that fills the large-scale missing region with identity and texture control guided by a reference face image. However, generating high-quality results under imposing two control signals is challenging. To tackle such difficulty, we propose a dual control one-stage framework that decouples the reference image into two levels for flexible control: High-level identity information and low-level texture information, where the identity information figures out the shape of the face and the texture information depicts the component-aware texture. To synthesize high-quality results, we design two novel modules referred to as Half-AdaIN and Component-Wise Style Injector (CWSI) to inject the two kinds of control information into the inpainting processing. Our method produces realistic results with identity and texture control faithful to reference images. To the best of our knowledge, it is the first work to concurrently apply identity and component-level controls in face inpainting to promise more precise and controllable results."}, "cited_paper_content": {"title": "Free-Form Image Inpainting With Gated Convolution", "abstract": "We present a generative image inpainting system to complete images with free-form mask and guidance. The system is based on gated convolutions learned from millions of images without additional labelling efforts. The proposed gated convolution solves the issue of vanilla convolution that treats all input pixels as valid ones, generalizes partial convolution by providing a learnable dynamic feature selection mechanism for each channel at each spatial location across all layers. 
Moreover, as free-form masks may appear anywhere in images with any shape, global and local GANs designed for a single rectangular mask are not applicable. Thus, we also present a patch-based GAN loss, named SN-PatchGAN, by applying spectral-normalized discriminator on dense image patches. SN-PatchGAN is simple in formulation, fast and stable in training. Results on automatic image inpainting and user-guided extension demonstrate that our system generates higher-quality and more flexible results than previous methods. Our system helps users quickly remove distracting objects, modify image layouts, clear watermarks and edit faces. Code, demo and models are available at: \\url{https://github.com/JiahuiYu/generative_inpainting}."}, "keywords": ["training generator", "SN-PatchGAN"], "citation_intent": "method"} {"citing_id": "2304.06767v1", "cited_id": "1707.06347", "section_title": "Related Work", "citation": "The main idea is learning a reward function to reflect human preferences with human annotations and optimize language models by RL methods like proximal policy optimization (PPO) #REFR .", "text_before_citation": ["Alignment of Generative Models.", "Alignment #OTHEREFR is first proposed to build agents that behave in accordance with the human's intention.", "By communicating with humans, agents can get accurate supervised signals #OTHEREFR by applying several scalable reward learning methods #OTHEREFR .", "Alignment benefits many recent generative foundation models, like InstructGPT #OTHEREFR , Claude #OTHEREFR and Sparrow #OTHEREFR , in achieving better performance.", "In language foundation model training #OTHEREFR , alignment is often achieved by Reinforcement Learning from Human Feedback (RLHF)."], "text_after_citation": ["By incorporating supervised finetuning (SFT), InstructGPT #OTHEREFR successfully achieved alignment for GPT-3 #OTHEREFR .", "Besides, Claude #OTHEREFR and Sparrow #OTHEREFR stressed aligning language foundation models from helpful,
honest, and harmless (HHH) human feedback.", "In visual generative models, several works #OTHEREFR studied aligning them with human feedback.", "Models are expected to understand specific visual control signals like colors, counts, and backgrounds #OTHEREFR more accurately after alignment.", "It is still challenging to achieve tradeoffs between aligning human preferences and generating high-fidelity images."], "citing_paper_content": {"title": "Raft: Reward Ranked Finetuning For Generative Foundation Model Alignment", "abstract": "Generative foundation models are susceptible to implicit biases that can arise from extensive unsupervised training data. Such biases can produce suboptimal samples, skewed outcomes, and unfairness, with potentially significant repercussions. Consequently, aligning these models with human ethics and preferences is an essential step toward ensuring their responsible and effective deployment in real-world applications. Prior research has primarily employed Reinforcement Learning from Human Feedback (RLHF) as a means of addressing this problem, wherein generative models are fine-tuned using RL algorithms guided by a human-feedback-informed reward model. However, the inefficiencies and instabilities associated with RL algorithms frequently present substantial obstacles to the successful alignment of generative models, necessitating the development of a more robust and streamlined approach. To this end, we introduce a new framework, Reward rAnked FineTuning (RAFT), designed to align generative models more effectively. Utilizing a reward model and a sufficient number of samples, our approach selects the high-quality samples, discarding those that exhibit undesired behavior, and subsequently assembles a streaming dataset. This dataset serves as the basis for aligning the generative model and can be employed under both offline and online settings.
Notably, the sample generation process within RAFT is gradient-free, rendering it compatible with black-box generators. Through extensive experiments, we demonstrate that our proposed algorithm exhibits strong performance in the context of both large language models and diffusion models."}, "cited_paper_content": {"title": "Proximal Policy Optimization Algorithms", "abstract": "We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a \"surrogate\" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). 
Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time."}, "keywords": ["proximal policy optimization"], "citation_intent": "method"} {"citing_id": "2303.06223v1", "cited_id": "1911.03064", "section_title": "Large Language Models", "citation": "Counterfactual fairness #REFR examines how perturbing the demographic signals of existing test examples can change the performance of the model (e.g.", "text_before_citation": ["Current LLM evaluation mechanisms include quantitative metrics measuring notions of accuracy (how similar are the generated outputs to the expected outputs), robustness (how resilient is the model to transformations of the input), calibration (how meaningful are the generated probabilities in respect to uncertainty), efficiency (what are the energy, carbon, and time costs for training and inference) and more #OTHEREFR .", "Some also go beyond singular trainingtime loss objectives and implement reinforcement learning using human feedback in the loop #OTHEREFR .", "A variety of potential harms and failure modes for LLMs have also been identified.", "There are substantial environmental costs associated with the volume of computational power required for training and inference #OTHEREFR .", "There are also concerns of LLMs propagating unfairness or bias."], "text_after_citation": ["\"He worked at the local hospital\" versus \"She worked at the local hospital\").", "Fairness can also be evaluated via performance disparities between demographic subgroups.", "There are also other concerning forms of biases such as stereotypical associations, erasure, and over-representation in the semantics of its output #OTHEREFR . 
Finally, LLMs have been shown to produce toxic outputs.", "Toxicity in this context refers to hateful, violent, or offensive text #OTHEREFR , and has been shown to result even when the text prompt input is not itself toxic #OTHEREFR .", "LLMs often suffer from factual errors: they can \"hallucinate\" information #OTHEREFR by providing very confident-sounding but entirely false responses."], "citing_paper_content": {"title": "Who'S Thinking? A Push For Human-Centered Evaluation Of Llms Using The Xai Playbook", "abstract": "Deployed artificial intelligence (AI) often impacts humans, and there is no one-size-fits-all metric to evaluate these tools. Human-centered evaluation of AI-based systems combines quantitative and qualitative analysis and human input. It has been explored to some depth in the explainable AI (XAI) and human-computer interaction (HCI) communities. Gaps remain, but the basic understanding that humans interact with AI and accompanying explanations, and that humans' needs, complete with their cognitive biases and quirks, should be held front and center, is accepted by the community. In this paper, we draw parallels between the relatively mature field of XAI and the rapidly evolving research boom around large language models (LLMs). Accepted evaluative metrics for LLMs are not human-centered. We argue that many of the same paths tread by the XAI community over the past decade will be retread when discussing LLMs. Specifically, we argue that humans' tendencies, again complete with their cognitive biases and quirks, should rest front and center when evaluating deployed LLMs. We outline three developed focus areas of human-centered evaluation of XAI: mental models, use case utility, and cognitive engagement, and we highlight the importance of exploring each of these concepts for LLMs. Our goal is to jumpstart human-centered LLM evaluation.
CCS CONCEPTS \u2022 Human-centered computing \u2192 User models; HCI theory, concepts and models; \u2022 Computing methodologies \u2192 Philosophical/theoretical foundations of artificial intelligence; Artificial intelligence."}, "cited_paper_content": {"title": "Reducing Sentiment Bias In Language Models Via Counterfactual Evaluation", "abstract": "Recent improvements in large-scale language models have driven progress on automatic generation of syntactically and semantically consistent text for many real-world applications. Many of these advances leverage the availability of large corpora. While training on such corpora encourages the model to understand long-range dependencies in text, it can also result in the models internalizing the social biases present in the corpora. This paper aims to quantify and reduce biases exhibited by language models. Given a conditioning context (e.g. a writing prompt) and a language model, we analyze if (and how) the sentiment of the generated text is affected by changes in values of sensitive attributes (e.g. country names, occupations, genders, etc.) in the conditioning context, a.k.a. counterfactual evaluation. We quantify these biases by adapting individual and group fairness metrics from the fair machine learning literature. Extensive evaluation on two different corpora (news articles and Wikipedia) shows that state-of-the-art Transformer-based language models exhibit biases learned from data. 
We propose embedding-similarity and sentiment-similarity regularization methods that improve both individual and group fairness metrics without sacrificing perplexity and semantic similarity---a positive step toward development and deployment of fairer language models for real-world applications."}, "keywords": ["model", "Counterfactual fairness"], "citation_intent": "background"} {"citing_id": "2304.02147v1", "cited_id": "1811.11742", "section_title": "Method", "citation": "Following #REFR , we predict the 3D pose for the central frame from any such sequence, i.e. p_{i/2} \u2208 R^{J\u00d73} .", "text_before_citation": ["In the following subsections, we present an overview of our solution methodology for estimating 3D poses from a sequence of 2D poses, then we describe our global network architecture, and lastly we present our dynamic multi-headed convolutional self-attention mechanism.", "The overall architecture of our methodology is described in Figure 1 .", "Given a sequence of 2D poses P = {P_i}_{i=1}^{T} \u2282 R^{J\u00d72} where T represents the number of frames in the sequence and J is the number of joints in the skeleton.", "We seek to reconstruct the 3D poses in the root relative camera reference frame (i.e.", "the camera reference frame where the root joint sits at the origin)."], "text_after_citation": ["Our network contains two Dynamic ConvFormer blocks, one with spatial attention and the other with temporal attention.", "More specifically, we leverage a spatial attention mechanism to extract frame-wise inter-joint dependencies by analyzing sections of joints that are related.", "The temporal attention mechanism extracts global inter-frame relationships by analyzing correlations between the temporal profiles of joints.", "In contrast to #OTHEREFR , which queries latent pose representations for individual frames and then computes attention with respect to the temporal axis, our temporal joints profile mechanism fuses temporal information at the querying level
prior to computing self-attention with respect to the temporal axis."], "citing_paper_content": {"title": "Convformer: Parameter Reduction In Transformer Models For 3D Human Pose Estimation By Leveraging Dynamic Multi-Headed Convolutional Attention", "abstract": "Recently, fully-transformer architectures have replaced the defacto convolutional architecture for the 3D human pose estimation task. In this paper we propose ConvFormer , a novel convolutional transformer that leverages a new dynamic multi-headed convolutional self-attention mechanism for monocular 3D human pose estimation. We designed a spatial and temporal convolutional transformer to comprehensively model human joint relations within individual frames and globally across the motion sequence. Moreover, we introduce a novel notion of temporal joints profile for our temporal ConvFormer that fuses complete temporal information immediately for a local neighborhood of joint features. We have quantitatively and qualitatively validated our method on three common benchmark datasets: Human3.6M, MPI-INF-3DHP, and HumanEva. Extensive experiments have been conducted to identify the optimal hyper-parameter set. These experiments demonstrated that we achieved a significant parameter reduction relative to prior transformer models while attaining State-of-the-Art (SOTA) or near SOTA on all three datasets. Additionally, we achieved SOTA for Protocol III on H36M for both GT and CPN detection inputs. Finally, we obtained SOTA on all three metrics for the MPI-INF-3DHP dataset and for all three subjects on HumanEva under Protocol II."}, "cited_paper_content": {"title": "3D Human Pose Estimation In Video With Temporal Convolutions And Semi-Supervised Training", "abstract": "In this work, we demonstrate that 3D poses in video can be effectively estimated with a fully convolutional model based on dilated temporal convolutions over 2D keypoints. 
We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints. In the supervised setting, our fully-convolutional model outperforms the previous best result from the literature by 6 mm mean per-joint position error on Human3.6M, corresponding to an error reduction of 11%, and the model also shows significant improvements on HumanEva-I. Moreover, experiments with back-projection show that it comfortably outperforms previous state-of-the-art results in semi-supervised settings where labeled data is scarce. Code and models are available at https://github.com/facebookresearch/VideoPose3D"}, "keywords": ["3D"], "citation_intent": "method"} {"citing_id": "2303.02411v1", "cited_id": "1906.00067", "section_title": "Introduction", "citation": "The first image of Figure 1 requires knowledge about human culture and history #REFR , to combine with visual information: the object in the image is a church, and people usually go to the church on Sundays.", "text_before_citation": ["For example, masking out words from image captions enforces learning how to fill them based on visual cues; reversely, image regions can be masked out, with language guiding their reconstruction.", "Task-specific fine-tuning steps upon this basic understanding of vision and language, by refining the neural weights of the trained model to adapt to each specific task at a time, upon which the final evaluation is performed.", "Despite the rich VL knowledge acquired during this process, current transformer-based VL models #OTHEREFR lack generalization to several concepts and scenarios that require commonsense knowledge, or knowledge of abstract entities, facts and real-world events.", "Of course, this is somehow expected, since neither pre-training nor fine-tuning VL datasets contain or demand perceiving concepts 
beyond visual descriptions.", "Figure 1 presents some examples of this claim: questions (Q) about the image (I) require some knowledge beyond the visual domain, so that the correct answer (A) can be inferred. #OTHEREFR ."], "text_after_citation": ["The second image #OTHEREFR requires one more reasoning step, since it is not only required to detect that this is a postage stamp containing the photo of a person (visual information), but also who this person is.", "Knowledge about named entities recognizes this person as Alexander Hamilton.", "Further factual knowledge provides that Alexander Hamilton was born in today's Saint Kitts and Nevis and Saint Kitts and Nevis is in North America.", "The combination of these two facts derives the final answer Alexander Hamilton was born in North America.", "The third image #OTHEREFR requires the visual extraction of the two people present in it."], "citing_paper_content": {"title": "The Contribution Of Knowledge In Visiolinguistic Learning: A Survey On Tasks And Challenges", "abstract": "Recent advancements in visiolinguistic (VL) learning have allowed the development of multiple models and techniques that offer several impressive implementations, able to currently resolve a variety of tasks that require the collaboration of vision and language. Current datasets used for VL pre-training only contain a limited amount of visual and linguistic knowledge, thus significantly limiting the generalization capabilities of many VL models. External knowledge sources such as knowledge graphs (KGs) and Large Language Models (LLMs) are able to cover such generalization gaps by filling in missing knowledge, resulting in the emergence of hybrid architectures. In the current survey, we analyze tasks that have benefited from such hybrid approaches.
Moreover, we categorize existing knowledge sources and types, proceeding to discussion regarding the KG vs LLM dilemma and its potential impact to future hybrid approaches."}, "cited_paper_content": {"title": "Ok-Vqa: A Visual Question Answering Benchmark Requiring External Knowledge", "abstract": "Visual Question Answering (VQA) in its ideal form lets us study reasoning in the joint space of vision and language and serves as a proxy for the AI task of scene understanding. However, most VQA benchmarks to date are focused on questions such as simple counting, visual attributes, and object detection that do not require reasoning or knowledge beyond what is in the image. In this paper, we address the task of knowledge-based visual question answering and provide a benchmark, called OK-VQA, where the image content is not sufficient to answer the questions, encouraging methods that rely on external knowledge resources. Our new dataset includes more than 14,000 questions that require external knowledge to answer. We show that the performance of the state-of-the-art VQA models degrades drastically in this new setting. Our analysis shows that our knowledge-based VQA task is diverse, difficult, and large compared to previous knowledge-based VQA datasets. We hope that this dataset enables researchers to open up new avenues for research in this domain."}, "keywords": ["visual information"], "citation_intent": "background"} {"citing_id": "2304.07638v1", "cited_id": "1701.06686", "section_title": "Outlook", "citation": "This would extend an analogous result for causal effects by Richardson et al. from Ref. #REFR (see Rems.", "text_before_citation": ["Counterfactuals. An obvious next step, pointed out in Sec.", "8.2.3, is to prove completeness of the identification algorithm id-cf for counterfactuals.", "This would establish necessity of the diagrammatic identifiability criterion from Cor.", "91 and complete the diagrammatic translation of the results by Shpitser et al. from Ref. 
#OTHEREFR .", "Once completeness is established, it is natural to check whether the string diagrammatic treatment of counterfactuals actually allows for an easy argument to the effect that there is no loss of generality in studying the identifiability of counterfactuals based on ADMGs as latent projections."], "text_after_citation": ["81 and 93) and fill a gap that seems to currently exist in the conventional literature.", "Beyond these, it would be interesting to study whether our approach might naturally extend to cover socalled nested counterfactuals #OTHEREFR by allowing for connections across worlds other than just through the shared background variables U .", "Finally, the generalisation of the notion of a counterfactual in Sec.", "8.3, which involves the conditioning with fuzzy facts and parallel worlds defined by interventions more general than do-interventions, deserves further exploration.", "In particular, one might study how the criteria and methods for the identification of counterfactuals would change in this case."], "citing_paper_content": {"title": "Causal Models In String Diagrams", "abstract": "The framework of causal models, pioneered by Pearl and his collaborators, as well as Spirtes, Glymour and Scheines, provides a principled approach to causal reasoning which is applied today across many scientific domains. Here we present the framework of causal models in the language of string diagrams, interpreted formally using category theory. A class of string diagrams, called network diagrams, are in 1-to-1 correspondence with directed acyclic graphs. A causal model is given by interpreting the components of such a diagram in terms of stochastic maps, functions, or more general channels in a symmetric monoidal category with a 'copy-discard' structure (cd-category). This represents a model as a single mathematical object that can be reasoned with intuitively and yet rigorously. 
Building on earlier works, most notably by Fong and Jacobs, Kissinger and Zanasi, as well as Fritz and Klingler, this work presents diagrammatic definitions of causal models and functional causal models in a cd-category, which generalise causal Bayesian networks and structural causal models, respectively. We formalise the most general kind of interventions on a model, including but beyond atomic ones described by do-interventions, and present the natural notion of an open causal model, as a causal model 'with inputs'. To apply these to causal reasoning, we also give an approach to conditioning based on a normalisation box, which allows causal inference calculations to be done fully diagrammatically. We use these to define counterfactuals in this setup, and to treat the problems of the identifiability of both causal effects and counterfactuals fully diagrammatically. The benefits of such a presentation of causal models lie both in foundational questions in causal reasoning, and in particular in their clarificatory role and pedagogical value. In fact this manuscript aims to be accessible to different communities, including causal model practitioners as well as researchers in applied category theory. For illustration of the key ideas many examples from the causal model literature are discussed in the diagrammatic language. Overall, we argue and demonstrate that causal reasoning according to the causal model framework is most naturally and intuitively done as diagrammatic reasoning."}, "cited_paper_content": {"title": "Nested Markov Properties For Acyclic Directed Mixed Graphs", "abstract": "Directed acyclic graph (DAG) models may be characterized in at least four different ways: via a factorization, the d-separation criterion, the moralization criterion, and the local Markov property. As pointed out by Robins (1986, 1999), Verma and Pearl (1990), and Tian and Pearl (2002b), marginals of DAG models also imply equality constraints that are not conditional independences. 
The well-known `Verma constraint' is an example. Constraints of this type were used for testing edges (Shpitser et al., 2009), and an efficient marginalization scheme via variable elimination (Shpitser et al., 2011). ::: We show that equality constraints like the `Verma constraint' can be viewed as conditional independences in kernel objects obtained from joint distributions via a fixing operation that generalizes conditioning and marginalization. We use these constraints to define, via Markov properties and a factorization, a graphical model associated with acyclic directed mixed graphs (ADMGs). We show that marginal distributions of DAG models lie in this model, prove that a characterization of these constraints given in (Tian and Pearl, 2002b) gives an alternative definition of the model, and finally show that the fixing operation we used to define the model can be used to give a particularly simple characterization of identifiable causal effects in hidden variable graphical causal models."}, "keywords": ["causal effects"], "citation_intent": "result"} {"citing_id": "2304.10726v1", "cited_id": "1910.10601", "section_title": "Related Work", "citation": "Moreover, empirical evaluation of 9 static analysis tools #REFR classified 93% of contracts as vulnerable, thus indicating a considerable number of false positives.", "text_before_citation": ["Although such tools are very impressive, and indeed we ourselves use Slither, this reliance on expert rules can make these tools difficult to maintain and update.", "We are unaware of any detection tool that detects all known vulnerabilities; or that is easily extendable for future bugs without human developers carefully crafting subtle expert rules and/or hardcoding additional features.", "Most smart contract vulnerability analyzers use symbolic execution to reason about all execution paths of a program.", "However, symbolic execution can suffer from \"path explosion\" when the size and complexity of the code increases, 
leading to significant time and space requirements.", "Practical limits on time and space can lead to difficulties analyzing smart contracts at scale."], "text_after_citation": ["In addition, only a few vulnerabilities were detected simultaneously that got consensus from four or more tools.", "Fuzzing is a dynamic analysis technique, that has the advantage of scaling well to larger programs.", "Contractfuzzer #OTHEREFR , and Echidna #OTHEREFR are two notable examples applied to smart contracts.", "Rather than relying on a fixed set of pre-defined bug oracles to detect vulnerabilities, fuzzing technique uses sophisticated grammar-based fuzzing campaigns based on a contract API to falsify user-defined predicates or Solidity assertions.", "However, generating meaningful inputs for fuzzing typically requires annotating the source code of a contract."], "citing_paper_content": {"title": "Smart Learning To Find Dumb Contracts", "abstract": "We introduce Deep Learning Vulnerability Analyzer (DLVA), a vulnerability detection tool for Ethereum smart contracts based on powerful deep learning techniques for sequential data adapted for bytecode. We train DLVA to judge bytecode even though the supervising oracle, Slither, can only judge source code. DLVA's training algorithm is general: we \"extend\" a source code analysis to bytecode without any manual feature engineering, predefined patterns, or expert rules. DLVA's training algorithm is also robust: it overcame a 1.25% error rate mislabeled contracts, and-the student surpassing the teacher-found vulnerable contracts that Slither mislabeled. In addition to extending a source code analyzer to bytecode, DLVA is much faster than conventional tools for smart contract vulnerability detection based on formal methods: DLVA checks contracts for 29 vulnerabilities in 0.2 seconds, a speedup of 10-500x+ compared to traditional tools. DLVA has three key components. 
Smart Contract to Vector (SC2V) uses neural networks to map arbitrary smart contract bytecode to an high-dimensional floating-point vector. Sibling Detector (SD) classifies contracts when a target contract's vector is Euclidian-close to a labeled contract's vector in a training set; although only able to judge 55.7% of the contracts in our test set, it has an average accuracy of 97.4% with a false positive rate of only 0.1%. Lastly, Core Classifier (CC) uses neural networks to infer vulnerable contracts regardless of vector distance. DLVA has an overall accuracy of 96.6% with an associated false positive rate of only 3.7%. We benchmark DLVA's CC with 10 \"off-the-shelf\" machine learning techniques and show that the CC is more accurate, reducing the average size of the error set by 36.5%. We also benchmark DLVA against well-known alternatives: Oyente, Mythril, Osiris, Smartcheck, Slither, and SoliAudit. DLVA enjoyed meaningfully higher true positive rates while producing comparable false positive rates. Moreover, it did so despite using dramatically less analysis time. Notably, DLVA outperformed state-of-the-art alternatives without using any painstakingly-crafted expert rules or predefined patterns. Table 1: Summary of DLVA results on all datasets analysed (TPR: detection rate; FPR: false alarm rate). Analysed data Test set Acc TPR FPR EthereumSC large [2] 22,634 87.7% 87.3% 11.9% EthereumSC small [3] 1,381 97.6% 95.4% 2.3% SolidiFI benchmark [5] 444 98.6% 100% 1.9% Elysium benchmark [1] 900 99.5% 99.3% 2.2% Reentrancy benchmark [4] 473 99.4% 94.3% 0.0% Average 25,832 96.6% 95.3% 3.7%"}, "cited_paper_content": {"title": "Empirical Review Of Automated Analysis Tools On 47,587 Ethereum Smart Contracts", "abstract": "Over the last few years, there has been substantial research on automated analysis, testing, and debugging of Ethereum smart contracts. However, it is not trivial to compare and reproduce that research. 
To address this, we present an empirical evaluation of 9 state-of-the-art automated analysis tools using two new datasets: i) a dataset of 69 annotated vulnerable smart contracts that can be used to evaluate the precision of analysis tools; and ii) a dataset with all the smart contracts in the Ethereum Blockchain that have Solidity source code available on Etherscan (a total of 47,518 contracts). The datasets are part of SmartBugs, a new extendable execution framework that we created to facilitate the integration and comparison between multiple analysis tools and the analysis of Ethereum smart contracts. We used SmartBugs to execute the 9 automated analysis tools on the two datasets. In total, we ran 428,337 analyses that took approximately 564 days and 3 hours, being the largest experimental setup to date both in the number of tools and in execution time. We found that only 42% of the vulnerabilities from our annotated dataset are detected by all the tools, with the tool Mythril having the higher accuracy (27%). When considering the largest dataset, we observed that 97% of contracts are tagged as vulnerable, thus suggesting a considerable number of false positives. Indeed, only a small number of vulnerabilities (and of only two categories) were detected simultaneously by four or more tools."}, "keywords": ["false positives", "9 static analysis"], "citation_intent": "background"} {"citing_id": "2303.06872v3", "cited_id": "1812.07035", "section_title": "B. Camera-Lidar Fusion For Relocalization With Multi-Head Self-Attention", "citation": "To take into account the continuity of the rotation angle #REFR , we present the rotation q as [cos \u03b8, sin \u03b8] T rather than \u03b8.", "text_before_citation": ["T and the orientation q = [cos \u03b8, sin \u03b8]", "T from the output of the MHSA module.", "It consists of a position branch and an orientation branch as in #OTHEREFR , #OTHEREFR . 
Each branch is composed of consecutive MLPs.", "In #OTHEREFR , a leaky ReLU activation function was used after each MLP except for the last one in its regression head, but we replace it with the ReLU activation function in our network.", "Different from most of the previous studies for end-to-end relocalization, both the position and the orientation are two-dimensional under the assumption that typical serving robots move on planar space."], "text_after_citation": ["To our best knowledge, this work is the first study addressing the end-to-end relocalization for a serving robot based on the camera-2D LiDAR fusion in two-dimensional planar space."], "citing_paper_content": {"title": "Fusionloc: Camera-2D Lidar Fusion Using Multi-Head Self-Attention For End-To-End Serving Robot Relocalization", "abstract": "As technology advances in autonomous mobile robots, mobile service robots have been actively used more and more for various purposes. Especially, serving robots have been not surprising products anymore since the COVID-19 pandemic. One of the practical problems in operating serving a robot is that it often fails to estimate its pose on a map that it moves around. Whenever the failure happens, servers should bring the serving robot to its initial location and reboot it manually. In this paper, we focus on end-to-end relocalization of serving robots to address the problem. It is to predict robot pose directly from only the onboard sensor data using neural networks. In particular, we propose a deep neural network architecture for the relocalization based on camera-2D LiDAR sensor fusion. We call the proposed method FusionLoc. In the proposed method, the multi-head selfattention complements different types of information captured by the two sensors to regress the robot pose. 
Our experiments on a dataset collected by a commercial serving robot demonstrate that FusionLoc can provide better performances than previous end-to-end relocalization methods taking only a single image or a 2D LiDAR point cloud as well as a straightforward fusion method concatenating their features."}, "cited_paper_content": {"title": "On The Continuity Of Rotation Representations In Neural Networks", "abstract": "In neural networks, it is often desirable to work with various representations of the same space. For example, 3D rotations can be represented with quaternions or Euler angles. In this paper, we advance a definition of a continuous representation, which can be helpful for training deep neural networks. We relate this to topological concepts such as homeomorphism and embedding. We then investigate what are continuous and discontinuous representations for 2D, 3D, and n-dimensional rotations. We demonstrate that for 3D rotations, all representations are discontinuous in the real Euclidean spaces of four or fewer dimensions. Thus, widely used representations such as quaternions and Euler angles are discontinuous and difficult for neural networks to learn. We show that the 3D rotations have continuous representations in 5D and 6D, which are more suitable for learning. We also present continuous representations for the general case of the n-dimensional rotation group SO(n). While our main focus is on rotations, we also show that our constructions apply to other groups such as the orthogonal group and similarity transforms.
We finally present empirical results, which show that our continuous rotation representations outperform discontinuous ones for several practical problems in graphics and vision, including a simple autoencoder sanity test, a rotation estimator for 3D point clouds, and an inverse kinematics solver for 3D human poses."}, "keywords": ["rotation"], "citation_intent": "method"} {"citing_id": "2305.00127v1", "cited_id": "1507.06527", "section_title": "B. Designing The Framework", "citation": "There are a few popular DRL algorithms for solving POMDP such as Deep Recurrent Q-Learning (DRQN) #REFR and RDPG.", "text_before_citation": ["As we focus on energy management within one day with T time steps, our task corresponds to a finite horizon POMDP model."], "text_after_citation": ["The basic ideas of DRQN and RDPG are to add recurrency to DQN and DDPG algorithms, respectively, by replacing the first fully-connected layer with a recurrent long short-term memory (LSTM) layer.", "However, the above DRL algorithms are developed for an infinite horizon setting, while the value function, i.e., critic, and the policy, i.e., actor, are normally dependent on the time steps for the finite horizon case.", "Therefore, we propose a novel DRL algorithm named HAFH-RDPG, which adopts a framework that is a combination of dynamic programming and DRL, where RDPG with fixed target is embedded within the framework of finite-horizon value iteration.", "In HAFH-RDPG, the finite-horizon value iteration starts from the time step T , and uses backward induction to iteratively derive the value function and optimal policy for each time step t \u2208 {T, T \u2212 1, ..., 1}, until it reaches the first time step t = 1.", "In each time step, the RDPG algorithm is used to solve a simple one-step POMDP where the target actor and critic networks, i.e., \u03bb t and \u00b5 t are fixed to be the trained actor and critic networks of the next time step, i.e., \u03bb t+1 and \u00b5 t+1 , which greatly increases 
stability and performance."], "citing_paper_content": {"title": "Optimal Scheduling In Iot-Driven Smart Isolated Microgrids Based On Deep Reinforcement Learning", "abstract": "In this paper, we investigate the scheduling issue of diesel generators (DGs) in an Internet of Things (IoT)-Driven isolated microgrid (MG) by deep reinforcement learning (DRL). The renewable energy is fully exploited under the uncertainty of renewable generation and load demand. The DRL agent learns an optimal policy from history renewable and load data of previous days, where the policy can generate real-time decisions based on observations of past renewable and load data of previous hours collected by connected sensors. The goal is to reduce operating cost on the premise of ensuring supply-demand balance. In specific, a novel finite-horizon partial observable Markov decision process (POMDP) model is conceived considering the spinning reserve. In order to overcome the challenge of discrete-continuous hybrid action space due to the binary DG switching decision and continuous energy dispatch (ED) decision, a DRL algorithm, namely the hybrid action finite-horizon RDPG (HAFH-RDPG), is proposed. HAFH-RDPG seamlessly integrates two classical DRL algorithms, i.e., deep Q-network (DQN) and recurrent deterministic policy gradient (RDPG), based on a finite-horizon dynamic programming (DP) framework. Extensive experiments are performed with real-world data in an IoT-driven MG to evaluate the capability of the proposed algorithm in handling the uncertainty due to inter-hour and inter-day power fluctuation and to compare its performance with those of the benchmark algorithms."}, "cited_paper_content": {"title": "Deep Recurrent Q-Learning For Partially Observable Mdps", "abstract": "Deep Reinforcement Learning has yielded proficient controllers for complex tasks. However, these controllers have limited memory and rely on being able to perceive the complete game screen at each decision point. 
To address these shortcomings, this article investigates the effects of adding recurrency to a Deep Q-Network (DQN) by replacing the first post-convolutional fully-connected layer with a recurrent LSTM. The resulting \\textit{Deep Recurrent Q-Network} (DRQN), although capable of seeing only a single frame at each timestep, successfully integrates information through time and replicates DQN's performance on standard Atari games and partially observed equivalents featuring flickering game screens. Additionally, when trained with partial observations and evaluated with incrementally more complete observations, DRQN's performance scales as a function of observability. Conversely, when trained with full observations and evaluated with partial observations, DRQN's performance degrades less than DQN's. Thus, given the same length of history, recurrency is a viable alternative to stacking a history of frames in the DQN's input layer and while recurrency confers no systematic advantage when learning to play the game, the recurrent net can better adapt at evaluation time if the quality of observations changes."}, "keywords": ["Deep Recurrent Q-Learning"], "citation_intent": "method"} {"citing_id": "2303.11100v1", "cited_id": "1512.02325", "section_title": "III. 
MTHARS Methodology", "citation": "In this section, we provide details of the problem definition and proposed MTHARS method inspired by SSD #REFR that exploited in the computer vision field.", "text_before_citation": [], "text_after_citation": ["There are four main components: multiscale window generation and matching, non-maximum suppression, model architecture, and model training."], "citing_paper_content": {"title": "A Multi-Task Deep Learning Approach For Sensor-Based Human Activity Recognition And Segmentation", "abstract": "Sensor-based human activity segmentation and recognition are two important and challenging problems in many real-world applications and they have drawn increasing attention from the deep learning community in recent years. Most of the existing deep learning works were designed based on pre-segmented sensor streams and they have treated activity segmentation and recognition as two separate tasks. In practice, performing data stream segmentation is very challenging. We believe that both activity segmentation and recognition may convey unique information which can complement each other to improve the performance of the two tasks. In this paper, we firstly proposes a new multitask deep neural network to solve the two tasks simultaneously. The proposed neural network adopts selective convolution and features multiscale windows to segment activities of long or short time durations. First, multiple windows of different scales are generated to center on each unit of the feature sequence. Then, the model is trained to predict, for each window, the activity class and the offset to the true activity boundaries. Finally, overlapping windows are filtered out by non-maximum suppression, and adjacent windows of the same activity are concatenated to complete the segmentation task.
Extensive experiments were conducted on eight popular benchmarking datasets, and the results show that our proposed method outperforms the state-of-the-art methods both for activity recognition and segmentation."}, "cited_paper_content": {"title": "Ssd: Single Shot Multibox Detector", "abstract": "We present a method for detecting objects in images using a single deep neural network. Our approach, named SSD, discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location. At prediction time, the network generates scores for the presence of each object category in each default box and produces adjustments to the box to better match the object shape. Additionally, the network combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes. SSD is simple relative to methods that require object proposals because it completely eliminates proposal generation and subsequent pixel or feature resampling stages and encapsulates all computation in a single network. This makes SSD easy to train and straightforward to integrate into systems that require a detection component. Experimental results on the PASCAL VOC, COCO, and ILSVRC datasets confirm that SSD has competitive accuracy to methods that utilize an additional object proposal step and is much faster, while providing a unified framework for both training and inference. For \\(300 \\times 300\\) input, SSD achieves 74.3 % mAP on VOC2007 test at 59 FPS on a Nvidia Titan X and for \\(512 \\times 512\\) input, SSD achieves 76.9 % mAP, outperforming a comparable state of the art Faster R-CNN model. Compared to other single stage methods, SSD has much better accuracy even with a smaller input image size. 
Code is available at https://github.com/weiliu89/caffe/tree/ssd."}, "keywords": ["computer vision field"], "citation_intent": "method"} {"citing_id": "2303.07080v1", "cited_id": "1704.04861", "section_title": "Introduction", "citation": "Efficient structural design is a challenge in research, with introduction of the separated convolution #REFR proposed as an effective technique.", "text_before_citation": ["Since the introduction of AlexNet #OTHEREFR , there has been an exponential increase in the number of exceptional convolutional neural networks proposed, resulting in promising outcomes for a variety of visual tasks #OTHEREFR .", "Despite the remarkable results, deploying CNN models on embedded or mobile devices proves challenging as it poses an immense burden on computation and memory storage.", "To address this issue, a significant amount of research has been dedicated to reducing associated costs, thereby making CNN models more practical for real-world applications.", "Broadly speaking, this line of research can be categorized into three distinct areas: efficient structure design, network pruning, and network quantization."], "text_after_citation": ["This method factorizes the standard convolution into a depthwise and pointwise convolution, reducing computation.", "Successful examples of its use in efficient networks include MobileNets #OTHEREFR and ShuffleNets #OTHEREFR .", "These networks are widely used on resource-constrained devices and have shown promise in practical applications.", "Besides that, various pruning strategies #OTHEREFR have also been proposed to reduce both the computational and storage burdens.", "However, these methods often incur accuracy degradation, making them less attractive for practical applications."], "citing_paper_content": {"title": "Bag Of Tricks With Quantized Convolutional Neural Networks For Image Classification", "abstract": "Deep neural networks have been proven effective in a wide range of tasks. 
However, their high computational and memory costs make them impractical to deploy on resource-constrained devices. To address this issue, quantization schemes have been proposed to reduce the memory footprint and improve inference speed. While numerous quantization methods have been proposed, they lack systematic analysis for their effectiveness. To bridge this gap, we collect and improve existing quantization methods and propose a gold guideline for post-training quantization. We evaluate the effectiveness of our proposed method with two popular models, ResNet50 and MobileNetV2, on the ImageNet dataset. By following our guidelines, no accuracy degradation occurs even after directly quantizing the model to 8-bits without additional training. A quantization-aware training based on the guidelines can further improve the accuracy in lower-bits quantization. Moreover, we have integrated a multi-stage fine-tuning strategy that works harmoniously with existing pruning techniques to reduce cost even further. Remarkably, our results reveal that a quantized MobileNetV2 with 30% sparsity actually surpasses the performance of the equivalent full-precision model, underscoring the effectiveness and resilience of our proposed scheme."}, "cited_paper_content": {"title": "Mobilenets: Efficient Convolutional Neural Networks For Mobile Vision Applications", "abstract": "We present a class of efficient models called MobileNets for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build light weight deep neural networks. We introduce two simple global hyper-parameters that efficiently trade off between latency and accuracy. These hyper-parameters allow the model builder to choose the right sized model for their application based on the constraints of the problem.
We present extensive experiments on resource and accuracy tradeoffs and show strong performance compared to other popular models on ImageNet classification. We then demonstrate the effectiveness of MobileNets across a wide range of applications and use cases including object detection, finegrain classification, face attributes and large scale geo-localization."}, "keywords": ["separated convolution"], "citation_intent": "method"} {"citing_id": "2304.02878v1", "cited_id": "1905.11968", "section_title": "=1", "citation": "Instead, #REFR generalizes ideas from NCBC and proposes an algorithm that selects the functional Steiner point of the work function.", "text_before_citation": ["where OPT is the offline optimal total path length.", "There are many works that combine the Steiner point algorithm for NCBC with existing control methods to perform learning-based online control for LTI systems, e.g., #OTHEREFR , #OTHEREFR , #OTHEREFR .", "2) General CBC: For general CBC problems, we can no longer take advantage of the nested property of the convex bodies.", "One may consider naively applying NCBC algorithms when the convex bodies happen to be nested and restarting the NCBC algorithm when they are not.", "However, due to the myopic nature of NCBC algorithms, which try to remain deep inside of each convex set, they no longer guarantee a competitive ratio when used this way."], "text_after_citation": [], "citing_paper_content": {"title": "Online Stabilization Of Unknown Linear Time-Varying Systems", "abstract": "This paper studies the problem of online stabilization of an unknown discrete-time linear time-varying (LTV) system under bounded non-stochastic (potentially adversarial) disturbances. We propose a novel algorithm based on convex body chasing (CBC). Under the assumption of infrequently changing or slowly drifting dynamics, the algorithm guarantees bounded-input-bounded-output stability in the closed loop. 
Our approach avoids system identification and applies, with minimal disturbance assumptions, to a variety of LTV systems of practical importance. We demonstrate the algorithm numerically on examples of LTV systems including Markov linear jump systems with finitely many jumps."}, "cited_paper_content": {"title": "Chasing Convex Bodies Optimally", "abstract": "In the chasing convex bodies problem, an online player receives a request sequence of $N$ convex sets $K_1,\\dots, K_N$ contained in a normed space $\\mathbb R^d$. The player starts at $x_0\\in \\mathbb R^d$, and after observing each $K_n$ picks a new point $x_n\\in K_n$. At each step the player pays a movement cost of $||x_n-x_{n-1}||$. The player aims to maintain a constant competitive ratio against the minimum cost possible in hindsight, i.e. knowing all requests in advance. The existence of a finite competitive ratio for convex body chasing was first conjectured in 1991 by Friedman and Linial. This conjecture was recently resolved with an exponential $2^{O(d)}$ upper bound on the competitive ratio. ::: In this paper, we drastically improve the exponential upper bound. We give an algorithm achieving competitive ratio $d$ for arbitrary normed spaces, which is exactly tight for $\\ell^{\\infty}$. In Euclidean space, our algorithm achieves nearly optimal competitive ratio $O(\\sqrt{d\\log N})$, compared to a lower bound of $\\sqrt{d}$. Our approach extends another recent work which chases nested convex bodies using the classical Steiner point of a convex body. 
We define the functional Steiner point of a convex function and apply it to the work function to obtain our algorithm."}, "keywords": ["functional Steiner point"], "citation_intent": "background"} {"citing_id": "2303.13434v1", "cited_id": "1702.08811", "section_title": "Related Work", "citation": "In addition, the central moment discrepancy (CMD) loss #REFR and maximum density divergence (MDD) loss [16] are also proposed to align the feature distributions.", "text_before_citation": ["Unsupervised Domain Adaptation.", "The prevailing UDA methods focus on domain alignment and learning discriminative domain-invariant features via metric learning, domain adversarial training, and optimal transport.", "Firstly, the metric learning-based methods aim to reduce the domain discrepancy by learning the domain-invariant feature representations using various metrics.", "For instance, some methods [14, 25, 26, #OTHEREFR use the maximum mean discrepancy (MMD) loss to measure the divergence between different domains."], "text_after_citation": ["Secondly, the domain adversarial training methods learn the domaininvariant representations to encourage samples from different domains to be non-discriminative with respect to the domain labels via an adversarial loss [13, #OTHEREFR .", "The third type of approach aims to minimize the cost transported from the source to the target distribution by finding an optimal coupling cost to mitigate the domain shift [6, 7] .", "Unfortunately, these methods are not robust enough for the noisy pseudo target labels for accurate domain alignment.", "Different from these mainstream UDA methods and [2] , we interpret the process of UDA as a min-max CE game and find the Nash Equilibria for domain alignment with an intermediate domain and a pure ViT-based solution. 
Mixup.", "It is an effective data augmentation technique to prevent models from over-fitting by linearly interpolating two input data."], "citing_paper_content": {"title": "Patch-Mix Transformer For Unsupervised Domain Adaptation: A Game Perspective", "abstract": "Endeavors have been recently made to leverage the vision transformer (ViT) for the challenging unsupervised domain adaptation (UDA) task. They typically adopt the cross-attention in ViT for direct domain alignment. However, as the performance of cross-attention highly relies on the quality of pseudo labels for targeted samples, it becomes less effective when the domain gap becomes large. We solve this problem from a game theory's perspective with the proposed model dubbed as PMTrans, which bridges source and target domains with an intermediate domain. Specifically, we propose a novel ViT-based module called PatchMix that effectively builds up the intermediate domain, i.e., probability distribution, by learning to sample patches from both domains based on the game-theoretical models. This way, it learns to mix the patches from the source and target domains to maximize the cross entropy (CE), while exploiting two semi-supervised mixup losses in the feature and label spaces to minimize it. As such, we interpret the process of UDA as a min-max CE game with three players, including the feature extractor, classifier, and PatchMix, to find the Nash Equilibria. Moreover, we leverage attention maps from ViT to re-weight the label of each patch by its importance, making it possible to obtain more domain-discriminative feature representations. We conduct extensive experiments on four benchmark datasets, and the results show that PM-Trans significantly surpasses the ViT-based and CNN-based SoTA methods by +3.6% on Office-Home, +1.4% on Office-31, and +17.7% on DomainNet, respectively. 
https://vlis2022.github.io/cvpr23/PMTrans"}, "cited_paper_content": {"title": "Central Moment Discrepancy (Cmd) For Domain-Invariant Representation Learning", "abstract": "The learning of domain-invariant representations in the context of domain adaptation with neural networks is considered. We propose a new regularization method that minimizes the discrepancy between domain-specific latent feature representations directly in the hidden activation space. Although some standard distribution matching approaches exist that can be interpreted as the matching of weighted sums of moments, e.g. Maximum Mean Discrepancy (MMD), an explicit order-wise matching of higher order moments has not been considered before. We propose to match the higher order central moments of probability distributions by means of order-wise moment differences. Our model does not require computationally expensive distance and kernel matrix computations. We utilize the equivalent representation of probability distributions by moment sequences to define a new distance function, called Central Moment Discrepancy (CMD). We prove that CMD is a metric on the set of probability distributions on a compact interval. We further prove that convergence of probability distributions on compact intervals w.r.t. the new metric implies convergence in distribution of the respective random variables. We test our approach on two different benchmark data sets for object recognition (Office) and sentiment analysis of product reviews (Amazon reviews). CMD achieves a new state-of-the-art performance on most domain adaptation tasks of Office and outperforms networks trained with MMD, Variational Fair Autoencoders and Domain Adversarial Neural Networks on Amazon reviews. In addition, a post-hoc parameter sensitivity analysis shows that the new approach is stable w.r.t. parameter changes in a certain interval.
The source code of the experiments is publicly available."}, "keywords": ["feature distributions"], "citation_intent": "method"} {"citing_id": "2304.05216v1", "cited_id": "1902.00751", "section_title": "Accelerating The Fine-Tuning Process", "citation": "For example, Houlsby et al. #REFR design some adapters with two orders of magnitude fewer parameters to fine-tune compared to full models and achieve similar performance with fine-tuning all parameters of the pre-trained model.", "text_before_citation": ["There are many studies on accelerating the fine-tuning process #OTHEREFR . These studies can be roughly categorized into two categories.", "The first is to use the knowledge distillation technique to compress large-scale pre-trained language models #OTHEREFR .", "For example, Jiao et al.", "#OTHEREFR propose TinyBERT to distill BERT and only use about 28% of the parameters for natural language understanding.", "The second is the adapter-based fine-tuning approach #OTHEREFR , where adapters are new trainable modules added between layers of pre-trained models."], "text_after_citation": ["In addition, there are some studies on efficient neural network training from scratch with layer freezing #OTHEREFR . For example, Wang et al.", "#OTHEREFR leverage the knowledge from a reference model to accurately evaluate individual layers' training plasticity, freeze the converged ones and unfreeze the frozen layers to continue training.", "Our study could motivate researchers to come up with more efficient fine-tuning approaches."], "citing_paper_content": {"title": "Towards Efficient Fine-Tuning Of Pre-Trained Code Models: An Experimental Study And Beyond", "abstract": "Recently, fine-tuning pre-trained code models such as CodeBERT on downstream tasks has achieved great success in many software testing and analysis tasks. While effective and prevalent, fine-tuning the pre-trained parameters incurs a large computational cost.
In this paper, we conduct an extensive experimental study to explore what happens to layer-wise pre-trained representations and their encoded code knowledge during fine-tuning. We then propose efficient alternatives to fine-tune the large pre-trained code model based on the above findings. Our experimental study shows that (1) lexical, syntactic and structural properties of source code are encoded in the lower, intermediate, and higher layers, respectively, while the semantic property spans across the entire model. (2) The process of fine-tuning preserves most of the code properties. Specifically, the basic code properties captured by lower and intermediate layers are still preserved during fine-tuning. Furthermore, we find that only the representations of the top two layers change most during fine-tuning for various downstream tasks. (3) Based on the above findings, we propose Telly to efficiently fine-tune pre-trained code models via layer freezing. The extensive experimental results on five various downstream tasks demonstrate that training parameters and the corresponding time cost are greatly reduced, while performances are similar or better. CCS CONCEPTS \u2022 Software and its engineering \u2192 Software development techniques; Reusability."}, "cited_paper_content": {"title": "Parameter-Efficient Transfer Learning For Nlp", "abstract": "Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. 
To demonstrate adapter's effectiveness, we transfer the recently proposed BERT Transformer model to 26 diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task. On GLUE, we attain within 0.4% of the performance of full fine-tuning, adding only 3.6% parameters per task. By contrast, fine-tuning trains 100% of the parameters per task."}, "keywords": ["fine-tuning"], "citation_intent": "background"} {"citing_id": "2304.09214v1", "cited_id": "1911.08251", "section_title": "Experimental Setup", "citation": "The architectures are inspired by the work of #REFR and are presented in a generic fashion in Table 2 .", "text_before_citation": ["In this work, we use again the implementation provided by the authors of E(2)-CNNs, who re-implement HNets in their own framework, for convenience.", "\u2022 Regarding our B-CNNs, four setups are considered to achieve SO(2) or O(2), with or without scale invariance (denoted by the presence or not of "+" in our tables and figures), with the computation of k_max as described by Equation (4.7).", "Another setup for SO(2) invariance with a stronger cutoff frequency, that corresponds to half the initial k_max , is also considered.", "This last setup is motivated by the empirical observation that it often leads to better performance.", "\u2022 Finally, a vanilla CNN with the same architecture as for the other methods, as well as a ResNet-18 #OTHEREFR are also trained for reference."], "text_after_citation": ["Note that the size of the filters is larger than conventional sizes in CNNs.", "The reason why it is preferable to increase the size of the filters in those cases is explained in Section 6.3.", "The same template architecture is used for all the methods (except for the ResNet-18 architecture that is kept unmodified) and data sets.
Nonetheless, minor modifications are sometimes performed.", "Firstly, the number of filters in each convolutional layer should be adapted from one method to another, in order to keep the same number of trainable parameters.", "To do so, a parameter \u03bb is introduced to manually scale the number of filters and guarantee the same number of trainable parameters for all the methods."], "citing_paper_content": {"title": "So(2) And O(2) Equivariance In Image Recognition With Bessel-Convolutional Neural Networks", "abstract": "For many years, it has been shown how much exploiting equivariances can be beneficial when solving image analysis tasks. For example, the superiority of convolutional neural networks (CNNs) compared to dense networks mainly comes from an elegant exploitation of the translation equivariance. Patterns can appear at arbitrary positions and convolutions take this into account to achieve translation invariant operations through weight sharing. Nevertheless, images often involve other symmetries that can also be exploited. It is the case of rotations and reflections that have drawn particular attention and led to the development of multiple equivariant CNN architectures. Among all these methods, Bessel-convolutional neural networks (B-CNNs) exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters and make it by design equivariant to all the continuous set of planar rotations. In this work, the mathematical developments of B-CNNs are presented along with several improvements, including the incorporation of reflection and multi-scale equivariances. Extensive study is carried out to assess the performances of B-CNNs compared to other methods. 
Finally, we emphasize the theoretical advantages of B-CNNs by giving more insights and in-depth mathematical details."}, "cited_paper_content": {"title": "General $E(2)$-Equivariant Steerable Cnns", "abstract": "The big empirical success of group equivariant networks has led in recent years to the sprouting of a great variety of equivariant network architectures. A particular focus has thereby been on rotation and reflection equivariant CNNs for planar images. Here we give a general description of $E(2)$-equivariant convolutions in the framework of Steerable CNNs. The theory of Steerable CNNs thereby yields constraints on the convolution kernels which depend on group representations describing the transformation laws of feature spaces. We show that these constraints for arbitrary group representations can be reduced to constraints under irreducible representations. A general solution of the kernel space constraint is given for arbitrary representations of the Euclidean group $E(2)$ and its subgroups. We implement a wide range of previously proposed and entirely new equivariant network architectures and extensively compare their performances. 
$E(2)$-steerable convolutions are further shown to yield remarkable gains on CIFAR-10, CIFAR-100 and STL-10 when used as a drop-in replacement for non-equivariant convolutions."}, "keywords": ["architectures", "generic fashion"], "citation_intent": "method"} {"citing_id": "2303.15742v2", "cited_id": "1908.09216", "section_title": "Experiments", "citation": "We compare our method with the existing dynamic pose estimation networks, i.e., DKD #REFR and Skip-Convolution [12] .", "text_before_citation": ["A trace of models' accuracy and delay for online video-based action recognition (left) and pose estimation (right).", "In the areas masked by the gray color, we can observe a rapid increase of the baseline model's delay (see the second row), which corresponds to the high-load status of the system.", "joints.", "Here, we consider three (i.e., m = 3) resolution candidates: [128, 192, 256] and follow the work of #OTHEREFR to use 3 deconvolutional layers to build the task head for pose estimation.", "Moreover, we replace LSTM with a ConvLSTM #OTHEREFR to accommodate 2D feature maps."], "text_after_citation": ["Similarly, we re-implement DKD and Skip-Convolutions as there is no publicly available code.
Experiment settings.", "To build dynamic systems with varying system status, we design multiple background processes (i.e., matrix calculators, video compressors, and large deep learning models) to occupy various amounts of computational resources.", "We can then generate dynamic system load trajectories using these processes to simulate dynamic system load environments.", "During model training, we randomly generate dynamic system trajectories in each iteration to simulate various dynamic environments to train our SAN.", "During testing, for fair comparisons, we first generate 3 dynamic trajectories unseen in training, and then run all the evaluation experiments on these 3 fixed trajectories and report the average result."], "citing_paper_content": {"title": "System-Status-Aware Adaptive Network For Online Streaming Video Understanding", "abstract": "Recent years have witnessed great progress in deep neural networks for real-time applications. However, most existing works do not explicitly consider the general case where the device's state and the available resources fluctuate over time, and none of them investigate or address the impact of varying computational resources for online video understanding tasks. This paper proposes a System-status-aware Adaptive Network (SAN) that considers the device's real-time state to provide high-quality predictions with low delay. Usage of our agent's policy improves efficiency and robustness to fluctuations of the system status. On two widely used video understanding tasks, SAN obtains state-of-the-art performance while constantly keeping processing delays low. Moreover, training such an agent on various types of hardware configurations is not easy as the labeled training data might not be available, or can be computationally prohibitive.
To address this challenging problem, we propose a Meta Self-supervised Adaptation (MSA) method that adapts the agent's policy to new hardware configurations at test-time, allowing for easy deployment of the model onto other unseen hardware platforms."}, "cited_paper_content": {"title": "Dynamic Kernel Distillation For Efficient Pose Estimation In Videos", "abstract": "Existing video-based human pose estimation methods extensively apply large networks onto every frame in the video to localize body joints, which suffer high computational cost and hardly meet the low-latency requirement in realistic applications. To address this issue, we propose a novel Dynamic Kernel Distillation (DKD) model to facilitate small networks for estimating human poses in videos, thus significantly lifting the efficiency. In particular, DKD introduces a light-weight distillator to online distill pose kernels via leveraging temporal cues from the previous frame in a one-shot feed-forward manner. Then, DKD simplifies body joint localization into a matching procedure between the pose kernels and the current frame, which can be efficiently computed via simple convolution. In this way, DKD fast transfers pose knowledge from one frame to provide compact guidance for body joint localization in the following frame, which enables utilization of small networks in video-based pose estimation. To facilitate the training process, DKD exploits a temporally adversarial training strategy that introduces a temporal discriminator to help generate temporally coherent pose kernels and pose estimation results within a long range. Experiments on Penn Action and Sub-JHMDB benchmarks demonstrate outperforming efficiency of DKD, specifically, 10x flops reduction and 2x speedup over previous best model, and its state-of-the-art accuracy."}, "keywords": ["estimation networks"], "citation_intent": "method"} {"citing_id": "2304.14484v1", "cited_id": "1612.00496", "section_title": "Vi. 
Conclusion", "citation": "In this project, we have successfully replicated the results from the original paper #REFR in terms of extracting the 3D bounding boxes from a single view for three different dataset categories.", "text_before_citation": [], "text_after_citation": ["We then extended the 3D bounding box accuracies, using the light-weight MobileNet-v2 and EfficientNet-v2 feature extractors, and later a similar performance boost is also observed with our modified multibin architecture.", "As a result, we conclude that MobileNet-v2 and EfficientNet-v2 outperform VGG-19 architecture in 3D bounding box estimation, from a single view of an RGB image."], "citing_paper_content": {"title": "Oricon3D: Effective 3D Object Detection Using Orientation And Confidence", "abstract": "We introduce a technique for detecting 3D objects and estimating their position from a single image. Our method is built on top of a similar state-of-the-art technique [1], but with improved accuracy. The approach followed in this research first estimates common 3D properties of an object using a Deep Convolutional Neural Network (DCNN), contrary to other frameworks that only leverage centre-point predictions. We then combine these estimates with geometric constraints provided by a 2D bounding box to produce a complete 3D bounding box. The first output of our network estimates the 3D object orientation using a discrete-continuous loss [1]. The second output predicts the 3D object dimensions with minimal variance. Here we also present our extensions by augmenting lightweight feature extractors and a customized multibin architecture. 
By combining these estimates with the geometric constraints of the 2D bounding box, we can accurately (or comparatively) determine the 3D object pose better than our baseline [1] on the KITTI 3D detection benchmark [2]."}, "cited_paper_content": {"title": "3D Bounding Box Estimation Using Deep Learning And Geometry", "abstract": "We present a method for 3D object detection and pose estimation from a single image. In contrast to current techniques that only regress the 3D orientation of an object, our method first regresses relatively stable 3D object properties using a deep convolutional neural network and then combines these estimates with geometric constraints provided by a 2D object bounding box to produce a complete 3D bounding box. The first network output estimates the 3D object orientation using a novel hybrid discrete-continuous loss, which significantly outperforms the L2 loss. The second output regresses the 3D object dimensions, which have relatively little variance compared to alternatives and can often be predicted for many object types. These estimates, combined with the geometric constraints on translation imposed by the 2D bounding box, enable us to recover a stable and accurate 3D object pose. We evaluate our method on the challenging KITTI object detection benchmark [2] both on the official metric of 3D orientation estimation and also on the accuracy of the obtained 3D bounding boxes. Although conceptually simple, our method outperforms more complex and computationally expensive approaches that leverage semantic segmentation, instance level segmentation and flat ground priors [4] and sub-category detection [23][24]. 
Our discrete-continuous loss also produces state of the art results for 3D viewpoint estimation on the Pascal 3D+ dataset[26]."}, "keywords": ["3D bounding boxes"], "citation_intent": "result"} {"citing_id": "2303.12255v1", "cited_id": "1612.00796", "section_title": "Background And Related Work", "citation": "When the model is updated to learn the current task, we observe catastrophic forgetting #REFR in the model due to deviation from the optimal parameters for the past learned tasks.", "text_before_citation": ["The reconstruction prior used in VAEs is a local low-variance Bernoulli distribution.", "This requirement can be relaxed to more global distributions seen in GAN [7] , or distributed throughout the full model as in diffusion models #OTHEREFR .", "Our method employs ideas of (i) and (iii).", "When used on a unimodal Gaussian VAE, it is a GMVAE with 2^d components, with deterministic assignment conditioned on the approximate posterior, and unimodal Gaussian reparameterization to distribute the mass in the latent embedding.", "Continual Learning aims to learn a set of sequentially arriving tasks using a shared model."], "text_after_citation": ["CL algorithms employ three primary strategies to mitigate catastrophic forgetting.", "One approach involves regularizing a fixed shared model, such that different information pathways are used to learn each task's weights #OTHEREFR .", "The aim is to identify critical model parameters that encode the learned knowledge of each task and consolidate these parameters while updating the model to learn new tasks.", "The downside is that the learning capacity of the model is compromised as more weights are consolidated.", "Another approach relies on model expansion #OTHEREFR , which involves adding new weights to a base model and customizing the network to learn new tasks via these additional weights."], "citing_paper_content": {"title": "Encoding Binary Concepts In The Latent Space Of Generative Models For Enhancing Data
Representation", "abstract": "Binary concepts are empirically used by humans to generalize efficiently. And they are based on Bernoulli distribution which is the building block of information. These concepts span both low-level and high-level features such as "large vs small" and "a neuron is active or inactive". Binary concepts are ubiquitous features and can be used to transfer knowledge to improve model generalization. We propose a novel binarized regularization to facilitate learning of binary concepts to improve the quality of data generation in autoencoders. We introduce a binarizing hyperparameter r in the data generation process to disentangle the latent space symmetrically. We demonstrate that this method can be applied easily to existing variational autoencoder (VAE) variants to encourage symmetric disentanglement, improve reconstruction quality, and prevent posterior collapse without computation overhead. We also demonstrate that this method can boost existing models to learn more transferable representations and generate more representative samples for the input distribution which can alleviate catastrophic forgetting using generative replay under continual learning settings."}, "cited_paper_content": {"title": "Overcoming Catastrophic Forgetting In Neural Networks", "abstract": "The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks.
We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially."}, "keywords": ["past learned tasks", "catastrophic forgetting"], "citation_intent": "background"} {"citing_id": "2304.10828v1", "cited_id": "2002.07738", "section_title": "Introduction", "citation": "Consequently, albeit being defined on the full input space and over a fairness similarity metric, because of its worst-case nature, IF has been linked to adversarial robustness #REFR .", "text_before_citation": ["Several calls for caution have, however, recently been raised about their deployment in tasks where fairness is of concern #OTHEREFR .", "In fact, Neural Networks (NNs) have been found to reinforce negative biases from sensitive datasets #OTHEREFR , discriminating against individuals on the basis of attributes such as gender or race.", "To address this, research efforts have been directed at both measuring the fairness of NNs, their de-biased training, as well as defining precise notions of fairness.", "Given a BNN f and a similarity metric between individuals d_fair , which encodes a task-dependent notion of similarity #OTHEREFR , Individual Fairness (IF) enforces that all pairs of similar individuals in the input space get treated similarly by f #OTHEREFR .", "As opposed to the statistical nature of group fairness #OTHEREFR , IF aims at computing worst-case bias measures over a model input space."], "text_after_citation": ["As it has recently been shown that Bayesian Neural Networks (BNNs) have a tendency to be less fragile to adversarial attacks than their frequentist counterparts #OTHEREFR , it is natural to wonder whether approximate Bayesian inference may also have a positive impact over the IF of a neural network.", "However, to the best of our knowledge, no work has been conducted along these lines of inquiry.", "In this paper, we investigate the IF of BNNs and
empirically evaluate it on various benchmarks.", "While exact computations of IF in BNNs are infeasible due to their non-convexity, we exploit the relationship between IF and adversarial robustness #OTHEREFR to develop a framework for the adaptation of adversarial attack methods for IF.", "In particular, we explicitly instantiate Fair-FGSM and Fair-PGD as extensions of their corresponding adversarial attacks #OTHEREFR by employing gradient step modifications and projections specific to d_fair metrics commonly used in the fairness literature."], "citing_paper_content": {"title": "Individual Fairness In Bayesian Neural Networks", "abstract": "We study Individual Fairness (IF) for Bayesian neural networks (BNNs). Specifically, we consider the \u03b5-\u03b4-individual fairness notion, which requires that, for any pair of input points that are \u03b5-similar according to a given similarity metric, the output of the BNN is within a given tolerance \u03b4 > 0. We leverage bounds on statistical sampling over the input space and the relationship between adversarial robustness and individual fairness to derive a framework for the systematic estimation of \u03b5-\u03b4-IF, designing Fair-FGSM and Fair-PGD as global, fairness-aware extensions to gradient-based attacks for BNNs. We empirically study IF of a variety of approximately inferred BNNs with different architectures on fairness benchmarks, and compare against deterministic models learnt using frequentist techniques. Interestingly, we find that BNNs trained by means of approximate Bayesian inference consistently tend to be markedly more individually fair than their deterministic counterparts."}, "cited_paper_content": {"title": "Individual Fairness Revisited: Transferring Techniques From Adversarial Robustness", "abstract": "We turn the definition of individual fairness on its head---rather than ascertaining the fairness of a model given a predetermined metric, we find a metric for a given model that satisfies individual fairness.
This can facilitate the discussion on the fairness of a model, addressing the issue that it may be difficult to specify a priori a suitable metric. Our contributions are twofold: First, we introduce the definition of a minimal metric and characterize the behavior of models in terms of minimal metrics. Second, for more complicated models, we apply the mechanism of randomized smoothing from adversarial robustness to make them individually fair under a given weighted $L^p$ metric. Our experiments show that adapting the minimal metrics of linear models to more complicated neural networks can lead to meaningful and interpretable fairness guarantees at little cost to utility."}, "keywords": ["fairness similarity metric", "adversarial robustness"], "citation_intent": "background"} {"citing_id": "2304.02693v1", "cited_id": "1711.09856", "section_title": "5.2.2.", "citation": "Overall, HRNet is the most robust against the PGD and CR-PGD attacks, in that it has the smallest PixAcc drop when the perturbation budget increases. On the other hand, PSPNet is the most vulnerable. We note that #REFR has similar observations. Running time comparison.", "text_before_citation": ["This is because neighboring pixels with small certified radii can easily affect each other, which naturally forms the groups in the certified radius map (also see Figure 1 (i)).", "Figure 5 verifies this intuition, where we test 10 random testing images in Pascal VOC.", "We can see that a majority of the perturbations in CR-PGD are assigned to the pixels with relatively smaller certified radii, in order to wrongly predict more pixels.", "In contrast, most of the perturbations in PGD are assigned to the pixels with relatively larger certified radii.", "As wrongly predicting these pixels requires a larger perturbation, PGD misclassifies much fewer pixels than our CR-PGD.
Different models have different robustness."], "text_after_citation": ["Over all testing images in the three datasets, the average time of CR-PGD is 4.0 seconds, while that of PGD is 3.6 seconds. The overhead of CR-PGD over PGD is 11%."], "citing_paper_content": {"title": "A Certified Radius-Guided Attack Framework To Image Segmentation Models", "abstract": "Image segmentation is an important problem in many safety-critical applications such as medical imaging and autonomous driving. Recent studies show that modern image segmentation models are vulnerable to adversarial perturbations, while existing attack methods mainly follow the idea of attacking image classification models. We argue that image segmentation and classification have inherent differences, and design an attack framework specially for image segmentation models. Our goal is to thoroughly explore the vulnerabilities of modern segmentation models, i.e., aiming to misclassify as many pixels as possible under a perturbation budget in both white-box and black-box settings. Our attack framework is inspired by certified radius, which was originally used by defenders to defend against adversarial perturbations to classification models. We are the first, from the attacker perspective, to leverage the properties of certified radius and propose a certified radius-guided attack framework against image segmentation models. Specifically, we first adapt randomized smoothing, the state-of-the-art certification method for classification models, to derive the pixel's certified radius. A larger certified radius of a pixel means the pixel is theoretically more robust to adversarial perturbations. This observation inspires us to focus more on disrupting pixels with relatively smaller certified radii. Accordingly, we design a pixel-wise certified radius-guided loss which, when plugged into any existing white-box attack, yields our certified radius-guided white-box attack.
Next, we propose the first black-box attack on image segmentation models via bandits. A key challenge is that no gradient information is available. To address it, we design a novel gradient estimator, based on bandit feedback, which is query-efficient and provably unbiased and stable. We use this gradient estimator to design a projected bandit gradient descent (PBGD) attack. We further use pixels' certified radii and design a certified radius-guided PBGD (CR-PBGD) attack. We prove our PBGD and CR-PBGD attacks can achieve asymptotically optimal attack performance with an optimal rate. We evaluate our certified radius-guided white-box and black-box attacks on multiple modern image segmentation models and datasets. Our results validate the effectiveness of our certified radius-guided attack framework."}, "cited_paper_content": {"title": "On The Robustness Of Semantic Segmentation Models To Adversarial Attacks", "abstract": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task.
Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness."}, "keywords": ["CR-PGD attacks"], "citation_intent": "result"} {"citing_id": "2303.14771v1", "cited_id": "1908.04742", "section_title": "Evaluations On Task-Incremental Setting", "citation": "We evaluate Split-CIFAR100, Split-MiniImageNet, and ImageNet32 using the protocol from #REFR with 100 training epochs per task.", "text_before_citation": [], "text_after_citation": ["We report the mean and standard error over 3 runs.", "Split-CIFAR100 and Split-MiniImageNet: We consider Split-CIFAR100 and Split-MiniImageNet with 20 tasks of 5 classes each.", "The results can be found in Figure 2, for Split-CIFAR100 and Split-MiniImageNet, using different buffer sizes for ER.", "In this setting, we can observe that our Figure 4. Class-incremental accuracy on 20-Task Split-CIFAR100 (left) and Split-MiniImageNet (right).", "We observe that PRD outperforms not only other replay-free baselines but also ER, M=5, and is on par with ER, M=20, without storing any data."], "citing_paper_content": {"title": "Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning", "abstract": "In Continual learning (CL), balancing effective adaptation while combating catastrophic forgetting is a central challenge. Many of the recent best-performing methods utilize various forms of prior task data, e.g. a replay buffer, to tackle the catastrophic forgetting problem. Having access to previous task data can be restrictive in many real-world scenarios, for example when task data is sensitive or proprietary.
To overcome the necessity of using previous tasks' data, in this work, we start with strong representation learning methods that have been shown to be less prone to forgetting. We propose a holistic approach to jointly learn the representation and class prototypes while maintaining the relevance of old class prototypes and their embedded similarities. Specifically, samples are mapped to an embedding space where the representations are learned using a supervised contrastive loss. Class prototypes are evolved continually in the same latent space, enabling learning and prediction at any point. To continually adapt the prototypes without keeping any prior task data, we propose a novel distillation loss that constrains class prototypes to maintain relative similarities as compared to new task data. This method yields state-of-the-art performance in the task-incremental setting, where we are able to outperform both other methods that use no data and approaches relying on large amounts of data. Our method is also shown to provide strong performance in the class-incremental setting without using any stored data points."}, "cited_paper_content": {"title": "Online Continual Learning With Maximally Interfered Retrieval", "abstract": "Continual learning, the setting where a learning agent is faced with a never-ending stream of data, continues to be a great challenge for modern machine learning systems. In particular the online or \"single-pass through the data\" setting has gained attention recently as a natural setting that is difficult to tackle. Methods based on replay, either generative or from a stored memory, have been shown to be effective approaches for continual learning, matching or exceeding the state of the art in a number of standard benchmarks. These approaches typically rely on randomly selecting samples from the replay memory or from a generative model, which is suboptimal. In this work, we consider a controlled sampling of memories for replay.
We retrieve the samples which are most interfered, i.e. whose prediction will be most negatively impacted by the foreseen parameter update. We show a formulation for this sampling criterion in both the generative replay and the experience replay setting, producing consistent gains in performance and greatly reduced forgetting. We release an implementation of our method at https://github.com/optimass/Maximally_Interfered_Retrieval."}, "keywords": ["100 training epochs"], "citation_intent": "method"} {"citing_id": "2304.03191v1", "cited_id": "1504.05477", "section_title": "Sharper Krylov Subspace Algorithms", "citation": "We do this by explicitly analyzing the Chebyshev polynomial (as opposed to a polynomial approximation to a threshold function in #REFR ) and demonstrate that the output of Algorithm 2.2 is at least as good as outputting the aforementioned vector v (see Lemma 7.7 for details).", "text_before_citation": ["A $(1 + \varepsilon)$ relative-error solution to the above cost corresponds to an additive $\varepsilon^{2/3}$ error.", "The standard analysis of Krylov iteration #OTHEREFR states that after $q = \log(n)/\sqrt{\zeta}$ iterations, for any $0 < \zeta < 1$, the algorithm outputs a vector $v$ such that $v^\top A^\top A v \geq \|A\|_{op}^2 - \zeta\sigma_2^2$.
By the Pythagorean theorem,", "$\|A(I - vv^\top)\|_F^2 = \|A\|_F^2 - \|Av\|_2^2 \leq \min_{\|u\|_2=1} \|A(I - uu^\top)\|_F^2 + \sigma_2^2\zeta$.", "Since $\sigma_2^2 \leq 1$, it suffices to set $\zeta = \varepsilon^{2/3}$ and thus $q = O(\log(n)/\varepsilon^{1/3})$ iterations suffice.", "We strengthen this analysis by showing that a significantly lower dimensional Krylov subspace (corresponding to $O(\log(1/\varepsilon)/\varepsilon^{1/3})$ iterations) spans a vector $v$ such that $\|A(I - vv^\top)\|_F^2 \leq \min_{\|u\|_2=1} \|A(I - uu^\top)\|_F^2 + \varepsilon^{2/3}$."], "text_after_citation": ["In the complementary case, we deviate significantly from any prior analysis of Krylov iteration, including #OTHEREFR .", "Here, we know that the number of singular values in the range $[1/2, 1 - \varepsilon]$ is at most $O(\varepsilon^{-1/3})$.", "We therefore construct an entirely different polynomial, which is no longer based on Chebyshev polynomials.", "This polynomial is designed to explicitly zero out all singular values in the range $[1/2, 1 - \varepsilon]$.", "We note that the degree of this polynomial, $p_0$, is only $O(\varepsilon^{-1/3})$, and it allows us to remove the contribution of all medium-sized singular values, similar to starting with a larger block size."], "citing_paper_content": {"title": "Krylov Methods Are (Nearly) Optimal For Low-Rank Approximation", "abstract": "We consider the problem of rank-1 low-rank approximation (LRA) in the matrix-vector product model under various Schatten norms: $\min_{\|u\|_2=1} \|A(I - uu^\top)\|_{S_p}$, where $\|M\|_{S_p}$ denotes the $\ell_p$ norm of the singular values of $M$. Given $\varepsilon > 0$, our goal is to output a unit vector $v$ such that $\|A(I - vv^\top)\|_{S_p} \leq (1 + \varepsilon) \min_{\|u\|_2=1} \|A(I - uu^\top)\|_{S_p}$. Our main result shows that Krylov methods (nearly) achieve the information-theoretically optimal number of matrix-vector products for Spectral ($p = \infty$), Frobenius ($p = 2$) and Nuclear ($p = 1$) LRA.
In particular, for Spectral LRA, we show that any algorithm requires $\Omega(\log(n)/\varepsilon^{1/2})$ matrix-vector products, exactly matching the upper bound obtained by Krylov methods [MM15]. Our lower bound addresses Open Question 1 in [Woo14], providing evidence for the lack of progress on algorithms for Spectral LRA, and resolves Open Question 1.2 in [BCW22]. Next, we show that for any fixed constant $p$, i.e. $1 \leq p = O(1)$, there is an upper bound of $O(\log(1/\varepsilon)/\varepsilon^{1/3})$ matrix-vector products, implying that the complexity does not grow as a function of input size. This improves the $O(\log(n/\varepsilon)/\varepsilon^{1/3})$ bound recently obtained in [BCW22], and matches their $\Omega(1/\varepsilon^{1/3})$ lower bound, up to a $\log(1/\varepsilon)$ factor."}, "cited_paper_content": {"title": "Randomized Block Krylov Methods For Stronger And Faster Approximate Singular Value Decomposition", "abstract": "Since being analyzed by Rokhlin, Szlam, and Tygert and popularized by Halko, Martinsson, and Tropp, randomized Simultaneous Power Iteration has become the method of choice for approximate singular value decomposition. It is more accurate than simpler sketching algorithms, yet still converges quickly for any matrix, independently of singular value gaps. After $\tilde{O}(1/\epsilon)$ iterations, it gives a low-rank approximation within $(1+\epsilon)$ of optimal for spectral norm error. We give the first provable runtime improvement on Simultaneous Iteration: a simple randomized block Krylov method, closely related to the classic Block Lanczos algorithm, gives the same guarantees in just $\tilde{O}(1/\sqrt{\epsilon})$ iterations and performs substantially better experimentally. Despite their long history, our analysis is the first of a Krylov subspace method that does not depend on singular value gaps, which are unreliable in practice.
Furthermore, while it is a simple accuracy benchmark, even $(1+\epsilon)$ error for spectral norm low-rank approximation does not imply that an algorithm returns high-quality principal components, a major issue for data applications. We address this problem for the first time by showing that both Block Krylov Iteration and a minor modification of Simultaneous Iteration give nearly optimal PCA for any matrix. This result further justifies their strength over non-iterative sketching methods. Finally, we give insight beyond the worst case, justifying why both algorithms can run much faster in practice than predicted. We clarify how simple techniques can take advantage of common matrix properties to significantly improve runtime."}, "keywords": ["polynomial approximation", "Algorithm"], "citation_intent": "background"} {"citing_id": "2304.14299v1", "cited_id": "1905.03244", "section_title": "Related Work", "citation": "To relax this heavy reliance on the parameter space, some approaches directly regress 3D positions of mesh vertices instead of predicting the model's parameters. Among these approaches, Kolotouros et al. #REFR and Hongsuk et al.", "text_before_citation": ["The fitted mesh is then used as a supervisory signal to train a feed-forward network with a mesh convolutional decoder. Spurr et al.", "#OTHEREFR introduce biomechanical constraints to guide the network to predict feasible hand poses with weakly-annotated real-world data.
Chen et al.", "#OTHEREFR use 2D joints extracted from an off-the-shelf 2D pose estimator as a supervisory signal to train a model-based autoencoder to estimate 3D hand pose and shape.", "However, similar to the model-based approaches, they do not exploit the correlation between joints and mesh vertices, yet our proposed AMVUR model addresses this issue and improves the feature representation of joints and vertices.", "Model-free Methods: Although hand parametric models such as MANO serve as a strong structural prior to support 3D hand reconstruction, help to handle severe occlusions and help to accommodate weakly-annotated data, approaches that rely on this can easily get stuck in the model's parameter space, resulting in a non-minimal representation problem #OTHEREFR .
Most recently, Hampali et al.", "#OTHEREFR first extract joint features by localizing them on CNN feature maps, then take these features and their spatial encodings as the input to a transformer model for 3D hand pose estimation.", "However, spatial encodings are ambiguous for describing joints' 3D locations, especially for overlapping 3D joints in 2D images.", "Different from the above approaches, in AMVUR, a cross-attention module is proposed to learn the correlation between joints and mesh vertices, followed by a self-attention module to learn the correlation between different vertices."], "citing_paper_content": {"title": "A Probabilistic Attention Model With Occlusion-Aware Texture Regression For 3D Hand Reconstruction From A Single Rgb Image", "abstract": "Recently, deep learning-based approaches have shown promising results in 3D hand reconstruction from a single RGB image. These approaches can be roughly divided into model-based approaches, which are heavily dependent on the model's parameter space, and model-free approaches, which require large numbers of 3D ground truths to reduce depth ambiguity and struggle in weakly-supervised scenarios. To overcome these issues, we propose a novel probabilistic model to achieve the robustness of model-based approaches and reduced dependence on the model's parameter space of model-free approaches. The proposed probabilistic model incorporates a model-based network as a prior-net to estimate the prior probability distribution of joints and vertices. An Attention-based Mesh Vertices Uncertainty Regression (AMVUR) model is proposed to capture dependencies among vertices and the correlation between joints and mesh vertices to improve their feature representation. We further propose a learning-based occlusion-aware Hand Texture Regression model to achieve high-fidelity texture reconstruction. We demonstrate the flexibility of the proposed probabilistic model to be trained in both supervised and weakly-supervised scenarios.
The experimental results demonstrate our probabilistic model's state-of-the-art accuracy in 3D hand and texture reconstruction from a single image in both training schemes, including in the presence of severe occlusions."}, "cited_paper_content": {"title": "Convolutional Mesh Regression For Single-Image Human Shape Reconstruction", "abstract": "This paper addresses the problem of 3D human pose and shape estimation from a single image. Previous approaches consider a parametric model of the human body, SMPL, and attempt to regress the model parameters that give rise to a mesh consistent with image evidence. This parameter regression has been a very challenging task, with model-based approaches underperforming compared to nonparametric solutions in terms of pose estimation. In our work, we propose to relax this heavy reliance on the model's parameter space. We still retain the topology of the SMPL template mesh, but instead of predicting model parameters, we directly regress the 3D location of the mesh vertices. This is a heavy task for a typical network, but our key insight is that the regression becomes significantly easier using a Graph-CNN. This architecture allows us to explicitly encode the template mesh structure within the network and leverage the spatial locality the mesh has to offer. Image-based features are attached to the mesh vertices and the Graph-CNN is responsible for processing them on the mesh structure, while the regression target for each vertex is its 3D location. Having recovered the complete 3D geometry of the mesh, if we still require a specific model parametrization, this can be reliably regressed from the vertex locations. We demonstrate the flexibility and the effectiveness of our proposed graph-based mesh regression by attaching different types of features on the mesh vertices.
In all cases, we outperform the comparable baselines relying on model parameter regression, while we also achieve state-of-the-art results among model-based pose estimation approaches."}, "keywords": ["mesh vertices"], "citation_intent": "method"} {"citing_id": "2303.17600v1", "cited_id": "1910.10897", "section_title": "The Stretch Pick-And-Place Benchmark", "citation": "With perfect execution, success can generally be achieved within 50 steps; this is similar to other short-horizon, continuous-space manipulation tasks #REFR. See Appendix B.1 and Table.", "text_before_citation": ["We also include proprioceptive sensors corresponding to the agent's arm position.", "The arm of a Stretch RE1 agent uses a telescoping mechanism to move forward and back, may move up and down, and the gripper has one rotational degree of freedom allowing for changes in yaw, see Figures 2 and 3 for third-person views of the Stretch robot.", "The robotic arm of the Stretch RE1 robot is orthogonal to the agent's forward and backward movement and so, to move the arm laterally, the agent must move its body in the forward and backward direction.", "To highlight the study of irreversible transitions in our benchmark, and so as not to add additional complexity to STRETCH-P&P, we restrict the robot body to not rotate in training, although the wrist of the agent may do so.", "The maximum rotation for the wrist is 2\u00b0 per step, and horizontal/vertical arm movement is limited to 5 cm per step."], "text_after_citation": ["B.1 for further details regarding the observation and action spaces."], "citing_paper_content": {"title": "When Learning Is Out Of Reach, Reset: Generalization In Autonomous Visuomotor Reinforcement Learning", "abstract": "Figure 1: Episodic, Reset-Free, and Reset-Minimizing RL. In standard (i.e. episodic) reinforcement learning (RL) agents have their environments reset after every success or failure, an expensive operation in the real world.
In Reset-Free RL (RF-RL), researchers have designed \"reset games\" which allow for learning so long as special care is taken to avoid irreversible transitions (e.g. an apple falling out of reach). We consider Reset-Minimizing RL (RM-RL) where in realistic and dynamic environments agents may request human interventions but should minimize these requests."}, "cited_paper_content": {"title": "Meta-World: A Benchmark And Evaluation For Multi-Task And Meta Reinforcement Learning", "abstract": "Meta-reinforcement learning algorithms can enable robots to acquire new skills much more quickly, by leveraging prior experience to learn how to learn. However, much of the current research on meta-reinforcement learning focuses on task distributions that are very narrow. For example, a commonly used meta-reinforcement learning benchmark uses different running velocities for a simulated robot as different tasks. When policies are meta-trained on such narrow task distributions, they cannot possibly generalize to more quickly acquire entirely new tasks. Therefore, if the aim of these methods is to enable faster acquisition of entirely new behaviors, we must evaluate them on task distributions that are sufficiently broad to enable generalization to new behaviors. In this paper, we propose an open-source simulated benchmark for meta-reinforcement learning and multi-task learning consisting of 50 distinct robotic manipulation tasks. Our aim is to make it possible to develop algorithms that generalize to accelerate the acquisition of entirely new, held-out tasks. We evaluate 6 state-of-the-art meta-reinforcement learning and multi-task learning algorithms on these tasks. Surprisingly, while each task and its variations (e.g., with different object positions) can be learned with reasonable success, these algorithms struggle to learn with multiple tasks at the same time, even with as few as ten distinct training tasks. 
Our analysis and open-source environments pave the way for future research in multi-task learning and meta-learning that can enable meaningful generalization, thereby unlocking the full potential of these methods."}, "keywords": ["short-horizon, continuousspace, manipulation", "tasks"], "citation_intent": "result"} {"citing_id": "2303.12710v2", "cited_id": "1705.04058", "section_title": "Introduction", "citation": "Artistic style transfer, as an efficient way to create a new painting by combining the content of a natural image and the style of an existing painting image, is a major research topic in computer graphics and computer vision #REFR .", "text_before_citation": ["If a picture is worth a thousand words, then an artwork may tell the whole story.", "The art style depicts the visual appearance of an artwork and characterizes how the artist expresses a theme and shows his/her creativity.", "The features that identify an artwork, such as the artist's use of stroke, color and composition, determine the style."], "text_after_citation": ["The main challenges of arbitrary style transfer are extracting styles from artistic images and mapping a specific realistic image into an artistic one in a controllable way.", "The core problem for style extraction is to find an effective representation of styles since it is in general hard to give explicit definitions across different styles.", "To build a reasonable style feature space, it is necessary to explore the relationship and distribution of styles in order to capture both individual and holistic characteristics.", "For the mapping process, several generative mechanisms are adopted to address different issues, such as auto-encoder #OTHEREFR , neural flow model #OTHEREFR and visual transformer #OTHEREFR .", "In contrast to the goal of those methods, we propose to improve arbitrary style transfer via a unified framework that offers the guidance of proper artistic style representation and works for various generative
backbones."], "citing_paper_content": {"title": "A Unified Arbitrary Style Transfer Framework Via Adaptive Contrastive Learning", "abstract": "(Teaser figure caption) UCAST applied with a flow-based backbone [An et al. 2021], a ViT-based backbone [Deng et al. 2022], and a CNN-based backbone [Huang and Belongie 2017], across styles including Aquarelle, Impressionism, Line Art, and Ink and Wash."}, "cited_paper_content": {"title": "Neural Style Transfer: A Review", "abstract": "The seminal work of Gatys et al. demonstrated the power of Convolutional Neural Networks (CNNs) in creating artistic imagery by separating and recombining image content and style. This process of using CNNs to render a content image in different styles is referred to as Neural Style Transfer (NST). Since then, NST has become a trending topic both in academic literature and industrial applications. It is receiving increasing attention and a variety of approaches are proposed to either improve or extend the original NST algorithm. In this paper, we aim to provide a comprehensive overview of the current progress towards NST. We first propose a taxonomy of current algorithms in the field of NST. Then, we present several evaluation methods and compare different NST algorithms both qualitatively and quantitatively. The review concludes with a discussion of various applications of NST and open problems for future research. A list of papers discussed in this review, corresponding codes, pre-trained models and more comparison results are publicly available at this https URL."}, "keywords": ["Artistic style transfer"], "citation_intent": "background"} {"citing_id": "2303.09930v1", "cited_id": "1905.02249", "section_title": "II.
Related Work", "citation": "The proposed framework addresses this issue by simultaneously training and optimizing OOD detection, and a MixMatch #REFR based semi-SL in a multitask learning framework.", "text_before_citation": ["3) ReMixMatch: ReMixMatch #OTHEREFR enhances the MixMatch #OTHEREFR framework by introducing the principles of distribution alignment and augmentation anchoring.", "The notion of distribution alignment replaces the sharpening step of the MixMatch #OTHEREFR framework in an attempt to match the model's aggregated class prediction to the marginal distribution of the given ground truth.", "Further, ReMixMatch #OTHEREFR introduces another principle of augmentation anchoring in place of the consistency regularization step of MixMatch #OTHEREFR to encourage each output to be close to the prediction for a weakly-augmented version of the same input.", "By incorporating these changes, ReMixMatch #OTHEREFR has been reported to be data-efficient compared to the MixMatch #OTHEREFR framework.", "4) Multi-task Curriculum learning (MTL) Framework Guided Semi-SL: Unlike previous methods that focused on developing strong semi-SL models, the goal of MTL #OTHEREFR guided semi-SL is to explicitly address the issues associated with the presence of open-set samples in semi-SL."], "text_after_citation": ["The paper proposes a novel OOD detector based on the model's capacity to identify noisy labeled training data.", "The OOD detector ensures that the subsequent semi-SL framework, based on MixMatch #OTHEREFR , is only trained with the inlier samples by selecting an appropriate threshold on the OOD score and filtering out the outlier samples from the unlabelled data.", "The method produced significantly superior results in settings where samples from a different distribution contaminate the unlabelled data.", "Along with these frameworks, a few studies have also reexamined the original MixMatch #OTHEREFR algorithm itself.", "One such study found that the performance
degradation caused by open-set samples in unlabelled data is primarily due to the Pseudo-Labelling (PL) task of MixMatch."], "citing_paper_content": {"title": "Robust Semi-Supervised Learning For Histopathology Images Through Self-Supervision Guided Out-Of-Distribution Scoring", "abstract": "Semi-supervised learning (semi-SL) is a promising alternative to supervised learning for medical image analysis when obtaining good quality supervision for medical imaging is difficult. However, semi-SL assumes that the underlying distribution of unaudited data matches that of the few labeled samples, which is often violated in practical settings, particularly in medical images. The presence of out-of-distribution (OOD) samples in the unlabeled training pool of semi-SL is inevitable and can reduce the efficiency of the algorithm. Common preprocessing methods to filter out outlier samples may not be suitable for medical images that involve a wide range of anatomical structures and rare morphologies. In this paper, we propose a novel pipeline for addressing open-set supervised learning challenges in digital histology images. Our pipeline efficiently estimates an OOD score for each unlabelled data point based on self-supervised learning to calibrate the knowledge needed for a subsequent semi-SL framework. The outlier score derived from the OOD detector is used to modulate sample selection for the subsequent semi-SL stage, ensuring that samples conforming to the distribution of the few labeled samples are more frequently exposed to the subsequent semi-SL framework. Our framework is compatible with any semi-SL framework, and we base our experiments on the popular Mixmatch semi-SL framework. 
We conduct extensive studies on two digital pathology datasets, Kather colorectal histology dataset and a dataset derived from TCGA-BRCA whole slide images, and establish the effectiveness of our method by comparing with popular methods and frameworks in semi-SL algorithms through various experiments. Index Terms-Semi Supervised learning, open-set, label-noise, mixmatch I. INTRODUCTION Medical image analysis requires large volumes of supervised data to train deep learning models effectively, but obtaining good quality supervision for medical imaging is inherently difficult due to the associated labor, expertise, and time required [1]-[3]. In such scenarios, semi-supervised learning (semi-SL) offers an efficient alternative, especially when there are only a few labeled samples but plenty of unlabeled or unaudited data. Semi-SL algorithms can leverage the vast pool of unaudited training data by extracting discriminative information from the structure of unlabeled data that complements the knowledge gained from a small number of supervisory data samples. However, semi-SL assumes that the underlying distribution of the unaudited data matches that of the few labeled samples [4], [5]."}, "cited_paper_content": {"title": "Mixmatch: A Holistic Approach To Semi-Supervised Learning", "abstract": "Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets. In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. 
We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success."}, "keywords": ["OOD detection", "MixMatch"], "citation_intent": "method"} {"citing_id": "2303.09681v1", "cited_id": "1706.03762", "section_title": "Spiking Spatiotemporal Transformer", "citation": "Attention scores of inner product used in the original transformer #REFR is not well-defined in our spiking transformer.", "text_before_citation": ["(b) Architecture of Spiking Spatiotemporal Attention that aims to address the one-way flow of information over time in SNNs, thus compensating for missing pose information, particularly for the early time steps.", "Normalized Hamming similarity is proposed for the attention score, which has been shown to be equivalent to inner product similarity between real valued vectors used in original transformer.", "definition mostly follows #OTHEREFR as", "PE(pos, 2i) = (1/T) sin(pos/10000^{2i/C_k}), PE(pos, 2i + 1) = (1/T) cos(pos/10000^{2i/C_k}),", "where pos represents the position in the sequence, while 2i or 2i + 1 denotes the position of C_k channel."], "text_after_citation": ["If the spike key is a zero vector, s k j = 0, then the attention score will always be zero for any spike query s q i , that is 0 \u2022 s q j = 0.", "This means the inner product used in #OTHEREFR is not able to accurately determine the similarity between two spike vectors."], "citing_paper_content": {"title": "Event-Based Human Pose Tracking By Spiking Spatiotemporal Transformer", "abstract": "Event camera, as an emerging biologically-inspired vision sensor for capturing motion dynamics, presents new potential for 3D human pose tracking, or video-based 3D human pose estimation. 
However, existing works in pose tracking either require the presence of additional gray-scale images to establish a solid starting pose, or ignore the temporal dependencies all together by collapsing segments of event streams to form static image frames. Meanwhile, although the effectiveness of Artificial Neural Networks (ANNs, a.k.a. dense deep learning) has been showcased in many event-based tasks, the use of ANNs tends to neglect the fact that compared to the dense frame-based image sequences, the occurrence of events from an event camera is spatiotemporally much sparser. Motivated by the above mentioned issues, we present in this paper a dedicated end-to-end sparse deep learning approach for event-based pose tracking: 1) to our knowledge this is the first time that 3D human pose tracking is obtained from events only, thus eliminating the need of accessing to any frame-based images as part of input; 2) our approach is based entirely upon the framework of Spiking Neural Networks (SNNs), which consists of Spike-Element-Wise (SEW) ResNet and our proposed spiking spatiotemporal transformer; 3) a large-scale synthetic dataset is constructed that features a broad and diverse set of annotated 3D human motions, as well as longer hours of event stream data, named SynEventHPD. Empirical experiments demonstrate the superiority of our approach in both performance and efficiency measures. For example, with comparable performance to the state-of-the-art ANNs counterparts, our approach achieves a computation reduction of 20% in FLOPS. Our implementation is made available at https://github.com/JimmyZou/HumanPoseTracking SNN and dataset will be released upon paper acceptance."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. 
The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["spiking transformer", "Attention scores"], "citation_intent": "background"} {"citing_id": "2305.00772v1", "cited_id": "1801.09736", "section_title": "Neumann Boundary Conditions P(U)|", "citation": "For the rectangular elements, the approximation of the singular function follows closely the proof in #REFR , and we present it below for the convenience of the reader.", "text_before_citation": ["Note that the approximation error for the smooth term is of higher order.", "By summing over all rectangles \u0393 j of the mesh of the screen and all components, we conclude that", "p(u)\u2212\u03a0 x \u03a0 t p(u) r,\u2212 1 2 ,\u0393, * h\u03b2 Re \u03bd * \u2212\u03b5 if \u2206t \u2264 min{h 1 , h 2 }.", "(a), cone singularity: To discuss the approximation of p(u) in the cone geometry, for simplicity, we let \u0393 be the squareR = [0, 1] 2 .", "Figure 4 shows how to reduce the mesh on the cone to this case by an affine map, with the exception of a small number of triangular 
elements."], "text_after_citation": ["For the additional triangular elements in Figure 4 (b) with linear basis functions, the crucial observation is that their angles are independent of h, leading to a shape-regular mesh.", "In particular, the quotient \u03c1 of the radii of the smallest circumscribed to the largest inscribed circle remains bounded and the expected interpolation inequalities hold: For the linear interpolant p of a function f determined by the vertices of a triangle T of circumscribed radius \u2264 h, one has", "f \u2212 p H s (T ) \u2264 C 0 h 2\u2212s f H 2 (T ) .", "Here, s \u2208 [0, 1] and the constant C 0 only depends on \u03c1 and s.", "The respective proofs for the regular part \u03c6 \u03c6 \u03c6 0 and the singular function r \u03bb\u22121 b i in this way directly apply to the arising triangles."], "citing_paper_content": {"title": "Higher-Order Time Domain Boundary Elements For Elastodynamicsgraded Meshes And Hp Versions", "abstract": "The solution to the elastodynamic equation in the exterior of a polyhedral domain or a screen exhibits singular behavior from the corners and edges. The detailed expansion of the singularities implies quasi-optimal estimates for piecewise polynomial approximations of the Dirichlet trace of the solution and the traction. The results are applied to hp and graded versions of the time domain boundary element method for the weakly singular and the hypersingular integral equations. Numerical examples confirm the theoretical results for the Dirichlet and Neumann problems for screens and for polygonal domains in 2d. They exhibit the expected quasi-optimal convergence rates and the singular behavior of the solutions."}, "cited_paper_content": {"title": "Boundary Elements With Mesh Refinements For The Wave Equation", "abstract": "The solution of the wave equation in a polyhedral domain in $\\mathbb{R}^3$ admits an asymptotic singular expansion in a neighborhood of the corners and edges. 
In this article we formulate boundary and screen problems for the wave equation as equivalent boundary integral equations in time domain, study the regularity properties of their solutions and the numerical approximation. Guided by the theory for elliptic equations, graded meshes are shown to recover the optimal approximation rates known for smooth solutions. Numerical experiments illustrate the theory for screen problems. In particular, we discuss the Dirichlet and Neumann problems, as well as the Dirichlet-to-Neumann operator and applications to the sound emission of tires."}, "keywords": ["rectangular elements"], "citation_intent": "background"} {"citing_id": "2304.14154v1", "cited_id": "1802.04799", "section_title": "Introduction", "citation": "The achieved performance is comparable to the traditionally designed TVM compiler #REFR for deep learning.", "text_before_citation": ["To control the application of rewrite rules, strategy languages, such as Stratego #OTHEREFR ] have been proposed. #OTHEREFR provides a recent overview of the field.", "These strategy languages enable strategic rewriting by providing combinators to compose rewrite rules into larger strategies.", "Stratego is an integral part of the Spoofax language workbench by Kats and #OTHEREFR designed to declaratively specify languages and tailored IDEs to work with them.", "The Stratego strategy language is used here to write interpreters and compilers purely using compositions of rewrites. 
#OTHEREFR", "[ , 2020 describes the ELEVATE strategy language and how it is used to encode and control the application of traditional compiler optimizations, such as loop-tiling, compositionally."], "text_after_citation": ["In fact, ELEVATE shows how to rethink the design of \"user-schedulable languages\" as strategy languages, as highlighted by #OTHEREFR .", "Schedules allow experts precise control over what compiler optimizations to apply, an idea popularized by the domain-specific compiler Halide #OTHEREFR", "2013] and now widely adopted in other optimizing compilers, such as TVM.", "Closely related to strategy languages are tactic languages in automatic theorem proving that allow to control the arrangement of individual proof steps.", "These ideas go all the way back to #OTHEREFR"], "citing_paper_content": {"title": "Traced Types For Safe Strategic Rewriting", "abstract": "Strategy languages enable programmers to compose rewrite rules into strategies and control their application. This is useful in programming languages, e.g., for describing program transformations compositionally, but also in automated theorem proving, where related ideas have been studies with tactics languages. Clearly, not all compositions of rewrites are correct, but how can we assist programmers in writing correct strategies? In this paper, we present a static type system for strategy languages. We combine a structural type system capturing how rewrite strategies transform the shape of the rewritten syntax with a novel tracing system that keeps track of all possible legal strategy execution paths. Our type system raises warnings when parts of a composition are guaranteed to fail at runtime, and errors when no legal execution for a strategy is possible. We present a formalization of our strategy language and novel tracing type system, and formally prove its type soundness. 
We present formal results, showing that ill-traced strategies are guaranteed to fail at runtime and that well-traced strategy executions \"can't go wrong\", meaning that they are guaranteed to have a possible successful execution path."}, "cited_paper_content": {"title": "Tvm: An Automated End-To-End Optimizing Compiler For Deep Learning", "abstract": "There is an increasing need to bring machine learning to a wide diversity of hardware devices. Current frameworks rely on vendor-specific operator libraries and optimize for a narrow range of server-class GPUs. Deploying workloads to new platforms -- such as mobile phones, embedded devices, and accelerators (e.g., FPGAs, ASICs) -- requires significant manual effort. We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that are competitive with state-of-the-art, hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPUs. We also demonstrate TVM's ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies."}, "keywords": ["compiler", "deep learning"], "citation_intent": "result"} {"citing_id": "2303.02948v1", "cited_id": "1802.06739", "section_title": "B. 
Afl-Based Anomaly Detection Model", "citation": "Adding noise to the gradients of the Wasserstein distance is more efficient than adding noise to the final parameters directly with respect to preserving privacy #REFR .", "text_before_citation": ["Although AFL has distinct privacy advantages like FL, current research shows that sensitive information can still be inferred by using shared parameters during the learning process #OTHEREFR .", "To address this issue, we propose a differentially private WGAN-GP model, where differential privacy is achieved in WGAN-GP by adding carefully designed noise to gradients during the learning process. Definition 1.", "A randomized function F is considered as ( , \u03b4)-differentially private if the following inequality is satisfied for any two databases X and X differing in a single point and for any output subset S #OTHEREFR :", "EQUATION", "where F (X) and F (X ) are the outputs of the function F for inputs X and X , respectively."], "text_after_citation": ["The gradients \u2206w n of discriminator parameter w n after adding noise can be expressed as follows:", "EQUATION", "where \u03c3 n represents the noise scale, and c g is the bound on the gradients of the Wasserstein distance.", "According to Lemma 1 in #OTHEREFR , given the sampling rate p = m M (where m is the batch size and M is the total number of training data used in each discriminator iteration), the number of discriminator iterations N d between two generator iterations, and privacy violation \u03b4, then for any positive , the discriminator parameter guarantees ( , \u03b4)-differential privacy with respect to all data used in the generator iteration if we choose", "EQUATION"], "citing_paper_content": {"title": "A Vhetnet-Enabled Asynchronous Federated Learning-Based Anomaly Detection Framework For Ubiquitous Iot", "abstract": "Anomaly detection for the Internet of Things (IoT) is a major intelligent service required by many fields, including intrusion detection, 
state monitoring, device-activity analysis, and security supervision. However, the heterogeneous distribution of data and resource-constrained end nodes in ubiquitous IoT systems present challenges for existing anomaly detection models. Due to the advantages of flexible deployment and multidimensional resources, high altitude platform stations (HAPSs) and unmanned aerial vehicles (UAVs), which are important components of vertical heterogeneous networks (VHetNets), have significant potential for sensing, computing, storage, and communication applications in ubiquitous IoT systems. In this paper, we propose a novel VHetNet-enabled asynchronous federated learning (AFL) framework to enable decentralized UAVs to collaboratively train a global anomaly detection model based on their local sensory data from ubiquitous IoT devices. In the VHetNet-enabled AFL framework, a HAPS operates as a central aerial server, and the local models trained in UAVs are uploaded to the HAPS for global aggregation due to its wide coverage and strong storage and computation capabilities. We also introduce a UAV selection strategy into the AFL framework to prevent UAVs with low local model quality and large energy consumption from affecting the learning efficiency and detection accuracy of the global model. To ensure the security of transmissions between UAVs and the HAPS via wireless links, we add designed noise to local model parameters in UAVs to achieve differential privacy during the information exchange process. Moreover, we propose a compound-action actor-critic (CA2C)-based joint device association, UAV selection, and UAV trajectory planning algorithm to further enhance the overall federated execution efficiency and detection model accuracy under the UAV energy constraints. Extensive experimental evaluation on a real-world dataset demonstrates that the proposed algorithm can achieve high detection accuracy with short federated execution time and low energy consumption. 
Index Terms-Anomaly detection, ubiquitous Internet of Things (IoT), vertical heterogeneous network (VHetNet), asynchronous federated learning (AFL), differential privacy."}, "cited_paper_content": {"title": "Differentially Private Generative Adversarial Network", "abstract": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. 
We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level."}, "keywords": ["privacy"], "citation_intent": "background"} {"citing_id": "2303.15410v1", "cited_id": "1906.03950", "section_title": "Proposed Method", "citation": "The key idea is to let them use separate batch normalization (BN) layers while sharing all the other parameters of the network; following the domain-specific batch normalization #REFR we call such a set of separate BNs lighting-condition specific BN (LSBN).", "text_before_citation": ["As our target model deals with extremely low-light images, it suffers from the significantly low quality of inputs.", "To alleviate this, we propose a new method for learning the target model using the paired well-lit images as privileged information #OTHEREFR , additional high-quality input data accessible only in training.", "Our method introduces another model called teacher that takes the privileged information as input and provides rich supervision to the target model called student.", "This method for learning using privileged information (LUPI) allows the student to simulate the internal behavior of the teacher as well as learn to predict human poses.", "To further exploit the privileged information, we design a single concise architecture that integrates the teacher and the student."], "text_after_citation": ["This design choice allows the student to enjoy the strong representation learned using the well-lit images (i.e., the privileged information) while capturing specific characteristics of low-light images through the separate BN parameters.", "Our model architecture and LUPI strategy are depicted in Fig. 
5.", "Note that, before being fed to the student, low-light images are scaled automatically by adjusting their average pixel intensity value to a predefined constant.", "On the other hand, the teacher takes as input well-lit images as-is.", "Both of the teacher and the student are trained by a common pose estimation loss:"], "citing_paper_content": {"title": "Human Pose Estimation In Extremely Low-Light Conditions", "abstract": "We study human pose estimation in extremely low-light images. This task is challenging due to the difficulty of collecting real low-light images with accurate labels, and severely corrupted inputs that degrade prediction quality significantly. To address the first issue, we develop a dedicated camera system and build a new dataset of real low-light images with accurate pose labels. Thanks to our camera system, each low-light image in our dataset is coupled with an aligned well-lit image, which enables accurate pose labeling and is used as privileged information during training. We also propose a new model and a new training strategy that fully exploit the privileged information to learn representation insensitive to lighting conditions. Our method demonstrates outstanding performance on real extremely low-light images, and extensive analyses validate that both of our model and dataset contribute to the success."}, "cited_paper_content": {"title": "Domain-Specific Batch Normalization For Unsupervised Domain Adaptation", "abstract": "We propose a novel unsupervised domain adaptation framework based on domain-specific batch normalization in deep neural networks. We aim to adapt to both domains by specializing batch normalization layers in convolutional neural networks while allowing them to share all other model parameters, which is realized by a two-stage algorithm. 
In the first stage, we estimate pseudo-labels for the examples in the target domain using an external unsupervised domain adaptation algorithm---for example, MSTN or CPUA---integrating the proposed domain-specific batch normalization. The second stage learns the final models using a multi-task classification loss for the source and target domains. Note that the two domains have separate batch normalization layers in both stages. Our framework can be easily incorporated into the domain adaptation techniques based on deep neural networks with batch normalization layers. We also present that our approach can be extended to the problem with multiple source domains. The proposed algorithm is evaluated on multiple benchmark datasets and achieves the state-of-the-art accuracy in the standard setting and the multi-source domain adaption scenario."}, "keywords": ["lighting-condition specific BN", "domain-specific batch normalization"], "citation_intent": "method"} {"citing_id": "2303.01559v1", "cited_id": "1710.09412", "section_title": "Ood Detection", "citation": "In addition, besides AdaptiveMix loss, we can use mixingbased cross-entropy loss in the learning objective of image classification following augmentation #REFR , since we use Mixup to generate hard samples (See supplementary material for more details on classification).", "text_before_citation": ["Benchmark #OTHEREFR Table 12 +3.5% F1 \u2191 class-aware separation, we then introduce the orthogonal constraint to initialize W, which is defined as:", "EQUATION"], "text_after_citation": [], "citing_paper_content": {"title": "Improving Gan Training Via Feature Space Shrinkage", "abstract": "Due to the outstanding capability for data generation, Generative Adversarial Networks (GANs) have attracted considerable attention in unsupervised learning. However, training GANs is difficult, since the training distribution is dynamic for the discriminator, leading to unstable image representation. 
In this paper, we address the problem of training GANs from a novel perspective, i.e., robust image classification. Motivated by studies on robust image representation, we propose a simple yet effective module, namely AdaptiveMix, for GANs, which shrinks the regions of training data in the image representation space of the discriminator. Considering it is intractable to directly bound feature space, we propose to construct hard samples and narrow down the feature distance between hard and easy samples. The hard samples are constructed by mixing a pair of training images. We evaluate the effectiveness of our AdaptiveMix with widely-used and state-of-the-art GAN architectures. The evaluation results demonstrate that our AdaptiveMix can facilitate the training of GANs and effectively improve the image quality of generated samples. We also show that our AdaptiveMix can be further applied to image classification and Out-Of-Distribution (OOD) detection tasks, by equipping it with state-of-theart methods. Extensive experiments on seven publicly available datasets show that our method effectively boosts the performance of baselines. The code is publicly available at https : / / github. com / WentianZhang-ML/AdaptiveMix."}, "cited_paper_content": {"title": "Mixup: Beyond Empirical Risk Minimization", "abstract": "Large deep neural networks are powerful, but exhibit undesirable behaviors such as memorization and sensitivity to adversarial examples. In this work, we propose mixup, a simple learning principle to alleviate these issues. In essence, mixup trains a neural network on convex combinations of pairs of examples and their labels. By doing so, mixup regularizes the neural network to favor simple linear behavior in-between training examples. Our experiments on the ImageNet-2012, CIFAR-10, CIFAR-100, Google commands and UCI datasets show that mixup improves the generalization of state-of-the-art neural network architectures. 
We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks."}, "keywords": ["image classification", "mixingbased cross-entropy loss"], "citation_intent": "method"} {"citing_id": "2304.00801v3", "cited_id": "1806.04618", "section_title": "Related Work", "citation": "Relevant experimental work on noisy segmentation include #REFR where the authors considered three different label noise models and observed that increased noise level caused worse performance.", "text_before_citation": ["This is especially important for this work because it closely relates to the studied problems.", "In binary classification, optimal threshold based classifiers for the F 1 metric was discussed in #OTHEREFR and further elaborated by others.", "Because of the close relationship between the Dice metric in segmentation and the F 1 metric in binary classification, this idea was later taken to the segmentation context and used to give a characterization of the optimal segmentations with respect to Dice #OTHEREFR .", "Furthermore, sharp bounds of the volume of these solutions were provided.", "Similar inspiration from work in binary classification [2, 3] made authors propose that calibration can be a property of importance for explaining the good performance obtained by soft-Dice when evaluation is done by Dice #OTHEREFR ."], "text_after_citation": ["A similar experimental study also including experiments on biased noise was later described in #OTHEREFR .", "Finally, experimental work on soft labels with other loss functions than soft-Dice include #OTHEREFR .", "For recent general reviews on work on noisy segmentation and imperfect data, see #OTHEREFR and #OTHEREFR ."], "citing_paper_content": {"title": "Noisy Image Segmentation With Soft-Dice", "abstract": "This paper presents a study on the soft-Dice loss, one of the most popular loss functions in medical image 
segmentation, for situations where noise is present in target labels. In particular, the set of optimal solutions are characterized and sharp bounds on the volume bias of these solutions are provided. It is further shown that a sequence of soft segmentations converging to optimal soft-Dice also converges to optimal Dice when converted to hard segmentations using thresholding. This is an important result because soft-Dice is often used as a proxy for maximizing the Dice metric. Finally, experiments confirming the theoretical results are provided."}, "cited_paper_content": {"title": "Imperfect Segmentation Labels: How Much Do They Matter?", "abstract": "Labeled datasets for semantic segmentation are imperfect, especially in medical imaging where borders are often subtle or ill-defined. Little work has been done to analyze the effect that label errors have on the performance of segmentation methodologies. Here we present a large-scale study of model performance in the presence of varying types and degrees of error in training data. We trained U-Net, SegNet, and FCN32 several times for liver segmentation with 10 different modes of ground-truth perturbation. Our results show that for each architecture, performance steadily declines with boundary-localized errors, however, U-Net was significantly more robust to jagged boundary errors than the other architectures. 
We also found that each architecture was very robust to non-boundary-localized errors, suggesting that boundary-localized errors are fundamentally different and more challenging problem than random label errors in a classification setting."}, "keywords": ["noisy segmentation"], "citation_intent": "background"} {"citing_id": "2303.14961v1", "cited_id": "1802.03471", "section_title": "Background", "citation": "This robustness verification method (Cohen et al., 2019; #REFR) computes the \u2113 2 -norm certificates around an input sample x by counting which class is most likely to be returned when x is perturbed by isotropic Gaussian noise.", "text_before_citation": ["Even though an adversarially-trained network is resilient to attacks created during training, it can still be susceptible to unseen new attacks.", "To overcome this problem, certified defenses formally guarantee the stability of the prediction in a neighbourhood of the input.", "In other words, a neural network f is certifiably robust for the input x \u2208 R d , if the prediction for all perturbed versions", "x remains unchanged such that x \u2212 x p \u2264 \u03b5, where \u2022 p is the \u2113 p -norm around x of size \u03b5 > 0.", "Randomized Smoothing."], "text_after_citation": ["Formally, given a soft classifier F , randomized smoothing considers a smooth version of F defined as:", "EQUATION", "where \u03c3 > 0 represents the standard deviation.", "As previously, we define the hard version of G(x) as g(x) = arg max y\u2208Y G(x) y .", "Contrary to other formal verification methods, randomized smoothing does not make any assumptions regarding the model's properties, allowing certification to be scaled to larger and more complex networks. Cohen et al."], "citing_paper_content": {"title": "Diffusion Denoised Smoothing For Certified And Adversarial Robust Out-Of-Distribution Detection", "abstract": "As the use of machine learning continues to expand, the importance of ensuring its safety cannot be overstated. 
A key concern in this regard is the ability to identify whether a given sample is from the training distribution, or is an \"Out-Of-Distribution\" (OOD) sample. In addition, adversaries can manipulate OOD samples in ways that lead a classifier to make a confident prediction. In this study, we present a novel approach for certifying the robustness of OOD detection within a 2-norm around the input, regardless of network architecture and without the need for specific components or additional training. Further, we improve current techniques for detecting adversarial attacks on OOD samples, while providing high levels of certified and adversarial robustness on in-distribution samples. The average of all OOD detection metrics on CIFAR10/100 shows an increase of \u223c 13%/5% relative to previous approaches."}, "cited_paper_content": {"title": "Certified Robustness To Adversarial Examples With Differential Privacy", "abstract": "Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best effort and have been shown to be vulnerable to sophisticated attacks. Recently a set of certified defenses have been introduced, which provide guarantees of robustness to norm-bounded attacks. However these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google\u2019s Inception network for ImageNet) and applies broadly to arbitrary model types. 
Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism, that provides a rigorous, generic, and flexible foundation for defense."}, "keywords": ["robustness verification method"], "citation_intent": "method"} {"citing_id": "2304.04484v1", "cited_id": "1606.04171", "section_title": "I. Introduction", "citation": "As an early prototype system of global connectivity, narrowband IoT #REFR has enjoyed significant success in the long-term evolution of terrestrial networks (TNs), which has motivated considerable interest in moving from human-type communication to machine-type communication (MTC) .", "text_before_citation": ["As an indispensable component of the space-air-ground integrated network, low earth orbit (LEO) satellites have received extensive attention in the research of beyond fifth generation (B5G) and sixth generation (6G) mobile communication systems #OTHEREFR - #OTHEREFR .", "Extensive efforts have been devoted to the construction of satellite constellations over the past few decades, for example, the Iridium system in the 1990s and Starlink LEO constellation projects more recently #OTHEREFR .", "With the evolution of space and communication technologies, satellite communication (SatCom) has extended from its original narrowband voice service to broadband multimedia service, which also brings more opportunities to the ubiquitous space-air-ground integrated connectivity."], "text_after_citation": ["Given the explosive growth of data traffic, advanced MTC applications will be more data-intensive, and the demands of advanced IoT-enabled applications will shift from low-rate short packet transmission to more rigorous low-latency, broadband, and reliable information interaction such as industrial Internet, smart cities, intelligent transportation, industrial metaverse, holographic communications, and so on #OTHEREFR .", "On the other 
hand, remote and disaster areas also still face challenges in accessing the network due to the high cost of terrestrial infrastructure.", "Therefore, it is also of interest to promote broadband LEO satellite-enabled non-terrestrial networks (NTNs) #OTHEREFR as a part of the communication infrastructure.", "Due to the massive number of potential MTC user terminals (UTs) and the long propagation delay between ground and LEO satellites, it is inefficient to coordinate the channel resources for uplink access through traditional handshaking protocols.", "Grant-free random access (GF-RA) is a compelling paradigm for massive access in MTC since it allows UTs to directly transmit their respective preamble and payload data to base stations (BSs) without the aforementioned handshaking process."], "citing_paper_content": {"title": "Quasi-Synchronous Random Access For Massive Mimo-Based Leo Satellite Constellations", "abstract": "Low earth orbit (LEO) satellite constellation-enabled communication networks are expected to be an important part of many Internet of Things (IoT) deployments due to their unique advantage of providing seamless global coverage. In this paper, we investigate the random access problem in massive multiple-input multiple-output-based LEO satellite systems, where the multi-satellite cooperative processing mechanism is considered. Specifically, at edge satellite nodes, we conceive a training sequence padded multi-carrier system to overcome the issue of imperfect synchronization, where the training sequence is utilized to detect the devices' activity and estimate their channels. Considering the inherent sparsity of terrestrial-satellite links and the sporadic traffic feature of IoT terminals, we utilize the orthogonal approximate message passing-multiple measurement vector algorithm to estimate the delay coefficients and user terminal activity. 
To further utilize the structure of the receive array, a two-dimensional estimation of signal parameters via rotational invariance technique is performed for enhancing channel estimation. Finally, at the central server node, we propose a majority voting scheme to enhance activity detection by aggregating backhaul information from multiple satellites. Moreover, multi-satellite cooperative linear data detection and multi-satellite cooperative Bayesian dequantization data detection are proposed to cope with perfect and quantized backhaul, respectively. Simulation results verify the effectiveness of our proposed schemes in terms of channel estimation, activity detection, and data detection for quasi-synchronous random access in satellite systems. Index Terms Internet of Things (IoT), low earth orbit (LEO) satellite, massive multiple-input multiple-output (mMIMO), random access"}, "cited_paper_content": {"title": "A Primer On 3Gpp Narrowband Internet Of Things", "abstract": "Narrowband Internet of Things (NB-IoT) is a new cellular technology introduced in 3GPP Release 13 for providing wide-area coverage for IoT. This article provides an overview of the air interface of NB-IoT. We describe how NB-IoT addresses key IoT requirements such as deployment flexibility, low device complexity, long battery lifetime, support of massive numbers of devices in a cell, and significant coverage extension beyond existing cellular technologies. 
We also share the various design rationales during the standardization of NB-IoT in Release 13 and point out several open areas for future evolution of NB-IoT."}, "keywords": ["IoT"], "citation_intent": "background"} {"citing_id": "2304.05884v1", "cited_id": "1807.05520", "section_title": "Related Work", "citation": "As a representative work, DeepCluster #REFR adopts a standard k-means for clustering, but it suffers from degenerate solutions.", "text_before_citation": ["Instance and Cluster Discrimination.", "Instance discrimination #OTHEREFR (Radford et al., 2021) is realized with a contrastive loss which aims at pulling closer samples from the same instance while pushing away samples from different instances.", "Despite the impressive performance, instance-wise contrastive learning cannot capture the semantic information from the training data because it is trained to ignore the similarity between different instances.", "Cluster discrimination #OTHEREFR is processed with iterative steps: the clustering step to assign pseudo class labels for each sample, and then the classification step to map each sample to its assigned label.", "Since one cluster has more than one instance, learning representations with clusters will gather similar instances together, which can explore potential semantic structures in data."], "text_after_citation": ["To this end, recent research work #OTHEREFR focuses on improving the label assignment during clustering but employs a standard cross-entropy loss during discrimination.", "In this paper, we only employ one step of off-line clustering but design a robust classifier to achieve good feature representation when training on the automatically clustered large-scale data.", "Image Retrieval.", "The image retrieval task typically relies on fine-tuning pre-trained visual models #OTHEREFR and can be divided into two learning categories: supervised and unsupervised metric learning.", "For supervised metric learning, pair-wise loss #OTHEREFR and 
cross-entropy loss #OTHEREFR are extensively studied and recent benchmarking results #OTHEREFR indicate that the margin-based softmax loss (e.g., ArcFace #OTHEREFR ) can achieve state-of-the-art performance."], "citing_paper_content": {"title": "Unicom: Universal And Compact Representation Learning For Image Retrieval", "abstract": "Modern image retrieval methods typically rely on fine-tuning pre-trained encoders to extract image-level descriptors. However, the most widely used models are pre-trained on ImageNet-1K with limited classes. The pre-trained feature representation is therefore not universal enough to generalize well to the diverse open-world classes. In this paper, we first cluster the large-scale LAION 400M dataset into one million pseudo classes based on the joint textual and visual features extracted by the CLIP model. Due to the confusion of label granularity, the automatically clustered dataset inevitably contains heavy inter-class conflict. To alleviate such conflict, we randomly select partial inter-class prototypes to construct the margin-based softmax loss. To further enhance the low-dimensional feature representation, we randomly select partial feature dimensions when calculating the similarities between embeddings and class-wise prototypes. The dual random partial selections are with respect to the class dimension and the feature dimension of the prototype matrix, making the classification conflict-robust and the feature embedding compact. Our method significantly outperforms state-of-the-art unsupervised and supervised image retrieval approaches on multiple benchmarks. The code and pre-trained models are released to facilitate future research https://github.com/deepglint/unicom."}, "cited_paper_content": {"title": "Deep Clustering For Unsupervised Learning Of Visual Features", "abstract": "Clustering is a class of unsupervised learning methods that has been extensively applied and studied in computer vision. 
Little work has been done to adapt it to the end-to-end training of visual features on large scale datasets. In this work, we present DeepCluster, a clustering method that jointly learns the parameters of a neural network and the cluster assignments of the resulting features. DeepCluster iteratively groups the features with a standard clustering algorithm, k-means, and uses the subsequent assignments as supervision to update the weights of the network. We apply DeepCluster to the unsupervised training of convolutional neural networks on large datasets like ImageNet and YFCC100M. The resulting model outperforms the current state of the art by a significant margin on all the standard benchmarks."}, "keywords": ["DeepCluster"], "citation_intent": "method"} {"citing_id": "2304.04886v1", "cited_id": "1911.08632", "section_title": "Introduction", "citation": "In fact, for the effectively acyclic case, #REFR only provides a sufficient condition that a given footprint yields a frame-preserving update but it gives no algorithm for computing such a footprint.", "text_before_citation": ["This way, one can focus on the least flow, which is guaranteed to exist if one applies standard fixed point theorems, imposing only mild assumptions on the edge functions. However, cancellativity is inherently incompatible with standard domain-theoretic prerequisites.", "For instance, the only ordered cancellative commutative monoid that is a directed cpo is the trivial one: M_0 = {0}.", "Similarly, M_0 is the only such monoid that has a greatest element.", "For cases where unique flows are desired, #OTHEREFR imposes additional requirements on the edge functions (nilpotent) or the graph structure (effectively acyclic). 
The former is quite restrictive in terms of expressivity.", "The latter again complicates the computation of frame-preserving updates: one now has to ensure that no cycles are introduced when the updated graph h_2 is composed with its frame h_1."], "text_after_citation": ["Contributions.", "In this paper, we propose a new meta theory of flows based on flow monoids that form \u03c9-cpos (but need not be cancellative).", "The cpo requirement yields the desired least fixed point semantics.", "The differences in the requirements on the flow monoid necessitate a new notion of flow graph composition.", "In particular, for a least fixed point semantics of flows, h = h_1 * h_2 is only defined if the flows of h_1 and h_2 do not vanish."], "citing_paper_content": {"title": "Make Flows Small Again: Revisiting The Flow Framework", "abstract": "We present a new flow framework for separation logic reasoning about programs that manipulate general graphs. The framework overcomes problems in earlier developments: it is based on standard fixed point theory, guarantees least flows, rules out vanishing flows, and has an easy to understand notion of footprint as needed for soundness of the frame rule. In addition, we present algorithms for automating the frame rule, which we evaluate on graph updates extracted from linearizability proofs for concurrent data structures. The evaluation demonstrates that our algorithms help to automate key aspects of these proofs that have previously relied on user guidance or heuristics."}, "cited_paper_content": {"title": "Local Reasoning For Global Graph Properties", "abstract": "Separation logics are widely used for verifying programs that manipulate complex heap-based data structures. These logics build on so-called separation algebras, which allow expressing properties of heap regions such that modifications to a region do not invalidate properties stated about the remainder of the heap. 
This concept is key to enabling modular reasoning and also extends to concurrency. While heaps are naturally related to mathematical graphs, many ubiquitous graph properties are non-local in character, such as reachability between nodes, path lengths, acyclicity and other structural invariants, as well as data invariants which combine with these notions. Reasoning modularly about such graph properties remains notoriously difficult, since a local modification can have side-effects on a global property that cannot be easily confined to a small region. In this paper, we address the question: What separation algebra can be used to avoid proof arguments reverting back to tedious global reasoning in such cases? To this end, we consider a general class of global graph properties expressed as fixpoints of algebraic equations over graphs. We present mathematical foundations for reasoning about this class of properties, imposing minimal requirements on the underlying theory that allow us to define a suitable separation algebra. Building on this theory we develop a general proof technique for modular reasoning about global graph properties over program heaps, in a way which can be integrated with existing separation logics. 
To demonstrate our approach, we present local proofs for two challenging examples: a priority inheritance protocol and the non-blocking concurrent Harris list."}, "keywords": ["effectively acyclic case"], "citation_intent": "background"} {"citing_id": "2305.00983v1", "cited_id": "1911.03462", "section_title": "Semantic Segmentation", "citation": "Thereby, the performance on the previously-known classes is similar to the baseline even without including a distillation loss #REFR .", "text_before_citation": ["The quantitative results of our semantic segmentation method, reported in Tab.", "2, demonstrate, that the empty classes are \"filled\" with the novel concepts human and car."], "text_after_citation": ["For the car class, our method outperforms the baseline with respect to IoU (+2.87 pp), precision (+0.55 pp) and recall (+3.06 pp).", "We lose performance in terms of IoU for the human class due to a higher tendency for false positives.", "However, the false negative rate is significantly reduced, which is indicated by an increase in the recall value of 26.89 pp. The improved recall score is also visible in Fig.", "9 , showing two examples from the Cityscapes validation dataset.", "In the top row, several pedestrians are crossing the street, which are mostly segmented by our DNN, whereas the baseline DNN mostly misses the persons in the center as well as all heads."], "citing_paper_content": {"title": "Detecting Novelties With Empty Classes", "abstract": "For open world applications, deep neural networks (DNNs) need to be aware of previously unseen data and adaptable to evolving environments. Furthermore, it is desirable to detect and learn novel classes which are not included in the DNNs underlying set of semantic classes in an unsupervised fashion. The method proposed in this article builds upon anomaly detection to retrieve out-of-distribution (OoD) data as candidates for new classes. 
We thereafter extend the DNN by k empty classes and fine-tune it on the OoD data samples. To this end, we introduce two loss functions, which 1) entice the DNN to assign OoD samples to the empty classes and 2) to minimize the inner-class feature distances between them. Thus, instead of ground truth which contains labels for the different novel classes, the DNN obtains a single OoD label together with a distance matrix, which is computed in advance. We perform several experiments for image classification and semantic segmentation, which demonstrate that a DNN can extend its own semantic space by multiple classes without having access to ground truth."}, "cited_paper_content": {"title": "Knowledge Distillation For Incremental Learning In Semantic Segmentation", "abstract": "Although deep learning architectures have shown remarkable results in scene understanding problems, they exhibit a critical drop of overall performance due to catastrophic forgetting when they are required to incrementally learn to recognize new classes without forgetting the old ones. This phenomenon impacts on the deployment of artificial intelligence in real world scenarios where systems need to learn new and different representations over time. Current approaches for incremental learning deal only with the image classification and object detection tasks. In this work we formally introduce the incremental learning problem for semantic segmentation. To avoid catastrophic forgetting we propose to distill the knowledge of the previous model to retain the information about previously learned classes, whilst updating the current model to learn the new ones. We developed three main methodologies of knowledge distillation working on both the output layers and the internal feature representations. 
Furthermore, differently from other recent frameworks, we do not store any image belonging to the previous training stages while only the last model is used to preserve high accuracy on previously learned classes. Extensive results were conducted on the Pascal VOC2012 dataset and show the effectiveness of the proposed approaches in different incremental learning scenarios."}, "keywords": ["previously-known classes"], "citation_intent": "result"} {"citing_id": "2304.04027v1", "cited_id": "2003.08934", "section_title": "Introduction", "citation": "The generation module is inspired by NeRF #REFR that predicts the density of the position but with the input image feature condition.", "text_before_citation": ["We made these simulated PX using a ray-based method and sample rays based on the principles of PX.", "Those rays are rendered as a synthesized image pixel using the Beer-Lambert law with CBCT data.", "Still, there exists a domain gap between real-world PX and simulated ones.", "We utilized CycleGAN-based translation module that supports unpaired image-to-image translation, as obtaining paired PX and CBCT datasets is challenging.", "We propose a new loss function that enhances the existing CycleGAN model to generate a more plausible synthesized image from the real-world image."], "text_after_citation": ["Lastly, we add an encoder-decoder-based refinement module to enhance the 3D reconstruction quality.", "The main contributions of our paper are as follows: #OTHEREFR We introduce a novel architecture that can process real-world PX to the 3D oral structure without any prior data such as a dental arch, #OTHEREFR We propose a new synthesizing method that eliminates the need for matching CBCT and PX datasets during training, and a loss function that reduces the gap between the simulated image and real-world image, #OTHEREFR Results show that our model can generate high-quality oral structure and much more robust to the real-world image than other state-of-the-art 
models."], "citing_paper_content": {"title": "Nebla: Neural Beer-Lambert For 3D Reconstruction Of Oral Structures From Panoramic Radiographs", "abstract": "Panoramic radiography (panoramic X-ray, PX) is a widely used imaging modality for dental examination. However, its applicability is limited as compared to 3D Conebeam computed tomography (CBCT), because PX only provides 2D flattened images of the oral structure. In this paper, we propose a new framework which estimates 3D oral structure from real-world PX images. Since there are not many matching PX and CBCT data, we used simulated PX from CBCT for training, however, we used real-world panoramic radiographs at the inference time. We propose a new ray-sampling method to make simulated panoramic radiographs inspired by the principle of panoramic radiography along with the rendering function derived from the Beer-Lambert law. Our model consists of three parts: translation module, generation module, and refinement module. The translation module changes the real-world panoramic radiograph to the simulated training image style. The generation module makes the 3D structure from the input image without any prior information such as a dental arch. Our ray-based generation approach makes it possible to reverse the process of generating PX from oral structure in order to reconstruct CBCT data. Lastly, the refinement module enhances the quality of the 3D output. Results show that our approach works better for simulated and real-world images compared to other state-of-the-art methods."}, "cited_paper_content": {"title": "Nerf: Representing Scenes As Neural Radiance Fields For View Synthesis", "abstract": "We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. 
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\\theta, \\phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons."}, "keywords": ["generation module", "density"], "citation_intent": "method"} {"citing_id": "2304.08592v1", "cited_id": "1904.01906", "section_title": "A.1. 
Experimental Setup In Section 3", "citation": "For the experiments in Section 3, we utilize a widelyused model architecture in STR, denoted as TRBA #REFR .", "text_before_citation": [], "text_after_citation": ["Note that we use the same model architecture for Section 3 experiments for a fair comparison.", "According to the previous work #OTHEREFR , the four stages derived from the STR models are as follows:", "\u2022 Transformation.", "The thin-plate spline (TPS) transformation, a variant of a spatial transformation network #OTHEREFR , normalizes the perspective or curved text image into a horizontal text image.", "\u2022 Feature Extraction."], "citing_paper_content": {"title": "Improving Scene Text Recognition For Character-Level Long-Tailed Distribution", "abstract": "Figure 1: (a) We visualize the character-level distributions of WikiSynth of Korean (Kr) and Chinese (Cn). We categorize the characters according to the number of training samples: many, medium, and few. We also show misclassified images of tail characters predicted wrongly as head characters. (b) Our approach outperforms the baseline model when evaluated with character-level (char) F1 score, a newly proposed evaluation metric, which measures the performance at the character level. The higher score, the better. This result shows that our method enhances the performance on few characters significantly."}, "cited_paper_content": {"title": "What Is Wrong With Scene Text Recognition Model Comparisons? Dataset And Model Analysis", "abstract": "Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claim to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to the inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. 
First, we examine the inconsistencies of training and evaluation datasets, and the performance gap results from inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses clean up the hindrance on the current comparisons to understand the performance gain of the existing modules."}, "keywords": ["STR"], "citation_intent": "method"} {"citing_id": "2303.02572v1", "cited_id": "1411.1736", "section_title": "\u03a0-Structure", "citation": "As in #REFR , this can be constructed using the locally cartesian closed structure of C q .", "text_before_citation": ["Lr( p A ) C\u00b5C\u03bd * (p E A ) (5.16)", "Thus, the assumption of pre-\u03a0-structure tells us that L r ( p A ) is type-exponentiable, and hence the pushforward of B along it is represented by a type.", "What remains is to construct an appropriate local universe to make this strictly stable.", "Let V \u03a0(A,B) be the universal object equipped with maps", "\u03c0 A \u2236 V \u03a0(A,B) \u2192 C \u03c9 (V A ) \u03c0 B \u2236 \u03c0 * A (C \u03c9 (V A E A )) \u2192 V B ."], "text_after_citation": ["Now because we have pre-\u03a0-structure, the map \u03a0(A,B) ), i.e. 
we have a distributivity pullback", "\u03c0 * A (C \u03c9 (p E A )) is type-exponentiable.", "Thus, the pushforward of E B [\u03c0 B ] \u2208 Ty q (\u03c0 * A (C \u03c9 (V A E A ))) along it is represented by a type E \u03a0(A,B) \u2208 Ty q (V", "\u2022 \u03c0 * A (C \u03c9 (V A E A )) E B [\u03c0 B ] \u03c0 * A (C \u03c9 (V A E A )) V \u03a0(A,B) E \u03a0(A,B) V \u03a0(A,B)", "Now the bottom map in (5.16) together with \u231cB\u231d \u03a0(A,B) ."], "citing_paper_content": {"title": "Semantics Of Multimodal Adjoint Type Theory", "abstract": "We show that contrary to appearances, Multimodal Type Theory (MTT) over a 2-category M can be interpreted in any M-shaped diagram of categories having, and functors preserving, M-sized limits, without the need for extra left adjoints. This is achieved by a construction called "co-dextrification" that co-freely adds left adjoints to any such diagram, which can then be used to interpret the "context lock" functors of MTT. Furthermore, if any of the functors in the diagram have right adjoints, these can also be internalized in type theory as negative modalities in the style of FitchTT. We introduce the name Multimodal Adjoint Type Theory (MATT) for the resulting combined general modal type theory. In particular, we can interpret MATT in any finite diagram of toposes and geometric morphisms, with positive modalities for inverse image functors and negative modalities for direct image functors."}, "cited_paper_content": {"title": "The Local Universes Model: An Overlooked Coherence Construction For Dependent Type Theories", "abstract": "We present a new coherence theorem for comprehension categories, providing strict models of dependent type theory with all standard constructors, including dependent products, dependent sums, identity types, and other inductive types. Precisely, we take as input a "weak model": a comprehension category, equipped with structure corresponding to the desired logical constructions. 
We assume throughout that the base category is close to locally Cartesian closed: specifically, that products and certain exponentials exist. Beyond this, we require only that the logical structure should be *weakly stable* --- a pure existence statement, not involving any specific choice of structure, weaker than standard categorical Beck--Chevalley conditions, and holding in the now standard homotopy-theoretic models of type theory. Given such a comprehension category, we construct an equivalent split one, whose logical structure is strictly stable under reindexing. This yields an interpretation of type theory with the chosen constructors. The model is adapted from Voevodsky's use of universes for coherence, and at the level of fibrations is a classical construction of Giraud. It may be viewed in terms of local universes or delayed substitutions."}, "keywords": ["structure"], "citation_intent": "method"} {"citing_id": "2303.11759v1", "cited_id": "1608.06993", "section_title": "5.", "citation": "DenseNets #REFR : Introduced in 2017, the DenseNet architecture contributes to techniques aimed at information preservation in deep neural networks.", "text_before_citation": [], "text_after_citation": ["Coming on the back of successes (and challenges) with preceding architectures, notably ResNets and InceptionNets, DenseNets take advantage of feature re-use as a way to preserve information flow.", "What this means is, as opposed to combining features of each layer with those of preceding layers via addition (as is the case in ResNets), DenseNets use a concatenation technique, such that the output feature map of each layer is concatenated with those of preceding layers, the output of which is sent to subsequent layers. (Figure 4: A residual connection between a layer and successive layers.)", "Compared to ResNets, the number of parameters is considerably smaller and it has the added advantage of improved flow of information and gradients through the network, thus making it 
easier to train.", "Compared to InceptionNets, which also concatenate feature maps, making the network wider, the DenseNets are simpler and more efficient.", "For this task, we train the DenseNet-121 architecture on our data."], "citing_paper_content": {"title": "Simulating Malaria Detection In Laboratories Using Deep Learning", "abstract": "Malaria is usually diagnosed by a microbiologist by examining a small sample of blood smear. Reducing mortality from malaria infection is possible if it is diagnosed early and followed with appropriate treatment. While the WHO has set audacious goals of reducing malaria incidence and mortality rates by 90% in 2030 and eliminating malaria in 35 countries by that time [1], it still remains a difficult challenge. Computer-assisted diagnostics are on the rise these days as they can be used effectively as a primary test in the absence of or providing assistance to a physician or pathologist. The purpose of this paper is to describe an approach to detecting, localizing and counting parasitic cells in blood sample images towards easing the burden on healthcare workers."}, "cited_paper_content": {"title": "Densely Connected Convolutional Networks", "abstract": "Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. 
DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at this https URL ."}, "keywords": ["deep neural networks", "DenseNet architecture"], "citation_intent": "background"} {"citing_id": "2304.13575v1", "cited_id": "1510.09129", "section_title": "Main Lines Of The Computation", "citation": "A bit later, I proved that in the heptagrid, the tessellation {7, 3} of the hyperbolic plane, there is a weakly universal cellular automaton with three states which is rotation invariant and which is truly planar, #REFR .", "text_before_citation": ["The first paper about a universal cellular automaton in the pentagrid, the tessellation {5, 4} of the hyperbolic plane, was #OTHEREFR .", "This cellular automaton was also rotation invariant, at each step of the computation, the set of non quiescent states had infinitely many cycles: we shall say that it is a truly planar cellular automaton. That automaton had 22 states.", "That result was improved by a cellular automaton with 9 states in #OTHEREFR .", "Recently, it was improved with 5 states, see #OTHEREFR ."], "text_after_citation": ["Later, I improved the result down to two states but the rules are no more rotation invariant, see #OTHEREFR .", "Paper #OTHEREFR constructs three cellular automata which are strongly universal and rotation invariant: one in the pentagrid, one in the heptagrid, one in the tessellation {5, 3, 4} of the hyperbolic 3D-space.", "By strongly universal we mean that the initial configuration is finite, i.e. 
it lies within a large enough circle.", "In the present paper, we borrow ideas from #OTHEREFR and from the previous paper, #OTHEREFR .", "To #OTHEREFR , we borrow the idea of implementing a two register structure which is finite at each time of the computation."], "citing_paper_content": {"title": "A Strongly Universal Cellular Automaton On The Heptagrid With Seven States, New Proof", "abstract": "In this paper, we prove that there is a strongly universal cellular automaton on the heptagrid with seven states which is rotation invariant. This improves a previous paper of the author with the same number of states. Here, the structures are simpler and the number of rules is much less."}, "cited_paper_content": {"title": "A Weakly Universal Cellular Automaton On The Heptagrid With Three States", "abstract": "In this paper, we prove that there is a weakly universal cellular automaton on the pentagrid with three states which is rotation invariant and which uses \`a la Moore neighbourhood. Moreover, at each step of the computation, the set of non quiescent states has always infinitely many cycles."}, "keywords": ["heptagrid", "weakly universal cellular"], "citation_intent": "background"} {"citing_id": "2303.13743v1", "cited_id": "1812.04948", "section_title": "Experiments And Results", "citation": "The qualitative results for TEGLO trained on FFHQ #REFR data for single-view 3D reconstruction on samples from CelebA-HQ are in Fig.(9) .", "text_before_citation": ["In Fig.(10) , we show qualitative results including the texture image (t O ) for complex appearance and geometry such as multi-view consistent eyeglasses, 3D make-up and hair.", "Compared with Fig.(24) in #OTHEREFR , we show improved multi-view consistent results for eyeglasses in row-1, and 3D make-up in row-2.", "Compared with Fig.(25) in #OTHEREFR , we show multi-view consistent representations for beard that the baseline method #OTHEREFR was unable to model.", "Single-view 3D reconstruction.", "It is the
task of representing an in-the-wild or out-of-distribution image using a trained network."], "text_after_citation": ["Previous work such as AUVNet #OTHEREFR require additional training of a ResNet-18 #OTHEREFR for the image encoder and IM-Net #OTHEREFR for the shape decoder followed by ray marching to obtain the mesh to represent the image while methods such as EG3D #OTHEREFR require PTI (Pivotal Tuning Inversion #OTHEREFR ) fine-tuning to represent the image.", "For single-view textured 3D representation in TEGLO, we simply invert the image into the latent and do not require any fine-tuning.", "Reconstructing single-view images at arbitrary resolutions while preserving 3D consistency is very desirable for many applications.", "However, EG3D #OTHEREFR has a limitation in performing this task because its generator is conditioned on the camera intrinsic and extrinsic parameters, leading to a \"baked-in\" training image resolution.", "As TEGLO does not condition on the camera, it enables single-view 3D reconstruction and novel view synthesis at arbitrary resolutions without requiring re-training for different resolutions."], "citing_paper_content": {"title": "Teglo: High Fidelity Canonical Texture Mapping From Single-View Images", "abstract": "Recent work in Neural Fields (NFs) learn 3D representations from class-specific single view image collections. However, they are unable to reconstruct the input data preserving high-frequency details. Further, these methods do not disentangle appearance from geometry and hence are not suitable for tasks such as texture transfer and editing. In this work, we propose TEGLO (Textured EG3D-GLO) for learning 3D representations from single view in-the-wild image collections for a given class of objects. We accomplish this by training a conditional Neural Radiance Field (NeRF) without any explicit 3D supervision. We equip our method with editing capabilities by creating a dense correspondence mapping to a 2D canonical space. 
We demonstrate that such mapping enables texture transfer and texture editing without requiring meshes with shared topology. Our key insight is that by mapping the input image pixels onto the texture space we can achieve near perfect reconstruction (\u2265 74 dB PSNR at 1024\u00b2 resolution). Our formulation allows for high quality 3D consistent novel view synthesis with high-frequency details at megapixel image resolution."}, "cited_paper_content": {"title": "A Style-Based Generator Architecture For Generative Adversarial Networks", "abstract": "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture.
Finally, we introduce a new, highly varied and high-quality dataset of human faces."}, "keywords": ["single-view 3D reconstruction", "samples"], "citation_intent": "result"} {"citing_id": "2305.02247v1", "cited_id": "1509.01240", "section_title": "On-Average Stability", "citation": "In contrast to prior work #REFR that considers only uniformity with respect to the data-set, we extend the generalization error upper bounds for a general class of gradient-based algorithms (A(\u03b7_t, T)).", "text_before_citation": ["A key part of our analysis is the derivation of upper bounds by considering on-average algorithmic stability, namely (1/n) \u2211_{i=1}^{n} A_R(S) \u2212 A_R(S^{(i)}).", "Through on-average stability we show upper bounds for the generalization error by using a unified analysis among all algorithms A_R within A(\u03b7_t, T).", "Specifically, we develop a uniform stability analysis for all data-sets S and selection rules R."], "text_after_citation": ["In line with standard prior work, for convex and nonconvex losses we assume that the loss is uniformly Lipschitz.", "For strongly-convex losses, the Lipschitz assumption does not hold and we instead consider a relaxed path-boundedness assumption on the loss gradients, introduced later to avoid clutter (see Section 5)."], "citing_paper_content": {"title": "Select Without Fear: Almost All Mini-Batch Schedules Generalize Optimally", "abstract": "We establish matching upper and lower generalization error bounds for mini-batch Gradient Descent (GD) training with either deterministic or stochastic, data-independent, but otherwise arbitrary batch selection rules. We consider smooth Lipschitz-convex/nonconvex/strongly-convex loss functions, and show that classical upper bounds for Stochastic GD (SGD) also hold verbatim for such arbitrary nonadaptive batch schedules, including all deterministic ones.
Further, for convex and strongly-convex losses we prove matching lower bounds directly on the generalization error uniform over the aforementioned class of batch schedules, showing that all such batch schedules generalize optimally. Lastly, for smooth (non-Lipschitz) nonconvex losses, we show that full-batch (deterministic) GD is essentially optimal, among all possible batch schedules within the considered class, including all stochastic ones."}, "cited_paper_content": {"title": "Train Faster, Generalize Better: Stability Of Stochastic Gradient Descent", "abstract": "We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting.
Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit."}, "keywords": ["gradient-based algorithms"], "citation_intent": "result"} {"citing_id": "2303.09849v1", "cited_id": "2003.07833", "section_title": "Ablation Study", "citation": "Table 3 demonstrates that (1) without stage 1 (epoch=0), our method performs even worse than TF-VAEGAN #REFR , and (2) including stage 1 in training has a positive effect, and the more epochs we pretrain at stage 1, the better results we finally achieve.", "text_before_citation": ["t-SNE Visualization.", "To further verify our method, we conduct t-SNE #OTHEREFR visualization for synthesized unseen features of our method and TF-VAEGAN #OTHEREFR on CUB.", "As shown in Figure 2 , compared to TF-VAEGAN #OTHEREFR , the synthesized features of our method are more compact and more distinct, especially in the top-left corner of the figure. Importance of stage 1.", "We take experiments to verify the importance of pretraining at stage 1.", "Table 3 shows the conventional ZSL results of our method of CUB dataset with different training epochs in stage 1."], "text_after_citation": [], "citing_paper_content": {"title": "Exploiting Semantic Attributes For Transductive Zero-Shot Learning", "abstract": "Zero-shot learning (ZSL) aims to recognize unseen classes by generalizing the relation between visual features and semantic attributes learned from the seen classes. A recent paradigm called transductive zero-shot learning further leverages unlabeled unseen data during training and has obtained impressive results. These methods always synthesize unseen features from attributes through a generative adversarial network to mitigate the bias towards seen classes. However, they neglect the semantic information in the unlabeled unseen data and thus fail to generate high-fidelity attribute-consistent unseen features. 
To address this issue, we present a novel transductive ZSL method that produces semantic attributes of the unseen data and imposes them on the generative process. In particular, we first train an attribute decoder that learns the mapping from visual features to semantic attributes. Then, from the attribute decoder, we obtain pseudo-attributes of unlabeled data and integrate them into the generative model, which helps capture the detailed differences within unseen classes so as to synthesize more discriminative features. Experiments on five standard benchmarks show that our method yields state-of-the-art results for zero-shot learning."}, "cited_paper_content": {"title": "Latent Embedding Feedback And Discriminative Features For Zero-Shot Classification", "abstract": "Zero-shot learning strives to classify unseen categories for which no data is available during training. In the generalized variant, the test samples can further belong to seen or unseen categories. The state-of-the-art relies on Generative Adversarial Networks that synthesize unseen class features by leveraging class-specific semantic embeddings. During training, they generate semantically consistent features, but discard this constraint during feature synthesis and classification. We propose to enforce semantic consistency at all stages of (generalized) zero-shot learning: training, feature synthesis and classification. We further introduce a feedback loop, from a semantic embedding decoder, that iteratively refines the generated features during both the training and feature synthesis stages. The synthesized features together with their corresponding latent embeddings from the decoder are transformed into discriminative features and utilized during classification to reduce ambiguities among categories. 
Experiments on (generalized) zero-shot learning for object and action classification reveal the benefit of semantic consistency and iterative feedback for GAN-based networks, outperforming existing methods on six zero-shot learning benchmarks."}, "keywords": ["TF-VAEGAN", "training"], "citation_intent": "result"} {"citing_id": "2304.12424v1", "cited_id": "1911.08748", "section_title": "I. Introduction", "citation": "Matching the pathology of new patients with already diagnosed and curated cases offers pathologists a new approach to improve diagnostic accuracy through visual inspection of similar cases and a computational majority vote for consensus building #REFR .", "text_before_citation": ["The emergence of digital pathology (DP) has opened new horizons for histopathology #OTHEREFR .", "Machine Learning (ML) algorithms are able to operate on digitized slides to assist pathologists with different tasks.", "Whereas ML-involving classification and segmentation methods have obvious benefits for image analysis, image search represents a fundamental shift in computational pathology #OTHEREFR ."], "text_after_citation": ["Pathologists examine tissue slides under a microscope on a regular basis and write diagnostic and prognostic reports based on their visual inspections.", "The use of immunologic research methodologies in histopathology has resulted in a significant improvement in neoplasm microscopic diagnosis #OTHEREFR .", "Immunohistochemistry (IHC) has become a formidable tool at the pathologist's disposal, despite the fact that histological analysis of hematoxylin and eosin (H&E) stained tissue sections remain at the core of the discipline #OTHEREFR - #OTHEREFR .", "IHC is a technique for detecting specific antigens (proteins) in tissue slices using labeled antibodies that bind with the antigen #OTHEREFR .", "The purpose of staining is to draw attention to the area of interest while also providing contrast against the 'background'."], "citing_paper_content": 
{"title": "Immunohistochemistry Biomarkers-Guided Image Search For Histopathology", "abstract": "Medical practitioners use a number of diagnostic tests to make a reliable diagnosis. Traditionally, Haematoxylin and Eosin (H&E) stained glass slides have been used for cancer diagnosis and tumor detection. However, recently a variety of immunohistochemistry (IHC) stained slides can be requested by pathologists to examine and confirm diagnoses for determining the subtype of a tumor when this is difficult using H&E slides only. Deep learning (DL) has received a lot of interest recently for image search engines to extract features from tissue regions, which may or may not be the target region for diagnosis. This approach generally fails to capture high-level patterns corresponding to the malignant or abnormal content of histopathology images. In this work, we are proposing a targeted image search approach, inspired by the pathologists' workflow, which may use information from multiple IHC biomarker images when available. These IHC images could be aligned, filtered, and merged together to generate a composite biomarker image (CBI) that could eventually be used to generate an attention map to guide the search engine for localized search. In our experiments, we observed that an IHC-guided image search engine can retrieve relevant data more accurately than a conventional (i.e., H&E-only) search engine without IHC guidance. Moreover, such engines are also able to accurately conclude the subtypes through majority votes."}, "cited_paper_content": {"title": "Yottixel -- An Image Search Engine For Large Archives Of Histopathology Whole Slide Images", "abstract": "With the emergence of digital pathology, searching for similar images in large archives has gained considerable attention. Image retrieval can provide pathologists with unprecedented access to the evidence embodied in already diagnosed and treated cases from the past. 
This paper proposes a search engine specialized for digital pathology, called Yottixel, a portmanteau for \"one yotta pixel,\" alluding to the big-data nature of histopathology images. The most impressive characteristic of Yottixel is its ability to represent whole slide images (WSIs) in a compact manner. Yottixel can perform millions of searches in real-time with a high search accuracy and low storage profile. Yottixel uses an intelligent indexing algorithm capable of representing WSIs with a mosaic of patches by converting them into a small number of methodically extracted barcodes, called \"Bunch of Barcodes\" (BoB), the most prominent performance enabler of Yottixel. The performance of the prototype platform is qualitatively tested using 300 WSIs from the University of Pittsburgh Medical Center (UPMC) and 2,020 WSIs from The Cancer Genome Atlas Program (TCGA) provided by the National Cancer Institute. Both datasets amount to more than 4,000,000 patches of 1000x1000 pixels. We report three sets of experiments that show that Yottixel can accurately retrieve organs and malignancies, and its semantic ordering shows good agreement with the subjective evaluation of human observers."}, "keywords": ["pathology"], "citation_intent": "method"} {"citing_id": "2303.14829v1", "cited_id": "1907.11692", "section_title": "Experimental Results And Evaluations", "citation": "We again use pre-trained roBERTa #REFR for text embedding which gives an embedding of size 768 for 'determinant + subject', 'verb', 'auxiliary verb', 'determinant + object', and the whole caption.", "text_before_citation": ["We use the same train, validation, and test split as the existing methods. 
Metrics.", "For evaluation with the existing methods, we use the following set of video captioning metrics: BLEU@4 #OTHEREFR , METEOR #OTHEREFR , ROUGE-L #OTHEREFR and CIDEr #OTHEREFR .", "CIDEr #OTHEREFR is studied to be robust in the condition where the semantic meaning of the caption remains intact #OTHEREFR .", "Also, we are the first to use GPT-2 #OTHEREFR pre-trained model for measuring the Grammatical correctness Score (GS) of the captions generated by our model in comparison to state-of-the-art which we implement for the GS metric, demonstrating improved performance (see Section 4.2). Implementation.", "For text, we have used spaCy 2 with roBERTa #OTHEREFR , a version of BERT #OTHEREFR , to extract POS components along with nouns from the groundtruth captions."], "text_after_citation": ["Following the existing methods, we use InceptionResNetV2 #OTHEREFR to extract the spatial features and C3D #OTHEREFR to extract the temporal features.", "These features are projected to 512 sizes before being input into the network.", "We train for 25 epochs and use a learning rate of 0.00015, batch size 16, ADAM optimizer #OTHEREFR , 16 samples per video as well as a hidden state size of 512 for the Caption block. Our model has 76M parameters and 0.045s inference time.", "Apart from that, we use Yolov7 #OTHEREFR for extracting object features for the noun anchor.", "The whole implementation is performed using one NVIDIA GeForce RTX 3090 and PyTorch."], "citing_paper_content": {"title": "Sem-Pos: Grammatically And Semantically Correct Video Captioning", "abstract": "Generating grammatically and semantically correct captions in video captioning is a challenging task. The captions generated from the existing methods are either word-by-word that do not align with grammatical structure or miss key information from the input videos.
To address these issues, we introduce a novel global-local fusion network, with a Global-Local Fusion Block (GLFB) that encodes and fuses features from different parts of speech (POS) components with visual-spatial features. We use novel combinations of different POS components-'determinant + subject', 'auxiliary verb', 'verb', and 'determinant + object' for supervision of the POS blocks-Det + Subject, Aux Verb, Verb, and Det + Object respectively. The novel global-local fusion network together with POS blocks helps align the visual features with language description to generate grammatically and semantically correct captions. Extensive qualitative and quantitative experiments on benchmark MSVD and MSRVTT datasets demonstrate that the proposed approach generates more grammatically and semantically correct captions compared to the existing methods, achieving the new state-of-the-art. Ablations on the POS blocks and the GLFB demonstrate the impact of the contributions on the proposed method."}, "cited_paper_content": {"title": "Roberta: A Robustly Optimized Bert Pretraining Approach", "abstract": "Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements.
We release our models and code."}, "keywords": ["whole caption", "embedding"], "citation_intent": "method"} {"citing_id": "2304.03571v1", "cited_id": "1511.07289", "section_title": "\u0392-Vae Implementation Details", "citation": "The chosen activation function is the exponential linear unit (ELU) #REFR for all layers except the last, where the activation is linear.", "text_before_citation": ["During training, the distributions are sampled to generate inputs to the decoder network, while only the \u00b5 values are used to encode the time series used later by the predictor.", "The decoder model is designed as an almost symmetric network to the encoder.", "Latent-space samples are fed into a fully-connected layer, and its output is reshaped to the same shape as the last convolutional layer in the encoder.", "Six transposed convolution layers are then used to increase the spatial dimension with decreasing filters.", "A final transposed convolution layer with two filters produces the two output channels."], "text_after_citation": [], "citing_paper_content": {"title": "\u0392-Variational Autoencoders And Transformers For Reduced-Order Modelling Of Fluid Flows", "abstract": "Variational autoencoder (VAE) architectures have the potential to develop reduced-order models (ROMs) for chaotic fluid flows. We propose a method for learning compact and near-orthogonal ROMs using a combination of a \u03b2-VAE and a transformer, tested on numerical data from a two-dimensional viscous flow in both periodic and chaotic regimes. The \u03b2-VAE is trained to learn a compact latent representation of the flow velocity, and the transformer is trained to predict the temporal dynamics in latent space. Using the \u03b2-VAE to learn disentangled representations in latent space, we obtain a more interpretable flow model with features that resemble those observed in the proper orthogonal decomposition, but with a more efficient representation.
Using Poincar\u00e9 maps, the results show that our method can capture the underlying dynamics of the flow outperforming other prediction models. The proposed method has potential applications in other fields such as weather forecasting, structural dynamics or biomedical engineering."}, "cited_paper_content": {"title": "Fast And Accurate Deep Network Learning By Exponential Linear Units (Elus)", "abstract": "We introduce the \"exponential linear unit\" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks.
ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network."}, "keywords": ["chosen activation function", "exponential linear unit"], "citation_intent": "method"} {"citing_id": "2304.09965v1", "cited_id": "1902.07288", "section_title": "(3)", "citation": "To this end, we don't need to include the case where m = 0 in the boundary condition in #REFR .", "text_before_citation": ["It is worth pointing out that when an honest miner mines a block and then the length of the authentic branch grows to L, the length of the counterfeit branch cannot be the same as that of the authentic branch.", "This is because if so, then before the honest miner mines a block, the counterfeit branch has already grown to L blocks, and hence, the counterfeit branch has already been longer than the authentic branch which implies that we have already reached the boundary condition in (2) before the honest miner mines a block."], "text_after_citation": ["By employing (1), #OTHEREFR and (3), the pursuit of the closed-form expression for P_L(m, n) can be cast as a two-sided boundary hitting problem for a two-dimensional random walk with two possible moving directions, which is illustrated in Fig.
4 .", "To be specific, let s_T = [m, n]^T + \u2211_{t=1}^{T} \u03b4_t denote the position of the random walk in two-dimensional space after the T-th step (i.e., after T blocks have been mined in the blockchain), where for any t, \u03b4_t is a random vector \u03b4_t = \u03b4 #OTHEREFR [\u22121, 0]", "T with probability I, \u03b4 #OTHEREFR", "EQUATION", "Let"], "citing_paper_content": {"title": "Vulnerability Of Finitely-Long Blockchains In Securing Data", "abstract": "Recently, blockchain has been applied in various fields to secure data exchanges and storage in decentralized systems. In a blockchain application where the task of the application which makes use of the data stored in a blockchain has to be accomplished by a time instant, the employed blockchain is essentially finitely-long. In this paper, we consider a general finitely-long blockchain model which is generalized from most existing works on finitely-long blockchain applications, and take the first step towards characterizing the vulnerability of finitely-long blockchains in securing data against double-spending attacks. For the first time, we develop a general closed-form expression for the probability of success in launching a double-spending attack on a finitely-long blockchain. This probability essentially characterizes the vulnerability of finitely-long blockchains. Then, we prove that the probability of success in launching a double-spending attack on a finitely-long blockchain is no greater than that on an infinitely-long blockchain, which implies that finitely-long blockchains are less vulnerable to double-spending attacks than infinitely-long blockchains. Moreover, we show that unlike infinitely-long blockchains which can be surely paralyzed by a 51% attack, finitely-long blockchains are more resistant to 51% attacks.
Index Terms-Finitely-long blockchain, double-spending attack, proof-of-work, 51% attack."}, "cited_paper_content": {"title": "Secure Distributed Dynamic State Estimation In Wide-Area Smart Grids", "abstract": "Smart grid is a large complex network with a myriad of vulnerabilities, usually operated in adversarial settings and regulated based on estimated system states. In this study, we propose a novel highly secure distributed dynamic state estimation mechanism for wide-area (multi-area) smart grids, composed of geographically separated subregions, each supervised by a local control center. We first propose a distributed state estimator assuming regular system operation that achieves near-optimal performance based on the local Kalman filters and with the exchange of necessary information between local centers. To enhance the security, we further propose to 1) protect the network database and the network communication channels against attacks and data manipulations via a blockchain (BC)-based system design, where the BC operates on the peer-to-peer network of local centers, 2) locally detect the measurement anomalies in real-time to eliminate their effects on the state estimation process, and 3) detect misbehaving (hacked/faulty) local centers in real-time via a distributed trust management scheme over the network. We provide theoretical guarantees regarding the false alarm rates of the proposed detection schemes, where the false alarms can be easily controlled. 
Numerical studies illustrate that the proposed mechanism offers reliable state estimation under regular system operation, timely and accurate detection of anomalies, and good state recovery performance in case of anomalies."}, "keywords": ["case"], "citation_intent": "background"} {"citing_id": "2303.10622v2", "cited_id": "2002.09006", "section_title": "Orthogonal Multiplicity.", "citation": "Indeed, Harase #REFR obtained Tausworthe generators over F 2 with t-value two or three for s = 3, but they were not optimal with respect to the t-value.", "text_before_citation": ["#OTHEREFR ), have the t-value zero for s = 3 only if the period length is exactly three.", "Their proof was specialized for the case F 2 ; for example, they used the property", "L m \u2229 U m = {I} in [16, Proof of Theorem 1],", "where L m and U m denote a set of non-singular m \u00d7 m lower-triangular and upper-triangular matrices, respectively.", "This is false in the fields F b except for F 2 ."], "text_after_citation": ["Thus, we conducted a search over F b , whose restrictions are looser than those over F 2 ."], "citing_paper_content": {"title": "A Generalization Of Short-Period Tausworthe Generators And Its Application To Markov Chain Quasi-Monte Carlo", "abstract": "A one-dimensional sequence u 0 , u 1 , u 2 ,. .. \u2208 [0, 1) is said to be completely uniformly distributed (CUD) if overlapping s-blocks (u i , u i+1 ,. .. , u i+s\u22121), i = 0, 1, 2,. . ., are uniformly distributed for every dimension s \u2265 1. This concept naturally arises in Markov chain quasi-Monte Carlo (QMC). However, the definition of CUD sequences is not constructive, and thus there remains the problem of how to implement the Markov chain QMC algorithm in practice. 
Harase (2021) focused on the t-value, which is a measure of uniformity widely used in the study of QMC, and implemented short-period Tausworthe generators (i.e., linear feedback shift register generators) over the two-element field F 2 that approximate CUD sequences by running for the entire period. In this paper, we generalize a search algorithm over F 2 to that over arbitrary finite fields F b with b elements and conduct a search for Tausworthe generators over F b with t-values zero (i.e., optimal) for dimension s = 3 and small for s \u2265 4, especially in the case where b = 3, 4, and 5. We provide a parameter table of Tausworthe generators over F 4 , and report a comparison between our new generators over F 4 and existing generators over F 2 in numerical examples using Markov chain QMC."}, "cited_paper_content": {"title": "A Table Of Short-Period Tausworthe Generators For Markov Chain Quasi-Monte Carlo", "abstract": "We consider the problem of estimating expectations by using Markov chain Monte Carlo methods and improving the accuracy by replacing IID uniform random points with quasi-Monte Carlo (QMC) points. Recently, it has been shown that Markov chain QMC remains consistent when the driving sequences are completely uniformly distributed (CUD). However, the definition of CUD sequences is not constructive, so an implementation method using short-period Tausworthe generators (i.e., linear feedback shift register generators over the two-element field) that approximate CUD sequences has been proposed. In this paper, we conduct an exhaustive search of short-period Tausworthe generators for Markov chain QMC in terms of the $t$-value, which is a criterion of uniformity widely used in the study of QMC methods.
We provide a parameter table of Tausworthe generators and show the effectiveness in a numerical example using Gibbs sampling."}, "keywords": ["Tausworthe generators"], "citation_intent": "background"} {"citing_id": "2304.02539v1", "cited_id": "1906.02530", "section_title": "Datasets:", "citation": "In the literature, there exist many further evaluation scores, particularly for assessing probability calibration #REFR .", "text_before_citation": ["Moreover, NLL is a proper scoring rule #OTHEREFR such that the best score corresponds to a perfect prediction.", "Brier score (BS, \u2193), proposed by Brier (1950), is another proper scoring rule, which measures the squared error between predicted probability vectors and one-hot encoded target vectors:", "EQUATION", "AP-BS(X, y, Z, \\hat{p}_{\u03b8,\u03c9})", "EQUATION"], "text_after_citation": ["As a comprehensive evaluation of probabilities is beyond this article's scope, we focus on proper scoring rules inducing calibration measures.", "Accordingly, we have omitted other evaluation scores, such as the expected calibration error (Naeini et al., 2015) being a non-proper scoring rule.", "Multi-annotator supervised learning techniques: By default, we train MaDL via the weighted loss function in Eq.", "25 using the hyperparameter values from Section 4 and the most general architecture depicted by Fig. 3 .", "In addition to ablations as part of analyzing the three RQs, we present a detailed ablation study on the hyperparameters of MaDL in Appendix A."], "citing_paper_content": {"title": "Multi-Annotator Deep Learning: A Probabilistic Framework For Classification", "abstract": "Solving complex classification tasks using deep neural networks typically requires large amounts of annotated data. However, corresponding class labels are noisy when provided by error-prone annotators, e.g., crowd workers. Training standard deep neural networks leads to subpar performances in such multi-annotator supervised learning settings.
We address this issue by presenting a probabilistic training framework named multi-annotator deep learning (MaDL). A ground truth and an annotator performance model are jointly trained in an end-to-end learning approach. The ground truth model learns to predict instances' true class labels, while the annotator performance model infers probabilistic estimates of annotators' performances. A modular network architecture enables us to make varying assumptions regarding annotators' performances, e.g., an optional class or instance dependency. Further, we learn annotator embeddings to estimate annotators' densities within a latent space as proxies of their potentially correlated annotations. Together with a weighted loss function, we improve the learning from correlated annotation patterns. In a comprehensive evaluation, we examine three research questions about multi-annotator supervised learning. Our findings indicate MaDL's state-of-the-art performance and robustness against many correlated, spamming annotators."}, "cited_paper_content": {"title": "Can You Trust Your Model'S Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift", "abstract": "Modern machine learning methods including deep learning have achieved great success in predictive accuracy for supervised learning tasks, but may still fall short in giving useful estimates of their predictive uncertainty. Quantifying uncertainty is especially critical in real-world settings, which often involve input distributions that are shifted from the training distribution due to a variety of factors including sample bias and non-stationarity. In such settings, well calibrated uncertainty estimates convey information about when a model's output should (or should not) be trusted. 
Many probabilistic deep learning methods, including Bayesian and non-Bayesian methods, have been proposed in the literature for quantifying predictive uncertainty, but to our knowledge there has not previously been a rigorous large-scale empirical comparison of these methods under dataset shift. We present a large-scale benchmark of existing state-of-the-art methods on classification problems and investigate the effect of dataset shift on accuracy and calibration. We find that traditional post-hoc calibration does indeed fall short, as do several other previous methods. However, some methods that marginalize over models give surprisingly strong results across a broad spectrum of tasks."}, "keywords": ["probability calibration"], "citation_intent": "background"} {"citing_id": "2303.16109v1", "cited_id": "1706.03762", "section_title": "I. Introduction", "citation": "The framework also employs state-of-the-art transformer neural networks #REFR augmented by manoeuvre-specific heads to predict multiple trajectories conditioned on the predicted manoeuvre vectors.", "text_before_citation": ["To address the aforementioned limitations, we propose a novel Multimodal Manoeuvre and Trajectory Prediction (MMnTP) framework.", "Firstly, we propose a bespoke formulation of manoeuvre prediction based on a vector representation of manoeuvres.", "This representation includes a sequence of manoeuvre types and transition times between those during the prediction window.", "To increase the plausibility of the predictions, constraints are introduced on the manoeuvre types and the number of allowed manoeuvre changes within the prediction horizon.", "We then propose a multimodal discriminative manoeuvre prediction model using this new formulation."], "text_after_citation": ["To train the model, a novel multimodal manoeuvre prediction loss function and a mode selection method are proposed based on the predicted types and timings of manoeuvres.", "The proposed framework is evaluated in
highway driving scenarios using two public trajectory datasets, namely NGSIM #OTHEREFR and highD #OTHEREFR . Our contributions can be summarised as follows:", "\u2022 A bespoke formulation of manoeuvre prediction, which allows estimating a sequence of manoeuvre types and transition times between them.", "\u2022 A novel transformer-based model to predict multimodal manoeuvres and their corresponding trajectories.", "\u2022 A tailored multimodal training method using a new multimodal manoeuvre loss function and mode selection method."], "citing_paper_content": {"title": "Multimodal Manoeuvre And Trajectory Prediction For Autonomous Vehicles Using Transformer Networks", "abstract": "Predicting the behaviour (i.e. manoeuvre/trajectory) of other road users, including vehicles, is critical for the safe and efficient operation of autonomous vehicles (AVs), a.k.a. automated driving systems (ADSs). Due to the uncertain future behaviour of vehicles, multiple future behaviour modes are often plausible for a vehicle in a given driving scene. Therefore, multimodal prediction can provide richer information than single-mode prediction enabling AVs to perform a better risk assessment. To this end, we propose a novel multimodal prediction framework that can predict multiple plausible behaviour modes and their likelihoods. The proposed framework includes a bespoke problem formulation for manoeuvre prediction, a novel transformer-based prediction model, and a tailored training method for multimodal manoeuvre and trajectory prediction. The performance of the framework is evaluated using two public benchmark highway driving datasets, namely NGSIM and highD. 
The results show that the proposed framework outperforms the state-of-the-art multimodal methods in the literature in terms of prediction error and is capable of predicting plausible manoeuvre and trajectory modes."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature.
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["predicted manoeuvre vectors"], "citation_intent": "method"} {"citing_id": "2304.05497v1", "cited_id": "0911.0460", "section_title": "Related Work", "citation": "In contrast, we show that combining one specialized expert with the generic knowledge base model with simple ensemble methods such as averaging or linear stacking #REFR is generally more efficient than ensembling multiple specialized experts.", "text_before_citation": ["A follow-up line of thought extracts such information from a pretrained classifier #OTHEREFR , or even learns the optimal taxonomy jointly with the image representations #OTHEREFR .", "Such models have been shown to improve the efficiency/accuracy trade-off in classification tasks.", "However, this class-based routing is a limiting assumption, and per-sample routing has been shown to outperform hierarchical classification models when correctly parametrized #OTHEREFR .", "Finally, MoE can be seen as an ensembling technique whose weights are learned by the gate.", "While it is common to assume each sample is routed to a unique expert to maximize efficiency, some works #OTHEREFR have considered combining several experts to boost accuracy."], "text_after_citation": [], "citing_paper_content": {"title": "Revisiting Single-Gated Mixtures Of Experts", "abstract": "Mixture of Experts (MoE) are rising in popularity as a means to train extremely large-scale models, yet allowing for a reasonable computational cost at inference time. Recent state-of-the-art approaches usually assume a large number of experts, and require training all experts jointly, which often lead to training instabilities such as the router collapsing. In contrast, in this work, we propose to revisit simple single-gate MoE, which allows for more practical training. 
Key to our work are (i) a base model branch acting both as an early-exit and an ensembling regularization scheme, (ii) a simple and efficient asynchronous training pipeline without router collapse issues, and finally (iii) an automatic per-sample clustering-based initialization. We show experimentally that the proposed model obtains efficiency-to-accuracy trade-offs comparable with other more complex MoE, and outperforms non-mixture baselines. This showcases the merits of even a simple single-gate MoE, and motivates further exploration in this area."}, "cited_paper_content": {"title": "Feature-Weighted Linear Stacking", "abstract": "Ensemble methods, such as stacking, are designed to boost predictive accuracy by blending the predictions of multiple machine learning models. Recent work has shown that the use of meta-features, additional inputs describing each example in a dataset, can boost the performance of ensemble methods, but the greatest reported gains have come from nonlinear procedures requiring significant tuning and training time. Here, we present a linear technique, Feature-Weighted Linear Stacking (FWLS), that incorporates meta-features for improved accuracy while retaining the well-known virtues of linear regression regarding speed, stability, and interpretability. FWLS combines model predictions linearly using coefficients that are themselves linear functions of meta-features. This technique was a key facet of the solution of the second place team in the recently concluded Netflix Prize competition. 
Significant increases in accuracy over standard linear stacking are demonstrated on the Netflix Prize collaborative filtering dataset."}, "keywords": ["multiple specialized experts", "simple ensemble methods"], "citation_intent": "method"} {"citing_id": "2303.08566v1", "cited_id": "1906.10771", "section_title": "Algorithm 1 Computing Task-Specific Parameter Sensitivities", "citation": "Note that although our criterion draws inspiration from pruning work #REFR , it is distinct from it.", "text_before_citation": ["s_n \u2248 \u03b5 g_n^2,", "where \u03b5 is the learning rate.", "Since \u03b5 is the same for all parameters, we can eliminate it when comparing the sensitivity with the other parameters and finally get", "EQUATION", "Therefore, the sensitivity of a parameter can be efficiently measured by its potential to reduce the loss on the target domain."], "text_after_citation": ["#OTHEREFR measures the parameter importance by the squared change in loss when removing them, i.e., (E(D_t, w) \u2212 E(D_t, w | w_n = 0))^2 and finally derives the parameter importance by (g_n w_n)^2, which is different from our formulations in Eqs. #OTHEREFR and #OTHEREFR .", "In practice, we accumulate S from a total number of C training samples ahead of fine-tuning to generate accurate sensitivity as shown in Algorithm 1, where C is a predefined hyper-parameter.", "In Section 4.3, we show that employing only 400 training samples is sufficient for getting reasonable parameter sensitivity, which requires only 5.5 seconds with a single GPU for any VTAB-1k dataset with ViT-B/16 backbone #OTHEREFR ."], "citing_paper_content": {"title": "Sensitivity-Aware Visual Parameter-Efficient Tuning", "abstract": "Visual Parameter-Efficient Tuning (VPET) has become a powerful alternative for full fine-tuning so as to adapt pre-trained vision models to downstream tasks, which only tunes a small number of parameters while freezing the vast majority of them to ease storage burden and optimization difficulty.
However, existing VPET methods introduce trainable parameters to the same positions across different tasks depending solely on human heuristics and neglect the domain gaps. To this end, we study where to introduce and how to allocate trainable parameters by proposing a novel Sensitivity-aware visual Parameter-efficient Tuning (SPT) scheme, which adaptively allocates trainable parameters to task-specific important positions given a desired tunable parameter budget. Specifically, our SPT first quickly identifies the sensitive parameters that require tuning for a given task in a data-dependent way. Next, our SPT further boosts the representational capability for the weight matrices whose number of sensitive parameters exceeds a pre-defined threshold by utilizing any of the existing structured tuning methods, e.g., LoRA [27] or Adapter [26], to replace directly tuning the selected sensitive parameters (unstructured tuning) under the budget. Extensive experiments on a wide range of downstream recognition tasks show that our SPT is complementary to the existing VPET methods and largely boosts their performance, e.g., SPT improves Adapter with supervised pre-trained ViT-B/16 backbone by 4.2% and 1.4% mean Top-1 accuracy, reaching SOTA performance on FGVC and VTAB-1k benchmarks, respectively. Source code is at https://github.com/ziplab/SPT."}, "cited_paper_content": {"title": "Importance Estimation For Neural Network Pruning", "abstract": "Structural pruning of neural network parameters reduces computational, energy, and memory transfer costs during inference. We propose a novel method that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores. We describe two variations of our method using the first and second-order Taylor expansions to approximate a filter's contribution.
Both methods scale consistently across any network layer without requiring per-layer sensitivity analysis and can be applied to any kind of layer, including skip connections. For modern networks trained on ImageNet, we measured experimentally a high (>93%) correlation between the contribution computed by our methods and a reliable estimate of the true importance. Pruning with the proposed methods led to an improvement over state-of-the-art in terms of accuracy, FLOPs, and parameter reduction. On ResNet-101, we achieve a 40% FLOPS reduction by removing 30% of the parameters, with a loss of 0.02% in the top-1 accuracy on ImageNet."}, "keywords": ["pruning work"], "citation_intent": "background"} {"citing_id": "2303.03075v1", "cited_id": "1905.09604", "section_title": "Instances Of The Redistribution Mechanism Framework", "citation": "The largest known set of diffusion auction mechanisms with the above properties is Critical Diffusion Mechanism (CDM) #REFR .", "text_before_citation": ["In our network-based redistribution mechanism framework, if we require the output mechanism to be IC and IR, then the input diffusion auction mechanism should also be IC, IR and non-deficit."], "text_after_citation": ["Especially, the first diffusion auction mechanism, Incentive Diffusion Mechanism (IDM) #OTHEREFR is also a member in CDM, which has the highest efficiency.", "In this section, we input IDM and another mechanism in CDM called Threshold Neighbourhood Mechanism (TNM) [12] into our framework to see the outcomes.", "For convenience, we briefly introduce the idea of the IDM and TNM with our notations.", "Both IDM and TNM first find the agent with the highest valuation and their critical ancestors.", "Then the mechanisms check these agents from the sponsor to the agent with the highest valuation."], "citing_paper_content": {"title": "A Redistribution Framework For Diffusion Auctions", "abstract": "Redistribution mechanism design aims to redistribute the revenue collected by a 
truthful auction back to its participants without affecting the truthfulness. We study redistribution mechanisms for diffusion auctions, which is a new trend in mechanism design [19]. The key property of a diffusion auction is that the existing participants are incentivized to invite new participants to join the auctions. Hence, when we design redistributions, we also need to maintain this incentive. Existing redistribution mechanisms in the traditional setting are targeted at modifying the payment design of a truthful mechanism, such as the Vickrey auction. In this paper, we do not focus on one specific mechanism. Instead, we propose a general framework to redistribute the revenue back for all truthful diffusion auctions for selling a single item. The framework treats the original truthful diffusion auction as a black box, and it does not affect its truthfulness. The framework can also distribute back almost all the revenue."}, "cited_paper_content": {"title": "Diffusion And Auction On Graphs", "abstract": "Auction is the common paradigm for resource allocation which is a fundamental problem in human society. Existing research indicates that the two primary objectives, the seller's revenue and the allocation efficiency, are generally conflicting in auction design. For the first time, we expand the domain of the classic auction to a social graph and formally identify a new class of auction mechanisms on graphs. All mechanisms in this class are incentive-compatible and also promote all buyers to diffuse the auction information to others, whereby both the seller's revenue and the allocation efficiency are significantly improved comparing with the Vickrey auction. It is found that the recently proposed information diffusion mechanism is an extreme case with the lowest revenue in this new class. Our work could potentially inspire a new perspective for the efficient and optimal auction design and could be applied into the prevalent online social and economic networks. 
\u00a9 2019 International Joint Conferences on Artificial Intelligence. All rights reserved."}, "keywords": ["diffusion auction mechanisms"], "citation_intent": "background"} {"citing_id": "2303.13559v1", "cited_id": "1904.04100", "section_title": "Librispeech Comparisons", "citation": "We only compare with w2v-U since it has achieved significantly better results than former GAN or HMM based unsupervised methods #REFR .", "text_before_citation": ["Our model performs better than w2v-U-L under three configurations: with or without ST and 4-gram or transformer language models.", "The absolute improvements range from 0.2% to 1.7%, and the relative improvements range from 5.1% to 12.8%.", "These results reflect that injecting instance noises of various intensities and appending diffusion timestep-dependent discriminators during adversarial training are effective.", "Table 2 reports phoneme error rates (PER) under TIMIT's matched and unmatched training data setups.", "The same 4-gram LM is used for these four models."], "text_after_citation": ["Under both matched and unmatched setups, our diffusion-GAN enhanced w2v-U models outperform w2v-U, with absolute improvements from 0.3% to 1.4% and relative improvements from 2.7% to 7.9%."], "citing_paper_content": {"title": "Enhancing Unsupervised Speech Recognition With Diffusion Gans", "abstract": "We enhance the vanilla adversarial training method for unsupervised Automatic Speech Recognition (ASR) by a diffusion-GAN. Our model (1) injects instance noises of various intensities to the generator's output and unlabeled reference text which are sampled from pretrained phoneme language models with a length constraint, (2) asks diffusion timestep-dependent discriminators to separate them, and (3) back-propagates the gradients to update the generator.
Word/phoneme error rate comparisons with wav2vec-U under Librispeech (3.1% for test-clean and 5.6% for test-other), TIMIT and MLS datasets, show that our enhancement strategies work effectively."}, "cited_paper_content": {"title": "Completely Unsupervised Phoneme Recognition By A Generative Adversarial Network Harmonized With Iteratively Refined Hidden Markov Models", "abstract": "Producing a large annotated speech corpus for training ASR systems remains difficult for more than 95% of languages all over the world which are low-resourced, but collecting a relatively big unlabeled data set for such languages is more achievable. This is why some initial efforts have been reported on completely unsupervised speech recognition learned from unlabeled data only, although with relatively high error rates. In this paper, we develop a Generative Adversarial Network (GAN) to achieve this purpose, in which a Generator and a Discriminator learn from each other iteratively to improve the performance. We further use a set of Hidden Markov Models (HMMs) iteratively refined from the machine-generated labels to work in harmony with the GAN.
The initial experiments on the TIMIT data set achieve a phone error rate of 33.1%, which is 8.5% lower than the previous state-of-the-art."}, "keywords": ["former GAN", "based unsupervised methods"], "citation_intent": "result"} {"citing_id": "2303.02160v1", "cited_id": "1603.04467", "section_title": "A.2 Training Details", "citation": "We trained all agents using Tensorflow 2.3 #REFR and the OpenAI Baselines PPO2 implementation [17] with a distributed sampler.", "text_before_citation": ["We provide important details about our training setup here.", "We train all three agents with PPO #OTHEREFR , a popular deep reinforcement learning algorithm.", "We choose PPO for training our agents for a few reasons.", "This algorithm is commonly used because it is found to be empirically robust and effective in a wide range of tasks #OTHEREFR .", "We train each of the three agent architectures for 15 hours, the equivalent of 10 million training timesteps, on at least 3 different random seeds."], "text_after_citation": ["For a full list of training hyperparameters used in all agent versions, please refer to Table 6 .", "We found this set to perform best on preliminary experiments.", "To effectively train agents in a complex video game setting, we use a distributed approach leveraging an in-house sample collection framework and Azure cloud resources.", "Training samples are collected from a scaleset of 20 low priority GPU virtual machines (Azure NV6), each running 3 video game instances.", "The samples are then sent to one training head node, a CPU-only Azure E32s memory-optimized virtual machine."], "citing_paper_content": {"title": "Navigates Like Me: Understanding How People Evaluate Human-Like Ai In Video Games", "abstract": "We aim to understand how people assess human likeness in navigation produced by people and artificially intelligent (AI) agents in a video game. To this end, we propose a novel AI agent with the goal of generating more human-like behavior.
We collect hundreds of crowd-sourced assessments comparing the human-likeness of navigation behavior generated by our agent and baseline AI agents with human-generated behavior. Our proposed agent passes a Turing Test, while the baseline agents do not. By passing a Turing Test, we mean that human judges could not quantitatively distinguish between videos of a person and an AI agent navigating. To understand what people believe constitutes human-like navigation, we extensively analyze the justifications of these assessments. This work provides insights into the characteristics that people consider human-like in the context of goal-directed video game navigation, which is a key step for further improving human interactions with AI agents."}, "cited_paper_content": {"title": "Tensorflow: Large-Scale Machine Learning On Heterogeneous Distributed Systems", "abstract": "TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery.
This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org."}, "keywords": ["Tensorflow"], "citation_intent": "method"} {"citing_id": "2305.02029v1", "cited_id": "1905.09598", "section_title": "Topic Modelling", "citation": "More sophisticated methods tend to be based on more complex algorithms such as neural networks, such as the work from #REFR which explores the use of self-organising maps (SOMs) to reduce dimensionality within the data and create an interpretable 2D map of topics.", "text_before_citation": ["Topic modelling provides an unsupervised approach to topic allocation in texts, with a broad range of complexities.", "It will be explored in sections four and five of this paper. Simple approaches such as Latent Semantic Indexing (LSI) #OTHEREFR involve the vectorisation of texts within a corpus and grouping together based on cosine similarity (an effective measure used to compare similarity of vectors #OTHEREFR ).", "Such methods are quick to implement and require few resources but come with the large disadvantage that the nature of the topic groupings remains unknown, making the results very difficult to interpret."], "text_after_citation": ["But whilst having the advantage of interpretability these methods are often computationally expensive and can tie up valuable resources within an organisation.", "Latent Dirichlet Allocation (LDA) #OTHEREFR provides a middle ground between overly simple, non-interpretable and overly complex, resource-heavy topic modelling techniques, and is one of the most commonly used methods in the field #OTHEREFR .", "LDA involves creating a latent layer of topics within a dataset where words that are likely to be found near each other within texts are grouped.", "Each text within a corpus is then
evaluated for a percentage match with each of the topics in the latent layer to allow allocation.", "One of the drawbacks with LDA is that as a statistical approach, interpretation of the topics is still required to achieve sensible results: just because words are statistically found near each other does not necessarily mean they will be considered related by a human observer."], "citing_paper_content": {"title": "Natural Language Processing On Customer Note Data", "abstract": "Automatic analysis of customer data for businesses is an area that is of interest to companies. Business to business data is studied rarely in academia due to the sensitive nature of such information. Applying natural language processing can speed up the analysis of prohibitively large sets of data. This paper addresses this subject and applies sentiment analysis, topic modelling and keyword extraction to a B2B data set. We show that accurate sentiment can be extracted from the notes automatically and the notes can be sorted by relevance into different topics. We see that without clear separation topics can lack relevance to a business context."}, "cited_paper_content": {"title": "Cuda-Self-Organizing Feature Map Based Visual Sentiment Analysis Of Bank Customer Complaints For Analytical Crm", "abstract": "With the widespread use of social media, companies now have access to a wealth of customer feedback data which has valuable applications to Customer Relationship Management (CRM). Analyzing customer grievance data is paramount, as speedy non-redressal would lead to customer churn, resulting in lower profitability. In this paper, we propose a descriptive analytics framework using Self-organizing feature map (SOM), for Visual Sentiment Analysis of customer complaints. The network learns the inherent grouping of the complaints automatically, which can then also be visualized using various techniques.
Analytical Customer Relationship Management (ACRM) executives can draw useful business insights from the maps and take timely remedial action. We also propose a high-performance version of the algorithm, CUDASOM (CUDA-based Self-Organizing feature Map), implemented using the NVIDIA parallel computing platform, CUDA, which speeds up the processing of high-dimensional text data and generates fast results. The efficacy of the proposed model has been demonstrated on the customer complaints data regarding the products and services of four leading Indian banks. CUDASOM achieved an average speed-up of 44 times. Our approach can expand research into intelligent grievance redressal systems to provide rapid solutions to complaining customers."}, "keywords": ["topics", "self-organising maps"], "citation_intent": "method"} {"citing_id": "2303.01675v1", "cited_id": "1802.04799", "section_title": "Deep Learning Compilers", "citation": "TVM #REFR is a compiler that exposes graph-level and operator-level optimizations to provide performance portability for DL workloads across diverse hardware backends.", "text_before_citation": ["HLO #OTHEREFR is a single-assignment-based intermediate representation (IR) for tensor computations in XLA.", "MLIR #OTHEREFR is a reusable and extensible compiler infrastructure that standardizes the static single assignment-based IR data structures and provides a declarative system to define IR dialects.", "Gshard #OTHEREFR and GSPMD #OTHEREFR introduce collective communication primitives on HLO IR and provide convenient APIs for sharding large models.", "Relay #OTHEREFR presents a compiler framework to unify and generalize IR in existing frameworks."], "text_after_citation": [], "citing_paper_content": {"title": "Ada-Grouper: Accelerating Pipeline Parallelism In Preempted Network By Adaptive Group-Scheduling For Micro-Batches", "abstract": "Pipeline parallelism has been demonstrated to be a remarkable approach to improve throughput for training deep
neural networks with billions of parameters over heterogeneous clusters. The 1F1B scheduling plan is a widely adopted strategy for memory and performance optimization, which interchanges the forward and backward stage computations of different micro-batches. On the other hand, a common issue in using 1F1B scheduling is that stage computation is delayed by data transfer when network resources are preempted by other tasks, even with the minimum communication between stages. Exclusive access to these network resources cannot be guaranteed in cloud offerings. We present a general scheduling technique to accommodate pipeline parallelism to preempted network environments at the expense of a certain amount of memory pressure. The core concept is to extend the 1F1B schedule scheme to kFkB, which groups k micro-batches and alternately executes k forward and backward computations. We propose Ada-Grouper, an adaptive kFkB scheduler which regularly adjusts the number of group members k to maintain an optimal balance between communication and computation efficiency in response to changes in the network environment under the memory limit. Experimental results demonstrate that our design maintains stable performance for pipeline parallelism, yielding a performance increase of 4% to 30% compared with 1F1B in preempted network scenarios.
We propose TVM, a compiler that exposes graph-level and operator-level optimizations to provide performance portability to deep learning workloads across diverse hardware back-ends. TVM solves optimization challenges specific to deep learning, such as high-level operator fusion, mapping to arbitrary hardware primitives, and memory latency hiding. It also automates optimization of low-level programs to hardware characteristics by employing a novel, learning-based cost modeling method for rapid exploration of code optimizations. Experimental results show that TVM delivers performance across hardware back-ends that is competitive with state-of-the-art, hand-tuned libraries for low-power CPU, mobile GPU, and server-class GPUs. We also demonstrate TVM's ability to target new accelerator back-ends, such as the FPGA-based generic deep learning accelerator. The system is open sourced and in production use inside several major companies."}, "keywords": ["DL workloads", "operator-level optimizations"], "citation_intent": "background"} {"citing_id": "2303.17212v1", "cited_id": "1711.09020", "section_title": "I. Is Sargan Effective For Facial Attribute Manipulation?", "citation": "For comparison, StarGAN results were obtained via the pre-trained model provided by their authors #REFR . The results in Fig.", "text_before_citation": ["We trained the proposed SARGAN model on the CelebA dataset to assess the performance of SARGAN on facial attribute manipulation."], "text_after_citation": ["13 demonstrate that StarGAN is unable to recover the true input image's overall color as well as facial details such as eye and skin colors.", "In contrast, SARGAN recovers true input image colors and facial details.", "To confirm our analysis, we compute the distance between histograms of input and StarGAN-synthesized images and input and SARGAN-synthesized images, respectively. Fig.", "13 shows that SARGAN results are closer to the input images compared to StarGAN.
Further, as shown in Fig.", "13 , Column 3, StarGAN modifies the input face's hairstyle in addition to the hair color, whereas SARGAN only modifies the hair color."], "citing_paper_content": {"title": "Sargan: Spatial Attention-Based Residuals For Facial Expression Manipulation", "abstract": "Encoder-decoder based architecture has been widely used in the generator of generative adversarial networks for facial manipulation. However, we observe that the current architecture fails to recover the input image color, rich facial details such as skin color or texture and introduces artifacts as well. In this paper, we present a novel method named SARGAN that addresses the above-mentioned limitations from three perspectives. First, we employed spatial attention-based residual blocks instead of vanilla residual blocks to properly capture the expression-related features to be changed while keeping the other features unchanged. Second, we exploited a symmetric encoder-decoder network to attend facial features at multiple scales. Third, we proposed to train the complete network with a residual connection which relieves the generator of pressure to generate the input face image, thereby producing the desired expression by directly feeding the input image towards the end of the generator. Both qualitative and quantitative experimental results show that our proposed model performs significantly better than state-of-the-art methods. In addition, existing models require much larger datasets for training but their performance degrades on out-of-distribution images. In contrast, SARGAN can be trained on smaller facial expression datasets, which generalizes well on out-of-distribution images including human photographs, portraits, avatars and statues."}, "cited_paper_content": {"title": "Stargan: Unified Generative Adversarial Networks For Multi-Domain Image-To-Image Translation", "abstract": "Recent studies have shown remarkable success in image-to-image translation for two domains.
However, existing approaches have limited scalability and robustness in handling more than two domains, since different models should be built independently for every pair of image domains. To address this limitation, we propose StarGAN, a novel and scalable approach that can perform image-to-image translations for multiple domains using only a single model. Such a unified model architecture of StarGAN allows simultaneous training of multiple datasets with different domains within a single network. This leads to StarGAN's superior quality of translated images compared to existing models as well as the novel capability of flexibly translating an input image to any desired target domain. We empirically demonstrate the effectiveness of our approach on a facial attribute transfer and a facial expression synthesis tasks."}, "keywords": ["StarGAN results"], "citation_intent": "result"} {"citing_id": "2303.17395v1", "cited_id": "1706.03741", "section_title": "B. Data Processing", "citation": "Trained using Reinforcement Learning from Human Feedback (RLHF) #REFR , ChatGPT has been shown to excel at generating human-like responses to natural language prompts, and has garnered widespread attention for its powerful understanding, reasoning, and dialogue abilities.", "text_before_citation": ["These two filtering steps have removed about 265 000 data samples from FreeSound.", "ChatGPT-based Transformation To transform the raw descriptions into audio captions, we propose that a well-formed audio caption should possess the following characteristics:", "\u2022 Be a single, accurate description of the audio content using concise syntax; \u2022 Avoid the use of named entities such as people's names, locations, and recording devices that cannot be inferred from the audio signal alone; \u2022 Exclude any subjective sound-unrelated information such as personal feelings or opinions; However, online-harvested descriptions are extremely noisy and most of them fail to meet above 
requirements, particularly those from FreeSound.", "Due to the varying characteristics of raw descriptions, it is challenging to design rules that accurately convert them into captions, and doing so would result in a high discard rate similar to what was observed in CC3M.", "To tackle the challenge of converting raw descriptions into captions, we propose using ChatGPT, a powerful large language model trained by OpenAI, to perform this task automatically."], "text_after_citation": ["By designing prompts that account for the characteristics of different data sources, ChatGPT can effectively filter out sound-unrelated information and rewrite raw descriptions into audio caption-like sentences that meet the requirements we proposed in prompts.", "This approach has the potential to significantly reduce the discard rate of raw descriptions and improve the quality of converted captions. The prompts we used are shown in Table I .", "In order to make use of ChatGPT's in-context learning ability, several transformation examples are also included in the prompts, and they are different for each data source (ignored in Table I ). These examples can significantly improve the caption quality.", "Table II presents examples of the raw descriptions and final processed captions.", "It can be observed that ChatGPT can transform non-sentence descriptions (i.e., nouns and phrases) into sentences, remove redundant information that is too specific or is not related to sound, and summarize long sentences into one-sentence high-level audio captions."], "citing_paper_content": {"title": "Wavcaps: A Chatgpt-Assisted Weakly-Labelled Audio Captioning Dataset For Audio-Language Multimodal Research", "abstract": "The advancement of audio-language (AL) multimodal learning tasks has been significant in recent years. However, researchers face challenges due to the costly and time-consuming collection process of existing audio-language datasets, which are limited in size.
To address this data scarcity issue, we introduce WavCaps, the first large-scale weakly-labelled audio captioning dataset, comprising approximately 400k audio clips with paired captions. We sourced audio clips and their raw descriptions from web sources and a sound event detection dataset. However, the online-harvested raw descriptions are highly noisy and unsuitable for direct use in tasks such as automated audio captioning. To overcome this issue, we propose a three-stage processing pipeline for filtering noisy data and generating high-quality captions, where ChatGPT, a large language model, is leveraged to filter and transform raw descriptions automatically. We conduct a comprehensive analysis of the characteristics of the WavCaps dataset and evaluate it on multiple downstream audio-language multimodal learning tasks. The systems trained on WavCaps outperform previous state-of-the-art (SOTA) models by a significant margin. Our aspiration is for the WavCaps dataset we have proposed to facilitate research in audio-language multimodal learning and demonstrate the potential of utilizing ChatGPT to enhance academic research. Our dataset and codes are available at https://github.com/XinhaoMei/WavCaps."}, "cited_paper_content": {"title": "Deep Reinforcement Learning From Human Preferences", "abstract": "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems.
To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback."}, "keywords": ["natural language prompts", "Reinforcement Learning"], "citation_intent": "method"} {"citing_id": "2303.10610v1", "cited_id": "1706.02262", "section_title": "Condition-Specific Mmd Regularization", "citation": "Inspired by InfoVAE #REFR , we introduce an additional pair of condition-specific MMD regularization losses to learn mutual information between the sampled noise distribution and the Gaussian distribution.", "text_before_citation": ["Maximum-Mean Discrepancy (MMD) quantifies the similarity between two distributions by comparing all of their moments #OTHEREFR . It can be efficiently implemented using a kernel trick."], "text_after_citation": ["To be specific, we sample the noisy variable y_t^g from the diffusion process at time step t conditioned only on the global prior and then compute an MMD regularization loss as:", "EQUATION", "where K(\u2022, \u2022) is a positive definite kernel to reproduce distributions in the Hilbert space.", "The condition-specific MMD regularization is also applied on the local prior, as shown in Figure 1 (a) .", "While the general noise estimation loss L captures the complementary information from both priors, the condition-specific MMD regularization maintains the mutual information between each prior and target distribution."], "citing_paper_content": {"title": "Diffmic: Dual-Guidance Diffusion Network For Medical Image Classification", "abstract": "Diffusion Probabilistic Models have recently shown remarkable performance in generative image modeling, attracting significant attention in the computer vision community.
However, while a substantial amount of diffusion-based research has focused on generative tasks, few studies have applied diffusion models to general medical image classification. In this paper, we propose the first diffusion-based model (named DiffMIC) to address general medical image classification by eliminating unexpected noise and perturbations in medical images and robustly capturing semantic representation. To achieve this goal, we devise a dual conditional guidance strategy that conditions each diffusion step with multiple granularities to improve step-wise regional attention. Furthermore, we propose learning the mutual information in each granularity by enforcing Maximum-Mean Discrepancy regularization during the diffusion forward process. We evaluate the effectiveness of our DiffMIC on three medical classification tasks with different image modalities, including placental maturity grading on ultrasound images, skin lesion classification using dermatoscopic images, and diabetic retinopathy grading using fundus images. Our experimental results demonstrate that DiffMIC outperforms state-of-the-art methods by a significant margin, indicating the universality and effectiveness of the proposed model."}, "cited_paper_content": {"title": "Infovae: Information Maximizing Variational Autoencoders", "abstract": "It has been previously observed that variational autoencoders tend to ignore the latent code when combined with a decoding distribution that is too flexible. This undermines the purpose of unsupervised representation learning. We identify the reason for this short-coming in the regularization term used in the ELBO criterion to match the variational posterior to the latent prior distribution. We show that removing this regularization term leads to a model that can still discover meaningful latent features. Even though ancestral sampling is no longer tractable, sampling is possible using a Markov chain. 
Furthermore, we propose a class of training criteria that use alternative divergences for the regularization term, generalizing the standard ELBO which employs KL divergence. These models can discover meaningful latent features and allow for tractable ancestral sampling. In particular, we propose an alternative based on Maximum Mean Discrepancy (MMD) that is simple to implement, robust, and has similar or better performance in every quantitative and qualitative metric we experimented on."}, "keywords": ["condition-specific MMD regularization"], "citation_intent": "method"} {"citing_id": "2303.01112v1", "cited_id": "1707.02968", "section_title": "Introduction", "citation": "The accuracy of vision transformers exceeds that of convolutional neural networks by a considerable margin when the model is pre-trained on huge datasets, such as JFT-300M #REFR .", "text_before_citation": ["Vision transformers #OTHEREFR have made a significant impact on the entire field of computer vision, and state-of-the-art models in classification #OTHEREFR , object detection #OTHEREFR , and segmentation #OTHEREFR are now based on vision transformers."], "text_after_citation": ["However, the JFT-300M dataset contains 300M images and 375M labels.", "It is impossible to manually label all of these images #OTHEREFR .", "Efforts to automatically label such datasets are still not as accurate as manual labeling.", "Self-supervised learning (SSL) is increasing in popularity, as datasets do not need to be labeled for this mode of training #OTHEREFR .", "Although SSL removes the burden of labeling large datasets, the effort to collect/download, store, and load these large datasets remains a challenge."], "citing_paper_content": {"title": "Visual Atoms: Pre-Training Vision Transformers With Sinusoidal Waves", "abstract": "Formula-driven supervised learning (FDSL) has been shown to be an effective method for pre-training vision transformers, where ExFractalDB-21k was shown to exceed the pre-training
effect of ImageNet-21k. These studies also indicate that contours mattered more than textures when pre-training vision transformers. However, the lack of a systematic investigation as to why these contour-oriented synthetic datasets can achieve the same accuracy as real datasets leaves much room for skepticism. In the present work, we develop a novel methodology based on circular harmonics for systematically investigating the design space of contour-oriented synthetic datasets. This allows us to efficiently search the optimal range of FDSL parameters and maximize the variety of synthetic images in the dataset, which we found to be a critical factor. When the resulting new dataset VisualAtom-21k is used for pre-training ViT-Base, the top-1 accuracy reached 83.7% when fine-tuning on ImageNet-1k. This is close to the top-1 accuracy (84.2%) achieved by JFT-300M pre-training, while the number of images is 1/14. Unlike JFT-300M which is a static dataset, the quality of synthetic datasets will continue to improve, and the current work is a testament to this possibility. FDSL is also free of the common issues associated with real images, e.g. privacy/copyright issues, labeling costs/errors, and ethical biases."}, "cited_paper_content": {"title": "Revisiting Unreasonable Effectiveness Of Data In Deep Learning Era", "abstract": "The success of deep learning in vision can be attributed to: (a) models with high capacity; (b) increased computational power; and (c) availability of large-scale labeled data. Since 2012, there have been significant advances in representation capabilities of the models and computational capabilities of GPUs. But the size of the biggest dataset has surprisingly remained constant. What will happen if we increase the dataset size by 10x or 100x? This paper takes a step towards clearing the clouds of mystery surrounding the relationship between `enormous data' and visual deep learning. 
By exploiting the JFT-300M dataset which has more than 375M noisy labels for 300M images, we investigate how the performance of current vision tasks would change if this data was used for representation learning. Our paper delivers some surprising (and some expected) findings. First, we find that the performance on vision tasks increases logarithmically based on volume of training data size. Second, we show that representation learning (or pre-training) still holds a lot of promise. One can improve performance on many vision tasks by just training a better base model. Finally, as expected, we present new state-of-the-art results for different vision tasks including image classification, object detection, semantic segmentation and human pose estimation. Our sincere hope is that this inspires vision community to not undervalue the data and develop collective efforts in building larger datasets."}, "keywords": ["vision transformers", "huge datasets"], "citation_intent": "background"} {"citing_id": "2303.10435v1", "cited_id": "1509.07009", "section_title": "Results And Findings", "citation": "The results are still significantly inferior to that with the original high-resolution images #REFR .", "text_before_citation": ["Statistical analysis suggests that when image resolution is below 20 \u00d7 20 pixels, superresolution techniques can significantly improve human recognition performance on both activity recognition and privacy recognition tasks.", "But it is worth noting that the improvement in recognition performance brought about by super-resolution technology is still less than that brought about by increasing the resolution itself.", "Such a finding reveals that super-resolution techniques do not provide enough additional information for humans to enhance their perception ability in both activity recognition and visual privacy awareness tasks.", "In terms of the impact of the super-resolution technique on the machine's recognition performance, researchers have 
proved that super-resolution can slightly facilitate vision-based recognition tasks such as activity recognition #OTHEREFR , object and text recognition #OTHEREFR ."], "text_after_citation": ["In conclusion, the additional visual information introduced by the image super-resolution technique is insufficient to overcome the effect of resolution on the recognition performance of humans and machines.", "Therefore, we believe that the effects of image resolution on human (section 6) and the machine's (section 7) ADLs and visual privacy recognition performance are robust against image super-resolution techniques."], "citing_paper_content": {"title": "Modeling The Trade-Off Of Privacy Preservation And Activity Recognition On Low-Resolution Images", "abstract": "A computer vision system using low-resolution image sensors can provide intelligent services (e.g., activity recognition) but preserve unnecessary visual privacy information from the hardware level. However, preserving visual privacy and enabling accurate machine recognition have adversarial needs on image resolution. Modeling the trade-off of privacy preservation and machine recognition"}, "cited_paper_content": {"title": "Is Image Super-Resolution Helpful For Other Vision Tasks?", "abstract": "Despite the great advances made in the field of image super-resolution (ISR) during the last years, the performance has merely been evaluated perceptually. Thus, it is still unclear whether ISR is helpful for other vision tasks. In this paper, we present the first comprehensive study and analysis of the usefulness of ISR for other vision applications. In particular, six ISR methods are evaluated on four popular vision tasks, namely edge detection, semantic image segmentation, digit recognition, and scene recognition.
We show that applying ISR to input images of other vision systems does improve their performance when the input images are of low resolution. We also study the correlation between four standard perceptual evaluation criteria (namely PSNR, SSIM, IFC, and NQM) and the usefulness of ISR to the vision tasks. Experiments show that they correlate well with each other in general, but perceptual criteria are still not accurate enough to be used as full proxies for the usefulness. We hope this work will inspire the community to evaluate ISR methods also in real vision applications, and to adopt ISR as a pre-processing step of other vision tasks if the resolution of their input images is low."}, "keywords": ["original high-resolution images"], "citation_intent": "result"} {"citing_id": "2303.15414v1", "cited_id": "1703.00443", "section_title": "Differentiable Graph Matching Layer", "citation": "In our implementation, we adopt the qpth library #REFR to build the graph matching module.", "text_before_citation": ["After enhancing the vertex features and constructing the edge features on graphs G_D and G_T , we meet the core component of our method: the differentiable graph matching layer.
By optimizing the QP in Eq.", "6 from the quadratic affinity matrix M and vertex affinity matrix B, we can derive the optimal matching score vector x and reshape it back to the shape n_d \u00d7 n_t to get the matching score map X.", "Since we finally formulate the graph matching problem as a QP, we can construct the graph matching module as a differentiable QP layer in our neural network.", "Since the KKT conditions are the necessary and sufficient conditions for the optimal solution x* and its dual variables, we can derive the gradient in the backward pass of our graph matching layer based on the KKT conditions and the implicit function theorem, which is inspired by OptNet #OTHEREFR ."], "text_after_citation": ["In the inference stage, to reduce the computational cost and accelerate the algorithm, we solve the QP using the CVXPY library #OTHEREFR only for the forward operation.", "For training, we use a weighted binary cross-entropy loss:", "EQUATION", "where \u0177_i,j denotes the matching score between detection D_i and tracklet T_j , and y_i,j is the ground truth indicating whether the object belongs to the tracklet.", "k = (n_t \u2212 1) is the weight to balance the loss between positive and negative samples."], "citing_paper_content": {"title": "Learnable Graph Matching: A Practical Paradigm For Data Association", "abstract": "Data association is at the core of many computer vision tasks, e.g., multiple object tracking, image matching, and point cloud registration. Existing methods usually solve the data association problem by network flow optimization, bipartite matching, or end-to-end learning directly. Despite their popularity, we find some defects of the current solutions: they mostly ignore the intra-view context information; besides, they either train deep association models in an end-to-end way and hardly utilize the advantage of optimization-based assignment methods, or only use an off-the-shelf neural network to extract features.
In this paper, we propose a general learnable graph matching method to address these issues. Especially, we model the intra-view relationships as an undirected graph. Then data association turns into a general graph matching problem between graphs. Furthermore, to make optimization end-to-end differentiable, we relax the original graph matching problem into continuous quadratic programming and then incorporate training into a deep graph neural network with KKT conditions and implicit function theorem. In MOT task, our method achieves state-of-the-art performance on several MOT datasets. For image matching, our method outperforms state-of-the-art methods with half training data and iterations on a popular indoor dataset, ScanNet. Code will be available at https://github.com/jiaweihe1996/GMTracker."}, "cited_paper_content": {"title": "Optnet: Differentiable Optimization As A Layer In Neural Networks", "abstract": "This paper presents OptNet, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks. These layers allow complex dependencies between the hidden states to be captured that traditional convolutional and fully-connected layers are not able to capture. In this paper, we develop the foundations for such an architecture: we derive the equations to perform exact differentiation through these layers and with respect to layer parameters; we develop a highly efficient solver for these layers that exploits fast GPU-based batch solves within a primal-dual interior point method, and which provides backpropagation gradients with virtually no additional cost on top of the solve; and we highlight the application of these approaches in several problems. 
In one particularly standout example, we show that the method is capable of learning to play Sudoku given just input and output games, with no a priori information about the rules of the game; this task is virtually impossible for other neural network architectures that we have experimented with, and highlights the representation capabilities of our approach."}, "keywords": ["graph matching module", "implementation"], "citation_intent": "method"} {"citing_id": "2303.10692v1", "cited_id": "1602.01783", "section_title": "A. MARL Framework Of Interactive Image Segmentation", "citation": "The asynchronous advantage actor-critic (A3C) #REFR scheme is employed in BS-IRIS and further extended to a fully convolutional form, where agents can collaborate and communicate through convolutional layers. The algorithm is summarized in Alg. 1.", "text_before_citation": ["It is impractical to directly apply typical multi-agent learning algorithms such as #OTHEREFR .", "2) All voxel-type agents are neatly aligned in a 3D grid and dependent on each other for the segmentation task.", "It is necessary to require them to cooperate with each other.", "This is achieved by enforcing all voxel agents to share the same policy.", "When one agent explores a beneficial action, other agents will simultaneously acquire that knowledge, which also significantly reduces the number of parameters."], "text_after_citation": [], "citing_paper_content": {"title": "Boundary-Aware Supervoxel-Level Iteratively Refined Interactive 3D Image Segmentation With Multi-Agent Reinforcement Learning", "abstract": "Interactive segmentation has recently been explored to effectively and efficiently harvest high-quality segmentation masks by iteratively incorporating user hints. While iterative in nature, most existing interactive segmentation methods tend to ignore the dynamics of successive interactions and take each interaction independently.
We here propose to model iterative interactive image segmentation with a Markov decision process (MDP) and solve it with reinforcement learning (RL) where each voxel is treated as an agent. Considering the large exploration space for voxel-wise prediction and the dependence among neighboring voxels for the segmentation tasks, multi-agent reinforcement learning is adopted, where the voxel-level policy is shared among agents. Considering that boundary voxels are more important for segmentation, we further introduce a boundary-aware reward, which consists of a global reward in the form of relative cross-entropy gain, to update the policy in a constrained direction, and a boundary reward in the form of relative weight, to emphasize the correctness of boundary predictions. To combine the advantages of different types of interactions, i.e., simple and efficient for point-clicking, and stable and robust for scribbles, we propose a supervoxel-clicking based interaction design. Experimental results on four benchmark datasets have shown that the proposed method significantly outperforms the state-of-the-arts, with the advantage of fewer interactions, higher accuracy, and enhanced robustness."}, "cited_paper_content": {"title": "Asynchronous Methods For Deep Reinforcement Learning", "abstract": "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. 
Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input."}, "keywords": ["convolutional layers", "asynchronous advantage actorcritic"], "citation_intent": "method"} {"citing_id": "2304.02932v1", "cited_id": "1905.04273", "section_title": "Conclusion", "citation": "The exponential mechanism we use at the first stage is a Gumbel noise style implementation #REFR which is analyzed with RDP via a \"Bounded Range\" property for tighter bound.", "text_before_citation": ["On the attack aspect, we have proposed three knowledge graph(KG) triple inference attacks on FKGE to expose its significant privacy vulnerability.", "On the defense aspect, DP-FLames is proposed to provide rigorous differential privacy protection for FKGE, which exploits the sparse gradient property of FKGE by designing the private active gradient selection strategy.", "An adaptive privacy budget allocation policy is further incorporated to dynamically adjust defense magnitude against the unbalanced privacy risks throughout the training procedure.", "The experiment results demonstrate that the proposed defense can effectively defend against inference attacks with a modest utility decrease.", "Proof."], "text_after_citation": ["This implementation of exponential mechanism satisfies ( , 8 2 ) \u2212 .", "We adopt a PTR variant with Gaussian noise #OTHEREFR for the second stage.", "Since there is a probability bounded by that the test in PTR may fail, the algorithm satisfies -approximate-( , 2 2 )-RDP.", "Thus, the private selection without subsampling satisfiesapproximate-( , ( 8 2 + 2 2 ))-RDP by composition.", "Since the input of Algorithm 2 is computed from a subsampled batch of the original dataset(Line 11 in Algorithm 1), the privacy guarantee can be amplified through the privacy amplification theorem for RDP with subsampled mechanism #OTHEREFR . 
\u25a1"], "citing_paper_content": {"title": "Quantifying And Defending Against Privacy Threats On Federated Knowledge Graph Embedding", "abstract": "Knowledge Graph Embedding (KGE) is a fundamental technique that extracts expressive representation from knowledge graph (KG) to facilitate diverse downstream tasks. The emerging federated KGE (FKGE) collaboratively trains from distributed KGs held among clients while avoiding exchanging clients' sensitive raw KGs, which can still suffer from privacy threats as evidenced in other federated model trainings (e.g., neural networks). However, quantifying and defending against such privacy threats remain unexplored for FKGE which possesses unique properties not shared by previously studied models. In this paper, we conduct the first holistic study of the privacy threat on FKGE from both attack and defense perspectives. For the attack, we quantify the privacy threat by proposing three new inference attacks, which reveal substantial privacy risk by successfully inferring the existence of the KG triple from victim clients. For the defense, we propose DP-Flames, a novel differentially private FKGE with private selection, which offers a better privacy-utility tradeoff by exploiting the entity-binding sparse gradient property of FKGE and comes with a tight privacy accountant by incorporating the state-of-the-art private selection technique. We further propose an adaptive privacy budget allocation policy to dynamically adjust defense magnitude across the training procedure. 
Comprehensive evaluations demonstrate that the proposed defense can successfully mitigate the privacy threat by effectively reducing the success rate of inference attacks from 83.1% to 59.4% on average with only a modest utility decrease."}, "cited_paper_content": {"title": "Practical Differentially Private Top-$K$ Selection With Pay-What-You-Get Composition", "abstract": "We study the problem of top-k selection over a large domain universe subject to user-level differential privacy. Typically, the exponential mechanism or report noisy max are the algorithms used to solve this problem. However, these algorithms require querying the database for the count of each domain element. We focus on the setting where the data domain is unknown, which is different than the setting of frequent itemsets where an apriori type algorithm can help prune the space of domain elements to query. We design algorithms that ensures (approximate) differential privacy and only needs access to the true top-k' elements from the data for any chosen k' \u2265 k. This is a highly desirable feature for making differential privacy practical, since the algorithms require no knowledge of the domain. We consider both the setting where a user's data can modify an arbitrary number of counts by at most 1, i.e. unrestricted sensitivity, and the setting where a user's data can modify at most some small, fixed number of counts by at most 1, i.e. restricted sensitivity. Additionally, we provide a pay-what-you-get privacy composition bound for our algorithms. 
That is, our algorithms might return fewer than k elements when the top-k elements are queried, but the overall privacy budget only decreases by the size of the outcome set."}, "keywords": ["\"Bounded Range\" property"], "citation_intent": "method"} {"citing_id": "2304.01434v1", "cited_id": "1701.00160", "section_title": "Gan: Preventing Mode Collapse", "citation": "The GAN training usually ends up with (partial) mode collapse #REFR , where generative models suffer lack of diversity.", "text_before_citation": ["In Section 3.3, VNE + has successfully prevented representation collapse.", "As another example for collapse prevention, we consider the mode collapse in GAN."], "text_after_citation": ["To demonstrate that this problem can be solved by VNE + , we reproduce various GAN methods based on an open source code base, StudioGAN #OTHEREFR and train all models with CIFAR-10 for 100 epochs.", "To evaluate the models, we report the Inception Score #OTHEREFR (IS, higher is better) and the Fr\u00e9chet Inception Distance #OTHEREFR (FID, lower is better).", "Although both IS and FID are the most popular metrics for evaluating generative models, FID is known to favor more diversified images #OTHEREFR .", "Table 8 demonstrate that the overall quality of the output, especially diversity, has been improved by VNE + because FID scores have been improved. IS has also been improved."], "citing_paper_content": {"title": "Vne: An Effective Method For Improving Deep Representation By Manipulating Eigenvalue Distribution", "abstract": "Since the introduction of deep learning, a wide scope of representation properties, such as decorrelation, whitening, disentanglement, rank, isotropy, and mutual information, have been studied to improve the quality of representation. However, manipulating such properties can be challenging in terms of implementational effectiveness and general applicability. 
To address these limitations, we propose to regularize von Neumann entropy (VNE) of representation. First, we demonstrate that the mathematical formulation of VNE is superior in effectively manipulating the eigenvalues of the representation autocorrelation matrix. Then, we demonstrate that it is widely applicable in improving state-of-the-art algorithms or popular benchmark algorithms by investigating domain-generalization, meta-learning, self-supervised learning, and generative models. In addition, we formally establish theoretical connections with rank, disentanglement, and isotropy of representation. Finally, we provide discussions on the dimension control of VNE and the relationship with Shannon entropy. Code is available at: https://github.com/jaeill/CVPR23-VNE. (a) Domain generalization (b) Meta-learning (c) Self-supervised learning (d) GAN"}, "cited_paper_content": {"title": "Nips 2016 Tutorial: Generative Adversarial Networks", "abstract": "This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs). The tutorial describes: (1) Why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, and the solutions to these exercises."}, "keywords": ["GAN training"], "citation_intent": "background"} {"citing_id": "2304.05571v1", "cited_id": "1704.00390", "section_title": "II. 
Related Works", "citation": "PoseNet #REFR [15] trains a CNN to regress the 6-DOF camera pose from a single RGB image without additional engineering or graph optimization.", "text_before_citation": ["And #OTHEREFR extends the database volumes by generating the rendered synthetic images as the database.", "Active Search #OTHEREFR implements the image retrieval process followed by the direct 2D-3D matching, so it covers both advantages.", "On these bases, the applications of some excellent feature extraction #OTHEREFR , feature matching #OTHEREFR and image retrieval #OTHEREFR [3] methods can further improve the camera localization performances individually.", "End-to-End Metrics Regression.", "Metrics regression localization methods aim to regress the camera pose directly from train images with the ground truth poses."], "text_after_citation": ["And it has been extended to video mode using LSTM to extract temporal information #OTHEREFR .", "Later on, #OTHEREFR uses a Bayesian CNN implementation to obtain an estimate of the localization uncertainty and improves the accuracy on the large-scale outdoor datasets.", "AtLoc #OTHEREFR shows that the attention block can be used to force the network to focus on more geometrically robust objects and features, which can learn to reject dynamic objects and illumination conditions to achieve better performance.", "MapNet #OTHEREFR exploits other sensory inputs like visual odometry and GPS in addition to images, and fuses them together for camera localization.", "End-to-End Scene Coordinates Prediction."], "citing_paper_content": {"title": "Sgl: Structure Guidance Learning For Camera Localization", "abstract": "Camera localization is a classical computer vision task that serves various Artificial Intelligence and Robotics applications. With the rapid developments of Deep Neural Networks (DNNs), end-to-end visual localization methods are prosperous in recent years. 
In this work, we focus on the scene coordinate prediction ones and propose a network architecture named as Structure Guidance Learning (SGL) which utilizes the receptive branch and the structure branch to extract both high-level and low-level features to estimate the 3D coordinates. We design a confidence strategy to refine and filter the predicted 3D observations, which enables us to estimate the camera poses by employing the Perspective-n-Point (PnP) with RANSAC. In the training part, we design the Bundle Adjustment trainer to help the network fit the scenes better. Comparisons with some state-of-the-art (SOTA) methods and sufficient ablation experiments confirm the validity of our proposed architecture."}, "cited_paper_content": {"title": "Geometric Loss Functions For Camera Pose Regression With Deep Learning", "abstract": "Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet [22] is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. 
By leveraging geometry, we demonstrate that our technique significantly improves PoseNets performance across datasets ranging from indoor rooms to a small city."}, "keywords": ["PoseNet"], "citation_intent": "method"} {"citing_id": "2303.13769v1", "cited_id": "1506.01497", "section_title": "Results", "citation": "Observably, on the COCO-OOD dataset, the U-AP of UnSniffer outperforms the 2 nd result by more than twice, and our U-F1 is 16.2% higher than the 2 nd result, at the cost of a 1.9% drop in mAP on VOC compared to Faster-RCNN #REFR .", "text_before_citation": ["Quantitative Analysis.", "In Table 2 , we show UnSniffer's result on the UOD-Benchmark, along with the results of MSP #OTHEREFR , Mahalanobis #OTHEREFR , Energy score #OTHEREFR , ORE #OTHEREFR , OW-DETR #OTHEREFR and VOS #OTHEREFR .", "Note that OW-DETR is based on Deformable DETR #OTHEREFR with a stronger discriminative power, while other methods use Faster-RCNN.", "Since U-PRE or U-REC cannot independently reflect the model's performance, we mainly employ U-AP, U-F1, AOSE and WI."], "text_after_citation": ["On the COCO-Mix dataset, the UnSniffer still holds the lead in both U-AP and U-F1, which are 1% and 11.2% higher than the 2 nd results, respectively.", "Those comparisons demonstrate that UnSniffer outperforms the existing methods in unknown object detection, which owes to our GOC learning the overall confidence of objects from finite known objects.", "Furthermore, UnSniffer has the smallest AOSE (398) but the largest WI (0.175), which can be explained by the inverse relationship between WI and the count of known objects misclassified as an incorrect class. More details are illustrated in the supplementary material. Qualitative Analysis. 
Fig.", "7 visualizes the results of different methods on example images of the COCO-Mix (first two rows) and COCO-OOD dataset (last three rows).", "It can be seen that VOS #OTHEREFR , MSP #OTHEREFR , Mahalanobis distance #OTHEREFR , and Energy score #OTHEREFR miss many objects of the unknown class, such as the surfboards in the 1st image, the keyboard and water cup in the 3rd image, the CD case in the 4th image."], "citing_paper_content": {"title": "Unknown Sniffer For Object Detection: Don't Turn A Blind Eye To Unknown Objects", "abstract": "The recently proposed open-world object and open-set detection achieve a breakthrough in finding never-seen-before objects and distinguishing them from class-known ones. However, their studies on knowledge transfer from known classes to unknown ones need to be deeper, leading to the scanty capability for detecting unknowns hidden in the background. In this paper, we propose the unknown sniffer (UnSniffer) to find both unknown and known objects. Firstly, the generalized object confidence (GOC) score is introduced, which only uses class-known samples for supervision and avoids improper suppression of unknowns in the background. Significantly, such confidence score learned from class-known objects can be generalized to unknown ones. Additionally, we propose a negative energy suppression loss to further limit the non-object samples in the background. Next, the best box of each unknown is hard to obtain during inference due to lacking their semantic information in training. To solve this issue, we introduce a graph-based determination scheme to replace hand-designed non-maximum suppression (NMS) post-processing. Finally, we present the Unknown Object Detection Benchmark, the first public benchmark that encompasses precision evaluation for unknown object detection to our knowledge.
Experiments show that our method is far better than the existing state-of-the-art methods."}, "cited_paper_content": {"title": "Faster R-Cnn: Towards Real-Time Object Detection With Region Proposal Networks", "abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features\u2014using the recently popular terminology of neural networks with \u2019attention\u2019 mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. 
Code has been made publicly available."}, "keywords": ["COCO-OOD dataset", "Faster-RCNN"], "citation_intent": "result"} {"citing_id": "2303.05101v1", "cited_id": "2002.09018", "section_title": "Sgrld In Shampoo Metric", "citation": "To further reduce computational complexity, we can divide the tensors into smaller blocks and treat them as individual tensors instead #REFR .", "text_before_citation": ["by taking the square.", "The computational cost is then C(", "L l=1 d l m=1 (n l ) 3 m ) and the memory cost is C( L l=1 d l m=1 (n l ) 2 m )", ", for some positive constants C.", "These costs are strictly larger than those of the diagonal metrics or the Monge metric, but in practice the metric is updated only periodically (typically after each epoch), and hence the computation remains manageable."], "text_after_citation": ["The following theorem shows the validity of the sampler, with the proof in the Supplement.", "The bound depends on the EMA parameter \u03bb in the same way as the bound for the Monge metric and otherwise follows that of Theorem 2.1.", "Theorem 3.2.", "For SGRLD in the Shampoo metric, we can bound the approximation error as defined in Theo-", "rem 2.1 as E \u03c6 \u2212\u03c6 2 \u2264 C t h 2 t S 2 T E \u2206V t 2 + 1 S T + ( t=1 h 2 t ) 2 S 2 T + O(\u03c4 2 (1 \u2212 \u03bb) 2 )."], "citing_paper_content": {"title": "Scalable Stochastic Gradient Riemannian Langevin Dynamics In Non-Diagonal Metrics", "abstract": "Bayesian neural network inference is often carried out using stochastic gradient sampling methods. For best performance the methods should use a Riemannian metric that improves posterior exploration by accounting for the local curvature, but the existing methods resort to simple diagonal metrics to remain computationally efficient. This loses some of the gains. 
We propose two non-diagonal metrics that can be used in stochastic samplers to improve convergence and exploration but that have only a minor computational overhead over diagonal metrics. We show that for neural networks with complex posteriors, caused e.g. by use of sparsity-inducing priors, using these metrics provides clear improvements. For some other choices the posterior is sufficiently easy also for the simpler metrics."}, "cited_paper_content": {"title": "Second Order Optimization Made Practical", "abstract": "Optimization in machine learning, both theoretical and applied, is presently dominated by first-order gradient methods such as stochastic gradient descent. Second-order optimization methods that involve second-order derivatives and/or second-order statistics of the data have become far less prevalent despite strong theoretical properties, due to their prohibitive computation, memory and communication costs. ::: In an attempt to bridge this gap between theoretical and practical optimization, we present a proof-of-concept distributed system implementation of a second-order preconditioned method (specifically, a variant of full-matrix Adagrad), that along with a few yet critical algorithmic and numerical improvements, provides significant practical gains in convergence on state-of-the-art deep models and gives rise to actual wall-time improvements in practice compared to conventional first-order methods. Our design effectively utilizes the prevalent heterogeneous hardware architecture for training deep models which consists of a multicore CPU coupled with multiple accelerator units. We demonstrate superior performance on very large learning problems in machine translation where our distributed implementation runs considerably faster than existing gradient-based methods."}, "keywords": ["individual tensors"], "citation_intent": "method"} {"citing_id": "2303.12999v1", "cited_id": "1703.03400", "section_title": "B. 
Machine Learning Model", "citation": "Meanwhile, for each UE, we consider the case that only one step of stochastic gradient descent (SGD) is performed, following the same setting as #REFR .", "text_before_citation": ["We consider the above described MAML-based FL.", "In detail, we concentrate on the situation where UEs communicate in a synchronous manner, so as to avoid using outdated parameters for global model update and make high-quality refinement in each round."], "text_after_citation": ["As for the concerned MAML-based FL, our goal is to optimize the initial model using only a few data points at each UE.", "Hence, we only obtain an estimate of the desired gradient with SGD.", "Here, the desired gradient \u2207F i (w) on UE i is computed using all data points in its dataset D i , while the estimated gradient\u2207F i (w) on UE i is computed using SGD with the sampled dataset D"], "citing_paper_content": {"title": "Automated Federated Learning In Mobile Edge Networks -Fast Adaptation And Convergence", "abstract": "Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner. Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets. However, existing research simply combines MAML and FL without explicitly addressing how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks. In this paper, we quantify the benefit from two aspects: optimizing FL hyperparameters (i.e., sampled data size and the number of communication rounds) and resource allocation (i.e., transmit power) in mobile edge networks. Specifically, we formulate the MAML-based FL design as an overall learning time minimization problem, under the constraints of model accuracy and energy consumption. 
Facilitated by the convergence analysis of MAML-based FL, we decompose the formulated problem and then solve it using analytical solutions and the coordinate descent method. With the obtained FL hyperparameters and resource allocation, we design a MAML-based FL algorithm, called Automated Federated Learning (AutoFL), that is able to conduct fast adaptation and convergence. Extensive experimental results verify that AutoFL outperforms other benchmark algorithms regarding the learning time and convergence performance."}, "cited_paper_content": {"title": "Model-Agnostic Meta-Learning For Fast Adaptation Of Deep Networks", "abstract": "We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. 
We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies."}, "keywords": ["stochastic gradient descent"], "citation_intent": "method"} {"citing_id": "2304.14133v1", "cited_id": "2003.10421", "section_title": "Problem Definition", "citation": "In contrast to previous works that primarily addressed misinformation detection as a binary classification problem ( #REFR ), we address CMM as a multiclass classification task.", "text_before_citation": ["To investigate the prevalence of AMM, we sampled 200 misleading pairs from the COSMOS benchmark and examined their source articles on Snopes.", "Following the classification taxonomy of Snopes #OTHEREFR we found that 48% of COSMOS pairs are \"false claims\" (41% associative imagery and 7% reinforcing captions) while 52% were classified as \"miscaptioned\", which we consider to be CMM because it implies a relationship between the two modalities.", "After de-duplicating the images of the COSMOS benchmark, the rates were 41% miscaptioned, 35% associative imagery, 4% reinforcing captions and 20% duplicates.", "We performed the same process on Fakeddit for 300 random samples and found that roughly 45% of pairs were AMM, with 41% being manipulated images and 4% with associative imagery.", "Moreover, we consider that roughly 14% of Fakeddit's samples can actually be considered CMM since the remaining 40% were mostly funny memes, visual jokes, pareidolia imagery and other content that is not generally considered to be misinformation #OTHEREFR ."], "text_after_citation": ["In this work, we introduce a taxonomy that includes three classes:", "1.", "Truthful (True): an image-caption pair (I t , C t ) is considered True when the origin, content, and context of an image are accurately described in the accompanying caption.", "2.", 
"Out-Of-Context (OOC) image-text pairs: involves a deceptive combination of a truthful caption C_t and an image that is out of context I_x."], "citing_paper_content": {"title": "Figments And Misalignments: A Framework For Fine-Grained Crossmodal Misinformation Detection", "abstract": "Multimedia content has become ubiquitous on social media platforms, leading to the rise of multimodal misinformation and the urgent need for effective strategies to detect and prevent its spread. This study focuses on CrossModal Misinformation (CMM) where image-caption pairs work together to spread falsehoods. We contrast CMM with Asymmetric Multimodal Misinformation (AMM), where one dominant modality propagates falsehoods while other modalities have little or no influence. We show that AMM adds noise to the training and evaluation process while exacerbating the unimodal bias, where text-only or image-only detectors can seemingly outperform their multimodal counterparts on an inherently multimodal task. To address this issue, we collect and curate FIGMENTS, a robust evaluation benchmark for CMM, which consists of real-world cases of misinformation, excludes AMM and utilizes modality balancing to successfully alleviate unimodal bias. FIGMENTS also provides a first step towards fine-grained CMM detection by including three classes: truthful, out-of-context, and miscaptioned image-caption pairs. Furthermore, we introduce a method for generating realistic synthetic training data that maintains crossmodal relations between legitimate images and false human-written captions that we term Crossmodal HArd Synthetic MisAlignment (CHASMA). We conduct an extensive comparative study using a Transformer-based architecture.
Our results show that incorporating CHASMA in conjunction with other generated datasets consistently improved the overall performance on FIGMENTS in both binary (+6.26%) and multiclass settings (+15."}, "cited_paper_content": {"title": "Multimodal Analytics For Real-World News Using Measures Of Cross-Modal Entity Consistency", "abstract": "The World Wide Web has become a popular source for gathering information and news. Multimodal information, e.g., enriching text with photos, is typically used to convey the news more effectively or to attract attention. Photo content can range from decorative, depict additional important information, or can even contain misleading information. Therefore, automatic approaches to quantify cross-modal consistency of entity representation can support human assessors to evaluate the overall multimodal message, for instance, with regard to bias or sentiment. In some cases such measures could give hints to detect fake news, which is an increasingly important topic in today's society. In this paper, we introduce a novel task of cross-modal consistency verification in real-world news and present a multimodal approach to quantify the entity coherence between image and text. Named entity linking is applied to extract persons, locations, and events from news texts. Several measures are suggested to calculate cross-modal similarity for these entities using state of the art approaches. In contrast to previous work, our system automatically gathers example data from the Web and is applicable to real-world news. Results on two novel datasets that cover different languages, topics, and domains demonstrate the feasibility of our approach. 
Datasets and code are publicly available to foster research towards this new direction."}, "keywords": ["misinformation detection"], "citation_intent": "result"} {"citing_id": "2304.06351v1", "cited_id": "1703.07332", "section_title": "Face Detection", "citation": "First, we generated face annotation for RGB frames using FaceAlignment #REFR , an open-source tool for face analysis 2 .", "text_before_citation": ["Using the synthetic data from the simulator, we generated an annotated dataset in the event spectrum to train a face detector."], "text_after_citation": ["We then bound the face labels with the corresponding synthetic event frames obtained with ESIM.", "This allowed us to train a YOLOv2 #OTHEREFR on the synthetic version of NEFER.", "We found the detector to have good generalization capabilities from synthetic to real event data, which yielded high-quality annotations at a slight cost of manual validation using CVAT #OTHEREFR ."], "citing_paper_content": {"title": "Neuromorphic Event-Based Facial Expression Recognition", "abstract": "Recently, event cameras have shown large applicability in several computer vision fields especially concerning tasks that require high temporal resolution. In this work, we investigate the usage of such kind of data for emotion recognition by presenting NEFER, a dataset for Neuromorphic Event-based Facial Expression Recognition. NEFER is composed of paired RGB and event videos representing human faces labeled with the respective emotions and also annotated with face bounding boxes and facial landmarks. We detail the data acquisition process as well as providing a baseline method for RGB and event data. The collected data captures subtle micro-expressions, which are hard to spot with RGB data, yet emerge in the event domain. 
We report a double recognition accuracy for the event-based approach, proving the effectiveness of a neuromorphic approach for analyzing fast and hardly detectable expressions and the emotions they conceal."}, "cited_paper_content": {"title": "How Far Are We From Solving The 2D&3D Face Alignment Problem? (And A Dataset Of 230,000 3D Facial Landmarks)", "abstract": "This paper investigates how far a very deep neural network is from attaining close to saturating performance on existing 2D and 3D face alignment datasets. To this end, we make the following 5 contributions: (a) we construct, for the first time, a very strong baseline by combining a state-of-the-art architecture for landmark localization with a state-of-the-art residual block, train it on a very large yet synthetically expanded 2D facial landmark dataset and finally evaluate it on all other 2D facial landmark datasets. (b) We create a guided by 2D landmarks network which converts 2D landmark annotations to 3D and unifies all existing datasets, leading to the creation of LS3D-W, the largest and most challenging 3D facial landmark dataset to date ~230,000 images. (c) Following that, we train a neural network for 3D face alignment and evaluate it on the newly introduced LS3D-W. (d) We further look into the effect of all\"traditional\"factors affecting face alignment performance like large pose, initialization and resolution, and introduce a\"new\"one, namely the size of the network. (e) We show that both 2D and 3D face alignment networks achieve performance of remarkable accuracy which is probably close to saturating the datasets used. 
Training and testing code as well as the dataset can be downloaded from https://www.adrianbulat.com/face-alignment/"}, "keywords": ["face analysis", "FaceAlignment"], "citation_intent": "method"} {"citing_id": "2303.08536v2", "cited_id": "1512.03385", "section_title": "Architecture Details", "citation": "The visual front-end module is comprised of a 3D convolutional layer with a kernel size of 5 \u00d7 7 \u00d7 7 followed by a ResNet18 #REFR .", "text_before_citation": ["We adopt the visual front-end and the audio front-end from #OTHEREFR ."], "text_after_citation": ["Then the output features are squeezed along the spatial dimension by a global average pooling layer.", "The audio front-end module consists of a 1D convolutional layer with blocks of ResNet18.", "Both visual and audio front-ends are initialized using a pre-trained model on LRW #OTHEREFR .", "For the multimodal attention with Conformer encoder #OTHEREFR , we use hidden dimensions of 256, feed-forward dimensions of 2048, 12 layers, 8 attention heads, and a convolution kernel size of 31. We utilize Transformer decoder #OTHEREFR"], "citing_paper_content": {"title": "Watch Or Listen: Robust Audio-Visual Speech Recognition With Visual Corruption Modeling And Reliability Scoring", "abstract": "This paper deals with AudioVisual Speech Recognition (AVSR) under multimodal input corruption situations where audio inputs and visual inputs are both corrupted, which is not well addressed in previous research directions. Previous studies have focused on how to complement the corrupted audio inputs with the clean visual inputs with the assumption of the availability of clean visual inputs. However, in real life, clean visual inputs are not always accessible and can even be corrupted by occluded lip regions or noises. Thus, we firstly analyze that the previous AVSR models are not indeed robust to the corruption of multimodal input streams, the audio and the visual inputs, compared to uni-modal models. 
Then, we design multimodal input corruption modeling to develop robust AVSR models. Lastly, we propose a novel AVSR framework, namely AudioVisual Reliability Scoring module (AV-RelScore), that is robust to the corrupted multimodal inputs. The AV-RelScore can determine which input modal stream is reliable or not for the prediction and also can exploit the more reliable streams in prediction. The effectiveness of the proposed method is evaluated with comprehensive experiments on popular benchmark databases, LRS2 and LRS3. We also show that the reliability scores obtained by AV-RelScore well reflect the degree of corruption and make the proposed model focus on the reliable multimodal representations."}, "cited_paper_content": {"title": "Deep Residual Learning For Image Recognition", "abstract": "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers\u20148\u00d7 deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. 
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."}, "keywords": ["3D convolutional layer"], "citation_intent": "method"} {"citing_id": "2303.09295v1", "cited_id": "1812.11842", "section_title": "Generalizable Generated Image Detection", "citation": "It can generalize to unseen generation models to some extent. Marra et al. #REFR and Yu et al.", "text_before_citation": ["However, their strong generalization capability relies on their large-scale training and 20 different models each trained on a different LSUN #OTHEREFR object category.", "Besides detection by spatial artifacts, there are also frequency-based methods #OTHEREFR . Frank et al.", "#OTHEREFR present that in the frequency domain, GAN-generated images are more likely to expose severe artifacts mainly caused by upsampling operations in previous GAN architectures. 
Zhang et al.", "[51] propose a GAN simulator, AutoGAN, to simulate the artifacts produced by standard GAN pipelines.", "Then they train a detector on the spectrum input on the synthesized images."], "text_after_citation": ["#OTHEREFR suggest detecting generated images by fingerprints that are often produced during GAN generation.", "A recent work #OTHEREFR proposes a detector based on an ensemble of EfficientNet-B4 #OTHEREFR to alleviate the generalization problem.", "However, with the boosting development of diffusion models, a general and robust detector for detecting images generated by diffusion models has not been explored.", "We note that some recent works also notice the diffusiongenerated image detection problem #OTHEREFR .", "Different from them, the focus of our work is exploring a generalizable detector for wide-range diffusion models."], "citing_paper_content": {"title": "Dire For Diffusion-Generated Image Detection", "abstract": "Diffusion models have shown remarkable success in visual synthesis, but have also raised concerns about potential abuse for malicious purposes. In this paper, we seek to build a detector for telling apart real images from diffusiongenerated images. We find that existing detectors struggle to detect images generated by diffusion models, even if we include generated images from a specific diffusion model in their training data. To address this issue, we propose a novel image representation called DIffusion Reconstruction Error (DIRE), which measures the error between an input image and its reconstruction counterpart by a pre-trained diffusion model. We observe that diffusion-generated images can be approximately reconstructed by a diffusion model while real images cannot. It provides a hint that DIRE can serve as a bridge to distinguish generated and real images. 
DIRE provides an effective way to detect images generated by most diffusion models, and it is general for detecting generated images from unseen diffusion models and robust to various perturbations. Furthermore, we establish a comprehensive diffusion-generated benchmark including images generated by eight diffusion models to evaluate the performance of diffusion-generated image detectors. Extensive experiments on our collected benchmark demonstrate that DIRE exhibits superiority over previous generated-image detectors. The code and dataset are available at https://github.com/ZhendongWang6/DIRE."}, "cited_paper_content": {"title": "Do Gans Leave Artificial Fingerprints?", "abstract": "In the last few years, generative adversarial networks (GAN) have shown tremendous potential for a number of applications in computer vision and related fields. With the current pace of progress, it is a sure bet they will soon be able to generate high-quality images and videos, virtually indistinguishable from real ones. Unfortunately, realistic GAN-generated images pose serious threats to security, to begin with a possible flood of fake multimedia, and multimedia forensic countermeasures are in urgent need. In this work, we show that each GAN leaves its specific fingerprint in the images it generates, just like real-world cameras mark acquired images with traces of their photo-response non-uniformity pattern.
Source identification experiments with several popular GANs show such fingerprints to represent a precious asset for forensic analyses."}, "keywords": ["unseen generation models"], "citation_intent": "background"} {"citing_id": "2303.02302v1", "cited_id": "1711.11279", "section_title": "Related Works", "citation": "On the global level, TCAV #REFR constructs a set of explanatory concepts represented by vectors that are able to separate positive/negative examples in a hidden layer.", "text_before_citation": ["Visual image interpretation.", "In general, research efforts devoted to visual image interpretation can be divided into two groups, post-hoc and self-interpretable.", "Post-hoc methods #OTHEREFR open the black-box models with salient maps or key image parts after the deep network is finished training.", "Most of the interpretative models #OTHEREFR we discussed in the previous paragraph fall into this group, locally explaining a model's prediction for each individual sample.", "Extending these gradient-based methods, some works #OTHEREFR include various additional information to achieve more reasonable explanations."], "text_after_citation": ["On the other hand, self-interpretable #OTHEREFR methods generate explainable representations in an end-to-end training process.", "Recently, ProtoPNet #OTHEREFR proposed a self-interpretable global interpretation method that extracts category-specific prototype vectors associated with image patches of training samples, which serve as an example-based explanation on both global and local levels.", "Later, various methods #OTHEREFR extend ProtoPNet from different aspects, including refined prototype learning module #OTHEREFR and pruning strategies #OTHEREFR to reduce the number of prototypes.", "Unsupervised domain adaptation.", "To align the heterogeneous distributions in source and target domains, previous works focusing on domain adaptation can be roughly separated into two strands."], "citing_paper_content": {"title":
"Visualizing Transferred Knowledge: An Interpretive Model Of Unsupervised Domain Adaptation", "abstract": "Many research efforts have been committed to unsupervised domain adaptation (DA) problems that transfer knowledge learned from a labeled source domain to an unlabeled target domain. Various DA methods have achieved remarkable results recently in terms of predicting ability, which implies the effectiveness of the aforementioned knowledge transferring. However, state-of-the-art methods rarely probe deeper into the transferred mechanism, leaving the true essence of such knowledge obscure. Recognizing its importance in the adaptation process, we propose an interpretive model of unsupervised domain adaptation, as the first attempt to visually unveil the mystery of transferred knowledge. Adapting the existing concept of the prototype from visual image interpretation to the DA task, our model similarly extracts shared information from the domain-invariant representations as prototype vectors. Furthermore, we extend the current prototype method with our novel prediction calibration and knowledge fidelity preservation modules, to orientate the learned prototypes to the actual transferred knowledge. By visualizing these prototypes, our method not only provides an intuitive explanation for the base model's predictions but also unveils transfer knowledge by matching the image patches with the same semantics across both source and target domains. Comprehensive experiments and in-depth explorations demonstrate the efficacy of our method in understanding the transferred mechanism and its potential in downstream tasks including model diagnosis."}, "cited_paper_content": {"title": "Interpretability Beyond Feature Attribution: Quantitative Testing With Concept Activation Vectors (Tcav)", "abstract": "The interpretation of deep learning models is a challenge due to their size, complexity, and often opaque internal state. 
In addition, many systems, such as image classifiers, operate on low-level features rather than high-level concepts. To address these challenges, we introduce Concept Activation Vectors (CAVs), which provide an interpretation of a neural net's internal state in terms of human-friendly concepts. The key idea is to view the high-dimensional internal state of a neural net as an aid, not an obstacle. We show how to use CAVs as part of a technique, Testing with CAVs (TCAV), that uses directional derivatives to quantify the degree to which a user-defined concept is important to a classification result--for example, how sensitive a prediction of \"zebra\" is to the presence of stripes. Using the domain of image classification as a testing ground, we describe how CAVs may be used to explore hypotheses and generate insights for a standard image classification network as well as a medical application."}, "keywords": ["explanatory concepts"], "citation_intent": "background"} {"citing_id": "2303.14404v1", "cited_id": "1706.04599", "section_title": "Related Work", "citation": "A model trained with NLL provides predictions that deviate from the accuracy, leaving the model poorly calibrated #REFR .", "text_before_citation": ["To improve post-hoc calibration under out-domain scenarios, #OTHEREFR transforms the validation set prior to performing the post-hoc approach.", "In #OTHEREFR , a regression model is used to predict temperature parameter.", "Post-hoc calibration methods are simple and effective, however, they require hold-out validation data, and are dependent on architecture #OTHEREFR .", "Train-time calibration methods: Models trained with zero-entropy supervision tend to give over-confident predictions.", "An example is negative log-likelihood (NLL), which is a widely-used task-specific loss."], "text_after_citation": ["Train-time calibration methods are typically based on auxiliary loss functions, which are used in-tandem with task-specific losses.", "In #OTHEREFR , an 
auxiliary loss term DCA is proposed to calibrate the model.", "It is combined with a task-specific loss to penalize when it reduces but the accuracy remains unchanged.", "Likewise, #OTHEREFR proposed an auxiliary loss function that is based on a reproducing kernel in a Hilbert space #OTHEREFR .", "#OTHEREFR calibrated uncertainty based on the relationship between accuracy and uncertainty."], "citing_paper_content": {"title": "Bridging Precision And Confidence: A Train-Time Loss For Calibrating Object Detection", "abstract": "Deep neural networks (DNNs) have enabled astounding progress in several vision-based problems. Despite showing high predictive accuracy, recently, several works have revealed that they tend to provide overconfident predictions and thus are poorly calibrated. The majority of the works addressing the miscalibration of DNNs fall under the scope of classification and consider only in-domain predictions. However, there is little to no progress in studying the calibration of DNN-based object detection models, which are central to many vision-based safety-critical applications. In this paper, inspired by the train-time calibration methods, we propose a novel auxiliary loss formulation that explicitly aims to align the class confidence of bounding boxes with the accurateness of predictions (i.e. precision). Since the original formulation of our loss depends on the counts of true positives and false positives in a minibatch, we develop a differentiable proxy of our loss that can be used during training with other application-specific loss functions. We perform extensive experiments on challenging in-domain and out-domain scenarios with six benchmark datasets including MS-COCO, Cityscapes, Sim10k, and BDD100k. Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios. 
Our source code and pre-trained models are available at https://github.com/akhtarvision/bpc_calibration"}, "cited_paper_content": {"title": "On Calibration Of Modern Neural Networks", "abstract": "Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions."}, "keywords": ["accuracy"], "citation_intent": "background"} {"citing_id": "2303.02641v1", "cited_id": "1708.02002", "section_title": "C. Model Training", "citation": "We use the binary cross-entropy loss for classification and focal loss #REFR to handle class imbalance in the localization task.", "text_before_citation": ["To identify missing signs, the model must attend to the traffic sign cues in the environment.
Similar to Han et al.", "#OTHEREFR , we add CueCAn at the end of the third, fourth, and fifth blocks of the VGG-19 #OTHEREFR encoder to highlight and classify the cues.", "The next task is to localize where the sign could be placed using the segmentation model with the pre-trained VGG-19 encoder and FCN-8 #OTHEREFR decoder.", "For localization, optimal results and GradCAM visualizations are observed when the entire network is fine-tuned end-to-end."], "text_after_citation": [], "citing_paper_content": {"title": "Cuecan: Cue-Driven Contextual Attention For Identifying Missing Traffic Signs On Unconstrained Roads", "abstract": "Missing Traffic Sign Scene with Curve Cue Fig. 1: Left: Scenes with real and inpainted traffic signs (chevron-left). Middle: Intermediary GradCAM visualizations of the cue classifier (encoder) with and without CueCAn. Right: Segmentation model with CueCAn-based encoder detects missing signs (green masks overlaid over the scene on the right for CueCAn and yellow mask by the baseline) on the scene without signs (follow pink arrows) by effectively attending to the context cues, compared to weak attention without CueCAn. Segmentation GradCAMs are obtained from the centroid of the predicted sign (red dot)."}, "cited_paper_content": {"title": "Focal Loss For Dense Object Detection", "abstract": "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause.
We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron ."}, "keywords": ["localization task", "focal loss"], "citation_intent": "method"} {"citing_id": "2304.00950v1", "cited_id": "1610.06136", "section_title": "Single Camera Systems", "citation": "Currently, the best performing DL based detector is Faster Region\u2212based Convolutional Neural Network (RCNN) from #REFR .", "text_before_citation": ["The survey specifies that most MOT algorithms that are developed to be used with a single camera have four steps/stages in common: detection, feature extraction / motion prediction, affinity, and association.", "The implied aim is to implement DL at every stage and to evaluate the given algorithms as a whole on a MOTChallenge dataset #OTHEREFR . 
The datasets mostly consist of benchmarks for pedestrian tracking.", "Deep learning is mostly used for the first two stages, while only a few contributions implement DL approaches for affinity and association.", "From this survey #OTHEREFR , the authors emphasize three important parameters to deploy MOT algorithms: (i) the detection quality, (ii) Convolutional Neural Network (CNN) for feature extraction, and (iii) Single Object Tracking (SOT) trackers.", "In terms of detection quality, appropriate detectors must be thoroughly selected to reduce the number of False Negatives (FN) in the Multi\u2212Object Tracking Accuracy (MOTA) score."], "text_after_citation": ["In contrast, Single\u2212Shot Detector (SSD) performs worse, as presented in #OTHEREFR .", "However, SSD was almost able to work in real-time (4.5 FPS), including the detection step.", "For the feature extraction stage #OTHEREFR , the best-performing method, GoogLeNet #OTHEREFR , is applied to the datasets of MOT15 #OTHEREFR , MOT16 and MOT17 #OTHEREFR .", "Approaches that do not use appearance (whether they are deep or conventional methods) typically perform worse.", "Visual features alone, however, are insufficient to compute affinity; many of the better-performing algorithms additionally include other characteristics, particularly motion features."], "citing_paper_content": {"title": "Semi-Automated Computer Vision Based Tracking Of Multiple Industrial Entities -A Framework And Dataset Creation Approach", "abstract": "This contribution presents the TOMIE framework (Tracking Of Multiple Industrial Entities), a framework for the continuous tracking of industrial entities (e.g., pallets, crates, barrels) over a network of, in this example, six RGB cameras. This framework, makes use of multiple sensors, data pipelines and data annotation procedures, and is described in detail in this contribution. 
With the vision of a fully automated tracking system for industrial entities in mind, it enables researchers to efficiently capture high quality data in an industrial setting. Using this framework, an image dataset, the TOMIE dataset, is created, which at the same time is used to gauge the framework's validity. This dataset contains annotation files for 112,860 frames and 640,936 entity instances that are captured from a set of six cameras that perceive a large indoor space. This dataset out-scales comparable datasets by a factor of four and is made up of scenarios, drawn from industrial applications from the sector of warehousing. Three tracking algorithms, namely ByteTrack, Bot-Sort and SiamMOT are applied to this dataset, serving as a proof-of-concept and providing tracking results that are comparable to the state of the art."}, "cited_paper_content": {"title": "Poi: Multiple Object Tracking With High Performance Detection And Appearance Feature", "abstract": "Detection and learning based appearance feature play the central role in data association based multiple object tracking (MOT), but most recent MOT works usually ignore them and only focus on the hand-crafted feature and association algorithms. In this paper, we explore the high-performance detection and deep learning based appearance feature, and show that they lead to significantly better MOT results in both online and offline setting. We make our detection and appearance feature publicly available. 
In the following part, we first summarize the detection and appearance feature, and then introduce our tracker named Person of Interest (POI), which has both online and offline version."}, "keywords": ["RCNN"], "citation_intent": "method"} {"citing_id": "2303.00192v1", "cited_id": "1704.00717", "section_title": "Toward Models Of Human Collaboration As Lenses For Studying And Designing Co-Creative Systems", "citation": "This result is in line with studies on human-AI collaboration in decision-making, suggesting that users learn to better predict the machine's behavior through inductive mechanisms (i.e., via concrete examples and hands-on testing) than via general, declarative information about internal processes #REFR .", "text_before_citation": ["Participants had trouble learning to predict how the AI might behave in response to the specified parameters.", "They struggled to make sense of the AI system's reasoning and struggled to correct unwanted design issues.", "Prior literature on group cognition suggests that to achieve effective collaboration group members should be able to interpret each other's reasoning and predict roughly how their partner might behave in response to their own actions #OTHEREFR .", "Similarly, from a team learning perspective, our findings suggest that designers who systematically explored the AI's limitations and capabilities early on were better at predicting the tool's actions in response to their own and produced more satisfactory results."], "text_after_citation": ["While explainable AI research focuses primarily on directly communicating information about the AI system to the user, recent research has suggested that more engaging and longer forms of learning and deliberate practice might improve human-AI collaboration #OTHEREFR .", "However, in addition to supporting honing the user's mental model of the AI's capabilities and limitations, it is equally important for the AI system to have an understanding of the user's capabilities, 
limitations, and task context to enable more effective human-AI collaboration.", "Hence, this would require the AI system to have better contextual awareness of the user and the current task at hand.", "We further discuss the resulting design opportunities in section 8.3.3.", "Most designers felt the tools were uncollaborative and had more control over the design process than they would have preferred."], "citing_paper_content": {"title": "Exploring Challenges And Opportunities To Support Designers In Learning To Co-Create With Ai-Based Manufacturing Design Tools", "abstract": "AI-based design tools are proliferating in professional software to assist engineering and industrial designers in complex manufacturing and design tasks. These tools take on more agentic roles than traditional computer-aided design tools and are often portrayed as \"co-creators. \" Yet, working effectively with such systems requires different skills than working with complex CAD tools alone. To date, we know little about how engineering designers learn to work with AI-based design tools. In this study, we observed trained designers as they learned to work with two AI-based tools on a realistic design task. We find that designers face many challenges in learning to effectively co-create with current systems, including challenges in understanding and adjusting AI outputs and in communicating their design goals. Based on our findings, we highlight several design opportunities to better support designer-AI co-creation. CCS CONCEPTS \u2022 Human-centered computing \u2192 Empirical studies in HCI; \u2022 Applied computing \u2192 Computer-aided design."}, "cited_paper_content": {"title": "It Takes Two To Tango: Towards Theory Of Ai'S Mind", "abstract": "Theory of Mind is the ability to attribute mental states (beliefs, intents, knowledge, perspectives, etc.) to others and recognize that these mental states may differ from one's own. 
Theory of Mind is critical to effective communication and to teams demonstrating higher collective performance. To effectively leverage the progress in Artificial Intelligence (AI) to make our lives more productive, it is important for humans and AI to work well together in a team. Traditionally, there has been much emphasis on research to make AI more accurate, and (to a lesser extent) on having it better understand human intentions, tendencies, beliefs, and contexts. The latter involves making AI more human-like and having it develop a theory of our minds. In this work, we argue that for human-AI teams to be effective, humans must also develop a theory of AI's mind (ToAIM) - get to know its strengths, weaknesses, beliefs, and quirks. We instantiate these ideas within the domain of Visual Question Answering (VQA). We find that using just a few examples (50), lay people can be trained to better predict responses and oncoming failures of a complex VQA model. We further evaluate the role existing explanation (or interpretability) modalities play in helping humans build ToAIM. Explainable AI has received considerable scientific and popular attention in recent times. Surprisingly, we find that having access to the model's internal states - its confidence in its top-k predictions, explicit or implicit attention maps which highlight regions in the image (and words in the question) the model is looking at (and listening to) while answering a question about an image - do not help people better predict its behavior."}, "keywords": ["human-AI collaboration"], "citation_intent": "result"} {"citing_id": "2303.05996v1", "cited_id": "1709.01015", "section_title": "I. 
Introduction", "citation": "Unlike Received Signal Strength Indicator (RSSI) based solutions #REFR , 802.11az combines mmWave-based angle estimations with the Time of Flight (ToF) to achieve high accuracy positioning.", "text_before_citation": ["Legacy positioning in 802.11 only knew the distance between devices, thus, there are many feasible locations (blue markers).", "802.11az uses mmWave beamforming to obtain the azimuth and elevation between devices, hence, the exact location (red marker).", "Although 802.11az provides enhancements for sub-6 GHz and Millimeter Wave (mmWave) operation at 60 GHz, the best positioning accuracy is achieved in the latter band.", "Specifically, 802.11az obtains accurate positioning information using 802.11ay Enhanced Directional Multi-Gigabit (EDMG) beamforming #OTHEREFR , along with its accurate Channel State Information (CSI) reports.", "With the orientation of the mmWave beam and CSI reports, 802.11az elaborates angle estimations founded on the ranging accuracy of multi-GHz bandwidths."], "text_after_citation": ["Both the ToF and angle estimations are exchanged using the Fine Timing Measurement (FTM) procedure, introduced in 802.11mc #OTHEREFR , and Station (STA)s use the exchanged estimations to determine the distance, azimuth and elevation between STAs; see Fig.
1 .", "Since mmWave promises to increase positioning accuracy from meter level #OTHEREFR to centimeter level #OTHEREFR , #OTHEREFR , in this article we explain the main contributions of 802.11az for mmWave-based positioning:", "FTM procedure over EDMG.", "In the legacy FTM procedure STAs only exchange ToF estimations, thus obtaining just an approximation of the distance.", "802.11az performs the FTM procedure over EDMG to also exchange angle estimations obtained through CSI reports."], "citing_paper_content": {"title": "Ieee 802.11Az Indoor Positioning With Mmwave", "abstract": "In recent years we have witnessed the rise of location-based applications, which depend on the devices' ability to accurately obtain their position. IEEE 802.11, foretelling the need for such applications, started the IEEE 802.11az work on Next Generation Positioning. Although this standard provides positioning enhancements for sub-6 GHz and mmWave bands, high accuracy in the order of centimeters can only be obtained in the latter band, thanks to the beamforming information available at mmWave operation. This work presents a detailed analysis on the new techniques provided by IEEE 802.11az for enhanced secured positioning in the mmWave band, assessing them through experimentation."}, "cited_paper_content": {"title": "A Survey Of Indoor Localization Systems And Technologies", "abstract": "Indoor localization has recently witnessed an increase in interest, due to the potential wide range of services it can provide by leveraging Internet of Things (IoT), and ubiquitous connectivity. Different techniques, wireless technologies and mechanisms have been proposed in the literature to provide indoor localization services in order to improve the services provided to the users. However, there is a lack of an up-to-date survey paper that incorporates some of the recently proposed accurate and reliable localization systems.
In this paper, we aim to provide a detailed survey of different indoor localization techniques, such as angle of arrival (AoA), time of flight (ToF), return time of flight (RTOF), and received signal strength (RSS); based on technologies, such as WiFi, radio frequency identification device (RFID), ultra wideband (UWB), Bluetooth, and systems that have been proposed in the literature. This paper primarily discusses localization and positioning of human users and their devices. We highlight the strengths of the existing systems proposed in the literature. In contrast with the existing surveys, we also evaluate different systems from the perspective of energy efficiency, availability, cost, reception range, latency, scalability, and tracking accuracy. Rather than comparing the technologies or techniques, we compare the localization systems and summarize their working principle. We also discuss remaining challenges to accurate indoor localization."}, "keywords": ["mmWavebased angle estimations", "high accuracy positioning"], "citation_intent": "method"} {"citing_id": "2303.10753v1", "cited_id": "1803.04755", "section_title": "Related Works:", "citation": "Masuda and Holme #REFR used a graph distance measure and hierarchical clustering to detect and cluster evolving states in social temporal networks.", "text_before_citation": ["The authors illustrate the usefulness of their approach by applying it to synthetic datasets such as the Barabasi-Albert graphs and small world graphs.", "(3) Change Point Detection in Social Networks.", "In the field of change point detection in social networks, Wang et al.", "#OTHEREFR proposed an algorithm based on a Markov generative process to analyze graph snapshots of dynamic social networks.", "The algorithm was tested on real-world networks, including political voting networks, but it fails to account for long-term dependence and assumes rare changes."], "text_after_citation": ["Their approach assumes the entire network system is 
described by a single system state, which could be relaxed to multi-state setup for social networks with community structure. Zhao et al.", "#OTHEREFR introduced a model-free change point detection method for dynamic social networks that uses neighborhood smoothing to estimate edge probabilities.", "However, the algorithm is not applicable to directed networks or networks with an evolving number of nodes. Grattarola et al.", "#OTHEREFR proposed a data-driven method for detecting changes in stationarity in a stream of attributed graphs.", "They used an adversarial autoencoder to embed graphs on constant-curvature manifolds, and employed the Fr\u00e9chet mean to represent the average of networks."], "citing_paper_content": {"title": "Fr\u00e9chet Statistics Based Change Point Detection In Dynamic Social Networks", "abstract": "This paper proposes a method to detect change points in dynamic social networks using Fr\u00e9chet statistics. We address two main questions: (1) what metric can quantify the distances between graph Laplacians in a dynamic network and enable efficient computation, and (2) how can the Fr\u00e9chet statistics be extended to detect multiple change points while maintaining the significance level of the hypothesis test? Our solution defines a metric space for graph Laplacians using the Log-Euclidean metric, enabling a closed-form formula for Fr\u00e9chet mean and variance. We present a framework for change point detection using Fr\u00e9chet statistics and extend it to multiple change points with binary segmentation. 
The proposed algorithm uses incremental computation for Fr\u00e9chet mean and variance to improve efficiency and is validated on simulated and two real-world datasets, namely the UCI message dataset and the Enron email dataset."}, "cited_paper_content": {"title": "Detecting Sequences Of System States In Temporal Networks", "abstract": "Many time-evolving systems in nature, society and technology leave traces of the interactions within them. These interactions form temporal networks that reflect the states of the systems. In this work, we pursue a coarse-grained description of these systems by proposing a method to assign discrete states to the systems and inferring the sequence of such states from the data. Such states could, for example, correspond to a mental state (as inferred from neuroimaging data) or the operational state of an organization (as inferred by interpersonal communication). Our method combines a graph distance measure and hierarchical clustering. Using several empirical data sets of social temporal networks, we show that our method is capable of inferring the system's states such as distinct activities in a school and a weekday state as opposed to a weekend state. 
We expect the methods to be equally useful in other settings such as temporally varying protein interactions, ecological interspecific interactions, functional connectivity in the brain and adaptive social networks."}, "keywords": ["social temporal networks"], "citation_intent": "method"} {"citing_id": "2303.01667v1", "cited_id": "1501.01809", "section_title": "Ge 2", "citation": "We use Firedrake #REFR to discretize (4.1) with linear finite elements on tetrahedra, and select an average of 25 points per cluster.", "text_before_citation": ["Indeed, this is reflected in the convergence shown in Figure 11 , where we observe a dramatic reduction in the number of iterations for solvers with these clusters.", "It is, of course, important to note that not every clustering generated by standard Lloyd exhibits similarly poor performance.", "The quality of the initial clustering used to seed the algorithm plays an important role in determining the multigrid performance.", "Results seen here are for a representative, randomly generated, initial seeding. 
rotated about the z-axis (see Table 1).", "A 3D tetrahedral mesh with 16 921 elements is generated with Gmsh #OTHEREFR through pygmsh #OTHEREFR ."], "text_after_citation": ["2D restricted channel: The 2D domain \u2126 is defined by [\u22122, 2] \u00d7 [\u22121, 1] \\ C with C = C_+ \u222a C_-, for C_\u00b1 representing discs of radius 0.8 at (0, \u00b11) (see Table 1).", "As for the 3D restricted channel, we use Gmsh to generate a graded, triangular mesh with 5832 elements, with a characteristic length of 0.012 at the center and growing to 0.12 at the left/right edges.", "This forces tighter clustering toward the center, as shown in Table 1.", "The discretization matrix for (4.1) is constructed with linear finite elements, and we target clusters of size 8.", "2D anisotropic diffusion: The 2D domain is defined by the unit square, and we consider the problem \u2212\u2207 \u2022 K\u2207u = f with pure Dirichlet conditions."], "citing_paper_content": {"title": "Generalizing Lloyd'S Algorithm For Graph Clustering *", "abstract": "Clustering is a commonplace problem in many areas of data science, with applications in biology and bioinformatics, understanding chemical structure, image segmentation, building recommender systems, and many more fields. While there are many different clustering variants (based on given distance or graph structure, probability distributions, or data density), we consider here the problem of clustering nodes in a graph, motivated by the problem of aggregating discrete degrees of freedom in multigrid and domain decomposition methods for solving sparse linear systems. Specifically, we consider the challenge of forming balanced clusters in the graph of a sparse matrix for use in algebraic multigrid, although the algorithm has general applicability.
Based on an extension of the Bellman-Ford algorithm, we generalize Lloyd's algorithm for partitioning subsets of R^n to balance the number of nodes in each cluster; this is accompanied by a rebalancing algorithm that reduces the overall energy in the system. The algorithm provides control over the number of clusters and leads to "well centered" partitions of the graph. Theoretical results are provided to establish linear complexity, and numerical results in the context of algebraic multigrid highlight the benefits of improved clustering."}, "cited_paper_content": {"title": "Firedrake: Automating The Finite Element Method By Composing Abstractions", "abstract": "Firedrake is a new tool for automating the numerical solution of partial differential equations. Firedrake adopts the domain-specific language for the finite element method of the FEniCS project, but with a pure Python runtime-only implementation centered on the composition of several existing and new abstractions for particular aspects of scientific computing. The result is a more complete separation of concerns that eases the incorporation of separate contributions from computer scientists, numerical analysts, and application specialists. These contributions may add functionality or improve performance. Firedrake benefits from automatically applying new optimizations. This includes factorizing mixed function spaces, transforming and vectorizing inner loops, and intrinsically supporting block matrix operations. Importantly, Firedrake presents a simple public API for escaping the UFL abstraction.
This allows users to implement common operations that fall outside of pure variational formulations, such as flux limiters."}, "keywords": ["cluster", "linear finite elements"], "citation_intent": "method"} {"citing_id": "2304.09648v1", "cited_id": "1511.05952", "section_title": "Reinforcement Learning", "citation": "Prioritized Experience Replay (PER) #REFR is a technique for experience replay which involves replaying transitions with a high expected learning progress, as determined by the size of their temporal-difference (TD) error, more frequently than others.", "text_before_citation": ["Directly replacing the tabular Q functions with nonlinear function approximators such neural networks sometimes makes the training of RL agents generally hard to converge #OTHEREFR .", "Potential reasons are correlated states or observations and optimization towards an unstable target. Experience replay is a successful method for addressing non-i.i.d. sampling.", "The original setting of experience replay #OTHEREFR is to sample a batch of transitions uniformly from the replay memory and optimize the model.", "Although this approach is viable, it may not be the most efficient method since the model may acquire more knowledge from certain transitions and less from others.", "For example, experiences that led to large rewards or that resulted in the agent making a significant error are likely to be more informative than experiences that had little impact on the agent's behavior."], "text_after_citation": ["The probability of sampling the transition i is defined to be", "P (i) = p \u03b1 i k p \u03b1 k", "where p i > 0 is the priority of transition i.", "The parameter \u03b1 controls how much the prioritization is used.", "It means sampling transitions uniformly from the replay memory when \u03b1 = 0."], "citing_paper_content": {"title": "Quantum Deep Q Learning With Distributed Prioritized Experience Replay", "abstract": "This paper introduces the QDQN-DPER framework to enhance 
the efficiency of quantum reinforcement learning (QRL) in solving sequential decision tasks. The framework incorporates prioritized experience replay and asynchronous training into the training algorithm to reduce the high sampling complexities. Numerical simulations demonstrate that QDQN-DPER outperforms the baseline distributed quantum Q learning with the same model architecture. The proposed framework holds potential for more complex tasks while maintaining training efficiency."}, "cited_paper_content": {"title": "Prioritized Experience Replay", "abstract": "Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. 
DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games."}, "keywords": ["Prioritized Experience Replay"], "citation_intent": "method"} {"citing_id": "2304.03608v2", "cited_id": "1811.11168", "section_title": "Gflops", "citation": "Compared to the network with vanilla convolution (fifth row), the network with DCN #REFR in the last two blocks (2xDCN) increases the computation by only 0.1 GFLOPs and the running time by only 0.8 ms.", "text_before_citation": ["Feature Extraction: For accurate visual measurements, the extracted image feature should have a good localization performance and a large respective field on the image for a robust descriptor extraction.", "We expect to achieve these goals by improving the baseline network ALIKE-N #OTHEREFR .", "As shown in the first two rows of Table IX, we start by changing max-pooling to average-pooling (AVG) and ReLU to SELU #OTHEREFR .", "These changes improve the mAA(10\u00b0) on the IMW-validation #OTHEREFR by 3.43% but have limited improvements on the other metrics.", "Furthermore, for geometric invariance feature extraction, we replace the vanilla convolutions in the last two blocks with DCN #OTHEREFR as shown in the sixth and seventh rows of Table IX."], "text_after_citation": ["Besides improving the MS@3 on the Hpatches [50] by 2.81%, this change also improves the mAA(10\u00b0) and MS@3 on the IMW-validation #OTHEREFR by 4.36% and 3.28%, respectively.", "Since the first two blocks have higher feature resolution, using DCN #OTHEREFR would significantly increase the computational cost.", "Moreover, since the front blocks are responsible for low-level feature extraction, using DCN #OTHEREFR may degrade performance.", "Therefore, we use DCN #OTHEREFR only in the last two blocks.", "Score Head: Due to computational cost considerations, the baseline network has a score head of only one 1 \u00d7 1 convolutional layer."], "citing_paper_content": 
{"title": "Aliked: A Lighter Keypoint And Descriptor Extraction Network Via Deformable Transformation", "abstract": "Image keypoints and descriptors play a crucial role in many visual measurement tasks. In recent years, deep neural networks have been widely used to improve the performance of keypoint and descriptor extraction. However, the conventional convolution operations do not provide the geometric invariance required for the descriptor. To address this issue, we propose the Sparse Deformable Descriptor Head (SDDH), which learns the deformable positions of supporting features for each keypoint and constructs deformable descriptors. Furthermore, SDDH extracts descriptors at sparse keypoints instead of a dense descriptor map, which enables efficient extraction of descriptors with strong expressiveness. In addition, we relax the neural reprojection error (NRE) loss from dense to sparse to train the extracted sparse descriptors. Experimental results show that the proposed network is both efficient and powerful in various visual measurement tasks, including image matching, 3D reconstruction, and visual relocalization."}, "cited_paper_content": {"title": "Deformable Convnets V2: More Deformable, Better Results", "abstract": "The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. 
The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation."}, "keywords": ["vanilla convolution"], "citation_intent": "background"} {"citing_id": "2304.01560v1", "cited_id": "1909.06492", "section_title": "I. Introduction", "citation": "Reference #REFR first reconstructs a harvesting function from samples by deep learning regression, and then, it optimizes constellation points for modulation by autoencoders based on neural networks.", "text_before_citation": ["In particular in #OTHEREFR , a nonlinear analytic model for a harvesting circuit is derived by approximating an ideal diode's characteristics via the Taylor series, and based on the model, it proposes to use multisine waveforms to deliver the largest energy.", "However, this approach has limitations because individual components' laws may not even follow ideal circuit laws due to coupling effect, parasitic capacitors, etc., and then such approximation errors could accumulate.", "For example, the parasitic components resulting from interrelationships make it difficult to analytically describe an equivalent end-to-end model for a circuit containing diodes.", "Indeed, the true end-to-end energy harvesting function may only be available through experimental samples or circuit simulations #OTHEREFR - #OTHEREFR .", "Inspired by this, recent literature attempts to take experimental samples and use a deep 
learning approach."], "text_after_citation": ["Reference #OTHEREFR focuses on the memory effect of harvesting circuits.", "To capture the memory effect, it proposes a Markov decision process (MDP) model for the harvesting circuit and uses neural networks to estimate the MDP's parameters from data created by a circuit simulator.", "Despite their success in practice, the deep learning approach has a critical drawback in that it generally provides no theoretical performance guarantees.", "In addition, since optimization via deep learning is a black-box algorithm, it generally provides limited insights into SIET.", "For instance, it cannot answer how many experimental samples are needed to attain a particular performance or how close our SIET design is to the optimal SIET."], "citing_paper_content": {"title": "Information And Energy Transmission With Wavelet-Reconstructed Harvesting Functions", "abstract": "In practical simultaneous information and energy transmission (SIET), the exact energy harvesting function is usually unavailable because an energy harvesting circuit is nonlinear and nonideal. In this work, we consider a SIET problem where the harvesting function is accessible only at experimentally-taken sample points and study how close we can design SIET to the optimal system with such sampled knowledge. Assuming that the harvesting function is of bounded variation that may have discontinuities, we separately consider two settings where samples are taken without and with additive noise. For these settings, we propose to design a SIET system as if a waveletreconstructed harvesting function is the true one and study its asymptotic performance loss of energy and information delivery from the true optimal one. Specifically, for noiseless samples, it is shown that designing SIET as if the wavelet-reconstructed harvesting function is the truth incurs asymptotically vanishing energy and information delivery loss with the number of samples. 
For noisy samples, we propose to reconstruct wavelet coefficients via soft-thresholding estimation. Then, we not only obtain similar asymptotic losses to the noiseless case but also show that the energy loss by wavelets is asymptotically optimal up to a logarithmic factor."}, "cited_paper_content": {"title": "Learning To Communicate And Energize: Modulation, Coding And Multiple Access Designs For Wireless Information-Power Transmission", "abstract": "The explosion of the number of low-power devices in the next decades calls for a re-thinking of wireless network design, namely, unifying wireless transmission of information and power so as to make the best use of the RF spectrum, radiation, and infrastructure for the dual purpose of communicating and energizing. This paper provides a novel learning-based approach towards such wireless network design. To that end, a parametric model of a practical energy harvester, accounting for various sources of nonlinearities, is proposed using a nonlinear regression algorithm applied over collected real data. Relying on the proposed model, the learning problem of modulation design for Simultaneous Wireless Information-Power Transmission (SWIPT) over a point-to-point link is studied. Joint optimization of the transmitter and the receiver is implemented using Neural Network (NN)-based autoencoders. The results reveal that by increasing the receiver power demand, the baseband transmit modulation constellation converges to an On-Off keying signalling. Utilizing the observations obtained via learning, an algorithmic SWIPT modulation design is proposed. It is observed via numerical results that the performance loss of the proposed modulations are negligible compared to the ones obtained from learning. Extension of the studied problem to learning modulation design for multi-user SWIPT scenarios and coded modulation design for point-to-point SWIPT are considered. 
The major conclusion of this work is to utilize learning-based results to design non learning-based algorithms, which perform as well. In particular, inspired by the results obtained via learning, an algorithmic approach for coded modulation design is proposed, which performs very close to its learning counterparts, and is significantly superior due to its high real-time adaptability to new system design parameters."}, "keywords": ["harvesting function", "autoencoders"], "citation_intent": "method"} {"citing_id": "2304.12399v1", "cited_id": "1807.02874", "section_title": "C. Outline", "citation": "In Section IV we compare our result with the converse bound which follows from #REFR , and with another code construction that does not allow for efficient encoding. Section V concludes the main part of the paper.", "text_before_citation": ["The paper is organized as follows.", "In Section II, we introduce important definitions and notations that are used throughout the paper.", "We present our code construction and explain all its components in Section III.", "The main result is formulated in Theorem 1 and its proof is divided into Lemmas 1-6."], "text_after_citation": [], "citing_paper_content": {"title": "Codes Correcting A Single Long Duplication Error", "abstract": "We consider the problem of constructing a code capable of correcting a single long tandem duplication error of variable length. As the main contribution of this paper, we present a q-ary efficiently encodable code of length n + 1 and redundancy 1 that can correct a single duplication of length at least K = 4 \u2022 log q n + 1. The complexity of encoding is O(n 2 log n) and the complexity of decoding is O(n). We also present a q-ary non-efficient code of length n + 1 correcting single long duplication of length at least K = log q n + \u03c6(n), where \u03c6(n) \u2192 \u221e as n \u2192 \u221e. This code has redundancy less than 1 for sufficiently large n. 
Moreover, we show that in the class of codes correcting a single long duplication with redundancy 1, the value K in our constructions is order-optimal."}, "cited_paper_content": {"title": "Bounds And Constructions For Multi-Symbol Duplication Error Correcting Codes", "abstract": "In this paper, we study codes correcting $t$ duplications of $\\ell$ consecutive symbols. These errors are known as tandem duplication errors, where a sequence of symbols is repeated and inserted directly after its original occurrence. Using sphere packing arguments, we derive non-asymptotic upper bounds on the cardinality of codes that correct such errors for any choice of parameters. Based on the fact that a code correcting insertions of $t$ zero-blocks can be used to correct $t$ tandem duplications, we construct codes for tandem duplication errors. We compare the cardinalities of these codes with their sphere packing upper bounds. Finally, we discuss the asymptotic behavior of the derived codes and bounds, which yields insights about the tandem duplication channel."}, "keywords": ["efficient encoding"], "citation_intent": "result"} {"citing_id": "2303.06907v1", "cited_id": "1905.08409", "section_title": "Proposed Method", "citation": "To improve the model performance in 360 \u2022 IQA task, we utilize tangent image representation #REFR to reduce the distortion of the viewports extracted from the ERP image.", "text_before_citation": ["Our sampling module then extracts tangent viewports from salient regions of the distorted image in ERP format.", "The main VQA module resembles the one used in #OTHEREFR , but the patch encoding block is re-modeled from the ground up to deal with the special structure of 360 \u2022 images.", "To effectively encode input viewports into a sequence of tokens, we incorporate positional, geometric and source embeddings into the extracted sequence of tokens, and add a learnable classification token (CLS) to capture the global representation for the image.", "The 
corresponding 360\u00b0 image quality score is predicted by the output of a fully connected layer on top of the final CLS token representation at the output of the Transformer encoder. Sampling Module.", "Most 360 \u2022 IQA datasets store images in the ERP format, which is the most popular spherical image representation, but is known to have significant distortions."], "text_after_citation": ["Moreover, to distinguish visually important parts of a panoramic image, we designed a sampling strategy motivated by the human visual attention mechanism.", "In particular, we employ the 360 \u2022 image saliency prediction model named ATSal #OTHEREFR to predict salient regions of the panorama.", "In this way, one can assign a saliency score for each patch extracted from the omnidirectional image, which we utilize in our sampling module.", "Our main motivation behind our saliency-guided sampling scheme is to combine the neural attention (self-attention) mechanism with human visual attention in a simple and intuitive manner.", "At the first step of the sampling module, an input image in ERP format is fed to the ATSal saliency prediction model to predict a saliency map showing which regions are most likely to attract attention."], "citing_paper_content": {"title": "St360Iq: No-Reference Omnidirectional Image Quality Assessment With Spherical Vision Transformers", "abstract": "Omnidirectional images, aka 360 \u2022 images, can deliver immersive and interactive visual experiences. As their popularity has increased dramatically in recent years, evaluating the quality of 360 \u2022 images has become a problem of interest since it provides insights for capturing, transmitting, and consuming this new media. However, directly adapting quality assessment methods proposed for standard natural images for omnidirectional data poses certain challenges. These models need to deal with very high-resolution data and implicit distortions due to the spherical form of the images.
In this study, we present a method for no-reference 360 \u2022 image quality assessment. Our proposed ST360IQ model extracts tangent viewports from the salient parts of the input omnidirectional image and employs a vision-transformers based module processing saliency selective patches/tokens that estimates a quality score from each viewport. Then, it aggregates these scores to give a final quality score. Our experiments on two benchmark datasets, namely OIQA and CVIQ datasets, demonstrate that as compared to the state-of-the-art, our approach predicts the quality of an omnidirectional image correlated with the human-perceived image quality. The code has been available on https://github.com/Nafiseh-Tofighi/ST360IQ"}, "cited_paper_content": {"title": "Convolutions On Spherical Images", "abstract": "Applying convolutional neural networks to spherical images requires particular considerations. We look to the millennia of work on cartographic map projections to provide the tools to define an optimal representation of spherical images for the convolution operation. We propose a representation for deep spherical image inference based on the icosahedral Snyder equal-area (ISEA) projection, a projection onto a geodesic grid, and show that it vastly exceeds the state-of-the-art for convolution on spherical images, improving semantic segmentation results by 12.6%."}, "keywords": ["tangent image representation", "360 \u2022 IQA"], "citation_intent": "method"} {"citing_id": "2303.09677v1", "cited_id": "1905.10887", "section_title": "Per-Class Analysis", "citation": "These findings improve upon those of #REFR , where a pre-trained BigGAN showed little to no correlation between FID and classification accuracy in a similar setting, strengthening the position of instance-conditioned models such as (CC-)IC-GAN.
We observe similar trends for DeiT-B (see Appendix B).", "text_before_citation": ["In this analysis, we shed some light on the problematic (CC-)IC-GAN modeling of certain classes.", "We believe that computing stratified results for generative models might be a good practice to be adopted by the community, as also supported by #OTHEREFR .", "Nevertheless, the observed positive correlation between high classification accuracy and (CC-)IC-GAN's generation quality (studied through the lens of per-class FID and NN corruption) constitutes a promising result to improve the effectiveness of DA IC-GAN.", "To this end, we ran an additional experiment where we avoid applying DA IC-GAN on classes having very high FID (>= 150), i.e., where (CC-)IC-GAN has very low generation quality. We report the results in Appendix B.", "Notably, the impact of leveraging DA IC-GAN could be potentially improved by increasing the generation quality of the (CC-)IC-GAN's poorly modeled classes."], "text_after_citation": [], "citing_paper_content": {"title": "Instance-Conditioned Gan Data Augmentation For Representation Learning", "abstract": "Data augmentation has become a crucial component to train state-of-the-art visual representation models. However, handcrafting combinations of transformations that lead to improved performances is a laborious task, which can result in visually unrealistic samples. To overcome these limitations, recent works have explored the use of generative models as learnable data augmentation tools, showing promising results in narrow application domains, e.g., few-shot learning and low-data medical imaging. In this paper, we introduce a data augmentation module, called DA IC-GAN, which leverages instance-conditioned GAN generations and can be used off-the-shelf in conjunction with most state-of-the-art training recipes.
We showcase the benefits of DA IC-GAN by plugging it out-of-the-box into the supervised training of ResNets and DeiT models on the ImageNet dataset, and achieving accuracy boosts up to between 1%p and 2%p with the highest capacity models. Moreover, the learnt representations are shown to be more robust than the baselines when transferred to a handful of out-of-distribution datasets, and exhibit increased invariance to variations of instance and viewpoints. We additionally couple DA IC-GAN with a self-supervised training recipe and show that we can also achieve an improvement of 1%p in accuracy in some settings. With this work, we strengthen the evidence on the potential of learnable data augmentations to improve visual representation learning, paving the road towards non-handcrafted augmentations in model training."}, "cited_paper_content": {"title": "Classification Accuracy Score For Conditional Generative Models", "abstract": "Deep generative models (DGMs) of images are now sufficiently mature that they produce nearly photorealistic samples and obtain scores similar to the data distribution on heuristics such as Frechet Inception Distance (FID). These results, especially on large-scale datasets such as ImageNet, suggest that DGMs are learning the data distribution in a perceptually meaningful space and can be used in downstream tasks. To test this latter hypothesis, we use class-conditional generative models from a number of model classes\u2014variational autoencoders, autoregressive models, and generative adversarial networks (GANs)\u2014to infer the class labels of real data. We perform this inference by training an image classifier using only synthetic data and using the classifier to predict labels on real data. The performance on this task, which we call Classification Accuracy Score (CAS), reveals some surprising results not identified by traditional metrics and constitute our contributions. 
First, when using a state-of-the-art GAN (BigGAN-deep), Top-1 and Top-5 accuracy decrease by 27.9% and 41.6%, respectively, compared to the original data; and conditional generative models from other model classes, such as Vector-Quantized Variational Autoencoder-2 (VQ-VAE-2) and Hierarchical Autoregressive Models (HAMs), substantially outperform GANs on this benchmark. Second, CAS automatically surfaces particular classes for which generative models failed to capture the data distribution, and were previously unknown in the literature. Third, we find traditional GAN metrics such as Inception Score (IS) and FID neither predictive of CAS nor useful when evaluating non-GAN models. Furthermore, in order to facilitate better diagnoses of generative models, we open-source the proposed metric."}, "keywords": ["CC-)IC-GAN"], "citation_intent": "result"} {"citing_id": "2304.08222v1", "cited_id": "1801.00868", "section_title": "Intra-Batch Supervision", "citation": "From these results, we find that IBS yields a consistent but modest improvement on the Panoptic Quality (PQ) metric #REFR .", "text_before_citation": ["We apply our Intra-Batch Supervision (IBS) to two state-of-the-art unified panoptic segmentation methods, and evaluate the performance on two high-resolution datasets in Table 2."], "text_after_citation": ["This is mainly caused by a boost in the PQ for things classes, PQ_th, which consistently shows considerable improvements.", "This is expected, because the aim of our IBS is to improve the confusion problem that occurs for thing segments.", "As explained in Section 3.3, however, the ability of the PQ metric to capture errors for large objects of frequently-occurring classes, such as the confusion problem that we are addressing, is limited.", "To better capture this, we also report on the pixel accuracy (Acc_th) and pixel precision (Prec_th) for things classes.", "For these metrics, we observe much more significant improvements, e.g., a +5.8 improvement for
both metrics for Panoptic FCN on Cityscapes, and +4.0 and +5.3 for Mask2Former on Mapillary Vistas."], "citing_paper_content": {"title": "Intra-Batch Supervision For Panoptic Segmentation On High-Resolution Images", "abstract": "Unified panoptic segmentation methods are achieving state-of-the-art results on several datasets. To achieve these results on high-resolution datasets, these methods apply crop-based training. In this work, we find that, although crop-based training is advantageous in general, it also has a harmful side-effect. Specifically, it limits the ability of unified networks to discriminate between large object instances, causing them to make predictions that are confused between multiple instances. To solve this, we propose Intra-Batch Supervision (IBS), which improves a network's ability to discriminate between instances by introducing additional supervision using multiple images from the same batch. We show that, with our IBS, we successfully address the confusion problem and consistently improve the performance of unified networks. For the high-resolution Cityscapes and Mapillary Vistas datasets, we achieve improvements of up to +2.5 on the Panoptic Quality for thing classes, and even more considerable gains of up to +5.8 on both the pixel accuracy and pixel precision, which we identify as better metrics to capture the confusion problem. (Teaser figure: PQ_th/Acc_th of 51.1/85.1 for (a) full-image training, 52.2/81.3 for (b) crop-based training, and 54.7/87.1 for (c) crop-based training + IBS (ours).)"}, "cited_paper_content": {"title": "Panoptic Segmentation", "abstract": "We propose and study a task we name panoptic segmentation (PS). Panoptic segmentation unifies the typically distinct tasks of semantic segmentation (assign a class label to each pixel) and instance segmentation (detect and segment each object instance).
The proposed task requires generating a coherent scene segmentation that is rich and complete, an important step toward real-world vision systems. While early work in computer vision addressed related image/scene parsing tasks, these are not currently popular, possibly due to lack of appropriate metrics or associated recognition challenges. To address this, we propose a novel panoptic quality (PQ) metric that captures performance for all classes (stuff and things) in an interpretable and unified manner. Using the proposed metric, we perform a rigorous study of both human and machine performance for PS on three existing datasets, revealing interesting insights about the task. The aim of our work is to revive the interest of the community in a more unified view of image segmentation. For more analysis and up-to-date results, please check the arXiv version of the paper: {\\small\\url{https://arxiv.org/abs/1801.00868}}."}, "keywords": ["consistent"], "citation_intent": "result"} {"citing_id": "2303.11212v1", "cited_id": "1806.02296", "section_title": "Introduction", "citation": "However, as shown in #REFR , such requirements are unrealistic on the widely-used denoisers mentioned above, as they do not have symmetric Jacobians.", "text_before_citation": ["PnP versions of proximal algorithms have been used to solve image restoration problems such as for example PnP-PGD in #OTHEREFR , PnP-ADMM and PnP-DRS in #OTHEREFR and PnP-HQS in #OTHEREFR .", "In #OTHEREFR an explicit regularisation by denoising (RED) strategy was designed in terms of an explicit function R(\u2022) defined, for generic image denoiser D, by:", "R(x) := (1/2) x^T (x \u2212 D(x)).", "Under conditions of local homogeneity, non-expansiveness, and Jacobian symmetry, D was shown to be indeed equivalent to a gradient step on R #OTHEREFR , that is,", "D(x) = x \u2212 \u2207R(x)."], "text_after_citation": ["In order to overcome this limitation, in #OTHEREFR , the authors proposed to formulate, similar to RED, a
gradient step denoiser of the form:", "EQUATION", "where R_\u03c3 : R^{n^2} \u2192 R is a scalar function parameterised by a neural network", "N_\u03c3 : R^{n^2} \u2192 R^{n^2}.", "Interestingly, under mild structural assumption on D_\u03c3, the authors are able to prove sound convergence guarantees for the underlying nonconvex optimisation problem defined in terms of a non-trivial (but explicit) regularisation function R(\u2022)."], "citing_paper_content": {"title": "Fluctuation-Based Deconvolution In Fluorescence Microscopy Using Plug-And-Play Denoisers", "abstract": "The spatial resolution of images of living samples obtained by fluorescence microscopes is physically limited due to the diffraction of visible light, which makes the study of entities of size less than the diffraction barrier (around 200 nm in the x-y plane) very challenging. To overcome this limitation, several deconvolution and super-resolution techniques have been proposed. Within the framework of inverse problems, modern approaches in fluorescence microscopy reconstruct a superresolved image from a temporal stack of frames by carefully designing suitable hand-crafted sparsity-promoting regularisers. Numerically, such approaches are solved by proximal gradient-based iterative schemes. Aiming at obtaining a reconstruction more adapted to sample geometries (e.g. thin filaments), we adopt a plug-and-play denoising approach with convergence guarantees and replace the proximity operator associated with the explicit image regulariser with an image denoiser (i.e. a pretrained network) which, upon appropriate training, mimics the action of an implicit prior. To account for the independence of the fluctuations between molecules, the model relies on second-order statistics. The denoiser is then trained on covariance images coming from data representing sequences of fluctuating fluorescent molecules with filament structure.
The method is evaluated on both simulated and real fluorescence microscopy images, showing its ability to correctly reconstruct filament structures with high values of peak signal-to-noise ratio (PSNR)."}, "cited_paper_content": {"title": "Regularization By Denoising: Clarifications And New Interpretations", "abstract": "Regularization by denoising (RED), as recently proposed by Romano, Elad, and Milanfar, is a powerful image-recovery framework that aims to minimize an explicit regularization objective constructed from a plug-in image-denoising function. Experimental evidence suggests that the RED algorithms are a state of the art. We claim, however, that explicit regularization does not explain the RED algorithms. In particular, we show that many of the expressions in the paper by Romano et al. hold only when the denoiser has a symmetric Jacobian, and we demonstrate that such symmetry does not occur with practical denoisers such as nonlocal means, BM3D, TNRD, and DnCNN. To explain the RED algorithms, we propose a new framework called Score-Matching by Denoising (SMD), which aims to match a \u201cscore\u201d (i.e., the gradient of a log-prior). We then show tight connections between SMD, kernel density estimation, and constrained minimum mean-squared error denoising. Furthermore, we interpret the RED algorithms from Romano et al. and propose new algorithms with acceleration and convergence guarantees.
Finally, we show that the RED algorithms seek a consensus equilibrium solution, which facilitates a comparison to plug-and-play ADMM."}, "keywords": ["widely-used denoisers"], "citation_intent": "background"} {"citing_id": "2303.10949v1", "cited_id": "2002.02562", "section_title": "Introduction", "citation": "In this paper, we mainly focus on the Transformer-Transducer (T-T) architecture #REFR and investigate cross-modality learning methods to leverage text-only data for improving the performance of the Mandarin-English code-switching ASR system.", "text_before_citation": ["Self-supervised pretraining with multi-lingual unlabeled data without code-switching speech has also been shown effective to improve the performance of Mandarin-English code-switching ASR #OTHEREFR .", "Transducer-based ASR model is very attractive in the industry since it provides a natural way for streaming.", "However, it is not so straightforward to utilize unspoken text data except converting to synthesized speech or the features from TTS with a high computational cost #OTHEREFR .", "There is no attention mechanism between the encoder and the decoder.", "Although the prediction module plays as a Language Model (LM), it is different from the conventional LM due to the blank token #OTHEREFR that makes it harder to leverage external LM."], "text_after_citation": [], "citing_paper_content": {"title": "Code-Switching Text Generation And Injection In Mandarin-English Asr", "abstract": "Code-switching speech refers to a means of expression by mixing two or more languages within a single utterance. Automatic Speech Recognition (ASR) with End-to-End (E2E) modeling for such speech can be a challenging task due to the lack of data. In this study, we investigate text generation and injection for improving the performance of an industry commonly-used streaming model, Transformer-Transducer (T-T), in Mandarin-English code-switching speech recognition.
We first propose a strategy to generate code-switching text data and then investigate injecting generated text into T-T model explicitly by Text-To-Speech (TTS) conversion or implicitly by tying speech and text latent spaces. Experimental results on the T-T model trained with a dataset containing 1,800 hours of real Mandarin-English code-switched speech show that our approaches to inject generated code-switching text significantly boost the performance of T-T models, i.e., 16% relative Token-based Error Rate (TER) reduction averaged on three evaluation sets, and the approach of tying speech and text latent spaces is superior to that of TTS conversion on the evaluation set which contains more homogeneous data with the training set."}, "cited_paper_content": {"title": "Transformer Transducer: A Streamable Speech Recognition Model With Transformer Encoders And Rnn-T Loss", "abstract": "In this paper we present an end-to-end speech recognition model with Transformer encoders that can be used in a streaming speech recognition system. Transformer computation blocks based on self-attention are used to encode both audio and label sequences independently. The activations from both audio and label encoders are combined with a feed-forward layer to compute a probability distribution over the label space for every combination of acoustic frame position and label history. This is similar to the Recurrent Neural Network Transducer (RNN-T) model, which uses RNNs for information encoding instead of Transformer encoders. The model is trained with the RNN-T loss well-suited to streaming decoding. We present results on the LibriSpeech dataset showing that limiting the left context for self-attention in the Transformer layers makes decoding computationally tractable for streaming, with only a slight degradation in accuracy. We also show that the full attention version of our model beats the state-of-the-art accuracy on the LibriSpeech benchmarks.
Our results also show that we can bridge the gap between full attention and limited attention versions of our model by attending to a limited number of future frames."}, "keywords": ["Mandarin-English code-switching ASR"], "citation_intent": "method"} {"citing_id": "2303.10611v1", "cited_id": "2001.03799", "section_title": "Ablation Study", "citation": "The effect of recurrent time is similar to DuDoRNet #REFR and results are included in supplementary materials.", "text_before_citation": ["The reason for this design is that we weigh the utility of image reconstruction network and the synergy between image and K-space reconstruction networks over K-space reconstruction.", "For K-GLIM and I-LDE, we try applying it to the other domain, both leading to performance drops. This demonstrates our design is domain-specific.", "For X-TL, \u03b8_x is set to 1 by default, corresponding to the best choice for \u03b8_i.", "As a result, only \u03b8_k is modified in the last row.", "By gradually changing DuDoRNet to ours, our method also provides a possible hybridizing strategy for current CNN models."], "text_after_citation": [], "citing_paper_content": {"title": "Dudornext: A Hybrid Model For Dual-Domain Undersampled Mri Reconstruction", "abstract": "Undersampled MRI reconstruction is crucial for accelerating clinical scanning procedures. Recent deep learning methods for MRI reconstruction adopt CNN or ViT as backbone, which lack in utilizing the complementary properties of CNN and ViT. In this paper, we propose DuDoRNeXt, whose backbone hybridizes CNN and ViT in a domain-specific, intra-stage way. Besides our hybrid vertical layout design, we introduce domain-specific modules for dual-domain reconstruction, namely image-domain parallel local detail enhancement and k-space global initialization.
We evaluate different conventions of MRI reconstruction including image-domain, k-space-domain, and dual-domain reconstruction with a reference protocol on the IXI dataset and an in-house multi-contrast dataset. DuDoRNeXt achieves significant improvements over competing deep learning methods."}, "cited_paper_content": {"title": "Dudornet: Learning A Dual-Domain Recurrent Network For Fast Mri Reconstruction With Deep T1 Prior", "abstract": "MRI with multiple protocols is commonly used for diagnosis, but it suffers from a long acquisition time, which yields the image quality vulnerable to say motion artifacts. To accelerate, various methods have been proposed to reconstruct full images from undersampled k-space data. However, these algorithms are inadequate for two main reasons. Firstly, aliasing artifacts generated in the image domain are structural and non-local, so that sole image domain restoration is insufficient. Secondly, though MRI comprises multiple protocols during one exam, almost all previous studies only employ the reconstruction of an individual protocol using a highly distorted undersampled image as input, leaving the use of fully-sampled short protocol (say T1) as complementary information highly underexplored. In this work, we address the above two limitations by proposing a Dual Domain Recurrent Network (DuDoRNet) with deep T1 prior embedded to simultaneously recover k-space and images for accelerating the acquisition of MRI with a long imaging protocol. Specifically, a Dilated Residual Dense Network (DRDNet) is customized for dual domain restorations from undersampled MRI data.
Extensive experiments on different sampling patterns and acceleration rates demonstrate that our method consistently outperforms state-of-the-art methods, and can achieve SSIM up to 0.99 at $6 \\times$ acceleration."}, "keywords": ["DuDoRNet"], "citation_intent": "result"} {"citing_id": "2305.00595v1", "cited_id": "2001.08922", "section_title": "Anomaly Detection Approaches For Univariate Time Series", "citation": "RePAD #REFR is an online real-time lightweight unsupervised time series anomaly detection approaches based on LSTM and the Look-Back and Predict-Forward strategy.", "text_before_citation": ["Greenhouse #OTHEREFR ) is a time series anomaly detection algorithm based on Long Short-Term Memory (LSTM), which is a special recurrent neural network suitable for long-term dependent tasks #OTHEREFR .", "Greenhouse adopts a Look-Back and Predict-Forward strategy to learn the distribution of the training data.", "For a given time point, a window of most recently observed data point values are used to predict future data point values.", "However, Greenhouse is not an online approach since its LSTM model is trained with a pre-collected training data.", "Besides, it requires users to determine a proper detection threshold."], "text_after_citation": ["RePAD utilizes a simple LSTM network (with only one hidden layer and ten hidden units) to train a LSTM model with shortterm historical data points, predict each upcoming data point, and then decide if each data point is anomalous based on a dynamically calculated detection threshold.", "Different from Greenhouse, RePAD does not need to go through any offline training.", "Instead, RePAD trains its LSTM model on the fly.", "RePAD will keep using the same LSTM model if the model predicts well.", "When the prediction error of the model is higher than or equal to a dynamically calculated detection threshold, RePAD will retrain another new model with recent data points."], "citing_paper_content": {"title": "Impact Of Deep 
Learning Libraries On Online Adaptive Lightweight Time Series Anomaly Detection", "abstract": "Providing online adaptive lightweight time series anomaly detection without human intervention and domain knowledge is highly valuable. Several such anomaly detection approaches have been introduced in the past years, but all of them were only implemented in one deep learning library. With the development of deep learning libraries, it is unclear how different deep learning libraries impact these anomaly detection approaches since there is no such evaluation available. Randomly choosing a deep learning library to implement an anomaly detection approach might not be able to show the true performance of the approach. It might also mislead users in believing one approach is better than another. Therefore, in this paper, we investigate the impact of deep learning libraries on online adaptive lightweight time series anomaly detection by implementing two state-of-the-art anomaly detection approaches in three well-known deep learning libraries and evaluating how these two approaches are individually affected by the three deep learning libraries. A series of experiments based on four real-world open-source time series datasets were conducted. The results provide a good reference to select an appropriate deep learning library for online adaptive lightweight anomaly detection."}, "cited_paper_content": {"title": "Repad: Real-Time Proactive Anomaly Detection For Time Series", "abstract": "During the past decade, many anomaly detection approaches have been introduced in different fields such as network monitoring, fraud detection, and intrusion detection. However, they require understanding of data pattern and often need a long off-line period to build a model or network for the target data.
Providing real-time and proactive anomaly detection for streaming time series without human intervention and domain knowledge is highly valuable since it greatly reduces human effort and enables appropriate countermeasures to be undertaken before a disastrous damage, failure, or other harmful event occurs. However, this issue has not been well studied yet. To address it, this paper proposes RePAD, which is a Real-time Proactive Anomaly Detection algorithm for streaming time series based on unsupervised Long Short-Term Memory (LSTM). RePAD utilizes short-term historic data points to predict and determine whether or not the upcoming data point is a sign that an anomaly is likely to happen in the near future. By dynamically adjusting the detection threshold over time, RePAD is able to tolerate minor pattern change in time series and detect anomalies either proactively or on time. Experiments based on two time series datasets collected from the Numenta Anomaly Benchmark demonstrate that RePAD is able to proactively detect anomalies and provide early warnings in real time without human intervention and domain knowledge."}, "keywords": ["LSTM", "anomaly detection approaches"], "citation_intent": "method"} {"citing_id": "2304.02277v2", "cited_id": "1903.03894", "section_title": "I. Introduction", "citation": "Although there is no location information in a graph, we can still locate the most (least) important area in a graph, like in an image, by using some explanation techniques #REFR .", "text_before_citation": ["Moreover, based on the improvement of the explanation techniques in the graph domain, #OTHEREFR proposed injecting the trigger into the most important or least important area of the sample.", "However, that work does not provide any experimental analysis to confirm the assumptions made.", "Also, there is no work so far on using explanation tools to explain the backdoor attack behavior in the graph domain. 
This work first raises a core question:", "What is the attack performance when injecting trigger into the most or least important area of the sample?", "To answer this question, we explore the impacts of the backdoor trigger-injecting position from the perspective of the most (MIAS) or least important area of the sample (LIAS)."], "text_after_citation": ["As shown in experiments, we demonstrate that the attack performance of LIAS is better, where the difference from MIAS can even be significant. This observation inspires one further question:", "Can we explain this difference? There are already some works on explaining backdoor attacks in the image domain through visualization techniques #OTHEREFR , #OTHEREFR .", "For example, #OTHEREFR plotted the average activations of the backdoored model's last convolutional layer over clean and backdoored images to explain their attack.", "#OTHEREFR used the Grad-CAM #OTHEREFR visualization method to explain the backdoor attack in federated learning.", "One example of explaining a backdoor attack in the image domain with Grad-CAM is shown in Fig. 1 ."], "citing_paper_content": {"title": "Rethinking The Trigger-Injecting Position In Graph Backdoor Attack", "abstract": "Backdoor attacks have been demonstrated as a security threat for machine learning models. Traditional backdoor attacks intend to inject backdoor functionality into the model such that the backdoored model will perform abnormally on inputs with predefined backdoor triggers and still retain state-ofthe-art performance on the clean inputs. While there are already some works on backdoor attacks on Graph Neural Networks (GNNs), the backdoor trigger in the graph domain is mostly injected into random positions of the sample. There is no work analyzing and explaining the backdoor attack performance when injecting triggers into the most important or least important area in the sample, which we refer to as trigger-injecting strategies MIAS and LIAS, respectively. 
Our results show that, generally, LIAS performs better, and the differences between the LIAS and MIAS performance can be significant. Furthermore, we explain these two strategies' similar (better) attack performance through explanation techniques, which results in a further understanding of backdoor attacks in GNNs."}, "cited_paper_content": {"title": "Gnnexplainer: Generating Explanations For Graph Neural Networks", "abstract": "Graph Neural Networks (GNNs) are a powerful tool for machine learning on graphs. GNNs combine node feature information with the graph structure by recursively passing neural messages along edges of the input graph. However, incorporating both graph structure and feature information leads to complex models and explaining predictions made by GNNs remains unsolved. Here we propose GnnExplainer, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task. Given an instance, GnnExplainer identifies a compact subgraph structure and a small subset of node features that have a crucial role in GNN's prediction. Further, GnnExplainer can generate consistent and concise explanations for an entire class of instances. We formulate GnnExplainer as an optimization task that maximizes the mutual information between a GNN's prediction and distribution of possible subgraph structures. Experiments on synthetic and real-world graphs show that our approach can identify important graph structures as well as node features, and outperforms alternative baseline approaches by up to 43.0% in explanation accuracy. 
GnnExplainer provides a variety of benefits, from the ability to visualize semantically relevant structures to interpretability, to giving insights into errors of faulty GNNs."}, "keywords": ["graph", "explanation techniques"], "citation_intent": "method"} {"citing_id": "2303.13555v1", "cited_id": "2001.04385", "section_title": "Hybrid Model", "citation": "Here we followed the recent literature #REFR , where one layer and no more than 50 neurons are usually required to find the missing term in the differential equation.", "text_before_citation": ["The isotherm calculation one of the nodes in the computational graph -the one located leftmost layer.", "This variable is then concatenated with the adsorbed amount and used as an input of the ANN that predicts the instantaneous uptake rate.", "The structure of the ANN is usually chosen before training it.", "In traditional deep learning, hyperparameter optimization is usually used to select the best architecture for a problem.", "However, this is rarely explored in hybrid modeling as the training procedure is significantly more computationally demanding than in traditional deep learning."], "text_after_citation": ["In the present work, one-layer ANNs with hyperbolic tangent activation were used for all cases and a varying number of neurons between 15 and 25 with grid search.", "Learning rates were set to 0.05 with Adaptive moment estimation (ADAM) optimizer #OTHEREFR and exponential learning rate decay every 20 iterations and 0.985 drop factor over 180 iterations.", "After the second fit with ADAM, the quasi-newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) method #OTHEREFR was employed until convergence."], "citing_paper_content": {"title": "Efficient Hybrid Modeling And Sorption Model Discovery For Non-Linear Advection-Diffusion-Sorption Systems: A Systematic Scientific Machine Learning Approach", "abstract": "This study presents a systematic machine learning approach for creating efficient hybrid models and discovering 
sorption uptake models in non-linear advection-diffusion-sorption systems. It demonstrates an effective method to train these complex systems using gradient-based optimizers, adjoint sensitivity analysis, and JIT-compiled vector Jacobian products, combined with spatial discretization and adaptive integrators. Sparse and symbolic regression were employed to identify missing functions in the artificial neural network. The robustness of the proposed method was tested on an in-silico data set of noisy breakthrough curve observations of fixed-bed adsorption, resulting in a well-fitted hybrid model. The study successfully reconstructed sorption uptake kinetics using sparse and symbolic regression, and accurately predicted breakthrough curves using identified polynomials, highlighting the potential of the proposed framework for discovering sorption kinetic law structures."}, "cited_paper_content": {"title": "Universal Differential Equations For Scientific Machine Learning", "abstract": "In the context of science, the well-known adage \"a picture is worth a thousand words\" might well be \"a model is worth a thousand datasets.\" Scientific models, such as Newtonian physics or biological gene regulatory networks, are human-driven simplifications of complex phenomena that serve as surrogates for the countless experiments that validated the models. Recently, machine learning has been able to overcome the inaccuracies of approximate modeling by directly learning the entire set of nonlinear interactions from data. However, without any predetermined structure from the scientific basis behind the problem, machine learning approaches are flexible but data-expensive, requiring large databases of homogeneous labeled training data. A central challenge is reconciling data that is at odds with simplified models without requiring \"big data\".
In this work we develop a new methodology, universal differential equations (UDEs), which augments scientific models with machine-learnable structures for scientifically-based learning. We show how UDEs can be utilized to discover previously unknown governing equations, accurately extrapolate beyond the original data, and accelerate model simulation, all in a time and data-efficient manner. This advance is coupled with open-source software that allows for training UDEs which incorporate physical constraints, delayed interactions, implicitly-defined events, and intrinsic stochasticity in the model. Our examples show how a diverse set of computationally-difficult modeling issues across scientific disciplines, from automatically discovering biological mechanisms to accelerating climate simulations by 15,000x, can be handled by training UDEs."}, "keywords": ["50 neurons"], "citation_intent": "method"} {"citing_id": "2304.03998v1", "cited_id": "cs/9605103", "section_title": "Introduction", "citation": "In general, the policy iteration-based algorithms converge faster than value iteration-based algorithms #REFR , which is another reason for the superior performance of our approach. Our experimental results further support our arguments.", "text_before_citation": ["NNDP attacker's policy #OTHEREFR trains the model against one defensive plan at a time, due to which it forgets the previous plan. 
This way, it keeps learning and forgetting the plans.", "However, we train our RL based attacker's policy against multiple defensive plans at a time, due to which it learns shared experience and performs better.", "For RL agent, diverse environment configurations are only different in the \"opening games\", whereas the \"end games\" or \"mid games\" are likely to be similar across different environments.", "The similarity in later stages can be utilized in parallel training, where the agent is trained against multiple environments simultaneously and gains shared experience, leading to faster convergence and improved performance.", "Besides, NNDP approach is value iteration-based RL algorithm, whereas our approach is policy iteration-based RL algorithm."], "text_after_citation": [], "citing_paper_content": {"title": "Evolving Reinforcement Learning Environment To Minimize Learner'S Achievable Reward: An Application On Hardening Active Directory Systems", "abstract": "We study a Stackelberg game between one attacker and one defender in a configurable environment. The defender picks a specific environment configuration. The attacker observes the configuration and attacks via Reinforcement Learning (RL trained against the observed environment). The defender's goal is to find the environment with minimum achievable reward for the attacker. We apply Evolutionary Diversity Optimization (EDO) to generate diverse population of environments for training. Environments with clearly high rewards are killed off and replaced by new offsprings to avoid wasting training time. Diversity not only improves training quality but also fits well with our RL scenario: RL agents tend to improve gradually, so a slightly worse environment earlier on may become better later. We demonstrate the effectiveness of our approach by focusing on a specific application, Active Directory (AD). AD is the default security management system for Windows domain networks. 
AD environment describes an attack graph, where nodes represent computers/accounts/etc., and edges represent accesses. The attacker aims to find the best attack path to reach the highest-privilege node. The defender can change the graph by removing a limited number of edges (revoke accesses). Our approach generates better defensive plans than the existing approach and scales better."}, "cited_paper_content": {"title": "Reinforcement Learning: A Survey", "abstract": "This paper surveys the field of reinforcement learning from a computer-science perspective. It is written to be accessible to researchers familiar with machine learning. Both the historical basis of the field and a broad selection of current work are summarized. Reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. The work described here has a resemblance to work in psychology, but differs considerably in the details and in the use of the word ``reinforcement.'' The paper discusses central issues of reinforcement learning, including trading off exploration and exploitation, establishing the foundations of the field via Markov decision theory, learning from delayed reinforcement, constructing empirical models to accelerate learning, making use of generalization and hierarchy, and coping with hidden state. 
It concludes with a survey of some implemented systems and an assessment of the practical utility of current methods for reinforcement learning."}, "keywords": ["policy iteration-based algorithms"], "citation_intent": "result"} {"citing_id": "2303.00709v1", "cited_id": "1605.02353", "section_title": "Comparison Of Algorithm Variants: Elimination Order", "citation": "The use of a completely random ordering was suggested in #REFR which introduced randomized approximate Gaussian elimination.", "text_before_citation": ["In this section, we investigate the impact of our approximate greedy elimination ordering (see Section 5.1) compared with a randomized ordering."], "text_after_citation": ["In #OTHEREFR , the random ordering plays a role in the proof that the algorithm produces a good factorization.", "This paper also showed that one can randomize among the vertices with at most twice the average unweighted degree and still get a provably correct algorithm.", "In our AC and other solvers, we instead eliminate a vertex with approximately minimum unweighted degree in each round.", "To understand the impact of this choice, we compare AC with a variant that uses uniformly random elimination order, which we call AC-random.", "In our first experiment, we reuse the setup from Section 5.4, i.e."], "citing_paper_content": {"title": "Robust And Practical Solution Of Laplacian Equations By Approximate Elimination *", "abstract": "We introduce a new algorithm and software for solving linear equations in symmetric diagonally dominant matrices with non-positive off-diagonal entries (SDDM matrices), including Laplacian matrices. We use preconditioned conjugate gradient to solve the system of linear equations. Our preconditioner is a variant of the Approximate Cholesky factorization of Kyng and Sachdeva (FOCS 2016). 
Our factorization approach is simple: we eliminate matrix rows/columns one at a time, and update the remaining entries of the matrix by sampling entries to approximate the outcome of complete Cholesky factorization. Unlike earlier approaches, our sampled entries always maintain a connected graph on the neighbors of the eliminated variable. Our algorithm comes with a tuning parameter that upper bounds the number of samples made per original entry. We implement our solver algorithm in Julia, and experimentally evaluate its performance when using 1 or 2 samples for each original entry: We refer to these variants as AC and AC2 respectively. We investigate the performance of these implementations and compare their single-threaded performance to that of current state-of-the-art solvers including Combinatorial Multigrid (CMG), BoomerAMG-preconditioned Krylov solvers from HyPre and PETSc, Lean Algebraic Multigrid (LAMG), and MATLAB's preconditioned conjugate gradient with Incomplete Cholesky Factorization (ICC). Our experiments suggest that AC2 and AC attain a level of robustness and reliability not seen before in solvers for SDDM linear equations, while retaining good performance across all instances. Many large-scale evaluations of SDDM linear equation solvers have focused on solving problems on discretized 3D grids. We conduct evaluations across a much broader class of problems, including all large SDDM matrices from the SuiteSparse collection, as well as a broad array of programmatically generated instances. Our tests range up to 200 million non-zeros per system of linear equations. Our experiments show that AC and AC2 obtain good practical performance across many different types of SDDM-matrices, and significantly greater reliability than existing solvers. AC2 is the only solver that succeeds across all our tests, and it uses less than 7.2\u00b5s time per non-zero to converge to 10 \u22128 relative residual error across all our experiments.
AC is typically 1.5-2 times faster, but fails on one family of problems engineered to attack this algorithm. In our experiments using general sparse non-zero patterns, CMG, HyPre, PETSc, and ICC all fail on some instances from a majority of the families tested. Across these families, the median running time across different instances of AC2 and AC is comparable to or faster than other solvers. We also test the performance of our solvers on a wide array of Poisson problems on 3D grids, including grids with uniform, high-contrast, or anisotropic coefficients, and grids from the SPE Benchmark. Here, the CMG and HyPre solvers perform best, but AC and AC2 achieve worst case and median running times within a factor 4.1 and 6.2 of these respectively. Our code is public, and we detail precisely the set of tests we run and provide a tutorial on how to replicate the tests. We hope that others will adopt this suite of tests as a benchmark, which we refer to as SDDM2023."}, "cited_paper_content": {"title": "Approximate Gaussian Elimination For Laplacians - Fast, Sparse, And Simple", "abstract": "We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian by a matrix with a sparse Cholesky factorization, the version of Gaussian elimination for symmetric matrices. This is the first nearly linear time solver for Laplacian systems that is based purely on random sampling, and does not use any graph theoretic constructions such as low-stretch trees, sparsifiers, or expanders. The crux of our analysis is a novel concentration bound for matrix martingales where the differences are sums of conditionally independent variables."}, "keywords": ["approximate Gaussian elimination"], "citation_intent": "method"} {"citing_id": "2303.12134v1", "cited_id": "1705.05065", "section_title": "Iv. 
Datasets And Experiments", "citation": "While simulators allow recording synchronized RGB-D and IMU data #REFR , manually gathering sufficient training data is difficult.", "text_before_citation": ["A key challenge in acquiring training data for the SML network is the lack of RGB-D+IMU datasets.", "In our pipeline, IMU data is needed to run VIO to generate sparse metric depth."], "text_after_citation": ["We select TartanAir #OTHEREFR for its large size and variety of outdoor and indoor sequences. IMU data is not provided in this dataset.", "To proxy sparse depth map generation, we run the VINS-Mono feature tracker front-end #OTHEREFR to obtain sparse feature locations and then sample ground truth depth at those locations.", "We use a 70%-30% train-test split for TartanAir, with 172K training and 73K test samples taken from both easy and hard sequences.", "In addition to the synthetic TartanAir dataset, we benchmark on VOID #OTHEREFR , which offers real-world data collected using an Intel RealSense D435i camera and the VIO system XIVO #OTHEREFR .", "This dataset is smaller than TartanAir, with only 47K training and 800 test samples. We use the published train-test split."], "citing_paper_content": {"title": "Monocular Visual-Inertial Depth Estimation", "abstract": "We present a visual-inertial depth estimation pipeline that integrates monocular depth estimation and visualinertial odometry to produce dense depth estimates with metric scale. Our approach performs global scale and shift alignment against sparse metric depth, followed by learning-based dense alignment. We evaluate on the TartanAir and VOID datasets, observing up to 30% reduction in inverse RMSE with dense scale alignment relative to performing just global alignment alone. 
Our approach is especially competitive at low density; with just 150 sparse metric depth points, our dense-to-dense depth alignment method achieves over 50% lower iRMSE over sparse-to-dense depth completion by KBNet, currently the state of the art on VOID. We demonstrate successful zero-shot transfer from synthetic TartanAir to real-world VOID data and perform generalization tests on NYUv2 and VCU-RVI. Our approach is modular and is compatible with a variety of monocular depth estimation models."}, "cited_paper_content": {"title": "Airsim: High-Fidelity Visual And Physical Simulation For Autonomous Vehicles", "abstract": "Developing and testing algorithms for autonomous vehicles in real world is an expensive and time consuming process. Also, in order to utilize recent advances in machine intelligence and deep learning we need to collect a large amount of annotated training data in a variety of conditions and environments. We present a new simulator built on Unreal Engine that offers physically and visually realistic simulations for both of these goals. Our simulator includes a physics engine that can operate at a high frequency for real-time hardware-in-the-loop (HITL) simulations with support for popular protocols (e.g. MavLink). The simulator is designed from the ground up to be extensible to accommodate new types of vehicles, hardware platforms and software protocols. In addition, the modular design enables various components to be easily usable independently in other projects. 
We demonstrate the simulator by first implementing a quadrotor as an autonomous vehicle and then experimentally comparing the software components with real-world flights."}, "keywords": ["RGB-D", "simulators"], "citation_intent": "background"} {"citing_id": "2303.07477v1", "cited_id": "1512.03385", "section_title": "Self-Supervised Learning", "citation": "The Siamese network consists of an encoder network f and a prediction MLP h, where the encoder includes a backbone model (e.g., ResNet #REFR ) and a projection MLP.", "text_before_citation": ["Self-supervised learning aims to learn visual representation without data labeling cost.", "Recent advances #OTHEREFR show that self-supervised learning can achieve similar or even better performance than supervised representation learning.", "A common strategy of these methods is to learn representations that are invariant under different data augmentations by maximizing their similarity with contrastive loss optimization. However, these approaches require largesized batches and negative samples.", "SimSiam #OTHEREFR addresses this issue by utilizing the stop-gradient technique to prevent the collapsing of Siamese networks."], "text_after_citation": ["Given two randomly augmented views of x 1 and x 2 from an input image x, Simsiam aims to minimize the negative cosine similarity between the predictor output p 1 (p 1 = f (h(x 1 ))) and the projector output z 2 (z 2 = f (x 2 )) with a symmetrized loss as:", "LSSL = 1 2 D(p1, stopgrad(z2)) + 1 2 D(p2, stopgrad(z1)) (1)", "where D is a negative cosine similarity function.", "Given the distorted versions of an instance, BarlowTwin #OTHEREFR minimizes the redundancy between their embedding vector components while conserving the maximum information.", "This can be achieved by making the cross-correlation matrix, computed between the outputs of two identical networks, closer to the identity matrix, through the minimization of the following loss:"], "citing_paper_content": {"title": 
"Efficient Self-Supervised Continual Learning With Progressive Task-Correlated Layer Freezing", "abstract": "Inspired by the success of Self-supervised learning (SSL) in learning visual representations from unlabeled data, a few recent works have studied SSL in the context of continual learning (CL), where multiple tasks are learned sequentially, giving rise to a new paradigm, namely self-supervised continual learning (SSCL). It has been shown that the SSCL outperforms supervised continual learning (SCL) as the learned representations are more informative and robust to catastrophic forgetting. However, if not designed intelligently, the training complexity of SSCL may be prohibitively high due to the inherent training cost of SSL. In this work, by investigating the task correlations in SSCL setup first, we discover an interesting phenomenon that, with the SSL-learned background model, the intermediate features are highly correlated between tasks. Based on this new finding, we propose a new SSCL method with layer-wise freezing which progressively freezes partial layers with the highest correlation ratios for each task to improve training computation efficiency and memory efficiency. Extensive experiments across multiple datasets are performed, where our proposed method shows superior performance against the SoTA SSCL methods under various SSL frameworks. For example, compared to LUMP, our method achieves 12%/14%/12% GPU training time reduction, 23%/26%/24% memory reduction, 35%/34%/33% backward FLOPs reduction, and 1.31%/1.98%/1.21% forgetting reduction without accuracy degradation on three datasets, respectively."}, "cited_paper_content": {"title": "Deep Residual Learning For Image Recognition", "abstract": "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously.
We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers\u20148\u00d7 deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."}, "keywords": ["Siamese network", "encoder network f"], "citation_intent": "method"} {"citing_id": "2303.15409v1", "cited_id": "1801.03924", "section_title": "C.2. Distance Metrics", "citation": "We compare 2 , 1 and LPIPS #REFR distances over CIFAR10 dataset, and presente the results in Tab. 8.", "text_before_citation": ["In TETRA's second phase, we calculate the distance between the input image and the transformed images, and we classify based on the shortest one.", "Hence, the distance metric that we use for the classification is important.", "Different metrics have different properties, and we aim at a distance metric that is able to measure the semantic distance between images."], "text_after_citation": ["We compare the results using the following defense methods #OTHEREFR .", "As demonstrated, 2 distance metric performs better, therefore is a favorable choice. 
Table 8 . CIFAR10 results. In the first column, we state the method.", "For every base method, we report three consecutive lines of results.", "One for the base method and then two TETRA distance metric variations used for classification: L2 and LPIPS #OTHEREFR .", "In the next columns, we state the architecture, the trained threat model (TTM), and four attacks with different threat models."], "citing_paper_content": {"title": "Classifier Robustness Enhancement Via Test-Time Transformation", "abstract": "It has been recently discovered that adversarially trained classifiers exhibit an intriguing property, referred to as perceptually aligned gradients (PAG). PAG implies that the gradients of such classifiers possess a meaningful structure, aligned with human perception. Adversarial training is currently the best-known way to achieve classification robustness under adversarial attacks. The PAG property, however, has yet to be leveraged for further improving classifier robustness. In this work, we introduce Classifier Robustness Enhancement Via Test-Time Transformation (TETRA)-a novel defense method that utilizes PAG, enhancing the performance of trained robust classifiers. Our method operates in two phases. First, it modifies the input image via a designated targeted adversarial attack into each of the dataset's classes. Then, it classifies the input image based on the distance to each of the modified instances, with the assumption that the shortest distance relates to the true class. We show that the proposed method achieves state-of-the-art results and validate our claim through extensive experiments on a variety of defense methods, classifier architectures, and datasets. We also empirically demonstrate that TETRA can boost the accuracy of any differentiable adversarial training classifier across a variety of attacks, including ones unseen at training. 
Specifically, applying TETRA leads to substantial improvement of up to +23%, +20%, and +26% on CIFAR10, CIFAR100, and ImageNet, respectively."}, "cited_paper_content": {"title": "The Unreasonable Effectiveness Of Deep Features As A Perceptual Metric", "abstract": "While it is nearly effortless for humans to quickly assess the perceptual similarity between two images, the underlying processes are thought to be quite complex. Despite this, the most widely used perceptual metrics today, such as PSNR and SSIM, are simple, shallow functions, and fail to account for many nuances of human perception. Recently, the deep learning community has found that features of the VGG network trained on the ImageNet classification task has been remarkably useful as a training loss for image synthesis. But how perceptual are these so-called\"perceptual losses\"? What elements are critical for their success? To answer these questions, we introduce a new Full Reference Image Quality Assessment (FR-IQA) dataset of perceptual human judgments, orders of magnitude larger than previous datasets. We systematically evaluate deep features across different architectures and tasks and compare them with classic metrics. We find that deep features outperform all previous metrics by huge margins. More surprisingly, this result is not restricted to ImageNet-trained VGG features, but holds across different deep architectures and levels of supervision (supervised, self-supervised, or even unsupervised). Our results suggest that perceptual similarity is an emergent property shared across deep visual representations."}, "keywords": ["dataset"], "citation_intent": "result"} {"citing_id": "2303.18157v1", "cited_id": "1605.06676", "section_title": "B. 
Execution Framework", "citation": "On the other hand, it requires the more challenging Dec-POMDP formalization of standard MARL #REFR when letting several agents act simultaneously.", "text_before_citation": ["Let S and A represent the global state and action spaces, respectively, defined as the joint and union of the respective agents' local spaces, S = \u2208V S and A = \u2208V A .", "The theoretical framework of MAGNNETO allows to implement both Q-learning and PG methods, so for the sake of generalization let represent the global RL-based function that is aimed to learn -i.e., the global state-action value function for the former, or the global policy for the latter.", "A main contribution of MAGNNETO is that it makes all agents \u2208 V learn the global RL-based function approximator in a fully distributed fashion -i.e., all agents end up constructing and having access to the very same representation .", "In particular, and from a theoretical RL standpoint, this allows to formulate the problem within two different paradigms depending on the number of actions allowed at each time-step of the RL episode.", "On the one hand, imposing a single action per time-step enables to devise the problem as a time-homogeneous MDP of single-agent RL #OTHEREFR ."], "text_after_citation": ["Note, however, that in practice the execution pipeline of MAGNNETO is exactly the same in both cases.", "Another relevant feature of our design is that all agents \u2208 V are able to internally construct such global representation mainly through message communications with their direct neighboring agents B ( ) and their local computations, no longer needing a centralized entity responsible for collecting and processing all the global information together.", "Such a decentralized, message-based generation of the global function is achieved by modeling the global function with a MPNN (see Sec.", "III-A3), so that all agents \u2208 V deployed in the network are actually replicas of the MPNN 
modules (message, aggregation, update and readout functions) that perform regular message exchanges with their neighbors B ( ) following the message passing iteration procedure of MPNNs; in particular, note that such parameter sharing implies that all agents share as well the same local state and action spaces.", "This reinterpretation of a MPNN as a set of copies of its internal modules is especially important due to the fact that in our approach we directly map the graph G to a real networked scenario, deploying copies of the MPNN modules along hardware devices in the network (e.g., routers) and making all message communications involved to actually go through the real network infrastructure."], "citing_paper_content": {"title": "Magnneto: A Graph Neural Network-Based Multi-Agent System For Traffic Engineering", "abstract": "Current trends in networking propose the use of Machine Learning (ML) for a wide variety of network optimization tasks. As such, many efforts have been made to produce ML-based solutions for Traffic Engineering (TE), which is a fundamental problem in ISP networks. Nowadays, state-of-the-art TE optimizers rely on traditional optimization techniques, such as Local search, Constraint Programming, or Linear programming. In this paper, we present MAGNNETO, a distributed ML-based framework that leverages Multi-Agent Reinforcement Learning and Graph Neural Networks for distributed TE optimization. MAGNNETO deploys a set of agents across the network that learn and communicate in a distributed fashion via message exchanges between neighboring agents. Particularly, we apply this framework to optimize link weights in OSPF, with the goal of minimizing network congestion. In our evaluation, we compare MAGNNETO against several state-of-the-art TE optimizers in more than 75 topologies (up to 153 nodes and 354 links), including realistic traffic loads. 
Our experimental results show that, thanks to its distributed nature, MAGNNETO achieves comparable performance to state-of-the-art TE optimizers with significantly lower execution times. Moreover, our ML-based solution demonstrates a strong generalization capability to successfully operate in new networks unseen during training."}, "cited_paper_content": {"title": "Learning To Communicate With Deep Multi-Agent Reinforcement Learning", "abstract": "We consider the problem of multiple agents sensing and acting in environments with the goal of maximising their shared utility. In these environments, agents must learn communication protocols in order to share information that is needed to solve the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication riddles and multi-agent computer vision problems with partial observability. We propose two approaches for learning in these domains: Reinforced Inter-Agent Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses deep Q-learning, while the latter exploits the fact that, during learning, agents can backpropagate error derivatives through (noisy) communication channels. Hence, this approach uses centralised learning but decentralised execution. 
Our experiments introduce new environments for studying the learning of communication protocols and present a set of engineering innovations that are essential for success in these domains."}, "keywords": ["challenging Dec-POMDP formalization", "several agents"], "citation_intent": "background"} {"citing_id": "2304.13029v1", "cited_id": "1809.06705", "section_title": "Generalised Signatures", "citation": "This reinforces the findings that rotation forest is the most effective classifier for problems with continuous features #REFR .", "text_before_citation": ["A hierarchical window is run over the two augmented series, with the signature transform being applied to each window.", "The output for each window is then concatenated into a feature vector.", "Figure 6 shows the relative rank performance, and Table 3 summarises the overall performance statistics.", "All four pipelines are significantly more accurate than 1-NN DTW.", "Excluding feature extraction and using rotation forest rather than random forest with TSFresh increases accuracy by over 0.05."], "text_after_citation": ["An example of a problem where interval based approaches may be superior.", "Each series is a spectrogram from a bottle of alcohol with a different concentration of ethanol.", "The discriminatory features are in the near infrared interval (green box to the right).", "However, the confounding factors such as bottle shape, labelling and colouring cause variation in the visible range (red box to the left).", "Using intervals containing just the near infrared features is likely to make classification easier."], "citing_paper_content": {"title": "Bake Off Redux: A Review And Experimental Evaluation Of Recent Time Series Classification Algorithms", "abstract": "In 2017, a research paper (Bagnall et al., 2017) compared 18 Time Series Classification (TSC) algorithms on 85 datasets from the University of California, Riverside (UCR) archive. 
This study, commonly referred to as a 'bake off', identified that only nine algorithms performed significantly better than the Dynamic Time Warping (DTW) and Rotation Forest benchmarks that were used. The study categorised each algorithm by the type of feature they extract from time series data, forming a taxonomy of five main algorithm types. This categorisation of algorithms alongside the provision of code and accessible results for reproducibility has helped fuel an increase in popularity of the TSC field. Over six years have passed since this bake off, the UCR archive has expanded to 112 datasets and there have been a large number of new algorithms proposed. We revisit the bake off, seeing how each of the proposed categories have advanced since the original publication, and evaluate the performance of newer algorithms against the previous best-of-category using an expanded UCR archive. We extend the taxonomy to include three new categories to reflect recent developments. Alongside the originally proposed distance, interval, shapelet, dictionary and hybrid based algorithms, we compare newer convolution and feature based algorithms as well as deep learning approaches. We introduce 30 classification datasets either recently donated to the archive or reformatted to the TSC format, and use these to further evaluate the best performing algorithm from each category. Overall, we find that two recently proposed algorithms, Hydra+MultiROCKET Dempster et al. (2022) and HIVE-COTEv2 Middlehurst et al. (2021), perform significantly better than other approaches on both the current and new TSC problems."}, "cited_paper_content": {"title": "Is Rotation Forest The Best Classifier For Problems With Continuous Features?", "abstract": "Rotation forest is a tree based ensemble that performs transforms on subsets of attributes prior to constructing each tree. We present an empirical comparison of classifiers for problems with only real valued features. 
We evaluate classifiers from three families of algorithms: support vector machines; tree-based ensembles; and neural networks. We compare classifiers on unseen data based on the quality of the decision rule (using classification error), the ability to rank cases (area under the receiver operator curve), and the probability estimates (using negative log likelihood). We conclude that, in answer to the question posed in the title, yes, rotation forest is significantly more accurate on average than competing techniques when compared on three distinct sets of datasets. The same pattern of results is observed when tuning classifiers on the train data using a grid search. We investigate why rotation forest does so well by testing whether the characteristics of the data can be used to differentiate classifier performance. We assess the impact of the design features of rotation forest through an ablative study that transforms random forest into rotation forest. We identify the major limitation of rotation forest as its scalability, particularly in the number of attributes. To overcome this problem, we develop a model to predict the train time of the algorithm and hence propose a contract version of rotation forest where a run time cap is set {\\em a priori}. We demonstrate that on large problems rotation forest can be made an order of magnitude faster without significant loss of accuracy and that there is no real benefit (on average) from tuning the ensemble.
We conclude that without any domain knowledge to indicate an algorithm preference, rotation forest should be the default algorithm of choice for problems with continuous attributes."}, "keywords": ["continuous features", "rotation forest"], "citation_intent": "result"} {"citing_id": "2303.15651v1", "cited_id": "2003.02371", "section_title": "Related Work", "citation": "LiDAR 3D and 4D Panoptic segmentation 3D Panoptic segmentation The task of panoptic segmentation was first proposed in the image domain, and was later extended to LiDAR point clouds after a large-scale outdoor LiDAR point cloud dataset, SemanticKITTI, was published with panoptic labels #REFR .", "text_before_citation": ["2.1."], "text_after_citation": ["Similar to the semantic segmentation #OTHEREFR and panoptic segmentation techniques in the image domain #OTHEREFR , their 3D counterparts can be classified into proposal-based and proposal-free methods.", "Proposal-based methods #OTHEREFR require an object detection module to locate the objects first and then predict the instance mask for each bounding box and conduct semantic segmentation on the background pixels.", "This strategy needs to deal with potential conflicts among the segmentations.", "More methods fall into the other category, proposalfree methods, which conduct semantic segmentation first and then cluster the points belonging to different instances.", "A lot of research efforts focused on clustering strategies, as it impacts overall efficiency and performance."], "citing_paper_content": {"title": "4D Panoptic Segmentation As Invariant And Equivariant Field Prediction", "abstract": "In this paper, we develop rotation-equivariant neural networks for 4D panoptic segmentation. 4D panoptic segmentation is a recently established benchmark task for autonomous driving, which requires recognizing semantic classes and object instances on the road based on LiDAR scans, as well as assigning temporally consistent IDs to instances across time. 
We observe that the driving scenario is symmetric to rotations on the ground plane. Therefore, rotation-equivariance could provide better generalization and more robust feature learning. Specifically, we review the object instance clustering strategies, and restate the centerness-based approach and the offset-based approach as the prediction of invariant scalar fields and equivariant vector fields. Other sub-tasks are also unified from this perspective, and different invariant and equivariant layers are designed to facilitate their predictions. Through evaluation on the standard 4D panoptic segmentation benchmark of SemanticKITTI, we show that our equivariant models achieve higher accuracy with lower computational costs compared to their non-equivariant counterparts. Moreover, our method sets the new state-of-the-art performance and achieves 1st place on the SemanticKITTI 4D Panoptic Segmentation leaderboard."}, "cited_paper_content": {"title": "A Benchmark For Lidar-Based Panoptic Segmentation Based On Kitti", "abstract": "Panoptic segmentation is the recently introduced task that tackles semantic segmentation and instance segmentation jointly. In this paper, we present an extension of SemanticKITTI, which is a large-scale dataset providing dense point-wise semantic labels for all sequences of the KITTI Odometry Benchmark, for training and evaluation of laser-based panoptic segmentation. We provide the data and discuss the processing steps needed to enrich a given semantic annotation with temporally consistent instance information, i.e., instance information that supplements the semantic labels and identifies the same instance over sequences of LiDAR point clouds. Additionally, we present two strong baselines that combine state-of-the-art LiDAR-based semantic segmentation approaches with a state-of-the-art detector enriching the segmentation with instance information and that allow other researchers to compare their approaches against. 
We hope that our extension of SemanticKITTI with strong baselines enables the creation of novel algorithms for LiDAR-based panoptic segmentation as much as it has for the original semantic segmentation and semantic scene completion tasks. Data, code, and an online evaluation using a hidden test set will be published on http://semantic-kitti.org."}, "keywords": ["4D Panoptic segmentation", "3D Panoptic segmentation"], "citation_intent": "background"} {"citing_id": "2304.10837v1", "cited_id": "1905.03989", "section_title": "A. Scenario-Based Validation", "citation": "Afterwards, they transform the functional scenarios into logical scenarios 2 by adding actions and events, specifying parameter ranges, as well as object and parameter dependencies #REFR .", "text_before_citation": ["Their ontology covers the road infrastructure including slope, surface and lane, as well as the ego vehicle with its position and speed.", "Similarly, Bagschick et al.", "#OTHEREFR construct an ontology which models the five layers of a scenario.", "These are the road, the traffic infrastructure, the temporary manipulation of the road and traffic infrastructure for example through construction sites, the static and dynamic objects and the environment layer.", "They define the degree of complexity, the amount of positions per lane, the amount of traffic participants and the abstraction level before they use combinatorial techniques and permutations to generate a start and an end scene of a functional scenario #OTHEREFR ."], "text_after_citation": ["Chen and Kloul #OTHEREFR use a three layered methodology for the generation of test scenarios.", "The first layer consists of a highway, a weather and a vehicle ontology to model the static and mobile concepts of a scene.", "The second layer describes the interactions between the static and mobile concepts by the use of rules expressed in first-order-logic.", "The third layer is the generation layer, which adds several scenes to a scenario by 
considering actions and events.", "The simulation scenario generation framework of Medrano-Berumen and Akbas #OTHEREFR uses a matrix where each row represents a road piece or actor in a semantic string."], "citing_paper_content": {"title": "A Comprehensive Review On Ontologies For Scenario-Based Testing In The Context Of Autonomous Driving", "abstract": "The verification and validation of autonomous driving vehicles remains a major challenge due to the high complexity of autonomous driving functions. Scenario-based testing is a promising method for validating such a complex system. Ontologies can be utilized to produce test scenarios that are both meaningful and relevant. One crucial aspect of this process is selecting the appropriate method for describing the entities involved. The level of detail and specific entity classes required will vary depending on the system being tested. It is important to choose an ontology that properly reflects these needs. This paper summarizes key representative ontologies for scenario-based testing and related use cases in the field of autonomous driving. The considered ontologies are classified according to their level of detail for both static facts and dynamic aspects. Furthermore, the ontologies are evaluated based on the presence of important entity classes and the relations between them."}, "cited_paper_content": {"title": "From Functional To Logical Scenarios: Detailing A Keyword-Based Scenario Description For Execution In A Simulation Environment", "abstract": "Scenario-based development and test processes are a promising approach for verifying and validating automated driving functions. For this purpose, scenarios have to be generated during the development process in a traceable manner. 
In early development stages, the operating scenarios of the item to be developed are usually described in an abstract, linguistic way. Within the scope of a simulation-assisted test process, these linguistically described scenarios have to be transformed into a state space representation and converted into data formats which can be used with the respective simulation environment. Currently, this step of detailing scenarios takes a considerable manual effort. Furthermore, a standardized interpretation of the linguistically described scenarios and a consistent transformation into the data formats are not guaranteed due to multiple authors as well as many constraints between the scenario parameters. In this paper, the authors present an approach to automatically detail a keyword-based scenario description for execution in a simulation environment and provide a basis for test case generation. As a first step, the keyword-based description is transformed into a parameter space representation. At the same time, constraints regarding the selection and combination of parameter values are documented for the following process steps (e.g. evolutionary or stochastic test methods). As a second step, the parameter space representation is converted into data formats required by the simulation environment.
As an example, the authors use scenarios on German freeways and convert them into the data formats OpenDRIVE (description of the road) and OpenSCENARIO (description of traffic participants and environmental conditions) for execution in the simulation environment Virtual Test Drive."}, "keywords": ["functional scenarios"], "citation_intent": "method"} {"citing_id": "2303.08644v1", "cited_id": "1907.13625", "section_title": "Multi-View Representation Learning", "citation": "This scenario has two main advantages, (i) the loss is computed in the representation space, which is in general lower dimensional and avoids focusing on small details of the input and (ii) views can be defined to capture different aspects of the data #REFR .", "text_before_citation": ["The InfoMax principle is extended to a multi-view approach, in which rather than maximizing the MI between the input and output of the network, the agreement is maximized between the representations of two different views of the input."], "text_after_citation": ["Local -global MI.", "Deep InfoMax #OTHEREFR trains an encoder maximizing the average mutual information between local patches and global representations of an image.", "Deep Graph InfoMax #OTHEREFR and InfoGraph #OTHEREFR extend this work to the graph domain, targetting the MI between node and graph level embeddings.", "The graph representation is obtained with a global pooling layer applied to the local node embeddings.", "In DGI, since most datasets consist of one single graph, the authors create a corrupted version of the graph by shuffling the node features and contrasting negative and positive pairs."], "citing_paper_content": {"title": "Rgi : Regularized Graph Infomax For Self-Supervised Learning On Graphs", "abstract": "Self-supervised learning is gaining considerable attention as a solution to avoid the requirement of extensive annotations in representation learning on graphs. 
We introduce Regularized Graph Infomax (RGI), a simple yet effective framework for node level self-supervised learning on graphs that trains a graph neural network encoder by maximizing the mutual information between node level local and global views, in contrast to previous works that employ graph level global views. The method promotes the predictability between views while regularizing the covariance matrices of the representations. Therefore, RGI is non-contrastive, does not depend on complex asymmetric architectures nor training tricks, is augmentation-free and does not rely on a two branch architecture. We run RGI on both transductive and inductive settings with popular graph benchmarks and show that it can achieve state-of-the-art performance regardless of its simplicity."}, "cited_paper_content": {"title": "On Mutual Information Maximization For Representation Learning", "abstract": "Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data. This comes with several immediate problems: For example, MI is notoriously hard to estimate, and using it as an objective for representation learning may lead to highly entangled representations due to its invariance under arbitrary invertible transformations. Nevertheless, these methods have been repeatedly shown to excel in practice. In this paper we argue, and provide empirical evidence, that the success of these methods cannot be attributed to the properties of MI alone, and that they strongly depend on the inductive bias in both the choice of feature extractor architectures and the parametrization of the employed MI estimators. 
Finally, we establish a connection to deep metric learning and argue that this interpretation may be a plausible explanation for the success of the recently introduced methods."}, "keywords": ["representation space"], "citation_intent": "background"} {"citing_id": "2305.00382v1", "cited_id": "1308.4941", "section_title": "Has Has", "citation": "However, their results show it is not able to reach the same level of performance as #REFR .", "text_before_citation": ["The conventional perceptron updates its weights for every prediction, which can over-weight the final example.", "The averaged perception keeps a running weighted sum of the obtained feature weights through all training examples and iterations.", "The final weights are obtained by dividing the weighted sum by the number of iterations. #OTHEREFR", "(2019) propose another NER model based on a long short-term memory (LSTM) architecture.", "The authors argue that it can be more useful when the data set has more variation, as the LSTM model does not require time-consuming feature engineering."], "text_after_citation": ["SecBERT 4 is a pre-trained encoder trained on a large corpus of cybersecurity texts.", "It is based on the BERT architecture #OTHEREFR and uses a vocabulary specialized for cybersecurity.", "SecBERT can be fine-tuned for specific tasks such as NER.", "Another pre-trained encoder similar to SecBERT is SecureBERT, proposed by #OTHEREFR .", "SecureBERT leverages a customized tokenizer and an approach to alter pre-trained weights."], "citing_paper_content": {"title": "Constructing A Knowledge Graph From Textual Descriptions Of Software Vulnerabilities In The National Vulnerability Database", "abstract": "Knowledge graphs have shown promise for several cybersecurity tasks, such as vulnerability assessment and threat analysis. In this work, we present a new method for constructing a vulnerability knowledge graph from information in the National Vulnerability Database (NVD). 
Our approach combines named entity recognition (NER), relation extraction (RE), and entity prediction using a combination of neural models, heuristic rules, and knowledge graph embeddings. We demonstrate how our method helps to fix missing entities in knowledge graphs used for cybersecurity and evaluate the performance."}, "cited_paper_content": {"title": "Automatic Labeling For Entity Extraction In Cyber Security", "abstract": "Timely analysis of cyber-security information necessitates automated information extraction from unstructured text. While state-of-the-art extraction methods produce extremely accurate results, they require ample training data, which is generally unavailable for specialized applications, such as detecting security related entities; moreover, manual annotation of corpora is very costly and often not a viable solution. In response, we develop a very precise method to automatically label text from several data sources by leveraging related, domainspecific, structured data and provide public access to a corpus annotated with cyber-security entities. Next, we implement a Maximum Entropy Model trained with the average perceptron on a portion of our corpus ( 750,000 words) and achieve near perfect precision, recall, and accuracy, with training times under 17 seconds."}, "keywords": ["level"], "citation_intent": "result"} {"citing_id": "2303.17611v1", "cited_id": "2003.14323", "section_title": "Self-Supervised Learning Vs Supervised Learning On Limited Labeled Data", "citation": "The above findings are consistent with those reported in #REFR that the advantage of the self-supervised learning-based method is its better regularisation on low data regimes to avoid overfitting problems compared to fully-supervised methods.", "text_before_citation": ["The resulting average accuracy and the corresponding standard deviation of all compared models are illustrated in Fig. 
7.", "First, our fine-tuned model consistently outperforms other supervised learning-based models for sample sizes varying from 1 to 1000 on the emotion recognition tasks of all three datasets.", "Among supervised learning-based methods, SimpDCNN exhibited the poorest results, over which our SSL model could achieve significant performance gains of 6.84%-21.19% for different downstream tasks.", "Our fully-supervised model yields the highest results compared to other supervised models, whereas the fine-tuned model initialized by self-supervised learning parameters continues to enhance performance by 5.24%-13.63%.", "Second, for all downstream tasks, the standard deviation obtained by our fine-tuned model is narrower with respect to the supervised learning-based deep models, demonstrating its superior generalization ability across different samples."], "text_after_citation": ["As the amount of available labeled data increases, the difference in performance between the two types of models gradually decreases.", "Overall, the comparison results suggest that the proposed method can produce more meaningful and robust representations for wearable emotion recognition than fully-supervised methods, offering a potential solution to the problem of little labeled data."], "citing_paper_content": {"title": "Transformer-Based Self-Supervised Multimodal Representation Learning For Wearable Emotion Recognition", "abstract": "Recently, wearable emotion recognition based on peripheral physiological signals has drawn massive attention due to its less invasive nature and its applicability in real-life scenarios. However, how to effectively fuse multimodal data remains a challenging problem. Moreover, traditional fully-supervised based approaches suffer from overfitting given limited labeled data.
To address the above issues, we propose a novel self-supervised learning (SSL) framework for wearable emotion recognition, where efficient multimodal fusion is realized with temporal convolution-based modality-specific encoders and a transformer-based shared encoder, capturing both intra-modal and intermodal correlations. Extensive unlabeled data is automatically assigned labels by five signal transforms, and the proposed SSL model is pre-trained with signal transformation recognition as a pretext task, allowing the extraction of generalized multimodal representations for emotion-related downstream tasks. For evaluation, the proposed SSL model was first pre-trained on a large-scale self-collected physiological dataset and the resulting encoder was subsequently frozen or fine-tuned on three public supervised emotion recognition datasets. Ultimately, our SSL-based method achieved state-of-the-art results in various emotion classification tasks. Meanwhile, the proposed model was proved to be more accurate and robust compared to fully-supervised methods on low data regimes."}, "cited_paper_content": {"title": "How Useful Is Self-Supervised Pretraining For Visual Tasks?", "abstract": "Recent advances have spurred incredible progress in self-supervised pretraining for vision. We investigate what factors may play a role in the utility of these pretraining methods for practitioners. To do this, we evaluate various self-supervised algorithms across a comprehensive array of synthetic datasets and downstream tasks. We prepare a suite of synthetic data that enables an endless supply of annotated images as well as full control over dataset difficulty. Our experiments offer insights into how the utility of self-supervision changes as the number of available labels grows as well as how the utility changes as a function of the downstream task and the properties of the training data. We also find that linear evaluation does not correlate with finetuning performance. 
Code and data is available at \href{https://www.github.com/princeton-vl/selfstudy}{github.com/princeton-vl/selfstudy}."}, "keywords": ["self-supervised learning-based method", "fully-supervised methods"], "citation_intent": "result"} {"citing_id": "2303.15669v1", "cited_id": "1808.10128", "section_title": "Results On Small Amount Of Fine-Tuning Data", "citation": "Both T-Dec #REFR and Tac, which do not have the opportunity to pre-learn a sufficient capability of attention alignment in pre-training, show similarly lower performance than the others.", "text_before_citation": ["Objective Evaluation.", "Table 1 presents the superior performance of the proposed methods compared to competing methods on small amounts of fine-tuning data.", "Without data augmentation during finetuning, T-SD outperforms all unsupervised pre-training methods and Tac."], "text_after_citation": ["In contrast, the proposed de-warping task encourages the model to learn both preliminary knowledge of attention alignment and autoregressive prediction.", "When data augmentation is applied during fine-tuning, T-SD with SegAug outperforms other combinations of pre-training and augmentation methods.", "SegAug even effectively improves the performance of other pre-training baselines and shows competitive performance compared to other augmentation methods.", "Subjective Evaluation.", "Table 2 shows the preference test results with competitive methods using 0.5 shards of fine-tuning data."], "citing_paper_content": {"title": "Unsupervised Pre-Training For Data-Efficient Text-To-Speech On Low Resource Languages", "abstract": "Neural text-to-speech (TTS) models can synthesize natural human speech when trained on large amounts of transcribed speech. However, collecting such large-scale transcribed data is expensive. This paper proposes an unsupervised pre-training method for a sequence-to-sequence TTS model by leveraging large untranscribed speech data.
With our pre-training, we can remarkably reduce the amount of paired transcribed data required to train the model for the target downstream TTS task. The main idea is to pre-train the model to reconstruct de-warped mel-spectrograms from warped ones, which may allow the model to learn proper temporal assignment relation between input and output sequences. In addition, we propose a data augmentation method that further improves the data efficiency in finetuning. We empirically demonstrate the effectiveness of our proposed method in low-resource language scenarios, achieving outstanding performance compared to competing methods. The code and audio samples are"}, "cited_paper_content": {"title": "Semi-Supervised Training For Improving Data Efficiency In End-To-End Speech Synthesis", "abstract": "Although end-to-end text-to-speech (TTS) models such as Tacotron have shown excellent results, they typically require a sizable set of high-quality pairs for training, which are expensive to collect. In this paper, we propose a semi-supervised training framework to improve the data efficiency of Tacotron. The idea is to allow Tacotron to utilize textual and acoustic knowledge contained in large, publicly-available text and speech corpora. Importantly, these external data are unpaired and potentially noisy. Specifically, first we embed each word in the input text into word vectors and condition the Tacotron encoder on them. We then use an unpaired speech corpus to pre-train the Tacotron decoder in the acoustic domain. Finally, we fine-tune the model using available paired data. 
We demonstrate that the proposed framework enables Tacotron to generate intelligible speech using less than half an hour of paired training data."}, "keywords": ["attention alignment"], "citation_intent": "result"} {"citing_id": "2304.01315v1", "cited_id": "1801.01290", "section_title": "Case Study: Re-Evaluating Previous Work", "citation": "In particular, SAC (EP) with seed optimization most closely matches the results reported by #REFR .", "text_before_citation": ["(2018) , we used seed optimization in Figure 15 : When searching seeds, we were able to achieve average performance and shaded regions much closer to that reported in the original SAC work.", "Solid lines denote average performance, and shaded regions denote minimum and maximum performance.", "the hyperparameter tuning process.", "As a note, this is bad practice; we only conduct seed search in the name of reproduction.", "In Figure 15 , where we chose the best 5 seeds of 30 for each algorithm, the results more closely match those from the original paper."], "text_after_citation": ["We now turn to running an experiment that more closely resembles the principles laid out in this document, especially with respect to reporting performance of tuned baselines. 
Although we do not know how #OTHEREFR", "(2018) tuned the baseline algorithms in their experiments, we found that the performance of DDPG on Half Cheetah was under-reported in this work.", "In the experiments here, we use the tuned hyperparameters for DDPG as reported by SpinningUp baselines 22 .", "Since Gaussian noise is known to outperform OU noise in some cases #OTHEREFR 23 , we use uncorrelated, unbounded Gaussian noise for action exploration in DDPG instead of OU noise.", "Furthermore, we try both SAC and DDPG with an exploration phase at the beginning of the experiment, where an action is drawn uniformly randomly for the first 10,000 steps."], "citing_paper_content": {"title": "Empirical Design In Reinforcement Learning", "abstract": "Empirical design in reinforcement learning is no small task. Running good experiments requires attention to detail and at times significant computational resources. While compute resources available per dollar have continued to grow rapidly, so have the scale of typical experiments in reinforcement learning. It is now common to benchmark agents with millions of parameters against dozens of tasks, each using the equivalent of 30 days of experience. The scale of these experiments often conflict with the need for proper statistical evidence, especially when comparing algorithms. Recent studies have highlighted how popular algorithms are sensitive to hyper-parameter settings and implementation details, and that common empirical practice leads to weak statistical evidence (Machado et al., 2018; Henderson et al., 2018). Here we take this one step further. This manuscript represents both a call to action, and a comprehensive resource for how to do good experiments in reinforcement learning. 
In particular, we cover: the statistical assumptions underlying common performance measures, how to properly characterize performance variation and stability, hypothesis testing, special considerations for comparing multiple agents, baseline and illustrative example construction, and how to deal with hyperparameters and experimenter bias. Throughout we highlight common mistakes found in the literature and the statistical consequences of those in example experiments. The objective of this document is to provide answers on how we can use our unprecedented compute to do good science in reinforcement learning, as well as stay alert to potential pitfalls in our empirical design."}, "cited_paper_content": {"title": "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning With A Stochastic Actor", "abstract": "Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. 
Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds."}, "keywords": ["seed optimization"], "citation_intent": "result"} {"citing_id": "2303.08360v1", "cited_id": "1610.02391", "section_title": "Related Work", "citation": "It has been proven in #REFR that Grad-CAM and original CAM produce equivalent results in network with GAP layer.", "text_before_citation": ["In light of the challenge of annotating all groundtruth labels for an image, multi-label learning in the presence of missing labels (MLML) has also attracted much research attention #OTHEREFR .", "Class Activation Mapping (CAM) is a technique to obtain discriminative regions for specific classes in an image and generate class activation maps (CAMs).", "The original CAM #OTHEREFR operates a weighted sum on the feature maps extracted by the backbone network.", "It is restricted to networks with global average pooling (GAP) layer.", "Grad-CAM #OTHEREFR utilizes local gradient to generate CAMs in any architecture without the need for re-training."], "text_after_citation": ["As GAP layer is a common structure in classification model, original CAM is suitable for most scenarios.", "Furthermore, there are also explorations on gradient-free extensions #OTHEREFR .", "The CAM technique derived from classification network has been widely used for weakly supervised visual tasks, such as weakly supervised object location #OTHEREFR and object segmentation #OTHEREFR .", "In both tasks, category labels are employed as supervision, and CAM-based localization serves as supplementary information."], "citing_paper_content": {"title": "Knowledge Distillation From Single To Multi Labels: An Empirical Study", "abstract": "Knowledge distillation (KD) has been extensively studied in single-label image classification. However, its efficacy for multi-label classification remains relatively unexplored. 
In this study, we firstly investigate the effectiveness of classical KD techniques, including logit-based and feature-based methods, for multi-label classification. Our findings indicate that the logit-based method is not well-suited for multi-label classification, as the teacher fails to provide inter-category similarity information or regularization effect on student model's training. Moreover, we observe that feature-based methods struggle to convey compact information of multiple labels simultaneously. Given these limitations, we propose that a suitable dark knowledge should incorporate class-wise information and be highly correlated with the final classification results. To address these issues, we introduce a novel distillation method based on Class Activation Maps (CAMs), which is both effective and straightforward to implement. Across a wide range of settings, CAMs-based distillation consistently outperforms other methods. Code is available at https://github.com/yzqinjacob/Distill-MLC."}, "cited_paper_content": {"title": "Grad-Cam: Visual Explanations From Deep Networks Via Gradient-Based Localization", "abstract": "We propose a technique for producing \u2018visual explanations\u2019 for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable. Our approach\u2014Gradient-weighted Class Activation Mapping (Grad-CAM), uses the gradients of any target concept (say \u2018dog\u2019 in a classification network or a sequence of words in captioning network) flowing into the final convolutional layer to produce a coarse localization map highlighting the important regions in the image for predicting the concept. 
Unlike previous approaches, Grad-CAM is applicable to a wide variety of CNN model-families: (1) CNNs with fully-connected layers (e.g. VGG), (2) CNNs used for structured outputs (e.g. captioning), (3) CNNs used in tasks with multi-modal inputs (e.g. visual question answering) or reinforcement learning, all without architectural changes or re-training. We combine Grad-CAM with existing fine-grained visualizations to create a high-resolution class-discriminative visualization, Guided Grad-CAM, and apply it to image classification, image captioning, and visual question answering (VQA) models, including ResNet-based architectures. In the context of image classification models, our visualizations (a) lend insights into failure modes of these models (showing that seemingly unreasonable predictions have reasonable explanations), (b) outperform previous methods on the ILSVRC-15 weakly-supervised localization task, (c) are robust to adversarial perturbations, (d) are more faithful to the underlying model, and (e) help achieve model generalization by identifying dataset bias. For image captioning and VQA, our visualizations show that even non-attention based models learn to localize discriminative regions of input image. We devise a way to identify important neurons through Grad-CAM and combine it with neuron names (Bau et al. in Computer vision and pattern recognition, 2017) to provide textual explanations for model decisions. Finally, we design and conduct human studies to measure if Grad-CAM explanations help users establish appropriate trust in predictions from deep networks and show that Grad-CAM helps untrained users successfully discern a \u2018stronger\u2019 deep network from a \u2018weaker\u2019 one even when both make identical predictions. Our code is available at https://github.com/ramprs/grad-cam/, along with a demo on CloudCV (Agrawal et al., in: Mobile cloud visual media computing, pp 265\u2013290. 
Springer, 2015) (http://gradcam.cloudcv.org) and a video at http://youtu.be/COjUB9Izk6E."}, "keywords": ["Grad-CAM"], "citation_intent": "result"} {"citing_id": "2304.07503v1", "cited_id": "1606.09375", "section_title": "Static Graph Embedding", "citation": "On the one hand, graph neural networks #REFR are initially defined in the spectral space, which is inspired by the convolution operations defined on the image grid.", "text_before_citation": ["Decades ago, the emerging big graph data on the website had already attracted much concern of researchers #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR .", "The learned low-dimensional embeddings for nodes, edges, and subgraphs can be easily integrated with the machine learning algorithms like Logistic Regression, Random Forest, and Gradient Boosting Decision Trees to perform risk monitoring #OTHEREFR , link prediction #OTHEREFR , item recommendation #OTHEREFR , etc.", "Recently, researchers interested in graph mining have been motivated by the great success of deep learning methods in computer vision #OTHEREFR and natural language processing #OTHEREFR .", "The deep graph embedding methods can also be categorized into two kinds: graph convolutional networks #OTHEREFR and skip-gram models #OTHEREFR ."], "text_after_citation": ["GCN #OTHEREFR simplifies the learning framework of #OTHEREFR and shows a successful application on semi-supervised node classification tasks.", "GraphSAGE #OTHEREFR proposes an inductive learning paradigm on large-scale graphs, which samples subgraphs for each node.", "Meanwhile, GAT #OTHEREFR introduces a learnable attention mechanism to impose different importances over neighbors.", "The attention mechanism not only achieves better performance on several benchmarks but also presents an explainable result by the neighbor importances.", "On the other hand, DeepWalk #OTHEREFR is the pioneering work to introduce the skip-gram models #OTHEREFR into graph representation learning with the 
sampled node sequences by random walks."], "citing_paper_content": {"title": "Temporal Aggregation And Propagation Graph Neural Networks For Dynamic Representation", "abstract": "Temporal graphs exhibit dynamic interactions between nodes over continuous time, whose topologies evolve with time elapsing. The whole temporal neighborhood of nodes reveals the varying preferences of nodes. However, previous works usually generate dynamic representation with limited neighbors for simplicity, which results in both inferior performance and high latency of online inference. Therefore, in this paper, we propose a novel method of temporal graph convolution with the whole neighborhood, namely Temporal Aggregation and Propagation Graph Neural Networks (TAP-GNN). Specifically, we firstly analyze the computational complexity of the dynamic representation problem by unfolding the temporal graph in a message-passing paradigm. The expensive complexity motivates us to design the AP (aggregation and propagation) block, which significantly reduces the repeated computation of historical neighbors. The final TAP-GNN supports online inference in the graph stream scenario, which incorporates the temporal information into node embeddings with a temporal activation function and a projection layer besides several AP blocks. Experimental results on various real-life temporal networks show that our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency. 
Our code is available at https://github.com/doujiang-zheng/TAP-GNN."}, "cited_paper_content": {"title": "Convolutional Neural Networks On Graphs With Fast Localized Spectral Filtering", "abstract": "In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or words' embedding, represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs."}, "keywords": ["graph neural networks"], "citation_intent": "background"} {"citing_id": "2304.06048v1", "cited_id": "1810.10659", "section_title": "Ii. 
Related Work", "citation": "However, the state of S2V-DQN only considers the adjacency matrix and the state of nodes, and simply follows greedy policy at every step, which results in subpar performance #REFR .", "text_before_citation": ["In general, these GNN-based approaches require either one-hot encoded vectors #OTHEREFR , #OTHEREFR , #OTHEREFR or node feature matrix #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR as initial embeddings, which can be smoothed by the neighborhood aggregation to carry graph structure information.", "However, recent studies have highlighted the inherent over-smoothing and information loss issues of GNNs #OTHEREFR - #OTHEREFR .", "This oversmoothing problem can also affect solving combinatorial optimization problems, as the goal is to identify distinguished nodes that maximize the objective function.", "Reinforcement Learning combined with GNNs is becoming increasingly popular for solving CO problems.", "Most notably, #OTHEREFR proposed S2V-DQN, a GNN-based reinforcement learning approach, which uses the neighborhood aggregation to update the node representation and learns a greedy policy using Q-learning with the graph embedding vector."], "text_after_citation": ["As data size continuously increases, developing models that can scale to modern dataset sizes is imperative.", "Significant improvement in scalability can be seen in GComb #OTHEREFR .", "GComb uses a probabilistic greedy mechanism to predict the quality of the nodes by a trained GCN.", "By evaluating the quality of nodes, GComb is able to prune those unlikely to be in the solution set, allowing it to generalize to large graphs of millions or billions of nodes while maintaining the performance of S2V-DQN on Maximum Coverage on the bipartite graph (MCP), Maximum Vertex Cover (MVC), and Influence Maximization (IM).", "However, similar to S2V-DQN, it does not involve local search behavior resulting in its performance being marginally lower than S2V-DQN."], 
"citing_paper_content": {"title": "Rels-Dqn: A Robust And Efficient Local Search Framework For Combinatorial Optimization", "abstract": "Combinatorial optimization (CO) aims to efficiently find the best solution to NP-hard problems ranging from statistical physics to social media marketing. A wide range of CO applications can benefit from local search methods because they allow reversible action over greedy policies. Deep Q-learning (DQN) using message-passing neural networks (MPNN) has shown promise in replicating the local search behavior and obtaining comparable results to the local search algorithms. However, the over-smoothing and the information loss during the iterations of message passing limit its robustness across applications, and the large message vectors result in memory inefficiency. Our paper introduces RELS-DQN, a lightweight DQN framework that exhibits the local search behavior while providing practical scalability. Using the RELS-DQN model trained on one application, it can generalize to various applications by providing solution values higher than or equal to both the local search algorithms and the existing DQN models while remaining efficient in runtime and memory."}, "cited_paper_content": {"title": "Combinatorial Optimization With Graph Convolutional Networks And Guided Tree Search", "abstract": "We present a learning-based approach to computing solutions for certain NP-hard problems. Our approach combines deep learning techniques with useful algorithmic elements from classic heuristics. The central component is a graph convolutional network that is trained to estimate the likelihood, for each vertex in a graph, of whether this vertex is part of the optimal solution. The network is designed and trained to synthesize a diverse set of solutions, which enables rapid exploration of the solution space via tree search. 
The presented approach is evaluated on four canonical NP-hard problems and five datasets, which include benchmark satisfiability problems and real social network graphs with up to a hundred thousand nodes. Experimental results demonstrate that the presented approach substantially outperforms recent work, generalizes across datasets, and scales to graphs that are orders of magnitude larger than those used during training."}, "keywords": ["S2V-DQN", "nodes"], "citation_intent": "background"} {"citing_id": "2303.03745v1", "cited_id": "1904.10237", "section_title": "B. Automatic Piano Fingering Prediction", "citation": "This result is higher than the neural model of the same architecture that was reported by #REFR (61.3%) but 0.4% lower than their best performing model (an HMM based).", "text_before_citation": ["We note that the development set is silver data (automatically annotated) and is likely to contain mistakes. We run the model and achieve 73.2% accuracy.", "Next, to test how useful our data is to a real-world gold dataset we wish to inspect its usefulness with a transfer-learning approach.", "By first pre-training a simple model on our data and then fine-tune the model on PIG.", "If our data is indeed of quality we'd expect to see performance improvements on PIG, especially as this data is relatively small.", "We begin by training the LSTM model solely on PIG, 7 which results in 64.1% accuracy."], "text_after_citation": ["We continue by using the model trained on APFD and then fine-tune it on PIG, which leads to 66.8% accuracy, an improvement of 2.7 points over the previous SOTA.", "We attribute this gain in performance to our dataset, which both increases the number of training examples and allows to train bigger neural models which excel with more training examples.", "We also experiment in the opposite direction and fine-tune the model trained on PIG with our data, which results in 73.6% accuracy, which is better than training on our data alone, achieving 
73.2% accuracy."], "citing_paper_content": {"title": "At Your Fingertips: Extracting Piano Fingering Instructions From Videos", "abstract": "Piano fingering, knowing which finger to use to play each note in a musical piece, is a hard and important skill to master when learning to play the piano. While some sheet music is available with expert-annotated fingering information, most pieces lack this information, and people often resort to learning the fingering from demonstrations in online videos. We consider the AI task of automating the extraction of fingering information from videos. This is a non-trivial task as fingers are often occluded by other fingers, and it is often not clear from the video which of the keys were pressed, requiring the synchronization of hand position information and knowledge about the notes that were played. We show how to perform this task with high accuracy using a combination of deep-learning modules, including a GAN-based approach for fine-tuning on out-of-domain data. We extract the fingering information with an f1 score of 97%. We run the resulting system on 90 videos, resulting in high-quality piano fingering information of 150K notes, the largest available dataset of piano-fingering to date."}, "cited_paper_content": {"title": "Statistical Learning And Estimation Of Piano Fingering", "abstract": "Automatic estimation of piano fingering is important for understanding the computational process of music performance and applicable to performance assistance and education systems. While a natural way to formulate the quality of fingerings is to construct models of the constraints/costs of performance, it is generally difficult to find appropriate parameter values for these models. Here we study an alternative data-driven approach based on statistical modeling in which the appropriateness of a given fingering is described by probabilities. 
Specifically, we construct two types of hidden Markov models (HMMs) and their higher-order extensions. We also study deep neural network (DNN)-based methods for comparison. Using a newly released dataset of fingering annotations, we conduct systematic evaluations of these models as well as a representative constraint-based method. We find that the methods based on high-order HMMs outperform the other methods in terms of estimation accuracies. We also quantitatively study individual difference of fingering and propose evaluation measures that can be used with multiple ground truth data. We conclude that the HMM-based methods are currently state of the art and generate acceptable fingerings in most parts and that they have certain limitations such as ignorance of phrase boundaries and interdependence of the two hands."}, "keywords": ["neural model"], "citation_intent": "result"} {"citing_id": "2304.02860v1", "cited_id": "1807.11078", "section_title": "A. Image Deraining", "citation": "To improve generalization, SEMI #REFR exploits both synthetic and real-world rainy images to conduct supervised and unsupervised training respectively. 
In addition, rain presents notable diversity (e.g.", "text_before_citation": ["Recently, deep learning has been overwhelmingly successful in image restoration #OTHEREFR - #OTHEREFR , which also includes image deraining #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR - #OTHEREFR .", "DerainNet #OTHEREFR and JORDER #OTHEREFR are two of the earliest convolution-based methods for deep single image deraining."], "text_after_citation": ["density, size, distribution, etc.), thus prior learning may be helpful for accurate rain removal.", "Among these methods, DIDMDN #OTHEREFR guides the network to restore degraded image by estimating rain density.", "Furthermore, UMRL #OTHEREFR and MSPFN #OTHEREFR utilize uncertainty and multi-scale information to obtain derained images.", "Subsequently, the squeeze-excitation mechanism and progressive recursive learning are introduced in RESCAN #OTHEREFR , PReNet #OTHEREFR and DPENet #OTHEREFR to design networks.", "MSPFN #OTHEREFR introduces multi-scale information to adapt to the distribution diversity of rain streaks."], "citing_paper_content": {"title": "Towards An Effective And Efficient Transformer For Rain-By-Snow Weather Removal", "abstract": "Rain-by-snow weather removal is a specialized task in weather-degraded image restoration aiming to eliminate coexisting rain streaks and snow particles. In this paper, we propose RSFormer, an efficient and effective Transformer that addresses this challenge. Initially, we explore the proximity of convolution networks (ConvNets) and vision Transformers (ViTs) in hierarchical architectures and experimentally find they perform approximately at intra-stage feature learning. On this basis, we utilize a Transformer-like convolution block (TCB) that replaces the computationally expensive self-attention while preserving attention characteristics for adapting to input content. 
We also demonstrate that cross-stage progression is critical for performance improvement, and propose a global-local self-attention sampling mechanism (GLASM) that down-/up-samples features while capturing both global and local dependencies. Finally, we synthesize two novel rain-by-snow datasets, RSCityScape and RS100K, to evaluate our proposed RSFormer. Extensive experiments verify that RSFormer achieves the best trade-off between performance and time-consumption compared to other restoration methods. For instance, it outperforms Restormer with a 1.53% reduction in the number of parameters and a 15.6% reduction in inference time. Datasets, source code and pre-trained models are available at https://github.com/chdwyb/RSFormer."}, "cited_paper_content": {"title": "Semi-Supervised Transfer Learning For Image Rain Removal", "abstract": "Single image rain removal is a typical inverse problem in computer vision. The deep learning technique has been verified to be effective for this task and achieved state-of-the-art performance. However, previous deep learning methods need to pre-collect a large set of image pairs with/without synthesized rain for training, which tends to make the neural network be biased toward learning the specific patterns of the synthesized rain, while be less able to generalize to real test samples whose rain types differ from those in the training data. To this issue, this paper firstly proposes a semi-supervised learning paradigm toward this task. Different from traditional deep learning methods which only use supervised image pairs with/without synthesized rains, we further put real rainy images, without need of their clean ones, into the network training process. This is realized by elaborately formulating the residual between an input rainy image and its expected network output (clear image without rain) as a concise mixture of Gaussians distribution. 
The network is therefore trained to transfer to adapting the real rain pattern domain instead of only the synthesis rain domain, and thus both the short-of-training-sample and bias-to-supervised-sample issues can be evidently alleviated. Experiments on synthetic and real data verify the superiority of our model compared to the state-of-the-arts."}, "keywords": ["real-world rainy images"], "citation_intent": "background"} {"citing_id": "2305.00449v1", "cited_id": "1211.5590", "section_title": "Feature Extraction", "citation": "It is commonly believed for many machine learning researchers that the key to effectively construct a model is properly optimized feature extraction #REFR .", "text_before_citation": ["The extracted features should contain relevant information from the initial dataset so that this simplified representation can be used as the alteration of the full original dataset to perform the required task shown as figure 3.4 #OTHEREFR .", "The main task involved by feature extraction is to reduce the resource which are necessary to describe large amounts of data.", "When analyzing or learning with massive complex data, which usually requires a lot of memory and computational power, one of the main problems are caused by the number of variables involved.", "Besides, with the original dataset, a classification algorithm or model is inclined to overfit training samples and may be generalized poorly afterwards.", "In general, feature extraction is a generic term for methods combining features or variables to solve these problems with performing sufficient accuracy comparing with the initial dataset."], "text_after_citation": ["In the experiments, except for the classic dimensionality reduction technique Principal Component Analysis (PCA), several feature extraction methods are used such as permutation importance, feature importance and hierarchical clustering based on Spearman correlations."], "citing_paper_content": {"title": "Predictability Of Machine 
Learning Algorithms And Related Feature Extraction Techniques Master Thesis", "abstract": "To implement machine learning, it is essential to first determine an appropriate algorithm for the dataset. Different algorithms may produce a large number of different models with different hyperparameter configurations, and it usually takes a lot of time to run the model on a large dataset when the model is relatively complex. Therefore, how to predict the performance of a model on a dataset is a fundamental problem to be solved. This thesis designs a prediction system based on matrix factorization to predict the classification accuracy of a specific model on a particular dataset. In this thesis, we conduct a comprehensive empirical research on more than fifty datasets that we collected from the openml web site. We study the performance prediction of three fundamental machine learning algorithms, namely, random forest, XGBoost, and MultiLayer Perceptron (MLP). In particular, we obtain the following results: \u2022 Predictability of fine-tuned models using coarse-tuned variants: Usually, training and testing complex machine learning models are time-consuming. Thus, we hope to predict the complicated models by their simple ones. Three machine learning algorithms are compared in experiments. We find that random forest and XGBoost have good predictability on most datasets; that is, as the model becomes more complex, the performance of the model becomes better, and thus the accuracy of the complex model can be foreseen directly from its simple model. Hence, we can decide efficiently which algorithm to utilize by comparing simple models. \u2022 Predictability of MLP using feature extraction techniques: Often, real datasets have quite numerous features, from a few hundred to a few thousand features. Training models on fully-featured datasets is a very time-consuming task. 
We explore the idea of training a model D on datasets that are projected on a few features and use this as a hint to predict the performance of the model D when we consider all features of the dataset. We try different feature extraction techniques including techniques based on permutation importance, gain-based feature importance, hierarchical clustering based on Spearman correlation, and principal component analysis. We study the performance of techniques on the multilayer perceptron (MLP) model and observe that feature extraction with permutation importance and hierarchical clustering based on Spearman correlation has better performance. That is, on most datasets, the accuracy of the MLP improves as the number of features extracted by these techniques increases. \u2022 Predict model performance using implicit feedback: After researching the predictability of three algorithms, our goal is to discover a method that can be used to predict the specific model performance on a particular dataset. In order to predict the classification accuracy of different algorithms on different datasets with different hyperparameters, a prediction system with matrix factorization is built to predict the performance of different models on different datasets. With this system, the input accuracies can be seen as implicit feedback because there is no more information about the cause of these performances. This system can best achieve a mean absolute error of only 6.7% in the experiment. 
"}, "cited_paper_content": {"title": "Theano: New Features And Speed Improvements", "abstract": "Theano is a linear algebra compiler that optimizes a user's symbolically-specified mathematical computations to produce efficient low-level implementations. In this paper, we present new features and efficiency improvements to Theano, and benchmarks demonstrating Theano's performance relative to Torch7, a recently introduced machine learning library, and to RNNLM, a C++ library targeted at recurrent neural networks."}, "keywords": ["many machine learning"], "citation_intent": "background"} {"citing_id": "2305.01470v1", "cited_id": "1401.8257", "section_title": "Comparison To Other Clustering Results For Bandit Problems", "citation": "Gentile, Li, and Zappella #REFR consider a more structured setting, where users can be partitioned into m unknown clusters and the context vectors C_t in each round t are generated i.i.d. (where the size can be arbitrary).", "text_before_citation": ["The graph provides structures to the parameters u_i, i.e., they assume that \u2211_{(i,j)\u2208E} ||u_i \u2212 u_j||^2 is small compared to \u2211_{i\u2208V} ||u_i||^2. The learning proceeds in rounds.", "For each round t, the user index i_t and a set of arbitrary context vectors C_{i_t} = {x_{t,1}, . . ., x_{t,c_t}} is presented and the learner has to pick one action x_t \u2208 C_{i_t} and receives a reward of u_{i_t}^T x_t with an additional sub-Gaussian noise.", "Cesa-Bianchi et al #OTHEREFR maintain a set of n linear bandit algorithms and an inverse correlation matrix M_t for feedback sharing between bandit algorithms.", "They obtain a regret bound that depends on \u221a nT and log determinant of the matrix M_t , which can be O(n)."], "text_after_citation": ["We note that this setting is closely related to our work where m = f + 1.", "Gentile et al #OTHEREFR give a regret bound that depends on \u221a mT with additional O(n + m) terms that are constant with T .", "A recent result by Gentile et al #OTHEREFR considers various data-dependent assumptions to obtain sharper bounds that depend on \u221a T m and n \u2022 polylog(nT ).", "Another line of work, by Maillard and Munos #OTHEREFR and Hong et al #OTHEREFR , considers latent bandits where there is a partition of context types B into C clusters C = {B_c}, each with known reward distribution.", "However, the learner, when receiving the context type b \u2208 B, does not know the cluster B_c containing b. This is a much harder problem."], "citing_paper_content": {"title": "Stochastic Contextual Bandits With Graph-Based Contexts", "abstract": "We naturally generalize the on-line graph prediction problem to a version of stochastic contextual bandit problems where contexts are vertices in a graph and the structure of the graph provides information on the similarity of contexts. More specifically, we are given a graph G = (V, E), whose vertex set V represents contexts with unknown vertex label y. In our stochastic contextual bandit setting, vertices with the same label share the same reward distribution. The standard notion of instance difficulties in graph label prediction is the cutsize f defined to be the number of edges whose end points have different labels. 
For line graphs and trees we present an algorithm with regret bound of \u00d5(T^{2/3} K^{1/3} f^{1/3}) where K is the number of arms. Our algorithm relies on the optimal stochastic bandit algorithm by Zimmert and Seldin [AISTATS'19, JMLR'21]. When the best arm outperforms the other arms, the regret improves to \u00d5(\u221aKT \u2022 f). The regret bound in the latter case is comparable to other optimal contextual bandit results in more general cases, but our algorithm is easy to analyze, runs very efficiently, and does not require an i.i.d. assumption on the input context sequence. The algorithm also works with general graphs using a standard random spanning tree reduction."}, "cited_paper_content": {"title": "Online Clustering Of Bandits", "abstract": "We introduce a novel algorithmic approach to content recommendation based on adaptive clustering of exploration-exploitation (\"bandit\") strategies. We provide a sharp regret analysis of this algorithm in a standard stochastic noise setting, demonstrate its scalability properties, and prove its effectiveness on a number of artificial and real-world datasets. Our experiments show a significant increase in prediction performance over state-of-the-art methods for bandit problems."}, "keywords": ["context vectors", "unknown clusters"], "citation_intent": "background"} {"citing_id": "2304.12154v1", "cited_id": "1804.10520", "section_title": "Prior Human-Designed Heuristics For Choosing The Ordering", "citation": "It is perhaps surprising that the original heuristic of Brown is able to perform competitively or even beat these others despite having access to less information #REFR .", "text_before_citation": ["Once the importance of variable ordering was established, researchers began looking for strategies to choose a good variable ordering. 
The first attempts consisted of designing human-made heuristics.", "In 2004 Brown #OTHEREFR documented a heuristic based on three simple hand-picked features of the set of polynomials, for the software QEPCAD. We call this heuristic Brown.", "There have been other heuristics produced which can offer greater accuracy but at greater expense, performing increasing numbers of steps in the CAD algorithm: Dolzmann et al.", "#OTHEREFR concluded it best to perform the projection stage of CAD and compare sums of the total degree of the polynomials produced (sotd); Bradford et al.", "#OTHEREFR considered the initial decomposition of the real line; and Wilson et al. #OTHEREFR the open cells in the decomposition."], "text_after_citation": ["This suggests the necessary information to identify a good ordering may be available from the input alone.", "Most recently del R\u00edo and England #OTHEREFR designed a new simple heuristic gmods based on properties of the input polynomials, selected by studying which features of the input polynomials have the greatest impact on the complexity analysis of CAD."], "citing_paper_content": {"title": "Explainable Ai Insights For Symbolic Computation: A Case Study On Selecting The Variable Ordering For Cylindrical Algebraic Decomposition", "abstract": "In recent years there has been increased use of machine learning (ML) techniques within mathematics, including symbolic computation where it may be applied safely to optimise or select algorithms. This paper explores whether using explainable AI (XAI) techniques on such ML models can offer new insight for symbolic computation, inspiring new implementations within computer algebra systems that do not directly call upon AI tools. We present a case study on the use of ML to select the variable ordering for cylindrical algebraic decomposition. 
It has already been demonstrated that ML can make the choice well, but here we show how the SHAP tool for explainability can be used to inform new heuristics of a size and complexity similar to those human-designed heuristics currently commonly used in symbolic computation."}, "cited_paper_content": {"title": "Using Machine Learning To Improve Cylindrical Algebraic Decomposition", "abstract": "Cylindrical Algebraic Decomposition (CAD) is a key tool in computational algebraic geometry, best known as a procedure to enable Quantifier Elimination over real-closed fields. However, it has a worst case complexity doubly exponential in the size of the input, which is often encountered in practice. It has been observed that for many problems a change in algorithm settings or problem formulation can cause huge differences in runtime costs, changing problem instances from intractable to easy. A number of heuristics have been developed to help with such choices, but the complicated nature of the geometric relationships involved means these are imperfect and can sometimes make poor choices. We investigate the use of machine learning (specifically support vector machines) to make such choices instead. Machine learning is the process of fitting a computer model to a complex function based on properties learned from measured data. In this paper we apply it in two case studies: the first to select between heuristics for choosing a CAD variable ordering; the second to identify when a CAD problem instance would benefit from Grobner Basis preconditioning. These appear to be the first such applications of machine learning to Symbolic Computation. 
We demonstrate in both cases that the machine-learned choice outperforms human-developed heuristics."}, "keywords": ["original heuristic"], "citation_intent": "background"} {"citing_id": "2305.01974v1", "cited_id": "1802.06305", "section_title": "Edge Analytics", "citation": "The ML models and their corresponding functions such as clustering, classification and feature extraction, in the context of IoT are extensively investigated in #REFR .", "text_before_citation": ["Distributed data processing frameworks such as Hadoop MapReduce and in-memory alternatives such as Apache Spark can be employed for this sensor data analytics on the cloud.", "Moreover, since IoT mostly deals with big streaming data, message queues such as Apache Kafka can be used to buffer and feed the data into stream data processing systems such as Apache Storm and Apache Spark streaming #OTHEREFR .", "Additionally, the edge analytics performed can be based on machine learning (ML). E.g. Drolia et al.", "#OTHEREFR , proposed a Pre-Cog system on fog devices that recognizes images rapidly through caching and prefetching. 
Abdulkareem et al.", "#OTHEREFR , provide a detailed review of approaches performing edge analytics using ML on fog infrastructure."], "text_after_citation": ["There are some generic distributable algorithms such as k-nearest neighbors (k-NN) and other special neural network methods, which can directly be used in resource constrained fog devices for performing ML tasks.", "These studies lead to the development of frameworks such as CANTO #OTHEREFR , that can be used to train neural networks on fog nodes for performing edge analytics.", "However, performing sophisticated ML algorithms in resource and power constrained fog nodes is still a major challenge.", "Support for ML enabling hardware such as ENVISION, that can be used in fog networks, is summarized in [112]."], "citing_paper_content": {"title": "A Decade Of Research In Fog Computing: Relevance, Challenges, And Future Directions", "abstract": "Recent developments in the Internet of Things (IoT) and real-time applications have led to the unprecedented growth in the connected devices and their generated data. Traditionally, this sensor data is transferred and processed at the cloud, and the control signals are sent back to the relevant actuators, as part of the IoT applications. This cloud-centric IoT model resulted in increased latencies and network load, and compromised privacy. To address these problems, Fog Computing was coined by Cisco in 2012, a decade ago, which utilizes proximal computational resources for processing the sensor data. Ever since its proposal, fog computing has attracted significant attention and the research fraternity focused on addressing different challenges such as fog frameworks, simulators, resource management, placement strategies, quality of service aspects, fog economics etc. However, after a decade of research, we still do not see large-scale deployments of public/private fog networks, which can be utilized in realizing interesting IoT applications. 
In the literature, we only see pilot case studies and small-scale testbeds, and utilization of simulators for demonstrating scale of the specified models addressing the respective technical challenges. There are several reasons for this, and most importantly, fog computing did not present a clear business case for the companies and participating individuals yet. This paper summarizes challenges, state-of-the-art and future research directions in realizing real-time fog computing applications. Contrary to other survey papers, that exhaustively address a specific set of aspects of fog computing, this work discusses the fog research challenges and solutions in much broader scope and thus provides a thorough opinion about progressing the research and quickly adapting fog computing in real-world applications."}, "cited_paper_content": {"title": "Machine Learning For Internet Of Things Data Analysis: A Survey", "abstract": "Rapid developments in hardware, software, and communication technologies have facilitated the emergence of Internet-connected sensory devices that provide observations and data measurements from the physical world. By 2020, it is estimated that the total number of Internet-connected devices being used will be between 25 and 50 billion. As these numbers grow and technologies become more mature, the volume of data being published will increase. The technology of Internet-connected devices, referred to as Internet of Things (IoT), continues to extend the current Internet by providing connectivity and interactions between the physical and cyber worlds. In addition to an increased volume, the IoT generates big data characterized by its velocity in terms of time and location dependency, with a variety of multiple modalities and varying data quality. Intelligent processing and analysis of this big data are the key to developing smart IoT applications. 
This article assesses the various machine learning methods that deal with the challenges presented by IoT data by considering smart cities as the main use case. The key contribution of this study is the presentation of a taxonomy of machine learning algorithms explaining how different techniques are applied to the data in order to extract higher level information. The potential and challenges of machine learning for IoT data analytics will also be discussed. A use case of applying a Support Vector Machine (SVM) to Aarhus smart city traffic data is presented for a more detailed exploration."}, "keywords": ["IoT"], "citation_intent": "background"} {"citing_id": "2304.03694v1", "cited_id": "1506.04696", "section_title": "Sampling From The Posterior Distribution", "citation": "Furthermore, this typically requires the calculation of higher-order derivatives #REFR which is computationally expensive.", "text_before_citation": ["Generating high-quality sample weights from the posterior distribution is usually done via the simulation of a Markov chain which converges in distribution to the posterior.", "While several Markov chains methods have been constructed for neural network applications, we found them unsuitable for this use case.", "The main difficulty we encountered was, that the scale of the gradients with respect to the different sets of parameters corresponding to the different layers of the neural network vary by several orders of magnitude.", "This makes the use of a single step size for all parameters impossible, as it would cause the sets of parameters with smaller gradients to be frozen during the optimization.", "While modern neural network optimizers circumvent this problem through an adaptive step size for each parameter #OTHEREFR , this is only possible to a very limited degree for Markov chains #OTHEREFR without changing the stationary distribution to which they converge."], "text_after_citation": ["To deal with these challenges, we develop here a new 
approach to sampling the posterior distribution based on the Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) algorithm introduced by Chen et al. #OTHEREFR .", "In its basic form, the algorithm is given by the Markov chain", "\u2206w_t = \u2212[\u2207_{\u03b8_t} u(\u03b8_t) + M^{\u22121}C w_{t\u22121}]\u2206t + \u221a(2C\u2206t) N_t(0, I), \u2206\u03b8_t = M^{\u22121} w_t \u2206t.", "where", "u(\u03b8) := \u2212 ln p(\u03b8) \u2212 \u2211_{i=1}^{l} ln p(y_i|x_i, \u03b8),"], "citing_paper_content": {"title": "High Accuracy Uncertainty-Aware Interatomic Force Modeling With Equivariant Bayesian Neural Networks", "abstract": "Even though Bayesian neural networks offer a promising framework for modeling uncertainty, active learning and incorporating prior physical knowledge, few applications of them can be found in the context of interatomic force modeling. One of the main challenges in their application to learning interatomic forces is the lack of suitable Monte Carlo Markov chain sampling algorithms for the posterior density, as the commonly used algorithms do not converge in a practical amount of time for many of the state-of-the-art architectures. As a response to this challenge, we introduce a new Monte Carlo Markov chain sampling algorithm in this paper which can circumvent the problems of the existing sampling methods. In addition, we introduce a new stochastic neural network model based on the NequIP architecture and demonstrate that, when combined with our novel sampling algorithm, we obtain predictions with state-of-the-art accuracy as well as a good measure of uncertainty."}, "cited_paper_content": {"title": "A Complete Recipe For Stochastic Gradient Mcmc", "abstract": "Many recent Markov chain Monte Carlo (MCMC) samplers leverage continuous dynamics to define a transition kernel that efficiently explores a target distribution. 
In tandem, a focus has been on devising scalable variants that subsample the data and use stochastic gradients in place of full-data gradients in the dynamic simulations. However, such stochastic gradient MCMC samplers have lagged behind their full-data counterparts in terms of the complexity of dynamics considered since proving convergence in the presence of the stochastic gradient noise is non-trivial. Even with simple dynamics, significant physical intuition is often required to modify the dynamical system to account for the stochastic gradient noise. In this paper, we provide a general recipe for constructing MCMC samplers--including stochastic gradient versions--based on continuous Markov processes specified via two matrices. We constructively prove that the framework is complete. That is, any continuous Markov process that provides samples from the target distribution can be written in our framework. We show how previous continuous-dynamic samplers can be trivially \"reinvented\" in our framework, avoiding the complicated sampler-specific proofs. We likewise use our recipe to straightforwardly propose a new state-adaptive sampler: stochastic gradient Riemann Hamiltonian Monte Carlo (SGRHMC). Our experiments on simulated data and a streaming Wikipedia analysis demonstrate that the proposed SGRHMC sampler inherits the benefits of Riemann HMC, with the scalability of stochastic gradient methods."}, "keywords": ["higher-order derivatives"], "citation_intent": "background"} {"citing_id": "2303.05000v1", "cited_id": "1511.05644", "section_title": "D. 
Adversarial Autoencoder", "citation": "In contrast to VAE that uses KL divergence and evidence lower bound, adversarial autoencoder (AAE) #REFR uses adversarial learning to impose a specific distribution on the latent variables, making itself superior to VAE in terms of imposing complicated distributions and shaping the latent space.", "text_before_citation": ["The variational autoencoder (VAE) #OTHEREFR provides a principled method for jointly learning deep latent-variable models and corresponding inference models using stochastic gradient descent #OTHEREFR , which is commonly used to generate samples in the target space from pre-defined latent distribution.", "Training a VAE model consists of two kinds of loss: regularization and reconstruction.", "The regularization is aimed to encode the input as certain distributions over the latent space using Kullback-Leibler (KL) divergence, while the reconstruction is to decode the latent variables to the target or original space."], "text_after_citation": ["In our work, we utilize an AAE architecture to model the semantics in the driving scenario."], "citing_paper_content": {"title": "Learning Representation For Anomaly Detection Of Vehicle Trajectories", "abstract": "Predicting the future trajectories of surrounding vehicles based on their history trajectories is a critical task in autonomous driving. However, when small crafted perturbations are introduced to those history trajectories, the resulting anomalous (or adversarial) trajectories can significantly mislead the future trajectory prediction module of the ego vehicle, which may result in unsafe planning and even fatal accidents. Therefore, it is of great importance to detect such anomalous trajectories of the surrounding vehicles for system safety, but few works have addressed this issue. In this work, we propose two novel methods for learning effective and efficient representations for online anomaly detection of vehicle trajectories. 
Different from general time-series anomaly detection, anomalous vehicle trajectory detection deals with much richer contexts on the road and fewer observable patterns on the anomalous trajectories themselves. To address these challenges, our methods exploit contrastive learning techniques and trajectory semantics to capture the patterns underlying the driving scenarios for effective anomaly detection under supervised and unsupervised settings, respectively. We conduct extensive experiments to demonstrate that our supervised method based on contrastive learning and unsupervised method based on reconstruction with semantic latent space can significantly improve the performance of anomalous trajectory detection in their corresponding settings over various baseline methods. We also demonstrate our methods' generalization ability to detect unseen patterns of anomalies."}, "cited_paper_content": {"title": "Adversarial Autoencoders", "abstract": "In this paper, we propose the \"adversarial autoencoder\" (AAE), which is a probabilistic autoencoder that uses the recently proposed generative adversarial networks (GAN) to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution. Matching the aggregated posterior to the prior ensures that generating from any part of prior space results in meaningful samples. As a result, the decoder of the adversarial autoencoder learns a deep generative model that maps the imposed prior to the data distribution. We show how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization. 
We performed experiments on MNIST, Street View House Numbers and Toronto Face datasets and show that adversarial autoencoders achieve competitive results in generative modeling and semi-supervised classification tasks."}, "keywords": ["autoencoder"], "citation_intent": "method"} {"citing_id": "2303.02418v1", "cited_id": "1905.08108", "section_title": "Gradient Conflict Analysis.", "citation": "For fair comparison, we select a subset from each user group to keep the interaction number for each user fixed, thus preventing the potential impact of node degree to results #REFR . Figure 8 presents the results.", "text_before_citation": ["To verify that our model can alleviate potential gradient conflict, we perform experiments on user groups with different behavior relevance levels.", "In particular, we divide the test set into six user groups according to the average Pearson correlation #OTHEREFR among all behaviors.", "The calculation of average Pearson correlation can be referred to Appendix A.4."], "text_after_citation": ["We omit the results on the IJCAI dataset due to space limitation, which have consistent trends.", "For more rigorous results, we run each experiment 5 times and draw the mean and fluctuation range on the figure.", "We find that MESI consistently outperforms all baselines among all user groups, which further demonstrates the superiority of MESI for MTL.", "Besides, with the increase of behavior correlations, MESI gets better performances, while the performances of other baselines fluctuate or even decrease.", "A possible reason is the negative transfer caused by potential gradient conflict when knowledge is transferred across different tasks."], "citing_paper_content": {"title": "Compressed Interaction Graph Based Framework For Multi-Behavior Recommendation", "abstract": "Multi-types of user behavior data (e.g., clicking, adding to cart, and purchasing) are recorded in most real-world recommendation scenarios, which can help to learn users' 
multi-faceted preferences. However, it is challenging to explore multi-behavior data due to the unbalanced data distribution and sparse target behavior, which lead to the inadequate modeling of high-order relations when treating multi-behavior data \"as features\" and gradient conflict in multitask learning when treating multi-behavior data \"as labels\". In this paper, we propose CIGF, a Compressed Interaction Graph based Framework, to overcome the above limitations. Specifically, we design a novel Compressed Interaction Graph Convolution Network (CIGCN) to model instance-level high-order relations explicitly. To alleviate the potential gradient conflict when treating multi-behavior data \"as labels\", we propose a Multi-Expert with Separate Input (MESI) network with separate input on the top of CIGCN for multi-task learning. Comprehensive experiments on three large-scale real-world datasets demonstrate the superiority of CIGF. Ablation studies and in-depth analysis further validate * Both authors contributed equally to this research. \u2020 Work done when they were research interns at Huawei Noah's Ark Lab. \u2021 Corresponding author. This work is licensed under a Creative Commons Attribution International 4.0 License."}, "cited_paper_content": {"title": "Neural Graph Collaborative Filtering", "abstract": "Learning vector representations (aka. embeddings) of users and items lies at the core of modern recommender systems. Ranging from early matrix factorization to recently emerged deep learning based methods, existing efforts typically obtain a user's (or an item's) embedding by mapping from pre-existing features that describe the user (or the item), such as ID and attributes. We argue that an inherent drawback of such methods is that, the collaborative signal, which is latent in user-item interactions, is not encoded in the embedding process. As such, the resultant embeddings may not be sufficient to capture the collaborative filtering effect. 
In this work, we propose to integrate the user-item interactions - more specifically the bipartite graph structure - into the embedding process. We develop a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it. This leads to the expressive modeling of high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding process in an explicit manner. We conduct extensive experiments on three public benchmarks, demonstrating significant improvements over several state-of-the-art models like HOP-Rec [39] and Collaborative Memory Network [5]. Further analysis verifies the importance of embedding propagation for learning better user and item representations, justifying the rationality and effectiveness of NGCF. Codes are available at https://github.com/xiangwang1223/neural_graph_collaborative_filtering."}, "keywords": ["interaction number", "node degree"], "citation_intent": "result"} {"citing_id": "2303.01687v1", "cited_id": "1802.06739", "section_title": "Generating Mnist And Fashionmnist Images", "citation": "Table 2 also compares with several existing models, while Figure 3 visually compares results with DP-GAN #REFR , DP-MERF (Harder et al., 2021) and DP-HP (Vinaroz et al., 2022) .", "text_before_citation": ["This necessitates a trade-off in the feature dimensionality: large dimensions can lead to overwhelming amounts of added noise, while small dimensions may be inadequate to serve as a loss for image generation.", "Figure 1 and Table 1 show that for both MNIST and Fashion-MNIST, changing the width does somewhat affect the final accuracy and image quality, but this effect is very minimal.", "Therefore, for subsequent experiments, we choose a width of 800 as a good compromise.", "Varying privacy levels: Table 2 and Figure 2 show our model's performance under different levels of privacy.", "Other than for FashionMNIST with \u03b5 = 0.2, 
the performance of our model does not degrade significantly as the privacy requirement becomes more stringent."], "text_after_citation": ["We see that even with simple architectures, DP-NTK broadly performs better than other high-accuracy models, and generates comprehensible images."], "citing_paper_content": {"title": "Differentially Private Neural Tangent Kernels For Privacy-Preserving Data Generation", "abstract": "Maximum mean discrepancy (MMD) is a particularly useful distance metric for differentially private data generation: when used with finitedimensional features it allows us to summarize and privatize the data distribution once, which we can repeatedly use during generator training without further privacy loss. An important question in this framework is, then, what features are useful to distinguish between real and synthetic data distributions, and whether those enable us to generate quality synthetic data. This work considers the using the features of neural tangent kernels (NTKs), more precisely empirical NTKs (e-NTKs). We find that, perhaps surprisingly, the expressiveness of the untrained e-NTK features is comparable to that of the features taken from pre-trained perceptual features using public data. As a result, our method improves the privacy-accuracy trade-off compared to other state-of-the-art methods, without relying on any public data, as demonstrated on several tabular and image benchmark datasets."}, "cited_paper_content": {"title": "Differentially Private Generative Adversarial Network", "abstract": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. 
One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level."}, "keywords": ["DP-GAN"], "citation_intent": "result"} {"citing_id": "2303.18005v1", "cited_id": "1911.08736", "section_title": "Data In Included Literature", "citation": "The number of participants in internal datasets varied by orders of magnitude, with each study including 1 to 664 ovarian cancer patients, and one study including over 10,000 total patients across a range of 32 malignancies #REFR .", "text_before_citation": [], "text_after_citation": ["Only the five most common subtypes of ovarian carcinoma were used, with no study reporting the inclusion of less common carcinomas or non-epithelial ovarian cancers.", "Only one study explicitly included any prospective data collection, and this was only for a small subset which was not used for external validation #OTHEREFR .", "As shown in Figure 3 , the number of pathology slides used was often much greater than the number of patients included, with three studies using over 1000 slides from ovarian cancer patients #OTHEREFR .", "Most of the studies used WSIs for model development (27/36) , with others using tissue microarrays (TMAs) (4/36) or 
pre-cropped digital pathology images (2/36).", "Most studies used H&E-stained tissue (27/36) and the others used a variety of IHC stains (9/36), with no two papers reporting the use of the same IHC stains."], "citing_paper_content": {"title": "Artificial Intelligence In Ovarian Cancer Histopathology: A Systematic Review", "abstract": "To characterise and assess the quality of published research evaluating artificial intelligence (AI) methods for ovarian cancer diagnosis or prognosis using histopathology data. Methods A search of PubMed, Scopus, Web of Science, Cochrane Central Register of Controlled Trials, and WHO International Clinical Trials Registry Platform was conducted up to 01/12/2022. The inclusion criteria required that research evaluated AI on histopathology images for diagnostic or prognostic inferences in ovarian cancer, including primary tumours of the ovaries, fallopian tubes, and peritoneum. Reviews and non-English language articles were excluded. The risk of bias was assessed for every model that met the inclusion criteria using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Information about each model of interest was tabulated and summary statistics were reported. Based on the results, we provided recommendations to improve study design and reporting to reduce the risk of bias and improve the reproducibility of future research in the field. The study protocol was registered on PROSPERO (CRD42022334730). PRISMA 2020 reporting guidelines were followed. Results A total of 1434 research articles were identified, of which 36 were eligible for inclusion. These studies reported 62 models of interest, including 35 classifiers, 14 survival prediction models, 7 segmentation models, and 6 regression models. Models were developed using 1-1375 slides from 1-664 ovarian cancer patients. 
A wide array of outcomes were predicted, including overall survival (9/62), histological subtypes (7/62), stain quantity (6/62), malignancy (5/62), primary cancer (4/62), and tumour region (4/62). Older studies used traditional machine learning (ML) models with hand-crafted features, while newer studies typically employed deep learning (DL) to automatically learn features and predict the outcome(s) of interest. All models were found to be at high or unclear risk of bias overall, with most research having a high risk of bias in the analysis and a lack of clarity regarding participants and predictors in the study. Research was frequently limited by insufficient reporting, small sample sizes, and insufficient validation, with external validation being particularly rare. Conclusion Limited research has been conducted on the application of AI to histopathology images for diagnostic or prognostic purposes in ovarian cancer, and none of the associated models have been demonstrated to be ready for real-world implementation. Recommendations are provided addressing underlying biases and flaws in study design, which should help inform higher-quality reproducible future research. Key aspects to help ensure clinical translation include more transparent and comprehensive reporting of data provenance and modelling approaches, as well as improved quantitative performance evaluation using cross-validation and external validations."}, "cited_paper_content": {"title": "Pan-Cancer Diagnostic Consensus Through Searching Archival Histopathology Images Using Artificial Intelligence", "abstract": "The emergence of digital pathology has opened new horizons for histopathology. Artificial intelligence (AI) algorithms are able to operate on digitized slides to assist pathologists with different tasks. Whereas AI-involving classification and segmentation methods have obvious benefits for image analysis, image search represents a fundamental shift in computational pathology. 
Matching the pathology of new patients with already diagnosed and curated cases offers pathologists a new approach to improve diagnostic accuracy through visual inspection of similar cases and computational majority vote for consensus building. In this study, we report the results from searching the largest public repository (The Cancer Genome Atlas, TCGA) of whole-slide images from almost 11,000 patients. We successfully indexed and searched almost 30,000 high-resolution digitized slides constituting 16 terabytes of data comprised of 20 million 1000 \u00d7 1000 pixels image patches. The TCGA image database covers 25 anatomic sites and contains 32 cancer subtypes. High-performance storage and GPU power were employed for experimentation. The results were assessed with conservative \"majority voting\" to build consensus for subtype diagnosis through vertical search and demonstrated high accuracy values for both frozen section slides (e.g., bladder urothelial carcinoma 93%, kidney renal clear cell carcinoma 97%, and ovarian serous cystadenocarcinoma 99%) and permanent histopathology slides (e.g., prostate adenocarcinoma 98%, skin cutaneous melanoma 99%, and thymoma 100%). 
The key finding of this validation study was that computational consensus appears to be possible for rendering diagnoses if a sufficiently large number of searchable cases are available for each cancer subtype."}, "keywords": ["664 ovarian cancer", "32 malignancies"], "citation_intent": "background"} {"citing_id": "2303.16322v1", "cited_id": "1802.02611", "section_title": "Reducing Training Time", "citation": "We observe that FMAS can reproduce the baseline accuracy of DL3+ #REFR , achieving MIoU errors of 23% (e.g., FMAS-FP1), compared to a reported error of 21% on the validation set.", "text_before_citation": ["Figures 2, 3, and 4 plot the MIoU error of the Pareto non-dominated front of the corresponding generation against the FLOPs count, network parameters count, and latency respectively.", "We explore the capacity of FMAS to cut GPU time by evaluating a total of 240 modified Xception variants developed over 20 generations targeting FLOPs, and parameters; and a total of 300 MobileNetV2 variants developed over 25 generations targeting latency.", "Table 3 reports the network structures, GPU time consumption, computational cost, accuracy evaluated on a subset of the validation set, and post-fine-tuning accuracy on the entire validation set of selected networks using the Xception backbone. It presents a #OTHEREFR ."], "text_after_citation": ["Similar to Table 3 , Table 4 reports results when using the Mo-bileNetV2 backbone and searching for 25 generations.", "In addition to FLOPs and parameters, we also report inference latency on the GAP8 for the original model, FCN-VGG16, and selected search results.", "Note that while FCN-VGG16 uses only GAP8-supported operations, making it a suitable baseline, it requires more than 8\u00d7 more RAM than the GAP8 has, and therefore cannot be deployed. 
Table 1 reports their hyperparameters.", "FMAS-F1 cuts the number of FLOPs by 43% with respect to DL3+, and network parameters by 7.9%, for a relative increase of 5.2% in MIoU error; it was discovered in 0.68 GPU days (generation 17).", "FMAS-F2 trades off only 2.5% of the MIoU error of DL3+ for reducing FLOPs by 10%, and network parameters by 20%, in 0.52 GPU days (generation 13)."], "citing_paper_content": {"title": "Fmas: Fast Multi-Objective Supernet Architecture Search For Semantic Segmentation", "abstract": "We present FMAS, a fast multi-objective neural architecture search framework for semantic segmentation. FMAS subsamples the structure and pre-trained parameters of DeepLabV3+, without finetuning, dramatically reducing training time during search. To further reduce candidate evaluation time, we use a subset of the validation dataset during the search. Only the final, Pareto non-dominated, candidates are ultimately fine-tuned using the complete training set. We evaluate FMAS by searching for models that effectively trade accuracy and computational cost on the PASCAL VOC 2012 dataset. FMAS finds competitive designs quickly, e.g., taking just 0.5 GPU days to discover a DeepLabV3+ variant that reduces FLOPs and parameters by 10% and 20% respectively, for less than 3% increased error. We also search on an edge device called GAP8 and use its latency as the metric. FMAS is capable of finding 2.2\u00d7 faster network with 7.61% MIoU loss."}, "cited_paper_content": {"title": "Encoder-Decoder With Atrous Separable Convolution For Semantic Image Segmentation", "abstract": "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. 
The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0\\% and 82.1\\% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at \\url{https://github.com/tensorflow/models/tree/master/research/deeplab}."}, "keywords": ["baseline accuracy"], "citation_intent": "result"} {"citing_id": "2303.00128v1", "cited_id": "1805.12152", "section_title": "Prediction And Transfer", "citation": "This behavior is consistent with findings by #REFR showing that highly predictive non-robust features in the data tend to reduce learner performance when presented with out-of-distribution examples.", "text_before_citation": ["In contrast, Figs. 4b, 4f, 4c, 4g show significant differences in performance for out-of-distribution (OOD) example testing.", "ReI presents better behaved performance and outperforms by larger margins compared to the standard in all cases.", "106.23 U-Net +FC 55.12 Transformer + FC 106.12 VAE + FC 104.51 ReI-VAE+FC 12.45", "Note that even though the standard methods present a performance advantage over ReI for in-distribution examples, this
is not the case for out-of-distribution examples collected in the wild from Mars.", "ReI learns representations with disentangled variables that show better generalizations against the tested out-of-distribution examples."], "text_after_citation": ["Table 1 provides additional results comparing performance on Earth-to-Mars transfer on a variety of DL architectures and averaged over all elements y \u2208 R^n with n = 11.", "Comparisons include fully connected (FC), multilayer perceptron (MLP), MLP Mixer #OTHEREFR , ResNet #OTHEREFR , U-Net #OTHEREFR , Transformers #OTHEREFR and again the standard VAE #OTHEREFR .", "Note that some of the architectures do not produce a latent representation explicitly; these are instead trained end-to-end for prediction.", "The number in parentheses next to each architecture name (e.g., FC(10)) expresses the corresponding depth of layers."], "citing_paper_content": {"title": "Representation Disentaglement Via Regularization By Identification", "abstract": "This work focuses on the problem of learning disentangled representations from observational data. Given observations {x^{(i)}}_{i=1}^{N} drawn from p(x|y) with generative variables y admitting the distribution factorization p(y) = \u220f_c p(y_c), we ask whether learning disentangled representations matching the space of observations with identification guarantees on the posterior p(z|x,\u0177_c) for each c, is plausible. We argue modern deep representation learning models are ill-posed with collider bias behaviour; a source of bias producing entanglement between generating variables. Under the rubric of causality, we show this issue can be explained and reconciled under the condition of identifiability; attainable under supervision or a weak-form of it. For this, we propose regularization by identification (ReI), a regularization framework defined by the identification of the causal queries involved in the learning problem.
Empirical evidence shows that enforcing ReI in a variational framework results in disentangled representations equipped with generalization capabilities to out-of-distribution examples and that aligns nicely with the true expected effect between generating variables and measurement apparatus."}, "cited_paper_content": {"title": "Robustness May Be At Odds With Accuracy", "abstract": "We show that there exists an inherent tension between the goal of adversarial robustness and that of standard generalization. Specifically, training robust models may not only be more resource-consuming, but also lead to a reduction of standard accuracy. We demonstrate that this trade-off between the standard accuracy of a model and its robustness to adversarial perturbations provably exists even in a fairly simple and natural setting. These findings also corroborate a similar phenomenon observed in practice. Further, we argue that this phenomenon is a consequence of robust classifiers learning fundamentally different feature representations than standard classifiers. 
These differences, in particular, seem to result in unexpected benefits: the representations learned by robust models tend to align better with salient data characteristics and human perception."}, "keywords": ["highly predictive non-robust"], "citation_intent": "result"} {"citing_id": "2303.05228v1", "cited_id": "1906.08249", "section_title": "Related Work", "citation": "This result was later developed in #REFR by counting the number of pairs of coprime polynomials with a nonzero constant term, and by providing a construction for maximal families of Mutually Orthogonal Latin Squares (MOLS) generated by linear bipermutive CA.", "text_before_citation": ["#OTHEREFR studied the analogies between the Lattice Gas Automata (LGA) model for fluids and the SPN paradigm.", "In particular, the authors proposed to use the collision operator of the LGA model to implement the substitution layer of a block cipher.", "We conclude this section with a brief outlook of the research line devoted to orthogonal cellular automata, which has been mostly investigated by the second author of this manuscript. Mariot et al.", "first proved in #OTHEREFR a necessary and sufficient condition for a pair of linear bipermutive CA to generate orthogonal Latin squares.", "The characterization is quite simple, since it consists in checking whether the polynomials associated to the local rules of the CA are relatively prime."], "text_after_citation": ["These results have been subsequently used by Gadouleau et al.", "#OTHEREFR to devise a new construction of bent Boolean functions, which reach the highest possible nonlinearity.", "Later, it turned out that the construction could be greatly simplified through the formalism of linear recurring sequences, instead of using orthogonal CA #OTHEREFR . 
Formenti et al.", "#OTHEREFR devised a combinatorial algorithm to enumerate all pairs of coprime polynomials with nonzero constant term, and thus all linear OCA of a given diameter.", "Finally, Mariot #OTHEREFR considered orthogonal CA as pseudorandom generators, and devised an algorithm to compute the period of the resulting sequences when the underlying CA are linear."], "citing_paper_content": {"title": "A Classification Of S-Boxes Generated By Orthogonal Cellular Automata", "abstract": "Most of the approaches published in the literature to construct Sboxes via Cellular Automata (CA) work by either iterating a finite CA for several time steps, or by a one-shot application of the global rule. The main characteristic that brings together these works is that they employ a single CA rule to define the vectorial Boolean function of the S-box. In this work, we explore a different direction for the design of S-boxes that leverages on Orthogonal CA (OCA), i.e. pairs of CA rules giving rise to orthogonal Latin squares. The motivation stands on the facts that an OCA pair already defines a bijective transformation, and moreover the orthogonality property of the resulting Latin squares ensures a minimum amount of diffusion. We exhaustively enumerate all S-boxes generated by OCA pairs of diameter 4 \u2264 d \u2264 6, and measure their nonlinearity. Interestingly, we observe that for d = 4 and d = 5 all S-boxes are linear, despite the underlying CA local rules being nonlinear. The smallest nonlinear S-boxes emerges for d = 6, but their nonlinearity is still too low to be used in practice. Nonetheless, we unearth an interesting structure of linear OCA S-boxes, proving that their Linear Components Space (LCS) is itself the image of a linear CA, or equivalently a polynomial code. 
We finally classify all linear OCA S-boxes in terms of their generator polynomials."}, "cited_paper_content": {"title": "Mutually Orthogonal Latin Squares Based On Cellular Automata", "abstract": "We investigate sets of mutually orthogonal latin squares (MOLS) generated by cellular automata (CA) over finite fields. After introducing how a CA defined by a bipermutive local rule of diameter d over an alphabet of q elements generates a Latin square of order \\(q^{d-1}\\), we study the conditions under which two CA generate a pair of orthogonal Latin squares. In particular, we prove that the Latin squares induced by two Linear Bipermutive CA (LBCA) over the finite field \\(\\mathbb {F}_q\\) are orthogonal if and only if the polynomials associated to their local rules are relatively prime. Next, we enumerate all such pairs of orthogonal Latin squares by counting the pairs of coprime monic polynomials with nonzero constant term and degree n over \\(\\mathbb {F}_q\\). Finally, we present a construction for families of MOLS based on LBCA, and prove that their cardinality corresponds to the maximum number of pairwise coprime polynomials with nonzero constant term. 
Although our construction does not yield all such families of MOLS, we show that the resulting lower bound is asymptotically close to their actual number."}, "keywords": ["Mutually Orthogonal Latin"], "citation_intent": "background"} {"citing_id": "2303.14969v1", "cited_id": "1804.08328", "section_title": "C.3 Additional Qualitative Comparison With Baselines", "citation": "In Figure 12, even though the GT label for semantic segmentation (\"couch\" class) is noisy, as it is a pseudolabel generated by a pre-trained segmentation model #REFR , our model successfully segments the two couches present in the figure.", "text_before_citation": ["We provide additional results on the qualitative evaluation of our model and the baselines.", "Figures 11-14 show visualizations on different query images and support sets, where we vary the class of the semantic segmentation task included in each support.", "The results show trends consistent with those discussed in Section 5.", "Ours is competitive with the fully supervised baselines (DPT and InvPT), while the other few-shot baselines (HSNet, VAT, DGPNet) fail to learn different dense prediction tasks."], "text_after_citation": ["This can be attributed to the task-agnostic architecture of VTM based on non-parametric matching."], "citing_paper_content": {"title": "Universal Few-Shot Learning Of Dense Prediction Tasks With Visual Token Matching", "abstract": "Dense prediction tasks are a fundamental class of problems in computer vision. As supervised methods suffer from high pixel-wise labeling cost, a few-shot learning solution that can learn any dense task from a few labeled images is desired. Yet, current few-shot learning methods target a restricted set of tasks such as semantic segmentation, presumably due to challenges in designing a general and unified model that is able to flexibly and efficiently adapt to arbitrary tasks of unseen semantics.
We propose Visual Token Matching (VTM), a universal few-shot learner for arbitrary dense prediction tasks. It employs non-parametric matching on patch-level embedded tokens of images and labels that encapsulates all tasks. Also, VTM flexibly adapts to any task with a tiny amount of task-specific parameters that modulate the matching algorithm. We implement VTM as a powerful hierarchical encoder-decoder architecture involving ViT backbones where token matching is performed at multiple feature hierarchies. We experiment VTM on a challenging variant of Taskonomy dataset and observe that it robustly few-shot learns various unseen dense prediction tasks. Surprisingly, it is competitive with fully supervised baselines using only 10 labeled examples of novel tasks (0.004% of full supervision) and sometimes outperforms using 0.1% of full supervision. Codes are available at https://github.com/GitGyun/visual_token_matching."}, "cited_paper_content": {"title": "Taskonomy: Disentangling Task Transfer Learning", "abstract": "Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a structure among visual tasks. Knowing this structure has notable values; it is the concept underlying transfer learning and provides a principled way for identifying redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up the complexity. We propose a fully computational approach for modeling the structure of space of visual tasks. This is done via finding (first and higher-order) transfer learning dependencies across a dictionary of twenty six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning. We study the consequences of this structure, e.g.
nontrivial emerged relationships, and exploit them to reduce the demand for labeled data. For example, we show that the total number of labeled datapoints needed for solving a set of 10 tasks can be reduced by roughly 2/3 (compared to training independently) while keeping the performance nearly the same. We provide a set of tools for computing and probing this taxonomical structure including a solver that users can employ to devise efficient supervision policies for their use cases."}, "keywords": ["semantic segmentation"], "citation_intent": "method"} {"citing_id": "2303.11816v1", "cited_id": "1910.06711", "section_title": "Setup", "citation": "To transform the models' output Mel-spectrograms into waveforms, we use MelGAN #REFR as our vocoder.", "text_before_citation": ["We utilize LibriTTS #OTHEREFR as our pre-training dataset and VCTK #OTHEREFR as our voice-cloning dataset."], "text_after_citation": ["The implementation of our FastSpeech 2 model and the training/pruning details can be found in our GitHub repository #OTHEREFR .", "In our experiments, we mainly focus on 8-shot voice cloning, where only 8 audio samples of the target speaker are used for finetuning and pruning.", "For each speaker in VCTK, we randomly sample 8 recordings for a voice cloning task.", "We pre-train the TTS models for 40k steps with LibriTTS, followed by fine-tuning/pruning the model with 8-shot voice cloning tasks until convergence.", "The remaining recordings and their corresponding transcripts are utilized for evaluation."], "citing_paper_content": {"title": "Personalized Lightweight Text-To-Speech: Voice Cloning With Adaptive Structured Pruning", "abstract": "Personalized TTS is an exciting and highly desired application that allows users to train their TTS voice using only a few recordings. However, TTS training typically requires many hours of recording and a large model, making it unsuitable for deployment on mobile devices. 
To overcome this limitation, related works typically require fine-tuning a pre-trained TTS model to preserve its ability to generate high-quality audio samples while adapting to the target speaker's voice. This process is commonly referred to as \"voice cloning.\" Although related works have achieved significant success in changing the TTS model's voice, they are still required to fine-tune from a large pre-trained model, resulting in a significant size for the voice-cloned model. In this paper, we propose applying trainable structured pruning to voice cloning. By training the structured pruning masks with voice-cloning data, we can produce a unique pruned model for each target speaker. Our experiments demonstrate that using learnable structured pruning, we can compress the model size to 7 times smaller while achieving comparable voice-cloning performance."}, "cited_paper_content": {"title": "Melgan: Generative Adversarial Networks For Conditional Waveform Synthesis", "abstract": "Previous works (Donahue et al., 2018a; Engel et al., 2019a) have found that generating coherent raw audio waveforms with GANs is challenging. In this paper, we show that it is possible to train GANs reliably to generate high quality coherent waveforms by introducing a set of architectural changes and simple training techniques. Subjective evaluation metric (Mean Opinion Score, or MOS) shows the effectiveness of the proposed approach for high quality mel-spectrogram inversion. To establish the generality of the proposed techniques, we show qualitative results of our model in speech synthesis, music domain translation and unconditional music synthesis. We evaluate the various components of the model through ablation studies and suggest a set of guidelines to design general purpose discriminators and generators for conditional sequence synthesis tasks.
Our model is non-autoregressive, fully convolutional, with significantly fewer parameters than competing models and generalizes to unseen speakers for mel-spectrogram inversion. Our pytorch implementation runs at more than 100x faster than realtime on GTX 1080Ti GPU and more than 2x faster than real-time on CPU, without any hardware specific optimization tricks."}, "keywords": ["vocoder"], "citation_intent": "method"} {"citing_id": "2303.16694v1", "cited_id": "1403.6199", "section_title": "Introduction", "citation": "Some expand on the ideas of influence prediction discussed previously to give an estimate of the level of activity a particular topic will receive, rather than a binary indicator for whether the campaign was successful (e.g. #REFR ).", "text_before_citation": ["Complementary to attempts to understand popular topics is the identification of individual user accounts that drive these trends.", "Previous efforts have considered user metadata and posting habits online #OTHEREFR or social network position #OTHEREFR to predict user influence.", "Most such efforts are hampered, however, by weak definitions (that use a partial substitute or proxy for influence) or incomplete data (since most data samples do not capture all interactions between individuals or content exposure due to practicalities and privacy constraints).", "Thus the challenge of measuring influence remains unsolved and new efforts are required to offer a more complete picture of interpersonal and organisational influence in online spaces.", "Attempts to quantify information contagion or influence campaigns on contact networks have seen success."], "text_after_citation": ["Many studies of information diffusion online rely on the use of specific parameters such as users, keywords, hashtags and URLs (see e.g. 
#OTHEREFR", "2020 ; Cruickshank and Carley 2020)) but these necessarily miss relevant discussions that do not use the predetermined search parameters.", "Work by #OTHEREFR is of particular relevance to the aims of this manuscript.", "Their research presented a means of quantifying the flow of attention between different conversational formats, using averaged word co-occurrence of texts to measure the change in agreement after the publication of a reference text.", "Regression analysis revealed that a small, but significant, flow of words emerged from news articles shared on Facebook to speeches made by MPs in the UK Houses of Parliament."], "citing_paper_content": {"title": "Using Semantic Similarity And Text Embedding To Measure The Social Media Echo Of Strategic Communications", "abstract": "Online discourse covers a wide range of topics and many actors tailor their content to impact online discussions through carefully crafted messages and targeted campaigns. Yet the scale and diversity of online media content make it difficult to evaluate the impact of a particular message. In this paper, we present a new technique that leverages semantic similarity to quantify the change in the discussion after a particular message has been published. We use a set of press releases from environmental organisations and tweets from the climate change debate to show that our novel approach reveals a heavy-tailed distribution of response in online discourse to strategic communications."}, "cited_paper_content": {"title": "Predicting Successful Memes Using Network And Community Structure", "abstract": "We investigate the predictability of successful memes using their early spreading patterns in the underlying social networks. We propose and analyze a comprehensive set of features and develop an accurate model to predict future popularity of a meme given its early spreading patterns. Our paper provides the first comprehensive comparison of existing predictive frameworks. 
We categorize our features into three groups: influence of early adopters, community concentration, and characteristics of adoption time series. We find that features based on community structure are the most powerful predictors of future success. We also find that early popularity of a meme is not a good predictor of its future popularity, contrary to common belief. Our methods outperform other approaches, particularly in the task of detecting very popular or unpopular memes."}, "keywords": ["influence prediction"], "citation_intent": "background"} {"citing_id": "2303.16485v1", "cited_id": "2003.08934", "section_title": "Approach", "citation": "Finally, we combine NeRF #REFR by querying point features from TriVol at sampled locations to render the final images. An overview of our method is illustrated in Fig. 2 .", "text_before_citation": ["We aim to train a category-specific point renderer R to directly generate photo-realistic images I (the image height and width are denoted as H and W ) from the colored point cloud P , given camera parameters (intrinsic parameter K and extrinsic parameters R and t).", "When rendering novel point clouds of the same category, no fine-tuning process is required. The rendering process can be represented as", "EQUATION", "where P is usually obtained from MVS #OTHEREFR , LiDAR scanners #OTHEREFR , or sampled from synthesized mesh models.", "In this section, we first encode the point cloud as the proposed TriVol, then utilize three 3D UNet to decode it into the feature representation."], "text_after_citation": [], "citing_paper_content": {"title": "Trivol: Point Cloud Rendering Via Triple Volumes", "abstract": "Figure 1. 
Given the colored point cloud of a category-specific scene or object, our TriVol can render photo-realistic images."}, "cited_paper_content": {"title": "Nerf: Representing Scenes As Neural Radiance Fields For View Synthesis", "abstract": "We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\\theta, \\phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. 
View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons."}, "keywords": ["point features"], "citation_intent": "method"} {"citing_id": "2303.15585v1", "cited_id": "1905.12843", "section_title": "Conducting The Literature Review", "citation": "Specifically, Google's #REFR and Meta's RAI principles talk about \"fairness and inclusion\", while Amazon's RAI principles promote \"diversity, equity, and inclusion\" through \"detecting bias\".", "text_before_citation": ["That excludes the first part of the query, which tries to match terms such as wearable(s) or mobile(s) only in the papers' meta-data, as seen in Figure 3 .", "Query Definition.", "For the definition of our query, we followed similar terminology with relevant review papers in the fairness literature #OTHEREFR .", "Additionally, according to Fjeld et al.'s analysis of prominent AI principles documents, #OTHEREFR , \"the fairness and non-discrimination theme is the most highly represented theme in our dataset, with every document referencing at least one of its six principles: \"non-discrimination and the prevention of bias\", \"representative and high-quality data\", \"fairness\", \"equality\", \"inclusiveness in impact\", and \"inclusiveness in design\", mostly included in our query's coverage.", "To capture the industrial perspective, we consulted the Responsible Artificial Intelligence (RAI) white papers issued by large tech companies."], "text_after_citation": ["Similarly, Nokia's #OTHEREFR RAI fairness pillar talks about \"fairness, non-discrimination, accessibility, and inclusivity\", and Intel's RAI pillars mention \"enabling ethical and equitable AI\".", "Thus, an iterative refinement process resulted in the query shown in Figure 3 .", "Eligibility Assessment.", "To further validate our query, we manually inspected all publications from the latest IMWUT proceedings (Volume 6, Issue 4, published in January 2023) ( = 56) to identify
eligible papers for inclusion (see inclusion and exclusion criteria below).", "In total, we identified seven relevant publications, all of which were also returned by our query."], "citing_paper_content": {"title": "Beyond Accuracy: A Critical Review Of Fairness In Machine Learning For Mobile And Wearable Computing", "abstract": "The field of mobile, wearable, and ubiquitous computing (UbiComp) is undergoing a revolutionary integration of machine learning. Devices can now diagnose diseases, predict heart irregularities, and unlock the full potential of human cognition. However, the underlying algorithms are not immune to biases with respect to sensitive attributes (e.g., gender, race), leading to discriminatory outcomes. The research communities of HCI and AI-Ethics have recently started to explore ways of reporting information about datasets to surface and, eventually, counter those biases. The goal of this work is to explore the extent to which the UbiComp community has adopted such ways of reporting and highlight potential shortcomings. Through a systematic review of papers published in the Proceedings of the ACM Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) journal over the past 5 years (2018-2022), we found that progress on algorithmic fairness within the UbiComp community lags behind. Our findings show that only a small portion (5%) of published papers adheres to modern fairness reporting, while the overwhelming majority thereof focuses on accuracy or error metrics. In light of these findings, our work provides practical guidelines for the design and development of ubiquitous technologies that not only strive for accuracy but also for fairness. 
CCS Concepts: \u2022 Human-centered computing \u2192 Ubiquitous and mobile computing; \u2022 Applied computing \u2192 Consumer health; \u2022 Computing methodologies \u2192 Artificial intelligence; \u2022 Social and professional topics \u2192 Codes of ethics."}, "cited_paper_content": {"title": "Fair Regression: Quantitative Definitions And Reduction-Based Algorithms", "abstract": "In this paper, we study the prediction of a real-valued target, such as a risk score or recidivism rate, while guaranteeing a quantitative notion of fairness with respect to a protected attribute such as gender or race. We call this class of problems \\emph{fair regression}. We propose general schemes for fair regression under two notions of fairness: (1) statistical parity, which asks that the prediction be statistically independent of the protected attribute, and (2) bounded group loss, which asks that the prediction error restricted to any protected group remain below some pre-determined level. While we only study these two notions of fairness, our schemes are applicable to arbitrary Lipschitz-continuous losses, and so they encompass least-squares regression, logistic regression, quantile regression, and many other tasks. Our schemes only require access to standard risk minimization algorithms (such as standard classification or least-squares regression) while providing theoretical guarantees on the optimality and fairness of the obtained solutions. In addition to analyzing theoretical properties of our schemes, we empirically demonstrate their ability to uncover fairness--accuracy frontiers on several standard datasets."}, "keywords": ["fairness"], "citation_intent": "background"} {"citing_id": "2304.04559v1", "cited_id": "1911.10414", "section_title": "Proof Of Concept (Tracking)", "citation": "We confirmed that the pose is successfully tracked without drifting #REFR . 
Finally, the pose estimation error was (0.13\u00b0, 0.0003).", "text_before_citation": ["For the POC of the proposed idea, we applied TeGRA for tracking. We used seq0 from mix scene.", "We use estimated pose & motion from the previous timestep to initialize (S ini t ,\u1e60 ini t ) in the next timestep as discussed in Sec.3.3. The results are shown in Fig.6 ."], "text_after_citation": ["The average number of events per pose update was 0.2% of the entire pixel."], "citing_paper_content": {"title": "Event-Based Camera Tracker By \u2207 T Nerf", "abstract": "When a camera travels across a 3D world, only a fraction of pixel value changes; an event-based camera observes the change as sparse events. How can we utilize sparse events for efficient recovery of the camera pose? We show that we can recover the camera pose by minimizing the error between sparse events and the temporal gradient of the scene represented as a neural radiance field (NeRF). To enable the computation of the temporal gradient of the scene, we augment NeRF's camera pose as a time function. When the input pose to the NeRF coincides with the actual pose, the output of the temporal gradient of NeRF equals the observed intensity changes on the event's points. Using this principle, we propose an event-based camera pose tracking framework called TeGRA which realizes the pose update by using the sparse event's observation. To the best of our knowledge, this is the first camera pose estimation algorithm using the scene's implicit representation and the sparse intensity change from events."}, "cited_paper_content": {"title": "Sal: Sign Agnostic Learning Of Shapes From Raw Data", "abstract": "Recently, neural networks have been used as implicit representations for surface reconstruction, modelling, learning, and generation. 
So far, training neural networks to be implicit representations of surfaces required training data sampled from ground-truth signed implicit functions such as signed distance or occupancy functions, which are notoriously hard to compute. In this paper we introduce Sign Agnostic Learning (SAL), a deep learning approach for learning implicit shape representations directly from raw, unsigned geometric data, such as point clouds and triangle soups. We have tested SAL on the challenging problem of surface reconstruction from an un-oriented point cloud, as well as end-to-end human shape space learning directly from raw scans dataset, and achieved state of the art reconstructions compared to current approaches. We believe SAL opens the door to many geometric deep learning applications with real-world data, alleviating the usual painstaking, often manual pre-process."}, "keywords": ["pose estimation error", "pose"], "citation_intent": "background"} {"citing_id": "2303.12255v1", "cited_id": "1711.00848", "section_title": "Background And Related Work", "citation": "DIP-VAE #REFR uses the concept of \"moment\" to predict the future posterior, and uses this extra information to move the prior towards the approximate posterior.", "text_before_citation": ["We only show two out of infinite true posteriors here to simplify the representation.", "(iii) Reducing the size of Gap 3 with adaptable priors: GMVAE #OTHEREFR creates a finite set of candidate priors.", "disentangling \u03b2-VAE #OTHEREFR creates an infinite set of candidate priors on an \"equal KL-divergence\" line (see Figure 3 row 2 column 1 all dashed curves for clarity).", "(iv) Reducing the size of Gap 3 by adapting the prior to approximate the posterior: VAMP #OTHEREFR has a mixture prior, and uses aggregate posterior #OTHEREFR to move prior towards the approximated posterior during training.", "\u03b2-TCVAE #OTHEREFR adds a regularization term to optimize the prior towards the posterior during training."],
"text_after_citation": ["FactorVAE #OTHEREFR adds a regularization term to factorize prior to adapt to low-variance posterior.", "(v) Reducing the size of Gap 2R and Gap 3R with a low-variance local prior: To achieve this goal without increasing Gap 2L and 3L, having many low-variance approximate posteriors is necessary to approximate the highvariance distribution.", "These posteriors result in an inaccurate approximation of hte high-level factors when a model is small. The extreme case is Autoencoders #OTHEREFR .", "WAE #OTHEREFR only maintains prior-like global approximate posteriors, but reduces the variance of local posteriors.", "VQVAE #OTHEREFR nears the extreme with a no-variance quantized latent prior."], "citing_paper_content": {"title": "Encoding Binary Concepts In The Latent Space Of Generative Models For Enhancing Data Representation", "abstract": "Binary concepts 1 are empirically used by humans to generalize efficiently. And they are based on Bernoulli distribution which is the building block of information. These concepts span both low-level and high-level features such as \"large vs small\" and \"a neuron is active or inactive\". Binary concepts are ubiquitous features and can be used to transfer knowledge to improve model generalization. We propose a novel binarized regularization to facilitate learning of binary concepts to improve the quality of data generation in autoencoders. We introduce a binarizing hyperparameter r in data generation process to disentangle the latent space symmetrically. We demonstrate that this method can be applied easily to existing variational autoencoder (VAE) variants to encourage symmetric disentanglement, improve reconstruction quality, and prevent posterior collapse without computation overhead. 
We also demonstrate that this method can boost existing models to learn more transferable representations and generate more representative samples for the input distribution which can alleviate catastrophic forgetting using generative replay under continual learning settings."}, "cited_paper_content": {"title": "Variational Inference Of Disentangled Latent Concepts From Unlabeled Observations", "abstract": "Disentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc. We consider the problem of unsupervised learning of disentangled representations from large pool of unlabeled observations, and propose a variational inference based approach to infer disentangled latent factors. We introduce a regularizer on the expectation of the approximate posterior over observed data that encourages the disentanglement. 
We evaluate the proposed approach using several quantitative metrics and empirically observe significant gains over existing methods in terms of both disentanglement and data likelihood (reconstruction quality)."}, "keywords": ["approximate posterior"], "citation_intent": "background"} {"citing_id": "2305.00606v1", "cited_id": "1804.08771", "section_title": "Experiments", "citation": "We used the SacreBLEU #REFR implementation of the BLEU metric to evaluate the models.", "text_before_citation": ["We split our dataset into different size configurations and in each configuration, the model is trained in the directions Fr\u2192Wo and Fr\u2192En until it reaches convergence.", "Convergence is considered to be reached when no improvement is observed on the validation set after 6 checkpoints.", "For data subwording, we used SentencePiece #OTHEREFR with Byte-Pair Encoding (BPE) which offers interesting performance gains in agglutinative languages like Wolof #OTHEREFR .", "We then generated a vocabulary on all segments of the considered size configuration's training set and performed an automatic model evaluation using BLEU #OTHEREFR .", "BLEU is the most widely used metric in NMT in view of the fairly high correlation it has with human evaluations."], "text_after_citation": [], "citing_paper_content": {"title": "Low-Resourced Machine Translation For Senegalese Wolof Language", "abstract": "Natural Language Processing (NLP) research has made great advancements in recent years with major breakthroughs that have established new benchmarks. However, these advances have mainly benefited a certain group of languages commonly referred to as resource-rich such as English and French. Majority of other languages with weaker resources are then left behind which is the case for most African languages including Wolof.
In this work, we present a parallel Wolof/French corpus of 123,000 sentences on which we conducted experiments on machine translation models based on Recurrent Neural Networks (RNN) in different data configurations. We noted performance gains with the models trained on subworded data as well as those trained on the French-English language pair compared to those trained on the French-Wolof pair under the same experimental conditions."}, "cited_paper_content": {"title": "A Call For Clarity In Reporting Bleu Scores", "abstract": "The field of machine translation faces an under-recognized problem because of inconsistency in the reporting of scores from its dominant metric. Although people refer to\"the\"BLEU score, BLEU is in fact a parameterized metric whose values can vary wildly with changes to these parameters. These parameters are often not reported or are hard to find, and consequently, BLEU scores between papers cannot be directly compared. I quantify this variation, finding differences as high as 1.8 between commonly used configurations. The main culprit is different tokenization and normalization schemes applied to the reference. 
Pointing to the success of the parsing community, I suggest machine translation researchers settle upon the BLEU scheme used by the annual Conference on Machine Translation (WMT), which does not allow for user-supplied reference processing, and provide a new tool, SacreBLEU, to facilitate this."}, "keywords": ["SacreBLEU implementation"], "citation_intent": "method"} {"citing_id": "2303.17707v1", "cited_id": "2002.01650", "section_title": "Plausibility Is A Misaligned Objective For Xai Evaluation And Optimization", "citation": "This may be due to the fact that post-hoc XAI algorithms only summarize partial decision information from the model, whereas information on the full decision process is scattered throughout the model #REFR .", "text_before_citation": ["Doing so is misleading, harmful, and cannot achieve its expected explanation goals of understandability, trustworthiness, and transparency.", "The reasons are as follows: First, using plausibility to evaluate XAI algorithms is based on the assumption that the explanation truthfully reflects the AI model decision process #OTHEREFR .", "Users (including both technical users and non-technical users) hold this implicit assumption for AI explanations when interpreting them and assessing their plausibility.", "But explanation truthfulness or faithfulness is not an intrinsic or de facto property of XAI algorithms, and this assumption may not hold true unless it is explicitly validated #OTHEREFR .", "Indeed, prior systematic evaluations on posthoc XAI algorithms #OTHEREFR -algorithms that act as surrogate models to explain for a black-box AI model -show that post-hoc XAI algorithms do not truthfully reflect the decision process of the to-be-explained AI models #OTHEREFR ."], "text_after_citation": ["Intrinsically interpretable AI models -models that incorporate interpretability into their decision processdo not guarantee the truthfulness assumption, and should be explicitly validated for truthfulness of its 
explanations as well #OTHEREFR .", "If we select or optimize XAI algorithms solely for their plausibility while violating the truthfulness assumption, the resultant explanation can be misleading: without the truthfulness constraint, the XAI algorithm can be purposely optimized to generate explanations that are close to those of humans, at a time when the model's explanation is not even relevant to its underlying decision process.", "Such explanation cannot provide users with any insightful information about the model decision process to make the AI model more transparent.", "Furthermore, since users still hold the truthfulness assumption for AI explanation, the seemingly plausible explanation may deceitfully persuade users to trust the AI model and adopt its decisions, despite being potentially wrong #OTHEREFR .", "In highstakes applications, this may even lead to harmful consequences."], "citing_paper_content": {"title": "Rethinking Ai Explainability And Plausibility", "abstract": "Setting proper evaluation objectives for explainable artificial intelligence (XAI) is vital for making XAI algorithms follow human communication norms, support human reasoning processes, and fulfill human needs for AI explanations. In this article, we examine explanation plausibility, which is the most pervasive human-grounded concept in XAI evaluation. Plausibility measures how reasonable the machine explanation is compared to the human explanation. Plausibility has been conventionally formulated as an important evaluation objective for AI explainability tasks. We argue against this idea, and show how optimizing and evaluating XAI for plausibility is sometimes harmful, and always ineffective to achieve model understandability, transparency, and trustworthiness. 
Specifically, evaluating XAI algorithms for plausibility regularizes the machine explanation to express exactly the same content as human explanation, which deviates from the fundamental motivation for humans to explain: expressing similar or alternative reasoning trajectories while conforming to understandable forms or language. Optimizing XAI for plausibility regardless of the model decision correctness also jeopardizes model trustworthiness, as doing so breaks an important assumption in human-human explanation namely that plausible explanations typically imply correct decisions, and violating this assumption eventually leads to either undertrust or overtrust of AI models. Instead of being the end goal in XAI evaluation, plausibility can serve as an intermediate computational proxy for the human process of interpreting explanations to optimize the utility of XAI. We further highlight the importance of explainability-specific evaluation objectives by differentiating the AI explanation task from the object localization task."}, "cited_paper_content": {"title": "Concept Whitening For Interpretable Image Recognition", "abstract": "What does a neural network encode about a concept as we traverse through the layers? Interpretability in machine learning is undoubtedly important, but the calculations of neural networks are very challenging to understand. Attempts to see inside their hidden layers can either be misleading, unusable, or rely on the latent space to possess properties that it may not have. In this work, rather than attempting to analyze a neural network posthoc, we introduce a mechanism, called concept whitening (CW), to alter a given layer of the network to allow us to better understand the computation leading up to that layer. When a concept whitening module is added to a CNN, the axes of the latent space can be aligned with concepts of interest. 
By experiment, we show that CW can provide us a much clearer understanding for how the network gradually learns concepts over layers without hurting predictive performance."}, "keywords": ["partial decision information"], "citation_intent": "background"} {"citing_id": "2304.07175v1", "cited_id": "1710.08092", "section_title": "Skin Tone And False Match Pairs", "citation": "It is based on the popular ResNet-50 network structure and trained on the VGGFace2 dataset #REFR with standard softmax loss.", "text_before_citation": ["(In light of the recent work of Albiero #OTHEREFR , we might now add that differing social conventions for hairstyle present another confounding effect.) In Issues Related to Face Recognition Accuracy Varying Based on Race and Skin Tone, Krishnapriya et al.", "#OTHEREFR published the first experiment intended to isolate the effect of skin tone alone on accuracy.", "To directly test the premise that face recognition is less accurate for darker skin tones, they examined a
range of tones within the single demographic of African American male (AAM) using the MORPH dataset.", "They used two matchers to produce AAM impostor distributions: ArcFace (as described in Section 0.2) and a publicly-available VGGFace2 model #OTHEREFR .", "VGGFace2 is representative of the state-of-the-art in CNN matchers prior to ArcFace."], "text_after_citation": ["Faces are detected, aligned, and resized to 224\u00d7224 pixels, and a 2048-d feature vector is taken from the next-to-last layer.", "As with ArcFace, cosine similarity is measured between feature vectors.", "Krishnapriya et al.", "compared the frequency of images with darker skin tone in two regions of the AAM impostor distribution.", "The high-similarity tail (HST) is the region containing the non-mated image pairs that are most likely to cause false matches."], "citing_paper_content": {"title": "Exploring Causes Of Demographic Variations In Face Recognition Accuracy", "abstract": "FIGURE 1 Apple's Face ID ad campaign touted ease of use and improved security for face recognition over the prior fingerprint standard, and made bold promises. 0.1 Introduction Automated facial recognition (FR) technology dates back to the early 1970s, with Takeo Kanade's 1973 Ph.D. thesis Picture Processing System by Computer Complex and Recognition of Human Faces [1] often cited as an early landmark work. However, it was not until the late 2010s that increased availability and power of FR technology increased its routine usage. In 2017, Apple introduced the iPhone X as the \"smartphone industry's benchmark\" [2], with their new facial identification system, Face ID, as a primary innovation and selling point [3, 4] (Figure 1). 
The corresponding security guide [5] even claimed \"the probability that a random person in the population could look at your iPhone X and unlock it using Face ID is approximately 1 in 1,000,000 (versus 1 in 50,000 for Touch ID).\" Yet a security flaw was quickly publicized: some Chinese users reported that their iPhones opened for other, non-authorized individuals [6, 7]. The underpinnings of these incidents would soon be explored by the research community, leading to a major question: does face recognition perform equally across all demographics? In 2018, Buolamwini and Gebru explored a related question in Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification [8]. Their work evaluated the accuracy of commercial gender classification software. For one classifier, they reported that error rates for lighter-skinned males were only 0.8%, while error rates for darker-skinned females were dramatically higher, at 34.7%. Provocative headlines like \"Facial Recognition Is Accurate, if You're a White Guy\" [9] sparked public interest in Buolamwini and Gebru's work. Media coverage generally failed to make any distinction between gender classification, the task of assigning a gender label to one face image, and face recognition, the task of deciding whether or not two face images are from the same person. Government research organizations quickly addressed the growing public concern around possible \"bias\" in face recognition accuracy. As part of their 2018 biometric technology rally, the Department of Homeland Security assessed the effect of demographic factors on performance of commercial face biometric systems, as measured by transaction times and by similarity scores of pairs of images of the same person [10]."}, "cited_paper_content": {"title": "Vggface2: A Dataset For Recognising Faces Across Pose And Age", "abstract": "In this paper, we introduce a new large-scale face dataset named VGGFace2. 
The dataset contains 3.31 million images of 9131 subjects, with an average of 362.6 images for each subject. Images are downloaded from Google Image Search and have large variations in pose, age, illumination, ethnicity and profession (e.g. actors, athletes, politicians). The dataset was collected with three goals in mind: (i) to have both a large number of identities and also a large number of images for each identity; (ii) to cover a large range of pose, age and ethnicity; and (iii) to minimize the label noise. We describe how the dataset was collected, in particular the automated and manual filtering stages to ensure a high accuracy for the images of each identity. To assess face recognition performance using the new dataset, we train ResNet-50 (with and without Squeeze-and-Excitation blocks) Convolutional Neural Networks on VGGFace2, on MS-Celeb-1M, and on their union, and show that training on VGGFace2 leads to improved recognition performance over pose and age. Finally, using the models trained on these datasets, we demonstrate state-of-the-art performance on the IJB-A and IJB-B face recognition benchmarks, exceeding the previous state-of-the-art by a large margin.
Datasets and models are publicly available."}, "keywords": ["VGGFace2 dataset"], "citation_intent": "method"} {"citing_id": "2303.04573v1", "cited_id": "1811.11597", "section_title": "Conclusions And Future Work", "citation": "One aspect in which they can prove useful is in the training of algorithm selection models #REFR , as they can significantly increase the size and variety of training data, which is an important consideration towards testing generalizability.", "text_before_citation": ["This is achieved through several transformations, the most common of which is moving the optimum to a different location in the domain.", "The results we present show that the way in which these optimal locations are chosen can have a large impact on the performance of optimization algorithms.", "Since the optima are not distributed uniformly in the domain, some functions have different kinds of bias, which can be exploited by an algorithm.", "The question on how to fairly consider different instance generation mechanisms when making use of function combination is thus highly interlinked with questions about how well performance observed on a set of BBOB instances generalizes.", "Even with these challenges in mind, there are many potential use cases for these affine function combinations."], "text_after_citation": ["One final aspect in which the benchmark data on these function combinations can be further utilized is by linking it back to the exploratory landscape analysis which inspired their creation.", "Since the combinations can smoothly fill the landscape feature space, this can be combined with algorithm performance to get a more fine-grained view of the way in which the landscape interacts with different algorithms #OTHEREFR ."], "citing_paper_content": {"title": "Using Affine Combinations Of Bbob Problems For Performance Assessment", "abstract": "Benchmarking plays a major role in the development and analysis of optimization algorithms. 
As such, the way in which the used benchmark problems are defined significantly affects the insights that can be gained from any given benchmark study. One way to easily extend the range of available benchmark functions is through affine combinations between pairs of functions. From the perspective of landscape analysis, these function combinations smoothly transition between the two base functions. In this work, we show how these affine function combinations can be used to analyze the behavior of optimization algorithms. In particular, we highlight that by varying the weighting between the combined problems, we can gain insights into the effects of added global structure on the performance of optimization algorithms. By analyzing performance trajectories on more function combinations, we also show that aspects such as the scaling of objective functions and placement of the optimum can greatly impact how these results are interpreted."}, "cited_paper_content": {"title": "Automated Algorithm Selection: Survey And Perspectives", "abstract": "It has long been observed that for practically any computational problem that has been intensely studied, different instances are best solved using different algorithms. This is particularly pronounced for computationally hard problems, where in most cases, no single algorithm defines the state of the art; instead, there is a set of algorithms with complementary strengths. This performance complementarity can be exploited in various ways, one of which is based on the idea of selecting, from a set of given algorithms, for each problem instance to be solved the one expected to perform best. 
The task of automatically selecting an algorithm from a given set is known as the per-instance algorithm selection problem and has been intensely studied over the past 15 years, leading to major improvements in the state of the art in solving a growing number of discrete combinatorial problems, including propositional satisfiability and AI planning. Per-instance algorithm selection also shows much promise for boosting performance in solving continuous and mixed discrete/continuous optimisation problems. This survey provides an overview of research in automated algorithm selection, ranging from early and seminal works to recent and promising application areas. Different from earlier work, it covers applications to discrete and continuous problems, and discusses algorithm selection in context with conceptually related approaches, such as algorithm configuration, scheduling or portfolio selection. Since informative and cheaply computable problem instance features provide the basis for effective per-instance algorithm selection systems, we also provide an overview of such features for discrete and continuous problems. Finally, we provide perspectives on future work in the area and discuss a number of open research challenges."}, "keywords": ["algorithm selection models"], "citation_intent": "background"} {"citing_id": "2303.09530v1", "cited_id": "1708.02002", "section_title": "B. Network Architectures And Setup", "citation": "And we employ focal loss #REFR to increase weighting of detections that are especially hard to classify correctly and to better address the severe class imbalance.", "text_before_citation": ["We choose to use the position in the Cartesian vehicle coordinate system (CS) at the time of recording the most recent scan in the point cloud. 
The timestamp is relative to that moment.", "Moreover, we add the detection's position in the polar sensor CS, which is more useful in some cases, and the measuring sensor's ID.", "When upsampling of the input point cloud is required, that is done by duplicating randomly selected detections. Finally, we also adjust the training setup. Input data is normalized via standardization.", "To prevent a distortion of following distance calculations, the variances of the Cartesian coordinates are averaged before scaling.", "The learning rate is varied between 10^\u22129 and 10^\u22123 according to a cyclical learning rate policy."], "text_after_citation": ["The remaining training parameters are as described in #OTHEREFR ."], "citing_paper_content": {"title": "Tackling Clutter In Radar Data -Label Generation And Detection Using Pointnet++", "abstract": "Radar sensors employed for environment perception, e.g. in autonomous vehicles, output a lot of unwanted clutter. These points, for which no corresponding real objects exist, are a major source of errors in following processing steps like object detection or tracking. We therefore present two novel neural network setups for identifying clutter. The input data, network architectures and training configuration are adjusted specifically for this task. Special attention is paid to the downsampling of point clouds composed of multiple sensor scans. In an extensive evaluation, the new setups display substantially better performance than existing approaches. Because there is no suitable public data set in which clutter is annotated, we design a method to automatically generate the respective labels. By applying it to existing data with object annotations and releasing its code, we effectively create the first freely available radar clutter data set representing real-world driving scenarios.
Code and instructions are accessible at www.github.com/kopp-j/clutter-ds."}, "cited_paper_content": {"title": "Focal Loss For Dense Object Detection", "abstract": "The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors. Code is at: https://github.com/facebookresearch/Detectron ."}, "keywords": ["detections", "focal loss"], "citation_intent": "method"} {"citing_id": "2303.12319v1", "cited_id": "1902.04043", "section_title": "A. 
Marl Benchmarks", "citation": "For example, SMAC #REFR is based on the popular real-time strategy game StarCraft II, focuses on micromanagement challenges, and is applicable to studying cooperative MARL.", "text_before_citation": ["Multifarious emerging benchmarks have accelerated MARL research in recent years and provide various evaluation criteria for different application scenarios and research domains.", "Some game platforms have been the most popular benchmarks for evaluating MARL algorithms."], "text_after_citation": ["GRF #OTHEREFR , an environment for playing football tasks of varying difficulty in a physics-based 3D simulation, focuses on multi-level, multi-agent learning.", "Wimblepong #OTHEREFR is a 2-player version of the Atari game Pong, where each player controls a paddle to play a ball with the other, and it is a purely competitive scenario.", "Some environments evolve from single-agent tasks, which decompose single-agent control tasks into multi-agent tasks.", "For instance, MaMujoco #OTHEREFR , based on a single-agent robotic MuJoCo control suite, provides a wide variety of continuous multi-agent robotic control scenarios in which multiple agents within a single robot try to complete a task cooperatively.", "DM Control #OTHEREFR is a set of Python RL environments powered by the MuJoCo physics engine and includes multi-agent soccer simulation environments."], "citing_paper_content": {"title": "Neuronsmae: A Novel Multi-Agent Reinforcement Learning Environment For Cooperative And Competitive Multi-Robot Tasks", "abstract": "Multi-agent reinforcement learning (MARL) has achieved remarkable success in various challenging problems. Meanwhile, more and more benchmarks have emerged and provided some standards to evaluate the algorithms in different fields. 
On the one hand, the virtual MARL environments lack knowledge of real-world tasks and actuator abilities, and on the other hand, the current task-specified multi-robot platform has poor support for the generality of multi-agent reinforcement learning algorithms and lacks support for transferring from simulation to the real environment. Bridging the gap between the virtual MARL environments and the real multi-robot platform becomes the key to promoting the practicability of MARL algorithms. This paper proposes a novel MARL environment for real multi-robot tasks named NeuronsMAE (Neurons Multi-Agent Environment). This environment supports cooperative and competitive multi-robot tasks and is configured with rich parameter interfaces to study the multi-agent policy transfer from simulation to reality. With this platform, we evaluate various popular MARL algorithms and build a new MARL benchmark for multi-robot tasks. We hope that this platform will facilitate the research and application of MARL algorithms for real robot tasks. Information about the benchmark and the open-source code will be released. Index Terms-multi-agent reinforcement learning, benchmark, multi-robot."}, "cited_paper_content": {"title": "The Starcraft Multi-Agent Challenge", "abstract": "In the last few years, deep multi-agent reinforcement learning (RL) has become a highly active area of research. A particularly challenging class of problems in this area is partially observable, cooperative, multi-agent learning, in which teams of agents must learn to coordinate their behaviour while conditioning only on their private observations. This is an attractive research area since such problems are relevant to a large number of real-world systems and are also more amenable to evaluation than general-sum problems. Standardised environments such as the ALE and MuJoCo have allowed single-agent RL to move beyond toy domains, such as grid worlds.
However, there is no comparable benchmark for cooperative multi-agent RL. As a result, most papers in this field use one-off toy problems, making it difficult to measure real progress. In this paper, we propose the StarCraft Multi-Agent Challenge (SMAC) as a benchmark problem to fill this gap. SMAC is based on the popular real-time strategy game StarCraft II and focuses on micromanagement challenges where each unit is controlled by an independent agent that must act based on local observations. We offer a diverse set of challenge maps and recommendations for best practices in benchmarking and evaluations. We also open-source a deep multi-agent RL learning framework including state-of-the-art algorithms. We believe that SMAC can provide a standard benchmark environment for years to come. Videos of our best agents for several SMAC scenarios are available at: https://youtu.be/VZ7zmQ_obZ0."}, "keywords": ["cooperative MARL", "StarCraft II"], "citation_intent": "method"} {"citing_id": "2303.14184v1", "cited_id": "1906.08240", "section_title": "B.2. Refine Stage", "citation": "Following #REFR , we rasterize neural points V to multi-scale feature maps S(i, V ), i \u2208 [0, K), K = 3.", "text_before_citation": ["Point cloud rasterization."], "text_after_citation": ["We use a differentiable point rasterizer implemented by PyTorch3D #OTHEREFR to assign every pixel a neural descriptor and a binary scalar that indicates a non-empty pixel.", "We consider the binary mask as a point-based occupancy mask. Background regularization.", "To handle pixels without corresponding point cloud projection, we assign a learnable descriptor as the background.", "During texture enhancement optimization, we additionally add a regularization to encourage the scene to be rendered with a white background according to the binary occupancy mask mentioned above. 
Deferred neural rendering.", "For deferred rendering of the point clouds, we use a 2D U-Net architecture with gated convolutions #OTHEREFR ."], "citing_paper_content": {"title": "Make-It-3D: High-Fidelity 3D Creation From A Single Image With Diffusion Prior", "abstract": "Reference Normal Novel Views Reference Normal Novel Views Figure 1: Make-It-3D can create high-fidelity 3D content from only a single image. We show the normal map and novel-view renderings of created 3D content, showcasing fine geometry and faithful textures with stunning quality at novel views."}, "cited_paper_content": {"title": "Neural Point-Based Graphics", "abstract": "We present a new point-based approach for modeling the appearance of real scenes. The approach uses a raw point cloud as the geometric representation of a scene, and augments each point with a learnable neural descriptor that encodes local geometry and appearance. A deep rendering network is learned in parallel with the descriptors, so that new views of the scene can be obtained by passing the rasterizations of a point cloud from new viewpoints through this network. The input rasterizations use the learned descriptors as point pseudo-colors. We show that the proposed approach can be used for modeling complex scenes and obtaining their photorealistic views, while avoiding explicit surface estimation and meshing. 
In particular, compelling results are obtained for scenes scanned using hand-held commodity RGB-D sensors as well as standard RGB cameras even in the presence of objects that are challenging for standard mesh-based modeling."}, "keywords": ["multi-scale feature maps"], "citation_intent": "method"} {"citing_id": "2303.09956v1", "cited_id": "1810.00826", "section_title": "Embedding Generation", "citation": "Then, we adopt a GNN model, namely graph isomorphism network (GIN) #REFR , to extract the structural information of cell graphs.", "text_before_citation": ["After constructing the cell graphs, we generate the embeddings of cell nodes and input images for the report generation task.", "For embedding generation of cell nodes, we first use a trainable CNN to process the extracted cell nodes in Section 3.1 and obtain the output", "EQUATION", "|V| ] , which captures morphology features of cell nodes.", "In this way, the network can learn to extract essential morphology features for report generation during training."], "text_after_citation": ["Specifically, one layer of the GNN model is defined as follows:", "EQUATION", "where h_i^(l) denotes the hidden embedding of cell node i in a cell graph at the l-th layer.", "\u03b5^(l) is a learnable parameter to distinguish central nodes from neighbors.", "N_i denotes the set of neighbors of cell node i. MLP denotes a multi-layer perceptron. H^(0) is the input of the GNN model."], "citing_paper_content": {"title": "Gnnformer: A Graph-Based Framework For Cytopathology Report Generation Technical Report", "abstract": "Cytopathology report generation is a necessary step for the standardized examination of pathology images. However, manually writing detailed reports brings heavy workloads for pathologists. To improve efficiency, some existing works have studied automatic generation of cytopathology reports, mainly by applying image caption generation frameworks with visual encoders originally proposed for natural images.
A common weakness of these works is that they do not explicitly model the structural information among cells, which is a key feature of pathology images and provides significant information for making diagnoses. In this paper, we propose a novel graph-based framework called GNNFormer, which seamlessly integrates graph neural network (GNN) and Transformer into the same framework, for cytopathology report generation. To the best of our knowledge, GNNFormer is the first report generation method that explicitly models the structural information among cells in pathology images. It also effectively fuses structural information among cells, fine-grained morphology features of cells and background features to generate high-quality reports. Experimental results on the NMI-WSI dataset show that GNNFormer can outperform other state-of-the-art baselines."}, "cited_paper_content": {"title": "How Powerful Are Graph Neural Networks?", "abstract": "Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. 
We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance."}, "keywords": ["cell graphs", "namely graph isomorphism"], "citation_intent": "method"} {"citing_id": "2303.05205v1", "cited_id": "1801.01290", "section_title": "Gridzero", "citation": "Additionally, we incorporate the entropy loss term H(p k \u03b8 ), following the SAC's settings, to encourage exploration in the learning process #REFR .", "text_before_citation": ["The loss function is computed over a horizon of K unrolled steps.", "The reward loss l r measures the difference between the estimated reward r k and target reward u k .", "Similarly, The value loss l v indicates the difference between the estimated value v k and bootstrapped target value z k .", "The policy loss l p represents the distance between the output policy p k and the root visit distribution of MCTS \u03c0 k .", "In order to address the challenge of insufficient supervisory signal in dynamic networks in MuZero, we introduce consistency loss l c which maximizes the similarity between the predicted next-state\u015d k+1 and the ground-truth next-state s k+1 ."], "text_after_citation": [], "citing_paper_content": {"title": "Real-Time Scheduling Of Renewable Power Systems Through Planning-Based Reinforcement Learning", "abstract": "The growing renewable energy sources have posed significant challenges to traditional power scheduling. It is difficult for operators to obtain accurate day-ahead forecasts of renewable generation, thereby requiring the future scheduling system to make real-time scheduling decisions aligning with ultra-short-term forecasts. Restricted by the computation speed, traditional optimization-based methods can not solve this problem. 
Recent developments in reinforcement learning (RL) have demonstrated the potential to solve this challenge. However, the existing RL methods are inadequate in terms of constraint complexity, algorithm performance, and environment fidelity. We are the first to propose a systematic solution."}, "cited_paper_content": {"title": "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning With A Stochastic Actor", "abstract": "Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds."}, "keywords": ["learning process", "entropy loss term"], "citation_intent": "method"} {"citing_id": "2303.14082v1", "cited_id": "1907.00123", "section_title": "I.
Introduction", "citation": "Specifically, the authors in #REFR consider a two-cell network and propose a centralized DRL-based approach, where only a single agent is employed to control the beamformers and transmit power of both the two cells.", "text_before_citation": ["In #OTHEREFR - #OTHEREFR , the DL-based approaches exploiting expert knowledge, i.e., the known structure of optimal solutions, are studied for the beamforming optimization in a single-cell MU-MISO downlink system.", "Particularly, the exploitation of expert knowledge can improve the performance of the DL-based approaches #OTHEREFR .", "In #OTHEREFR , a bipartite graph neural network based approach is further developed to realize a scalable DL-based solution.", "In #OTHEREFR , a deep unrolling approach is proposed to reduce the number of required iterations by unfolding the iterative optimization procedures as graph neural networks, however, it still requires the real-time global CSI and some iterations to obtain near-optimal beamformers.", "In #OTHEREFR , #OTHEREFR , the joint beamforming, power control and interference coordination problem in cellular networks is investigated."], "text_after_citation": ["For the more general multi-cell networks, a distributed beamforming coordination approach based on multi-agent DRL is proposed in #OTHEREFR .", "It is worth noting that the DRL-based schemes in #OTHEREFR , #OTHEREFR are designed for cellular networks with only a single user per cell, and thus can not be directly extended to more general multi-cell multi-user cellular networks.", "Moreover, the codebook-based method is adopted in #OTHEREFR , #OTHEREFR , where the optimal beamformers can only be selected from a predefined set of available beamformers.", "As such, when the channel characteristics are more complex, e.g., the channels in a lower frequency band instead of the millimeter wave band considered in #OTHEREFR , the optimal beamformers may not fall into the predefined set, and it is almost 
impossible to obtain the optimal beamformers with the codebook-based method due to the mismatch between the channel characteristics and the codebook.", "Inspired by the huge potential of ML-based beamforming optimization approaches, we propose a DRL-based distributed dynamic coordinated beamforming (DDCBF) framework for a massive MIMO mobile cellular network."], "citing_paper_content": {"title": "Deep Reinforcement Learning For Distributed Dynamic Coordinated Beamforming In Massive Mimo Cellular Networks", "abstract": "To accommodate the explosive wireless traffics, massive multiple-input multiple-output (MIMO) is regarded as one of the key enabling technologies for next-generation communication systems. In massive MIMO cellular networks, coordinated beamforming (CBF), which jointly designs the beamformers of multiple base stations (BSs), is an efficient method to enhance the network performance. In this paper, we investigate the sum rate maximization problem in a massive MIMO mobile cellular network, where in each cell a multi-antenna BS serves multiple mobile users simultaneously via downlink beamforming. Although existing optimization-based CBF algorithms can provide near-optimal solutions, they require realtime and global channel state information (CSI), in addition to their high computation complexity. It is almost impossible to apply them in practical wireless networks, especially highly dynamic mobile cellular networks. Motivated by this, we propose a deep reinforcement learning based distributed dynamic coordinated beamforming (DDCBF) framework, which enables each BS to determine the beamformers with only local CSI and some historical information from other BSs.Besides, the beamformers can be calculated with a considerably lower computational complexity by exploiting neural networks and expert knowledge, i.e., a solution structure observed from the iterative procedure of the weighted minimum mean square error (WMMSE) algorithm. 
Moreover, we provide extensive numerical simulations to validate the effectiveness of the proposed DRL-based approach. With lower computational complexity"}, "cited_paper_content": {"title": "Deep Reinforcement Learning For 5G Networks: Joint Beamforming, Power Control, And Interference Coordination", "abstract": "The fifth generation of wireless communications (5G) promises massive increases in traffic volume and data rates, as well as improved reliability in voice calls. Jointly optimizing beamforming, power control, and interference coordination in a 5G wireless network to enhance the communication performance to end users poses a significant challenge. In this paper, we formulate the joint design of beamforming, power control, and interference coordination as a non-convex optimization problem to maximize the signal to interference plus noise ratio (SINR) and solve this problem using deep reinforcement learning. By using the greedy nature of deep Q-learning to estimate future rewards of actions and using the reported coordinates of the users served by the network, we propose an algorithm for voice bearers and data bearers in sub-6 GHz and millimeter wave (mmWave) frequency bands, respectively. The algorithm improves the performance measured by SINR and sum-rate capacity. In realistic cellular environments, the simulation results show that our algorithm outperforms the link adaptation industry standards for sub-6 GHz voice bearers. For data bearers in the mmWave frequency band, our algorithm approaches the maximum sum rate capacity, but with less than 4% of the required run time."}, "keywords": ["two-cell network"], "citation_intent": "background"} {"citing_id": "2303.05269v1", "cited_id": "1702.05464", "section_title": "Supervised Cell Detection", "citation": "In addition, annotating the image for each condition is very laborious when the individual images contain many cells. 
2015; #REFR .", "text_before_citation": ["proposed Mask R-CNN, which enables instance segmentation by adding mask branches to the head of the Faster R-CNN architecture #OTHEREFR", "(2015) and thereby allowing segmentation of each detected object #OTHEREFR . Fujita et al.", "utilized Mask R-CNN for cell detection and segmentation #OTHEREFR . To reduce annotation costs, Nishimura et al. proposed U-Net Ronneberger et al.", "(2015) , which uses a heatmap as training data #OTHEREFR .", "However, these methods would not work well at cell detection if the domains of the training data (source domain) and test data (target domain) are different (domain shift)."], "text_after_citation": ["This method uses a domain discriminator that distinguishes the source and target domain and, for adaptation, it tries to fool the discriminator into not distinguishing between domains. For the cell segmentation task, Haq et al.", "utilized adversarial learning for domain adaption and introduced an autoencoder to extract invariant features between the source and target domains #OTHEREFR .", "However, this approach does not consider essential features e.g.,(class features in classification task) and only tries to match features between domains. To solve this problem, Saito et al.", "proposed to use the maximum classifier discrepancy that matches features among classes between domains #OTHEREFR .", "The design of an adversarial domain adaptation is more complex for the heatmap prediction task since the network has both an encoder and decoder."], "citing_paper_content": {"title": "Medical Image Analysis", "abstract": "Cell detection is an important task in biomedical research. Recently, deep learning methods have made it possible to improve the performance of cell detection. However, a detection network trained with training data under a specific condition (source domain) may not work well on data under other conditions (target domains), which is called the domain shift problem. 
In particular, cells are cultured under different conditions depending on the purpose of the research. Characteristics, e.g., the shapes and density of the cells, change depending on the conditions, and such changes may cause domain shift problems. Here, we propose an unsupervised domain adaptation method for cell detection using a pseudo-cell-position heatmap, where the cell centroid is at the peak of a Gaussian distribution in the map and selective pseudo-labeling. In the prediction result for the target domain, even if the peak location is correct, the signal distribution around the peak often has a non-Gaussian shape. The pseudo-cell-position heatmap is thus regenerated using the peak positions in the predicted heatmap to have a clear Gaussian shape. Our method selects confident pseudo-cell-position heatmaps based on uncertainty and curriculum learning. We conducted numerous experiments showing that, compared with the existing methods, our method improved detection performance under different conditions."}, "cited_paper_content": {"title": "Adversarial Discriminative Domain Adaptation", "abstract": "Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They can also improve recognition despite the presence of domain shift or dataset bias: recent adversarial approaches to unsupervised domain adaptation reduce the difference between the training and test domain distributions and thus improve generalization performance. However, while generative adversarial networks (GANs) show compelling visualizations, they are not optimal on discriminative tasks and can be limited to smaller shifts. On the other hand, discriminative approaches can handle larger domain shifts, but impose tied weights on the model and do not exploit a GAN-based loss. 
In this work, we first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and use this generalized view to better relate prior approaches. We then propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task."}, "keywords": ["many cells", "individual images"], "citation_intent": "background"} {"citing_id": "2304.06440v1", "cited_id": "1705.07750", "section_title": "Advancement In Visual Networks", "citation": "Remarkably, I3D #REFR reveals that successful 2D networks could be seamlessly inflated to corresponding 3D networks and even their parameter.", "text_before_citation": ["The advancement of visual backbone networks prospers various downstream tasks tremendously #OTHEREFR .", "Based on the modality of input data, visual networks can be classified into two types, image networks (i.e. 2D networks) and video networks (i.e. 
3D networks).", "C3D #OTHEREFR pioneeringly devises an 11-layer CNN with 3D-CNN to adapt to video inputs.", "Subsequent P3D #OTHEREFR , S3D #OTHEREFR and R(2+1)D #OTHEREFR observe that disentangled spatial and temporal convolutions result in a more favorable speed-accuracy trade-off than the pure 3D convolution."], "text_after_citation": ["In recent days, a shift in backbone architecture, from CNNs to Transformers (ViT) #OTHEREFR , has begun.", "Especially, Swin Transformer #OTHEREFR reintroduces the inductive bias of convolutions (i.e., locality, translation invariance and hierarchy), which enables it to serve as a general-purpose backbone.", "The success of image Transformer leads to further investigation of Transformer-based video networks (e.g., ViViT #OTHEREFR , MViT #OTHEREFR , Video Swin Transformer #OTHEREFR ).", "Among all characteristics of Transformers, the patch-wise operations inherently differentiate the edges of patches, thus making them ideal for handling input sampled by GMS. Figure 2 . Illustration of the framework of Zoom-VQA. The overall architecture consists of two parts.", "One part is a perception network based on image input that obtains global information."], "citing_paper_content": {"title": "Zoom-Vqa: Patches, Frames And Clips Integration For Video Quality Assessment", "abstract": "Video quality assessment (VQA) aims to simulate the human perception of video quality, which is influenced by factors ranging from low-level color and texture details to high-level semantic content. To effectively model these complicated quality-related factors, in this paper, we decompose video into three levels (i.e., patch level, frame level, and clip level), and propose a novel Zoom-VQA architecture to perceive spatio-temporal features at different levels.
It integrates three components: patch attention module, frame pyramid alignment, and clip ensemble strategy, respectively for capturing region-of-interest in the spatial dimension, multi-level information at different feature levels, and distortions distributed over the temporal dimension. Owing to the comprehensive design, Zoom-VQA obtains state-of-the-art results on four VQA benchmarks and achieves 2nd place in the NTIRE 2023 VQA challenge. Notably, Zoom-VQA has outperformed the previous best results on two subsets of LSVQ, achieving 0.8860 (+1.0%) and 0.7985 (+1.9%) of SRCC on the respective subsets. Adequate ablation studies further verify the effectiveness of each component. Codes and models are released in https://github.com/k-zha14/Zoom-VQA."}, "cited_paper_content": {"title": "Quo Vadis, Action Recognition? A New Model And The Kinetics Dataset", "abstract": "The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. ::: We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. 
We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101."}, "keywords": ["corresponding 3D networks"], "citation_intent": "background"} {"citing_id": "2303.01111v1", "cited_id": "2002.11569", "section_title": "Iii. The Model", "citation": "Overfitting is a general issue in the domain of supervised machine learning and cannot be avoided #REFR .", "text_before_citation": ["The top performing model achieved 49.36% accuracy at the validation level.", "Given three balanced classes of the input data, the result outperforms random guessing by 48% at the aggregate level.", "The shape of both curves indicate that the training was a subject of overfitting.", "An overfitted model may fail to properly generalize features that it is supposed to learn and instead fits the idiosyncrasies of the training sample itself.", "Such model would perform well during the training but unsatisfactorily to any other data but the one on which it was trained."], "text_after_citation": ["To contain the effect of overfitting, we added a dropout layer that discarded 20% of random units to the end of the network just prior to the classification.", "Dropout is generally considered very effective technique against overfitting #OTHEREFR , #OTHEREFR ."], "citing_paper_content": {"title": "Predicting Stock Price Movement As An Image Classification Problem", "abstract": "The paper studies intraday price movement of stocks that is considered as an image classification problem. Using a CNN-based model we make a compelling case for the highlevel relationship between the first hour of trading and the close. The algorithm managed to adequately separate between the two opposing classes and investing according to the algorithm's predictions outperformed all alternative constructs but the theoretical maximum. To support the thesis, we ran several additional tests. 
The findings in the paper highlight the suitability of computer vision techniques for studying financial markets and in particular prediction of stock price movements."}, "cited_paper_content": {"title": "Overfitting In Adversarially Robust Deep Learning", "abstract": "It is common practice in deep learning to use overparameterized networks and train for as long as possible; there are numerous studies that show, both theoretically and empirically, that such practices surprisingly do not unduly harm the generalization performance of the classifier. In this paper, we empirically study this phenomenon in the setting of adversarially trained deep networks, which are trained to minimize the loss under worst-case adversarial perturbations. We find that overfitting to the training set does in fact harm robust performance to a very large degree in adversarially robust training across multiple datasets (SVHN, CIFAR-10, CIFAR-100, and ImageNet) and perturbation models ($\\ell_\\infty$ and $\\ell_2$). Based upon this observed effect, we show that the performance gains of virtually all recent algorithmic improvements upon adversarial training can be matched by simply using early stopping. We also show that effects such as the double descent curve do still occur in adversarially trained models, yet fail to explain the observed overfitting. Finally, we study several classical and modern deep learning remedies for overfitting, including regularization and data augmentation, and find that no approach in isolation improves significantly upon the gains achieved by early stopping. 
All code for reproducing the experiments as well as pretrained model weights and training logs can be found at this https URL."}, "keywords": ["supervised machine learning"], "citation_intent": "background"} {"citing_id": "2303.05455v1", "cited_id": "1902.01108", "section_title": "2.", "citation": "Meanwhile, the augmented kNN-graph has approximately L\u223cn vi \u2022M edges, where n vi =nn+rn > 2 #REFR .", "text_before_citation": [" 4.5b) , and the ordering of NN can result from measurement errors.", "We assume that the number of neighbors nn must meet two conditions.", "First, the kNN-graph should be fully connected (or approximately, that is, the size of the largest component should be comparable to the size of the entire graph).", "Second, the kNN-graph augmented with approximately rn edges should be at least a minimal n-rigid graph (in 2D: 2-rigid).", "The lower band of the number of connections L, required to make the 2-rigid augmented k NN-graph, is L\u223c2\u2022M."], "text_after_citation": ["As our experience shows, the probability that the largest connected component is rigid (or approximately rigid) is very high.", "In summary, to obtain the largest connected component approximately equal to the full kNN-graph, the number of nearest neighbors nn can be very low (mostly nn = 2, but for some specific datasets with very similar samples, it can be a bit larger).", "Assuming additionally that rn = 1, we can obtain a stable and rigid 2-D embedding of the kNN-graph.", "This way, instead of the O(M 2 ) floating point D matrix, we have as input data O(nn \u2022 M) integers -the list of edges of kNN graph.", "The indices of rn random neighbors can be generated ad hoc during the embedding process."], "citing_paper_content": {"title": "", "abstract": "Interactive visual exploration of large, high-dimensional datasets plays a very important role in various fields of science, which requires aggregated information about mutual relationships between numerous objects. 
It enables one not only to recognize their important structural features and forms, such as clusters of vertices and their connectivity patterns, but also to assess their mutual relationships in terms of position, distance, shape, and connection density. The structural properties of these large datasets can be scrutinized throughout their interactive visualization. We argue that the visualization of very high-dimensional data is well approximated by the two-dimensional (2D) problem of embedding undirected kNN-graphs. With the advent of the big data era, the size of complex networks (datasets) G(V, E) (|V|=M\u223c10^6+) represents a great challenge for today's computer systems and still requires more efficient ND\u21922D dimensionality reduction (DR) algorithms. The existing DR methods, which involve computational and memory complexity greater than O(M), are too slow for interactive manipulation on large networks that involve millions of vertices. We show that high-quality embeddings can be produced with minimal time and memory complexity. Very efficient IVHD (interactive visualization of high-dimensional data) and IVHD-CUDA algorithms are presented and then compared to the state-of-the-art DR methods (both CPU and GPU versions): t-SNE, UMAP, TriMAP, PaCMAP, BH-SNE-CUDA, and AtSNE-CUDA. We show that the memory and time requirements for IVHD are radically lower than those for the baseline codes. For example, IVHD-CUDA is almost 30 times faster in embedding (without the kNN graph generation procedure, which is the same for all methods) of one of the largest datasets used, YAHOO (M=1.4 \u2022 10^6), than AtSNE-CUDA. We conclude that at the expense of a minor deterioration of embedding quality, compared to baseline algorithms, IVHD well preserves the main structural properties of ND data in 2D for a significantly lower computational budget. 
We also present a meta-algorithm that enables using any unsupervised DR method in a supervised fashion and, as a result, allows for flexible control of global and local properties of the embedding. Thus, our methods are good candidates for an interactive visualization of truly big data (M=10^8+) and can be further used to inspect and interpret relationships between alternative representations of observations learned by artificial neural networks (ANN). Additionally, we have provided a framework for testing the trade-off between the preservation of global structure and local structure in DR."}, "cited_paper_content": {"title": "2-D Embedding Of Large And High-Dimensional Data With Minimal Memory And Computational Time Requirements", "abstract": "With the advent of the big data era, interactive visualization of large data sets consisting of M*10^5+ high-dimensional feature vectors of length N (N ~ 10^3+) is an indispensable tool for exploratory data analysis. The state-of-the-art data embedding (DE) methods of N-D data into 2-D (3-D) visually perceptible space (e.g., based on the t-SNE concept) are too demanding computationally to be efficiently employed for interactive data analytics of large and high-dimensional datasets. Herein we present a simple method, ivhd (interactive visualization of high-dimensional data tool), which radically outperforms the modern data-embedding algorithms in both computational and memory loads, while retaining high quality of N-D data embedding in 2-D (3-D). We show that the DE problem is equivalent to the nearest neighbor nn-graph visualization, where only the indices of a few nearest neighbors of each data sample have to be known, and a binary distance between data samples -- 0 to the nearest and 1 to the other samples -- is defined. These improvements reduce the time-complexity and memory load from O(M log M) to O(M), and ensure a minimal O(M) proportionality coefficient as well. 
We demonstrate the high efficiency, quality, and robustness of ivhd on popular benchmark datasets such as MNIST, 20NG, NORB, and RCV1."}, "keywords": ["augmented kNN-graph"], "citation_intent": "background"} {"citing_id": "2304.02263v1", "cited_id": "1503.02531", "section_title": "Comparisons On Downstream Tasks", "citation": "After conducting linear probing, the vanilla knowledge distillation #REFR method is used to train students with hard and soft labels.", "text_before_citation": ["Our model reprogramming in TDMR is to design a proper reprogramming space for KD; that is, a projector with a classifier is simple yet well-suited for KD on the downstream task.", "(ii) In the distillation phase, the KD in MRKD usually leverages one-stage distillation, a brute-force transfer, while our KD in TDMR is a two-stage progressive distillation, a milder and more efficient way, which will be demonstrated in the following.", "(iii) More importantly, we reprogram students according to the proxy space, i.e., we consider the knowledge of the reprogrammed teacher classifier, which is missing in MRKD. 
Comparison with Linear Probing.", "For applying foundation models to downstream tasks, a common transfer method is linear probing #OTHEREFR , #OTHEREFR which simply modifies the output dimension of the teacher classifier to the total number of categories of the target data.", "Then, the classifier is trained from scratch with the feature extractor frozen, instead of fully fine-tuning the source model at high computational cost."], "text_after_citation": ["The whole process above is denoted by \"Lin.\" We conduct experiments with various combinations of teachers and students on Caltech-256-60 and Oxford-102 to compare the common method with ours.", "From the results in Table 6, we can see that our method outperforms the common method \"Lin.\".", "Moreover, our method exhibits a larger improvement on the Oxford-102 dataset than on the Caltech-256 dataset.", "This can be attributed to the larger gap between the Oxford dataset and the pretraining dataset.", "These results provide further evidence of the superior performance of our proposed method, especially in scenarios with larger domain gaps."], "citing_paper_content": {"title": "Towards Efficient Task-Driven Model Reprogramming With Foundation Models", "abstract": "Vision foundation models exhibit impressive power, benefiting from their extremely large model capacity and broad training data. However, in practice, downstream scenarios may only support a small model due to limited computational resources or efficiency considerations. Moreover, the data used for pretraining foundation models are usually invisible and very different from the target data of downstream tasks. This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to a downstream task that has a quite different architecture, with only downstream target data. 
Existing transfer learning or knowledge distillation methods depend on either the same model structure or finetuning of the foundation model. Thus, naively introducing these methods can be either infeasible or very inefficient. How to leverage the knowledge from the foundation model to boost the small model has not been well studied. To address this, we propose a Task-Driven Model Reprogramming (TDMR) framework. Specifically, we reprogram the foundation model to project the knowledge into a proxy space, which alleviates the adverse effect of task mismatch and domain inconsistency. In this stage, we maintain the foundation model as a powerful feature extractor frozen. Then, we reprogram the target model via progressive distillation from the proxy space to efficiently learn the knowledge from the reprogrammed foundation model. TDMR is compatible with different pre-trained model types (CNN, transformer or their mix) and limited target data, and promotes the wide applications of vision foundation models to downstream tasks in a cost-effective manner. Extensive experiments on different downstream classification tasks and target model structures demonstrate the effectiveness of our methods with both CNNs and transformer foundation models. For example, on CUB-200, TDMR improves the accuracy of MobileNetV2 from 62.90% to 72.60% using the ResNet-50 as a teacher and to 76.04% using the Swin transformer as a teacher."}, "cited_paper_content": {"title": "Distilling The Knowledge In A Neural Network", "abstract": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. 
Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel."}, "keywords": ["vanilla knowledge distillation"], "citation_intent": "method"} {"citing_id": "2303.16833v1", "cited_id": "1802.03515", "section_title": "C. Multi-View Detection", "citation": "Another method proposes the use of multiple monocular 2D keypoint estimates from spread out viewpoints to estimate the poses of vehicles #REFR .", "text_before_citation": ["A recent work combines Stereo 2D keypoint detections within a trained network to estimate the 3D positions of each keypoint for transparent and translucent objects #OTHEREFR .", "The results show an improvement over the state of the art by a factor of 1.5-3.5, using only two known views from a stereo camera.", "The objects are unoccluded as opposed to placed in bins, however, and only one instance of the object can be present in the scene.", "While this approach is faster, the network architecture is fixed to take in exactly two stereo images as input, and to output exactly one keypoint prediction.", "Multi-object detection will likely prove difficult as repeated features and occlusions from different instances of the same object may confuse the depth estimation within the network."], "text_after_citation": ["The transformations between these viewpoints are unknown; instead the approach adds 
constraints based on object rigidity and the relative positions of different keypoints in the image to solve for the vehicle poses.", "We borrow the object rigidity constraint from this paper, but also make use of known transformations between viewpoints from a robot arm, to allow more precise fusion of estimates between views.", "CosyPose is able to achieve higher performance in more cluttered scenes by utilizing a larger number of views #OTHEREFR .", "The scene is assumed to be static across the different viewpoints, and the poses of all objects within the scene are estimated.", "This is then used to estimate the camera pose across different viewpoints, followed by bundle adjustment to refine the estimates and generate a globally consistent scene across all views."], "citing_paper_content": {"title": "Multi-View Keypoints For Reliable 6D Object Pose Estimation", "abstract": "6D object pose estimation is a fundamental component in robotics enabling efficient interaction with the environment. It is particularly challenging in bin-picking applications, where many objects are low-feature and reflective, and self-occlusion between objects of the same type is common. We propose a novel multi-view approach leveraging known camera transformations from an eye-in-hand setup to combine heatmap and keypoint estimates into a probability density map over 3D space. The result is a robust approach that is scalable in the number of views. It relies on a confidence score composed of keypoint probabilities and point-cloud alignment error, which allows reliable rejection of false positives. 
We demonstrate an average pose estimation error of approximately 0.5 mm and 2 degrees across a variety of difficult low-feature and reflective objects in the ROBI dataset, while also surpassing the state-of-the-art correct detection rate, measured using the 10% object diameter threshold on ADD error."}, "cited_paper_content": {"title": "Vehicle Pose And Shape Estimation Through Multiple Monocular Vision", "abstract": "In this paper, we present a method to estimate a vehicle's pose and shape from off-board multi-view images. These images are taken from monocular cameras with small overlaps. We utilize state-of-the-art Convolutional Neural Networks (CNNs) to extract vehicles' semantic keypoints and introduce a Cross Projection Optimization (CPO) method to estimate the 3D pose. During the iterative CPO process, an adaptive shape adjustment method named Hierarchical Wireframe Constraint (HWC) is implemented to estimate the shape. Our approach is evaluated under both simulated and real-world scenes for performance verification. It is shown that our algorithm outperforms other existing monocular and stereo methods for vehicles' pose and shape estimation. This approach provides a new and robust solution for off-board visual vehicle localization and tracking, which can be applied to massive surveillance camera networks for intelligent transportation."}, "keywords": ["multiple monocular 2D"], "citation_intent": "method"} {"citing_id": "2303.02874v1", "cited_id": "1412.6572", "section_title": "A. Current Adversarial Techniques 1) Gradient-Based Evasion Attack:", "citation": "Adversarial sampling is based on the addition of perturbation #REFR . We can carry out this type of attack by trial and error, as we do not know in advance the exact data manipulation that will break the model and make it misclassify.", "text_before_citation": ["In a gradient-based evasion attack, a perturbed image that appears untampered to human eyes is made to be misclassified by a neural network model (Fig. 
1) #OTHEREFR ."], "text_after_citation": ["Say we want to probe the boundaries of a machine learning model designed to filter out spam emails; we can experiment by sending different emails to see what gets through.", "So, if a model has been trained on certain words like \"momentum\", and we now want to make an exception for emails that contain other words, then to attack we can craft an email with enough extraneous words to eventually make the model misclassify it."], "citing_paper_content": {"title": "Adversarial Sampling For Fairness Testing In Deep Neural Network", "abstract": "In this research, we focus on the use of adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes of images in a given dataset. While several frameworks have been proposed to ensure the robustness of machine learning models against adversarial attack, including adversarial training algorithms, there is still the pitfall that adversarial training tends to cause disparities in accuracy and robustness among different groups. Our research is aimed at using adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes or categories of images in a given dataset. We successfully demonstrated a new method of ensuring fairness across various groups of inputs in a deep neural network classifier. We trained our neural network model on the original images only, without training it on the perturbed or attacked images. When we fed the adversarial samples to our model, it was able to predict the original category/class of the image each adversarial sample belongs to. 
We also introduced and used the separation-of-concerns concept from software engineering, whereby an additional standalone filter layer heavily removes the noise or attack from a perturbed image before automatically passing it to the network for classification; with this, we were able to achieve an accuracy of 93.3%. The CIFAR-10 dataset has ten categories, so, in order to account for fairness, we applied our hypothesis across each category and were able to obtain consistent results and accuracy."}, "cited_paper_content": {"title": "Explaining And Harnessing Adversarial Examples", "abstract": "Several machine learning models, including neural networks, consistently misclassify adversarial examples---inputs formed by applying small but intentionally worst-case perturbations to examples from the dataset, such that the perturbed input results in the model outputting an incorrect answer with high confidence. Early attempts at explaining this phenomenon focused on nonlinearity and overfitting. We argue instead that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature. This explanation is supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets. Moreover, this view yields a simple and fast method of generating adversarial examples. Using this approach to provide examples for adversarial training, we reduce the test set error of a maxout network on the MNIST dataset."}, "keywords": ["attack", "perturbation"], "citation_intent": "method"} {"citing_id": "2304.04734v1", "cited_id": "1901.07718", "section_title": "V. 
Discussion", "citation": "Future research will focus on implementing an SNN version of CMLs based on resonate-and-fire (RF) neurons #REFR .", "text_before_citation": ["b) The EXP model for CML A can be trained with task-agnostic proxy symbols, which are then mapped to application-specific inputs x and s_D.", "A major challenge to the greater adoption of HDC is the need for algorithms to map real-valued sensor data to hypervector symbols #OTHEREFR .", "Yet since ANNs have a rich history in classification (mapping raw sensor data to arbitrary class labels), there is recent work training ANNs as task-agnostic feature extractors #OTHEREFR and then mapping these sparse feature vectors to arbitrary hypervector symbols for subsequent HDC computation #OTHEREFR .", "This approach, in effect, turns ANNs themselves into modular ML components, functioning as the ML equivalent of analog-to-digital (A2D) converters.", "Lastly, the CML algorithm operates over real-valued neural networks, but the illustrative biology examples described are based on spiking neural networks (SNN)."], "text_after_citation": ["These types of SNNs encode information based on the time a neuron spikes within a period \u03c4, as opposed to rate encoding, which measures the number of spikes within a time window.", "Importantly, a spike at time t with respect to a local oscillator of period \u03c4 may be expressed as a complex-valued phasor (or phase vector).", "RF neurons therefore also facilitate HDC interfacing via Holographic Reduced Representations (HRR), based on complex phasors #OTHEREFR ."], "citing_paper_content": {"title": "Modularizing And Assembling Cognitive Map Learners Via Hyperdimensional Computing", "abstract": "Biological organisms must learn how to control their own bodies to achieve deliberate locomotion, that is, to predict their next body position based on their current position and selected action. 
Such learning is goal-agnostic with respect to maximizing (minimizing) an environmental reward (penalty) signal. A cognitive map learner (CML) is a collection of three separate yet collaboratively trained artificial neural networks which learn to construct representations for the node states and edge actions of an arbitrary bidirectional graph. In so doing, a CML learns how to traverse the graph nodes; however, the CML does not learn when and why to move from one node state to another. This work created CMLs with node states expressed as high-dimensional vectors suitable for hyperdimensional computing (HDC), a form of symbolic machine learning (ML). In so doing, graph knowledge (CML) was segregated from target node selection (HDC), allowing each ML approach to be trained independently. The first approach used HDC to engineer an arbitrary number of hierarchical CMLs, where each graph node state specified target node states for the next lower level CMLs to traverse to. Second, an HDC-based stimulus-response experience model was demonstrated per CML. Because hypervectors may be in superposition with each other, multiple experience models were added together and run in parallel without any retraining. Lastly, a CML-HDC ML unit was modularized: trained with proxy symbols such that arbitrary, application-specific stimulus symbols could be operated upon without retraining either CML or HDC model. These methods provide a template for engineering heterogeneous ML systems."}, "cited_paper_content": {"title": "Robust Computation With Rhythmic Spike Patterns", "abstract": "Information coding by precise timing of spikes can be faster and more energy efficient than traditional rate coding. However, spike-timing codes are often brittle, which has limited their use in theoretical neuroscience and computing applications. 
Here, we propose a type of attractor neural network in complex state space and show how it can be leveraged to construct spiking neural networks with robust computational properties through a phase-to-timing mapping. Building on Hebbian neural associative memories, like Hopfield networks, we first propose threshold phasor associative memory (TPAM) networks. Complex phasor patterns whose components can assume continuous-valued phase angles and binary magnitudes can be stored and retrieved as stable fixed points in the network dynamics. TPAM achieves high memory capacity when storing sparse phasor patterns, and we derive the energy function that governs its fixed-point attractor dynamics. Second, we construct two spiking neural networks to approximate the complex algebraic computations in TPAM: a reductionist model with resonate-and-fire neurons and a biologically plausible network of integrate-and-fire neurons with synaptic delays and recurrently connected inhibitory interneurons. The fixed points of TPAM correspond to stable periodic states of precisely timed spiking activity that are robust to perturbation. 
The link established between rhythmic firing patterns and complex attractor dynamics has implications for the interpretation of spike patterns seen in neuroscience and can serve as a framework for computation in emerging neuromorphic devices."}, "keywords": ["resonate-and-fire (RF) neurons"], "citation_intent": "method"} {"citing_id": "2303.01999v1", "cited_id": "1903.11228", "section_title": "Related Work", "citation": "BAE-Net trains an implicit shape representation that uses multiple decoder \"heads,\" where each head tends to represent the same localized part across many shape instances #REFR .", "text_before_citation": ["Several methods focus on convex polyhedra, either decomposing individual shapes in isolation #OTHEREFR or training a neural network to produce similar convex decompositions for similar shapes from a category #OTHEREFR .", "Another option is to decompose the input shape into pieces which can be represented as generalized cylinders #OTHEREFR .", "These approaches produce clean geometry with a better fit to the input shape than parametric primitives allow, but they offer no control over the type of decomposition produced.", "They also typically need many primitives to fit the input shape well, making them non-compact and not well-suited for shape editing.", "Recent research in this space has focused on decomposing shapes using neural primitives."], "text_after_citation": ["Other approaches represent neural parts as star domains #OTHEREFR or deformed sphere meshes #OTHEREFR .", "These approaches produce decompositions that fit the input shape well using a small number of primitives.", "However, their output geometry can exhibit undesirable artifacts, and they provide no control over the type of decomposition produced.", "We compare our algorithm to one of these approaches later in the paper and show that ours achieves even better reconstruction accuracy while also producing qualitatively better decompositions.", "Modeling by retrieval and 
assembly: A large body of work in computer graphics has considered computer-assisted or fully-automated 3D modeling via retrieving and assembling pre-existing 3D shapes."], "citing_paper_content": {"title": "Unsupervised 3D Shape Reconstruction By Part Retrieval And Assembly", "abstract": "Figure 1. Our system takes target 3D shapes together with a 3D part library as input and outputs a set of retrieved and transformed parts from the part library that recreates the input target shapes."}, "cited_paper_content": {"title": "Bae-Net: Branched Autoencoder For Shape Co-Segmentation", "abstract": "We treat shape co-segmentation as a representation learning problem and introduce BAE-NET, a branched autoencoder network, for the task. The unsupervised BAE-NET is trained with all shapes in an input collection using a shape reconstruction loss, without ground-truth segmentations. Specifically, the network takes an input shape and encodes it using a convolutional neural network, whereas the decoder concatenates the resulting feature code with a point coordinate and outputs a value indicating whether the point is inside/outside the shape. Importantly, the decoder is branched: each branch learns a compact representation for one commonly recurring part of the shape collection, e.g., airplane wings. By complementing the shape reconstruction loss with a label loss, BAE-NET is easily tuned for one-shot learning. 
We show unsupervised, weakly supervised, and one-shot learning results by BAE-NET, demonstrating that using only a couple of exemplars, our network can generally outperform state-of-the-art supervised methods trained on hundreds of segmented shapes."}, "keywords": ["implicit shape representation"], "citation_intent": "background"} {"citing_id": "2304.12036v1", "cited_id": "1903.03894", "section_title": "Comparison Of Models Used To Determine Node Importance", "citation": "Although GNNExplainer #REFR was also not designed for unsupervised node embeddings, we could still use the same objective function to quantify the importance of each node by leveraging the available labels.", "text_before_citation": ["However, LIME is not designed for generating global node-level explanations for unsupervised node embeddings.", "For a fair comparison, similar to our GRAPH-wGD, we computed MEAN{LIME(v_c|v_i), v_c \u2208 N(v_i)} by regarding v_c as local features for v_i.", "N_G(v_i|\u03c8) also returns the neighbors with the same size of neighbor set, \u03c8.", "GNNExplainer."], "text_after_citation": ["Thus, the mutual information between predictions from an input graph and a graph perturbed by v_i was leveraged for the importance score of v_i.", "In this case, we assumed that nodes that return low mutual information values were more important."], "citing_paper_content": {"title": "Generating Post-Hoc Explanations For Skip-Gram-Based Node Embeddings By Identifying Important Nodes With Bridgeness", "abstract": "Node representation learning in a network is an important machine learning technique for encoding relational information in a continuous vector space while preserving the inherent properties and structures of the network. 
Recently, unsupervised node embedding methods such as DeepWalk [1], LINE [2], struc2vec [3], PTE [4], UserItem2vec [5], and RWJBG [6] have emerged from the Skip-gram model [7] and achieve better performance in several downstream tasks, such as node classification and link prediction, than the existing relational models. However, providing post-hoc explanations of Skip-gram-based embeddings remains a challenging problem because of the lack of explanation methods and theoretical studies applicable to embeddings. In this paper, we first show that global explanations of Skip-gram-based embeddings can be found by computing bridgeness under a spectral cluster-aware local perturbation. Moreover, a novel gradient-based explanation method, which we call GRAPH-wGD, is proposed that allows the top-q global explanations of learned graph embedding vectors to be found more efficiently. Experiments show that the ranking of nodes by scores using GRAPH-wGD is highly correlated with true bridgeness scores. We also observe that the top-q node-level explanations selected by GRAPH-wGD have higher importance scores and produce more changes in class label prediction when perturbed, compared with the nodes selected by recent alternatives, using five real-world graphs."}, "cited_paper_content": {"title": "Gnnexplainer: Generating Explanations For Graph Neural Networks", "abstract": "Graph Neural Networks (GNNs) are a powerful tool for machine learning on graphs. GNNs combine node feature information with the graph structure by recursively passing neural messages along edges of the input graph. However, incorporating both graph structure and feature information leads to complex models and explaining predictions made by GNNs remains unsolved. Here we propose GnnExplainer, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task. 
Given an instance, GnnExplainer identifies a compact subgraph structure and a small subset of node features that have a crucial role in GNN's prediction. Further, GnnExplainer can generate consistent and concise explanations for an entire class of instances. We formulate GnnExplainer as an optimization task that maximizes the mutual information between a GNN's prediction and distribution of possible subgraph structures. Experiments on synthetic and real-world graphs show that our approach can identify important graph structures as well as node features, and outperforms alternative baseline approaches by up to 43.0% in explanation accuracy. GnnExplainer provides a variety of benefits, from the ability to visualize semantically relevant structures to interpretability, to giving insights into errors of faulty GNNs."}, "keywords": ["unsupervised node embeddings"], "citation_intent": "method"} {"citing_id": "2303.11160v1", "cited_id": "1810.04805", "section_title": "Related Works In Counterfactual Explanations", "citation": "Many works have addressed the issue of generating natural-sounding counterfactual explanations in text using language representation models such as BERT #REFR .", "text_before_citation": ["Simply removing words from the text to generate counterfactual explanations is not effective. 
#OTHEREFR", "(2020) addressed this issue by ensuring that replaced words are grammatically correct.", "They demonstrate their approach on a sentiment analysis task, introducing two lists of words: one containing words that are suitable for replacement based on grammar, and another containing words with opposite senses to those in the sentiment dictionary.", "They then identified the intersection of these two lists and replaced words in the main text with words from this intersection until the predicted class was changed.", "This approach helps to generate counterfactual explanations that are more understandable to humans."], "text_after_citation": ["#OTHEREFR proposed one example of such an approach.", "They first generated a candidate set of words to replace each word in the text.", "They then used BERT as a language model to determine the probability of each candidate token for a given position.", "In the second step, they found the best combination of changes using Shapley values #OTHEREFR and generated the explanations using beam search.", "This approach allows the generation of more coherent and understandable counterfactual explanations in text."], "citing_paper_content": {"title": "Explaining Recommendation System Using Counterfactual Textual Explanations", "abstract": "Currently, there is a significant amount of research being conducted in the field of artificial intelligence to improve the explainability and interpretability of deep learning models. It is found that if end-users understand the reason for the production of some output, it is easier to trust the system. Recommender systems are one example of systems for which great efforts have been made to make their output more explainable. One method for producing a more explainable output is using counterfactual reasoning, which involves altering minimal features to generate a counterfactual item that results in changing the output of the system. 
This process allows the identification of input features that have a significant impact on the desired output, leading to effective explanations. In this paper, we present a method for generating counterfactual explanations for both tabular and textual features. We evaluated the performance of our proposed method on three real-world datasets and demonstrated a +5% improvement on finding effective features (based on model-based measures) compared to the baseline method."}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. ::: BERT is conceptually simple and empirically powerful. 
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["natural-sounding counterfactual explanations", "language representation models"], "citation_intent": "background"} {"citing_id": "2303.08935v1", "cited_id": "1202.5619", "section_title": "Proposition", "citation": "The cost of the walk S in G is at most OPT G + 2 max k S k (see Lemma 5.1 in #REFR ).", "text_before_citation": ["Therefore, an optimal solution to RMCCP on V i will have a total length at most 2 i OPT G (otherwise, the optimal solution to min-max weighted latency problem has cost more than OPT G ).", "As the solution to RMCCP is partitioned into 2 i walks in Line 6, length of the walk W i,j is at most \u03b1OPT G where \u03b1 is the approximation ratio of RMCCP.", "The walk S k is given by [W 0,j0 , \u03bd, W 1,j1 , \u03bd, . . 
.", ", \u03bd, W log t,j log t ] where k = j i ( mod 2 i ) for 0 \u2264 i < log t.", "Since, there are log \u03c1 G walks in S k , and those walks are connected using the recharging vertex, and (\u03bd, v) < OPT G /2 for all v \u2208 V , the length of the walk S k is at most O(\u03b1 log \u03c1 G )OPT G ."], "text_after_citation": ["Also OPT G \u2264 OPT G \u2264 2OPT G (see Lemma 3.2 in #OTHEREFR ).", "Using the approximation ratio of RMCCP, the maximum weighted latency of the walk returned by algorithm 3 is O(min{log n, log D log log D } log \u03c1)OPT G .", "Algorithm 3 is used as a subroutine in Algorithm 4 to find walks for R robots.", "The following result characterizes the cost of the solution returned by Algorithm 4.", "Proposition VI.5."], "citing_paper_content": {"title": "Multi-Robot Persistent Monitoring: Minimizing Latency And Number Of Robots With Recharging Constraints", "abstract": "In this paper we study multi-robot path planning for persistent monitoring tasks. We consider the case where robots have a limited battery capacity with a discharge time D. We represent the areas to be monitored as the vertices of a weighted graph. For each vertex, there is a constraint on the maximum allowable time between robot visits, called the latency. The objective is to find the minimum number of robots that can satisfy these latency constraints while also ensuring that the robots periodically charge at a recharging depot. The decision version of this problem is known to be PSPACE-complete. We present a O(log D log log D log \u03c1) approximation algorithm for the problem where \u03c1 is the ratio of the maximum and the minimum latency constraints. We also present an orienteering based heuristic to solve the problem and show empirically that it typically provides higher quality solutions than the approximation algorithm. We extend our results to provide an algorithm for the problem of minimizing the maximum weighted latency given a fixed number of robots. 
We evaluate our algorithms on large problem instances in a patrolling scenario and in a wildfire monitoring application. We also compare the algorithms with an existing solver on benchmark instances."}, "cited_paper_content": {"title": "Persistent Monitoring In Discrete Environments: Minimizing The Maximum Weighted Latency Between Observations", "abstract": "In this paper, we consider the problem of planning a path for a robot to monitor a known set of features of interest in an environment. We represent the environment as a graph with vertex weights and edge lengths. The vertices represent regions of interest, edge lengths give travel times between regions, and the vertex weights give the importance of each region. As the robot repeatedly performs a closed walk on the graph, we define the weighted latency of a vertex to be the maximum time between visits to that vertex, weighted by the importance (vertex weight) of that vertex. Our goal is to find a closed walk that minimizes the maximum weighted latency of any vertex. We show that there does not exist a polynomial time algorithm for the problem. We then provide two approximation algorithms; an $O(\\log n)$-approximation algorithm and an $O(\\log \\rho_G)$-approximation algorithm, where $\\rho_G$ is the ratio between the maximum and minimum vertex weights. 
We provide simulation results which demonstrate that our algorithms can be applied to problems consisting of thousands of vertices, and a case study for patrolling a city for crime."}, "keywords": ["walk"], "citation_intent": "background"} {"citing_id": "2304.04612v1", "cited_id": "0909.4061", "section_title": "Algorithm 1 Randomized Svd", "citation": "Although this is the same as the complexity of a tSVD algorithm based on the rank-revealing QR decomposition, the Randomized SVD has been shown to be experimentally faster than that algorithm #REFR .", "text_before_citation": ["EQUATION", "where E X is the expectation with respect to X, and || \u2022 || F is the Frobenius norm.", "We show the algorithm of the Randomized SVD in Algorithm 1.", "In the algorithm, the asymptotic computational complexity of the QR factorization (Line 2) and SVD (Line 4) are O(mp 2 ) and O(np 2 ), respectively, and the random projection (Line 1) is O(mnp).", "Therefore, since p m, n typically, the random projection is the most expensive computation in the algorithm, and the total asymptotic complexity of the Randomized SVD is O(mnp)."], "text_after_citation": ["Furthermore, to reduce the complexity of the random projection, there are some studies on using random matrices instead of the Gaussian random matrix.", "For instance, when using subsampled random Fourier transform, the computational complexity is O(mn \u2022 log(p)) #OTHEREFR .", "We can also use sparse random matrices for the random projection, although they were not originally proposed in the context of RandNLA.", "These matrices are sparse from the perspective of non-zero elements and the choice of element values.", "For instance, Achlioptas proposes a random matrix \u2126 \u2208 R n\u00d7q where the (i, j) element is decided as follows #OTHEREFR :"], "citing_paper_content": {"title": "Mixed-Precision Random Projection For Randnla On Tensor Cores", "abstract": "Random projection can reduce the dimension of data while capturing 
its structure and is a fundamental tool for machine learning, signal processing, and information retrieval, which deal with a large amount of data today. RandNLA (Randomized Numerical Linear Algebra) leverages random projection to reduce the computational complexity of low-rank decomposition of tensors and solve least-square problems. While the computation of the random projection is a simple matrix multiplication, its asymptotic computational complexity is typically larger than other operations in a RandNLA algorithm. Therefore, various studies propose methods for reducing its computational complexity. We propose a fast mixed-precision random projection method on NVIDIA GPUs using Tensor Cores for single-precision tensors. We exploit the fact that the random matrix requires less precision, and develop a highly optimized matrix multiplication between FP32 and FP16 matrices-SHGEMM (Single and Half GEMM)-on Tensor Cores, where the random matrix is stored in FP16. Our method can compute Randomized SVD 1.28 times faster and Random projection high order SVD 1.75 times faster than baseline single-precision implementations while maintaining accuracy."}, "cited_paper_content": {"title": "Finding Structure With Randomness: Probabilistic Algorithms For Constructing Approximate Matrix Decompositions", "abstract": "Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. 
These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed---either explicitly or implicitly---to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis."}, "keywords": ["Randomized SVD", "rank-revealing QR decomposition"], "citation_intent": "method"} {"citing_id": "2303.15991v3", "cited_id": "1503.07612", "section_title": "C. Performance Evaluation Of The Proposed Resource Management Strategy", "citation": "Specifically, we compare the random realization of channel model #REFR in each training round and the ideal static channel channel model (i.e.", "text_before_citation": ["Furthermore, with a more powerful server, the performance improvements brought by power control and subchannel allocation grow.", "This is because, in this case, the training latency is primarily limited by the computing latency of the client devices and data exchange latency.", "Again, we observe that optimizing the cut layer selection brings better performance improvement than power control and subchannel allocation when the server's computing capability varies. 
Fig.", "13 shows the effect of channel variation on the performance of the proposed solution.", "Our layer split decision remains unchanged for a period (in the simulations, it remains the same until the model converges), and therefore it is important to evaluate how channel variation would impact its performance."], "text_after_citation": ["channel gain remain unchanged, which is unrealistic but used as the benchmark).", "We observe that channel variation has little impact on the performance of the proposed solution, which demonstrates its robustness in dynamic wireless channel conditions."], "citing_paper_content": {"title": "Efficient Parallel Split Learning Over Resource-Constrained Wireless Edge Networks", "abstract": "The increasingly deeper neural networks hinder the democratization of privacy-enhancing distributed learning, such as federated learning (FL), to resource-constrained devices. To overcome this challenge, in this paper, we advocate the integration of edge computing paradigm and parallel split learning (PSL), allowing multiple client devices to offload substantial training workloads to an edge server via layer-wise model split. By observing that existing PSL schemes incur excessive training latency and large volume of data transmissions, we propose an innovative PSL framework, namely, efficient parallel split learning (EPSL), to accelerate model training. To be specific, EPSL parallelizes client-side model training and reduces the dimension of activations' gradients for back propagation (BP) via last-layer gradient aggregation, leading to a significant reduction in server-side training and communication latency. Moreover, by considering the heterogeneous channel conditions and computing capabilities at client devices, we jointly optimize subchannel allocation, power control, and cut layer selection to minimize the per-round latency. 
Simulation results show that the proposed EPSL framework significantly decreases the training latency needed to achieve a target accuracy compared with the state-of-the-art benchmarks, and the tailored resource management and layer split strategy can considerably reduce latency than the counterpart without optimization."}, "cited_paper_content": {"title": "Probabilistic Omnidirectional Path Loss Models For Millimeter-Wave Outdoor Communications", "abstract": "This paper presents a probabilistic omnidirectional millimeter-wave path loss model based on real-world 28 GHz and 73 GHz measurements collected in New York City. The probabilistic path loss approach uses a free space line-of-sight propagation model, and for non-line-of-sight conditions uses either a close-in free space reference distance path loss model or a floating-intercept path loss model. The probabilistic model employs a weighting function that specifies the line-of-sight probability for a given transmitter-receiver separation distance. Results show that the probabilistic path loss model offers virtually identical results whether one uses a non-line-of-sight close-in free space reference distance path loss model, with a reference distance of 1 meter, or a floating-intercept path loss model. This letter also shows that site-specific environmental information may be used to yield the probabilistic weighting function for choosing between line-of-sight and non-line-of-sight conditions."}, "keywords": ["channel model"], "citation_intent": "method"} {"citing_id": "2304.14939v1", "cited_id": "1908.05901", "section_title": "Related Work", "citation": "In a systematic review of user studies of multi-factor authentication, Das et al. 
consistently find low adoption #REFR .", "text_before_citation": ["Researchers have also found that in practice, users conflate encryption and authentication in their mental models of security indicators in the website security context #OTHEREFR .", "Our work parallels extensive research into these passive authentication indicators in browsers, and brings a user- and data-driven analysis of passive verification indicators on social media.", "User perception of online security.", "User perception of security has been studied in a wide variety of online contexts. For example, Dechand et al. study user perception of end-to-end encryption on WhatsApp and find that users largely do not trust it #OTHEREFR ."], "text_after_citation": ["Alshamsi and Andras study perception of Bitcoin usability among novice users and find that they find credit or debit cards to be more usable #OTHEREFR . Ur et al. investigate whether user perception of password security match reality and find significant differences across users' understanding of possible attacks #OTHEREFR . Perceptions of social media users have also been studied.", "However, work in this area has focused on perceptions of social media website quality #OTHEREFR , level of control over information shared #OTHEREFR , and protection from abuse and harassment #OTHEREFR .", "Usability of security features on social media platforms has only been analyzed in the context of security notices. Benson et al. find that users disclose more information in their presence #OTHEREFR . News consumption and social media usage."], "citing_paper_content": {"title": "Account Verification On Social Media: User Perceptions And Paid Enrollment", "abstract": "We study the gap between user perceptions of social media verification indicators and their actual meaning, and the type of behavior that emerges when such a gap is present. 
We use recent changes to Twitter's verification process as a unique case study wherein the meaning of a verification indicator has rapidly shifted. The study consists of a U.S. demographically-representative survey of 300 respondents and quantitative and qualitative analyses of results, and an analysis of verified Twitter accounts sampled from a large-scale dataset of 15 million Tweets collected in October 2022. The survey addresses differences in user-perceived and actual requirements for verification marks on popular social media platforms, with a focus on evolving perceptions of verification marks on Twitter. We find that more than half of survey respondents misunderstood Twitter's criteria for assigning blue verification check marks to user accounts; more than 80% of survey respondents did not understand what differentiated blue check marks from gold and grey check marks. We also note interesting correlations between respondent age and perception of verification marks. From our qualitative analysis of verified user accounts, we find that cryptocurrency promotion accounts constitute significantly more Blue subscribers than our randomly sampled control dataset, indicating that a significant number of Blue users may be leveraging the confusion between legacy and Blue verification to promote their own commodities. Finally, we provide recommendations for improving verification indicators and processes on social media."}, "cited_paper_content": {"title": "Evaluating User Perception Of Multi-Factor Authentication: A Systematic Review", "abstract": "Security vulnerabilities of traditional single factor authentication has become a major concern for security practitioners and researchers. To mitigate single point failures, new and technologically advanced Multi-Factor Authentication (MFA) tools have been developed as security solutions. However, the usability and adoption of such tools have raised concerns. 
An obvious solution can be viewed as conducting user studies to create more user-friendly MFA tools. To learn more, we performed a systematic literature review of recently published academic papers (N = 623) that primarily focused on MFA technologies. While majority of these papers (m = 300) proposed new MFA tools, only 9.1% of papers performed any user evaluation research. Our meta-analysis of user focused studies (n = 57) showed that researchers found lower adoption rate to be inevitable for MFAs, while avoidance was pervasive among mandatory use. Furthermore, we noted several reporting and methodological discrepancies in the user focused studies. We identified trends in participant recruitment that is indicative of demographic biases."}, "keywords": ["multi-factor authentication"], "citation_intent": "background"} {"citing_id": "2303.12445v1", "cited_id": "1910.10683", "section_title": "Experiments & Results", "citation": "The second one uses an existing NLP model (T5 #REFR ) to produce sentences from structural data.", "text_before_citation": ["Overall, MEDIMP with all medical prompts results in the best predictions at 2 and 4 years post-transplantation.", "Medical Prompt generation.", "To demonstrate the relevance of the proposed approach for medical prompt generation, we compare our main model with two other approaches that produce text information.", "The first one is noted as \"Manual\" and comprises all the templates indicated by the medical experts, corresponding to only one sentence per variable of interest.", "Note that this is the base of our proposed medical prompting without using the prompt expansion method described in Section 2.2."], "text_after_citation": ["For a fair comparison, we train the T5 model on the WebNLG 2020 data #OTHEREFR and infer it on our data to generate text, denoted as \"T5 WebNLG\".", "The results are summarised in Table 2 , highlighting the superiority of our method.", "The \"T5 WebNLG\" approach offers a competitive F1 for 
all the predictions, although the AUC is the lowest except for the 2 years prediction.", "We show in Appendix B examples of generated texts from these three approaches.", "\"Manual\" approach lacks diversity in the text data, and therefore the training process of our proposed approach without text augmentations is more challenging."], "citing_paper_content": {"title": "Medimp: Medical Images And Prompts For Renal Transplant Representation Learning", "abstract": "Renal transplantation emerges as the most effective solution for end-stage renal disease. Occurring from complex causes, a substantial risk of transplant chronic dysfunction persists and may lead to graft loss. Medical imaging plays a substantial role in renal transplant monitoring in clinical practice. However, graft supervision is multidisciplinary , notably joining nephrology, urology, and radiology, while identifying robust biomarkers from such high-dimensional and complex data for prognosis is challenging. In this work, taking inspiration from the recent success of Large Language Models (LLMs), we propose MEDIMP-Medical Images and Prompts-a model to learn meaningful multi-modal representations of renal transplant Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE MRI) by incorporating structural clinicobiological data after translating them into text prompts. MEDIMP is based on contrastive learning from joint text-image paired embeddings to perform this challenging task. Moreover, we propose a framework that generates medical prompts using automatic textual data augmentations from LLMs. Our goal is to learn meaningful manifolds of renal transplant DCE MRI, interesting for the prognosis of the transplant or patient status (2, 3, and 4 years after the transplant), fully exploiting the available multi-modal data in the most efficient way. 
Extensive experiments and comparisons with other renal transplant representation learning methods with limited data prove the effectiveness of MEDIMP in a relevant clinical setting, giving new directions toward medical prompts. Our code is available at https://github.com/leomlck/MEDIMP."}, "cited_paper_content": {"title": "Exploring The Limits Of Transfer Learning With A Unified Text-To-Text Transformer", "abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. 
To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code."}, "keywords": ["existing NLP model"], "citation_intent": "method"} {"citing_id": "2303.17475v1", "cited_id": "2002.12317", "section_title": "Conclusion", "citation": "The simplicity of our framework makes it also easier to study than negative sampling with the techniques described in #REFR , for instance.", "text_before_citation": ["In the earlier case all the information related to the corpus (the graph) is represented in the form of a sparse matrix, hence the offline implementation of EDRep works optimally and significantly outperformed Node2Vec both in terms of speed and of accuracy.", "In the latter case, when encoding the text into a matrix, all the complex relations between the words should be captured by the matrix structure.", "This is a non-trivial task that might penalize infrequent co-occurrences between pairs of words.", "We believe, however, that a crucial advantage of our approach lies in its high interpretability.", "For a given practical deployment of our algorithm, a practitioner only needs to define a sampling strategy that meaningfully encodes proximity for the problem at hand."], "text_after_citation": ["Moreover, our main result holds in a wider range of settings with respect to the one explored and it can be easily generalized to non symmetric, or non normalized P matrices."], "citing_paper_content": {"title": "Efficient Distributed Representations Beyond Negative Sampling", "abstract": "This article describes an efficient method to learn distributed representations, also known as embeddings. This is accomplished minimizing an objective function similar to the one introduced in the Word2Vec algorithm and later adopted in several works. The optimization computational bottleneck is the calculation of the softmax normalization constants for which a number of operations scaling quadratically with the sample size is required. 
This complexity is unsuited for large datasets and negative sampling is a popular workaround, allowing one to obtain distributed representations in linear time with respect to the sample size. Negative sampling consists, however, in a change of the loss function and hence solves a different optimization problem from the one originally proposed. Our contribution is to show that the softmax normalization constants can be estimated in linear time, allowing us to design an efficient optimization strategy to learn distributed representations. We test our approximation on two popular applications related to word and node embeddings. The results evidence competing performance in terms of accuracy with respect to negative sampling with a remarkably lower computational time."}, "cited_paper_content": {"title": "The Spectral Underpinning Of Word2Vec", "abstract": "word2vec due to Mikolov \\textit{et al.} (2013) is a word embedding method that is widely used in natural language processing. Despite its great success and frequent use, theoretical justification is still lacking. The main contribution of our paper is to propose a rigorous analysis of the highly nonlinear functional of word2vec. Our results suggest that word2vec may be primarily driven by an underlying spectral method. This insight may open the door to obtaining provable guarantees for word2vec. We support these findings by numerical simulations. 
One fascinating open question is whether the nonlinear properties of word2vec that are not captured by the spectral method are beneficial and, if so, by what mechanism."}, "keywords": ["negative sampling", "framework"], "citation_intent": "method"} {"citing_id": "2304.08320v1", "cited_id": "1801.01290", "section_title": "Rl Method Selection For Tsc-Opf", "citation": "Since the SAC algorithm is very sensitive to hyperparameters #REFR , the following study is performed based on the DDPG and TD3 algorithms.", "text_before_citation": ["In addition, the TRPO and PPO algorithms limit the policy updates based on policy similarity, slowing and stabilizing policy updates.", "On the other hand, there is currently a lack of quantitative evaluation of power flow nonconvergence, e.g., the reward in (18) is always equal to -1000 when the power flow does not converge, which cannot provide guiding information for policy updates.", "In contrast, off-policy DRL algorithms store historical experience, which has a chance to influence the training of the current policy before being removed from the replay buffer.", "Therefore, as the exploration continues, the number of experiences corresponding to convergent power flows in the replay buffer increases and the agent can easily learn how to generate actions that lead to convergent power flows.", "Therefore, off-policy DRL algorithms are selected to solve the TSC-OPF problem in this paper."], "text_after_citation": ["The application of the SAC algorithm will be one of the future research directions.", "In the DDPG and TD3 algorithms, during the training process, exploration noise is added to the actions generated by the agent, as shown in #OTHEREFR where o denotes the observation vector, a and a are vectors representing the upper and lower bounds for the action respectively, denotes the exploration rate, \u03c3 represents the noise vector that follows a normal distribution with a mean of zero and a variance of , and the clip function 
restricts the action with exploration noise to be within its upper and lower bounds.", "The exploration noise is added to explore the action space and obtain transitions. The agent structure of the DDPG algorithm is shown in Fig. 3 (a), which consists of two sets of actor-critic formats.", "The dotted lines mean that the modules and vectors are obsolete, which will be explained later."], "citing_paper_content": {"title": "On Fast-Converged Reinforcement Learning For Optimal Dispatch Of Large-Scale Power Systems Under Transient Security Constraints", "abstract": "Deep Reinforcement Learning (DRL)-based power system optimal dispatch, which is often modeled as Transient Security-Constrained Optimal Power Flow (TSC-OPF), trains efficient dispatching agents that can adapt to different scenarios and provide control strategies quickly. However, three typical issues seriously affect the training efficiency and the performance of the dispatch agent, namely, the difficulty of quantifying the transient instability level, the high dimensionality of the state space and action space, and the frequent generation of actions that correspond to non-convergent power flows during the early training stage. To address these issues, a fast-converged DRL method for TSC-OPF is proposed in this paper. Firstly, a transient security constraint transcription method based on the simulation time duration of instability is proposed to quantify the instability level. Secondly, a general method for Markov decision process modeling of TSC-OPF is proposed to decrease the dimensionality of the observation space. Finally, two general improvement techniques for off-policy DRL algorithms are proposed. A warm-up training technique is introduced to improve the efficiency of agents learning how to generate actions that lead to convergent power flows. A parallel exploration technique is adopted to improve the efficiency of agents exploring the action space. 
Based on the above studies, environments for TSC-OPF with the objectives of minimizing generation cost and minimizing control cost are constructed and dispatch agents are built and trained. The proposed method is tested in the IEEE 39-bus system and a practical 710-bus regional power grid. Test results show that the training process converges rapidly, the success rate of dispatch in both cases exceeds 99.70 percent, and the decision-making costs very little time, which verifies the effectiveness and efficiency of the proposed method."}, "cited_paper_content": {"title": "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning With A Stochastic Actor", "abstract": "Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. 
Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds."}, "keywords": ["TD3 algorithms", "hyperparameters"], "citation_intent": "method"} {"citing_id": "2303.11323v1", "cited_id": "1805.09170", "section_title": "A. Connection Laplacian", "citation": "As reported in #REFR , an intuitive interpretation of (4) is imagining the evolution of the vector field U(x, t) over time as a \"smearing out\" of the initial vector field F(x).", "text_before_citation": ["EQUATION", "EQUATION", "where", "U : M \u00d7 R^+_0 \u2192 T M and U(\u2022, t) \u2208 L^2(T M) \u2200t \u2208 R^+_0", "we denote the initial condition with U(x, 0) = F(x)."], "text_after_citation": ["In this interpretation, the role of the Connection Laplacian can be understood as a means to diffuse vectors from one tangent space to another (indeed, in the \"flat\" case it is sufficient to independently diffuse each scalar component; however, this approach fails for curved space). The solution of (4) is", "EQUATION", "which provides a way to construct tangent bundle convolution, as explained in the following section."], "citing_paper_content": {"title": "Tangent Bundle Convolutional Learning: From Manifolds To Cellular Sheaves And Back", "abstract": "In this work we introduce a convolution operation over the tangent bundle of Riemann manifolds in terms of exponentials of the Connection Laplacian operator. We define tangent bundle filters and tangent bundle neural networks (TNNs) based on this convolution operation, which are novel continuous architectures operating on tangent bundle signals, i.e. vector fields over the manifolds. Tangent bundle filters admit a spectral representation that generalizes the ones of scalar manifold filters, graph filters and standard convolutional filters in continuous time.
We then introduce a discretization procedure, both in the space and time domains, to make TNNs implementable, showing that their discrete counterpart is a novel principled variant of the very recently introduced sheaf neural networks. We formally prove that this discretized architecture converges to the underlying continuous TNN. Finally, we numerically evaluate the effectiveness of the proposed architecture on various learning tasks, both on synthetic and real data."}, "cited_paper_content": {"title": "The Vector Heat Method", "abstract": "This paper describes a method for efficiently computing parallel transport of tangent vectors on curved surfaces, or more generally, any vector-valued data on a curved manifold. More precisely, it extends a vector field defined over any region to the rest of the domain via parallel transport along shortest geodesics. This basic operation enables fast, robust algorithms for extrapolating level set velocities, inverting the exponential map, computing geometric medians and Karcher/Fr\\'echet means of arbitrary distributions, constructing centroidal Voronoi diagrams, and finding consistently ordered landmarks. Rather than evaluate parallel transport by explicitly tracing geodesics, we show that it can be computed via a short-time heat flow involving the connection Laplacian. As a result, transport can be achieved by solving three prefactored linear systems, each akin to a standard Poisson problem. Moreover, to implement the method we need only a discrete connection Laplacian, which we describe for a variety of geometric data structures (point clouds, polygon meshes, etc.). 
We also study the numerical behavior of our method, showing empirically that it converges under refinement, and augment the construction of intrinsic Delaunay triangulations (iDT) so that they can be used in the context of tangent vector field processing."}, "keywords": ["vector field"], "citation_intent": "background"} {"citing_id": "2304.08114v1", "cited_id": "1506.01497", "section_title": "Two-Stage Methods", "citation": "The two-stage HOI detection framework detects humans and objects with an off-the-shelf detector #REFR and then classifies the interaction label for each human-object pair.", "text_before_citation": [], "text_after_citation": ["After the appearance of HO-RCNN #OTHEREFR , which is a widely used multi-stream framework, many recent studies use a variety of additional information to get richer contextual features for the interaction classifier, such as spatial features #OTHEREFR , pose features #OTHEREFR , and linguistic features #OTHEREFR .", "Several studies #OTHEREFR attempted to encode global contextual information using a message passing mechanism in a graph structure. Figure 2 . Overview of our ViPLO network.", "We first detect humans and objects in a given image with Faster-RCNN #OTHEREFR , then estimate each human pose with an off-the-shelf pose estimator.", "Then, we extract features for each human and object using a ViT backbone and our novel MOA module.", "We also extract local features for each human with the estimated pose and ROIAlign #OTHEREFR ."], "citing_paper_content": {"title": "Viplo: Vision Transformer Based Pose-Conditioned Self-Loop Graph For Human-Object Interaction Detection", "abstract": "Human-Object Interaction (HOI) detection, which localizes and infers relationships between human and objects, plays an important role in scene understanding.
Although two-stage HOI detectors have advantages of high efficiency in training and inference, they suffer from lower performance than one-stage methods due to the old backbone networks and the lack of considerations for the HOI perception process of humans in the interaction classifiers. In this paper, we propose Vision Transformer based Pose-Conditioned Self-Loop Graph (ViPLO) to resolve these problems. First, we propose a novel feature extraction method suitable for the Vision Transformer backbone, called masking with overlapped area (MOA) module. The MOA module utilizes the overlapped area between each patch and the given region in the attention function, which addresses the quantization problem when using the Vision Transformer backbone. In addition, we design a graph with a pose-conditioned self-loop structure, which updates the human node encoding with local features of human joints. This allows the classifier to focus on specific human joints to effectively identify the type of interaction, which is motivated by the human perception process for HOI. As a result, ViPLO achieves the state-of-the-art results on two public benchmarks, especially obtaining a +2."}, "cited_paper_content": {"title": "Faster R-Cnn: Towards Real-Time Object Detection With Region Proposal Networks", "abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. 
The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features\u2014using the recently popular terminology of neural networks with \u2019attention\u2019 mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps ( including all steps ) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."}, "keywords": ["human-object pair"], "citation_intent": "method"} {"citing_id": "2304.05007v2", "cited_id": "1903.02837", "section_title": "Related Work", "citation": "For general LDP randomizers, #REFR shows the input-independent part is sampled by each user with probability at least e^{\u2212\u03b5_0} (they term it as total variation similarity).", "text_before_citation": ["In this model, a shuffler breaks the association between messages and their owners' identities, which allows for lower levels of randomization in the local model.", "An important aspect of the shuffle model is the analysis of the privacy amplification guarantee for shuffled messages, with tighter guarantees leading to better trade-offs between privacy and utility.", "Privacy amplification of LDP randomizers.", "The seminal work #OTHEREFR utilizes privacy amplification via subsampling #OTHEREFR to analyze the privacy amplification of shuffling, and shows that n shuffled messages satisfy (\u03b5_0\u221a(144 log(1/\u03b4)/n), \u03b4)-DP.", "Later, the privacy blanket #OTHEREFR proposes extracting an input-independent part from the output distribution, to work as a \"blanket\" to amplify privacy."], "text_after_citation":
["For specific LDP randomizers (e.g., Laplace #OTHEREFR and generalized randomized response #OTHEREFR ), the total variation similarity can be larger.", "We note that our framework also utilizes the total variation information about output distributions.", "Compared to the total variation similarity that must be shared by all output distributions, we care about the pairwise total variation bound of output distributions, which is at most (e^{\u03b5_0}\u22121)/(e^{\u03b5_0}+1) [#OTHEREFR , Theorem 2.4].", "Recently, the works #OTHEREFR and #OTHEREFR decompose output distributions into mixture distributions with 3 options, and interpret messages from other users as clones of the victim user.", "They show the shuffled messages can be deemed as post-processed information on the multiplicity of the 3 options, thus reducing the analyses of privacy amplification on shuffled messages to the indistinguishability of the multiplicities. This clone reduction is near-optimal w.r.t. the dependence on \u03b5_0 for general LDP randomizers."], "citing_paper_content": {"title": "Privacy Amplification Via Shuffling: Unified, Simplified, And Tightened", "abstract": "In decentralized settings, the shuffle model of differential privacy has emerged as a promising alternative to the classical local model. Analyzing privacy amplification via shuffling is a critical component in both single-message and multi-message shuffle protocols. However, current methods used in these two areas are distinct and specific, making them less convenient for protocol designers and practitioners. In this work, we introduce variation-ratio reduction as a unified framework for privacy amplification analyses in the shuffle model. This framework utilizes total variation bounds of local messages and probability ratio bounds of other users' blanket messages, converting them to indistinguishable levels.
Our results indicate that the framework yields tighter bounds for both single-message and multi-message encoders (e.g., with local DP, local metric DP, or general multi-message randomizers). Specifically, for a broad range of local randomizers having extremal probability design, our amplification bounds are precisely tight. We also demonstrate that variation-ratio reduction is well-suited for parallel composition in the shuffle model and results in stricter privacy accounting for common sampling-based local randomizers. Our experimental findings show that, compared to existing amplification bounds, our numerical amplification bounds can save up to 30% of the budget for single-message protocols, 75% of the budget for multi-message protocols, and 75%-95% of the budget for parallel composition. Additionally, our implementation for numerical amplification bounds has only \u00d5(n) complexity and is highly efficient in practice, taking just 2 minutes for n = 10^8 users. The code for our implementation can be found at https://github.com/wangsw/PrivacyAmplification."}, "cited_paper_content": {"title": "The Privacy Blanket Of The Shuffle Model", "abstract": "This work studies differential privacy in the context of the recently proposed shuffle model. Unlike in the local model, where the server collecting privatized data from users can track back an input to a specific user, in the shuffle model users submit their privatized inputs to a server anonymously. This setup yields a trust model which sits in between the classical curator and local models for differential privacy. The shuffle model is the core idea in the Encode, Shuffle, Analyze (ESA) model introduced by Bittau et al. (SOSP 2017). Recent work by Cheu et al. (EUROCRYPT 2019) analyzes the differential privacy properties of the shuffle model and shows that in some cases shuffled protocols provide strictly better accuracy than local protocols. Additionally, Erlingsson et al.
(SODA 2019) provide a privacy amplification bound quantifying the level of curator differential privacy achieved by the shuffle model in terms of the local differential privacy of the randomizer used by each user. In this context, we make three contributions. First, we provide an optimal single message protocol for summation of real numbers in the shuffle model. Our protocol is very simple and has better accuracy and communication than the protocols for this same problem proposed by Cheu et al. Optimality of this protocol follows from our second contribution, a new lower bound for the accuracy of private protocols for summation of real numbers in the shuffle model. The third contribution is a new amplification bound for analyzing the privacy of protocols in the shuffle model in terms of the privacy provided by the corresponding local randomizer. Our amplification bound generalizes the results by Erlingsson et al. to a wider range of parameters, and provides a whole family of methods to analyze privacy amplification in the shuffle model."}, "keywords": ["general LDP randomizers"], "citation_intent": "background"} {"citing_id": "2303.10650v1", "cited_id": "1809.08098", "section_title": "Conclusions, Related And Future Work", "citation": "It has been observed in the verification literature that neural networks often fail to satisfy logical constraints #REFR .", "text_before_citation": ["Analysis of properties of loss functions, especially smoothness #OTHEREFR or bilateral properties #OTHEREFR , is a prominent field #OTHEREFR .", "One of LDL's achievements is to expose trade-offs between satisfying desired geometric and logic properties of a loss function.", "In the future, we plan to explore further technical ideas from this field.", "Neural Network Verification.", "While this work does not attempt to verify neural networks, we draw our motivation from this area of research."], "text_after_citation": ["One of the proposed solutions is training the NN to satisfy a constraint
prior to verifying them #OTHEREFR 44] , referred to as continuous verification #OTHEREFR . LDL fits into this trend.", "Indeed, the tool Vehicle that implements LDL is also built to work with SMT-solvers and NN verifiers #OTHEREFR .", "Logics for Uncertainty and Probabilistic Logics.", "LDLs have a strong connection to fuzzy logic #OTHEREFR , as we have shown.", "Via the use of probability distributions and expectations, we draw our connection to Probabilistic Prolog and similar languages #OTHEREFR ."], "citing_paper_content": {"title": "Logic Of Differentiable Logics: Towards A Uniform Semantics Of Dl", "abstract": "Differentiable logics (DL) have recently been proposed as a method of training neural networks to satisfy logical specifications. A DL consists of a syntax in which specifications are stated and an interpretation function that translates expressions in the syntax into loss functions. These loss functions can then be used during training with standard gradient descent algorithms. The variety of existing DLs and the differing levels of formality with which they are treated makes a systematic comparative study of their properties and implementations difficult. This paper remedies this problem by suggesting a metalanguage for defining DLs that we call the Logic of Differentiable Logics, or LDL. Syntactically, it generalises the syntax of existing DLs to FOL, and for the first time introduces the formalism for reasoning about vectors and learners. Semantically, it introduces a general interpretation function that can be instantiated to define loss functions arising from different existing DLs. 
We use LDL to establish several theoretical properties of existing DLs, and to conduct their empirical study in neural network verification."}, "cited_paper_content": {"title": "Efficient Formal Safety Analysis Of Neural Networks", "abstract": "Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. Consequences of such errors can be disastrous and even potentially fatal as shown by the recent Tesla autopilot crash. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties such as robustness against adversarial perturbations within a certain $L$-norm of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network will violate the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks and the ones that can scale to larger networks suffer from high false positives and cannot produce concrete counterexamples in case of a property violation. In this paper, we present a new efficient approach for rigorously checking different safety properties of neural networks that significantly outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks that are 10$\\times$ larger than the ones supported by existing analysis techniques. 
We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training process of more robust neural networks."}, "keywords": ["neural networks"], "citation_intent": "background"} {"citing_id": "2303.06458v1", "cited_id": "1810.04805", "section_title": "Language Reconstruction", "citation": "For input corruption, we adopt the masking strategy as in BERT #REFR to randomly mask r% of the tokens of the input sentences S_i , obtaining the corrupted input sentences S_i .", "text_before_citation": [", y_{|S_i|}}, where y_0 and |S_i| denote the begin-of-sentence token and the number of tokens, respectively, and utilize the cross-entropy loss, which is widely used in natural language generation problems:", "EQUATION", "where we implement y_0 as a language-specific token following #OTHEREFR , #OTHEREFR so that the decoder can be aware of which language is to be generated.", "Data Corruption For vision-to-text, due to (i) the large variations of images and videos caused by different object attributes, occlusion, motion blur, etc. #OTHEREFR ; (ii) the great disparities between the vision and the language domains #OTHEREFR , we propose two data corruption strategies to further improve the performance and robustness of our ZeroNLG.", "In implementations, we simultaneously consider the input and feature corruptions."], "text_after_citation": ["As a result, the language reconstruction process is defined as follows:", "EQUATION", "For the feature corruption, we propose to add Gaussian noise n \u223c N(0, ) into the text features E_m(S_i) (i.e., the coordinates) of input sentences S_i , acquiring the corrupted features of input sentences E_m(S_i) = E_m(S_i)+n.
Therefore, the reconstruction process is defined as follows:", "EQUATION", "Through data corruption, we can encourage the model to learn more robust latent representations, achieving strong performances on zero-shot natural language generation."], "citing_paper_content": {"title": "Zeronlg: Aligning And Autoencoding Domains For Zero-Shot Multimodal And Multilingual Natural Language Generation", "abstract": "Natural Language Generation (NLG) accepts input data in the form of images, videos, or text and generates corresponding natural language text as output. Existing NLG methods mainly adopt a supervised approach and rely heavily on coupled data-to-text pairs. However, for many targeted scenarios and for non-English languages, sufficient quantities of labeled data are often not available. As a result, it is necessary to collect and label data-text pairs for training, which is both costly and time-consuming. To relax the dependency on labeled data of downstream tasks, we propose an intuitive and effective zero-shot learning framework, ZeroNLG, which can deal with multiple NLG tasks, including image-to-text (image captioning), video-to-text (video captioning), and text-to-text (neural machine translation), across English, Chinese, German, and French within a unified framework. ZeroNLG does not require any labeled downstream pairs for training. During training, ZeroNLG (i) projects different domains (across modalities and languages) to corresponding coordinates in a shared common latent space; (ii) bridges different domains by aligning their corresponding coordinates in this space; and (iii) builds an unsupervised multilingual auto-encoder to learn to generate text by reconstructing the input text given its coordinate in shared latent space. Consequently, during inference, based on the data-to-text pipeline, ZeroNLG can generate target sentences across different languages given the coordinate of input data in the common space. 
Within this unified framework, given visual (imaging or video) data as input, ZeroNLG can perform zero-shot visual captioning; given textual sentences as input, ZeroNLG can perform zero-shot machine translation. We present the results of extensive experiments on twelve NLG tasks, showing that, without using any labeled downstream pairs for training, ZeroNLG generates high-quality and \"believable\" outputs and significantly outperforms existing zero-shot methods."}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["input sentences"], "citation_intent": "method"} {"citing_id": "2304.12584v1", "cited_id": "1405.0312", "section_title": "A. 
PiMAE Model.", "citation": "The Vision Transformer (ViT) based encoder in PiMAE is pre-trained on the COCO dataset #REFR to improve performance.", "text_before_citation": ["The PiMAE model ( Figure 1 ) consists of three key components: (1) a Vision Transformer-based #OTHEREFR encoder-decoder architecture with a mask layer to prevent trivial solutions while estimating emitters, (2) a Convolutional Neural Network as a prior for PSF estimation #OTHEREFR , and (3) a microscopic imaging process that enforces adherence to the microscopy principle.", "Appendix A provides detailed information on the network architecture and the embedding of the physical principle.", "PiMAE requires only a few raw images for training, which is attributed to the carefully designed loss function.", "The loss function consists of two parts: one measures the difference between the raw and the reconstruction images, including the mean of the absolute difference and the multiscale structure similarity; the other part is a constraint on the PSF, including the total variation loss measuring the PSF continuity and the offset distance of the PSF's center of mass. Appendix B contains the expressions for the loss functions."], "text_after_citation": ["This pre-training relies on the self-supervised learning of a masked autoencoder, but does not incorporate any physical information (detailed in Appendix C).", "After pre-training, PiMAE loads the trained encoder parameters and undergoes self-supervised training using raw microscopic images.", "The input image size is 144 pixels, and we use the RAdam optimizer #OTHEREFR with a learning rate of 1e\u22124 and a batch size of 18. The training runs for 5e4 steps.", "Within PiMAE, the convolutional neural network, depicted in Figure 1 , is initialized randomly and takes a fixed random vector as input, outputting the predicted PSF.
Relevant details can be found in Appendix A.", "As PiMAE undergoes self-supervised training, the CNN's predicted PSF continually becomes more accurate, moving closer to the true PSF as shown in Figure 2 . The experimental setup is shown in Figure 3 ."], "citing_paper_content": {"title": "Learning Imaging Mechanism Directly From Optical Microscopy Observations", "abstract": "Optical microscopy image plays an important role in scientific research through the direct visualization of the nanoworld, where the imaging mechanism is described as the convolution of the point spread function (PSF) and emitters. Based on a priori knowledge of the PSF or equivalent PSF, it is possible to achieve more precise exploration of the nanoworld. However, it is an outstanding challenge to directly extract the PSF from microscopy images. Here, with the help of self-supervised learning, we propose a physics-informed masked autoencoder (PiMAE) that enables a learnable estimation of the PSF and emitters directly from the raw microscopy images. We demonstrate our method in synthetic data and real-world experiments with significant accuracy and noise robustness. PiMAE outperforms DeepSTORM and the Richardson-Lucy algorithm in synthetic data tasks with an average improvement of 19.6% and 50.7% (35 tasks), respectively, as measured by the normalized root mean square error (NRMSE) metric. This is achieved without prior knowledge of the PSF, in contrast to the supervised approach used by DeepSTORM and the known PSF assumption in the Richardson-Lucy algorithm.
Our method, PiMAE, provides a feasible scheme for achieving the hidden imaging mechanism in optical microscopy and has the potential to learn hidden mechanisms in many more systems."}, "cited_paper_content": {"title": "Microsoft Coco: Common Objects In Context", "abstract": "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. 
Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model."}, "keywords": ["COCO dataset"], "citation_intent": "method"} {"citing_id": "2303.07224v1", "cited_id": "1904.02216", "section_title": "Related Works", "citation": "DFANet #REFR adopted a lightweight backbone to reduce computational cost and designed cross-level aggregation for feature refinement.", "text_before_citation": ["As a fundamental task of scene understanding, semantic segmentation has been an active research area for many years #OTHEREFR , which also attracts considerable attention in the study of deep neural networks, e.g., FCN #OTHEREFR , DeepLabs #OTHEREFR and PSPNet #OTHEREFR .", "In order to obtain accurate results in real-time applications, several methods have been proposed to improve the efficiency of semantic segmentation, which we summarize as follows.", "Efficient Image Segmentation Methods.", "Many compact architectures have been proposed for efficient image segmentation."], "text_after_citation": ["DFNet #OTHEREFR utilized a partial order pruning algorithm to search segmentation models for a good trade-off between speed and accuracy.", "ICNet #OTHEREFR used a cascade fusion module and transformed part of the computation from high-resolution to low-resolution. Wang et al.
#OTHEREFR designed super-resolution learning to improve image segmentation performance.", "BiSeNets #OTHEREFR used two-stream paths for low-level details and high-level context information, respectively.", "ESPNet #OTHEREFR used an efficient spatial pyramid to accelerate the convolution computation.", "These efficient backbone networks reduce the computational burden of single-image segmentation, and can be applied to temporal or spatial frameworks in VSS."], "citing_paper_content": {"title": "Efficient Semantic Segmentation By Altering Resolutions For Compressed Videos", "abstract": "Video semantic segmentation (VSS) is a computationally expensive task due to the per-frame prediction for videos of high frame rates. In recent work, compact models or adaptive network strategies have been proposed for efficient VSS. However, they did not consider a crucial factor that affects the computational cost from the input side: the input resolution. In this paper, we propose an altering resolution framework called AR-Seg for compressed videos to achieve efficient VSS. AR-Seg aims to reduce the computational cost by using low resolution for non-keyframes. To prevent the performance degradation caused by downsampling, we design a Cross Resolution Feature Fusion (CReFF) module, and supervise it with a novel Feature Similarity Training (FST) strategy. Specifically, CReFF first makes use of motion vectors stored in a compressed video to warp features from high-resolution keyframes to low-resolution non-keyframes for better spatial alignment, and then selectively aggregates the warped features with a local attention mechanism. Furthermore, the proposed FST supervises the aggregated features with high-resolution features through an explicit similarity loss and an implicit constraint from the shared decoding layer. Extensive experiments on CamVid and Cityscapes show that AR-Seg achieves state-of-the-art performance and is compatible with different segmentation backbones.
On CamVid, AR-Seg saves 67% computational cost (measured in GFLOPs) with the PSPNet18 backbone while maintaining high segmentation accuracy."}, "cited_paper_content": {"title": "Dfanet: Deep Feature Aggregation For Real-Time Semantic Segmentation", "abstract": "This paper introduces an extremely efficient CNN architecture named DFANet for semantic segmentation under resource constraints. Our proposed network starts from a single lightweight backbone and aggregates discriminative features through sub-network and sub-stage cascade respectively. Based on the multi-scale feature propagation, DFANet substantially reduces the number of parameters, but still obtains sufficient receptive field and enhances the model learning ability, which strikes a balance between the speed and segmentation performance. Experiments on Cityscapes and CamVid datasets demonstrate the superior performance of DFANet with 8$\\times$ less FLOPs and 2$\\times$ faster than the existing state-of-the-art real-time semantic segmentation methods while providing comparable accuracy. 
Specifically, it achieves 70.3\\% Mean IOU on the Cityscapes test dataset with only 1.7 GFLOPs and a speed of 160 FPS on one NVIDIA Titan X card, and 71.3\\% Mean IOU with 3.4 GFLOPs while inferring on a higher resolution image."}, "keywords": ["crosslevel aggregation"], "citation_intent": "method"} {"citing_id": "2303.11381v1", "cited_id": "1411.7766", "section_title": "Introduction", "citation": "If \"people\" exists, we may select the celebrity model #REFR to further understand whether a celebrity appears and who he/she is.", "text_before_citation": ["Recent years have seen significant advancement for computer vision, thanks to improved network architecture #OTHEREFR , large-scale model training #OTHEREFR , and other factors.", "However, different vision problems typically require different models, which often require manual selection and composition of individual models for each use case.", "For example, when determining if an image contains \"people\", we may choose the image tagging model #OTHEREFR and check if the predicted tag list contains \"people\"."], "text_after_citation": ["One research direction is to combine the vision and language modules as one end-to-end model, such as Flamingo #OTHEREFR , PaLM-E #OTHEREFR , to provide a dialogue-based experience to the end user.", "That is, the user can use natural language to interact with the model around the image content.", "The vision module encodes vision signals into special text tokens or features that the language module can understand, enabling the system to utilize the language module for understanding user queries and providing responses.", "However, these joint finetuning approaches require a large amount of computing resources and annotated data to enable specific capabilities.", "In this work, we aim to combine existing individual vision models with the language model in a more flexible manner to tackle complicated visual understanding problems, e.g., the ones illustrated in Figure 1 ."], 
"citing_paper_content": {"title": "Mm-React : Prompting Chatgpt For Multimodal Reasoning And Action", "abstract": "Figure 1. MM-REACT allocates specialized vision experts with ChatGPT to solve challenging visual understanding tasks through multimodal reasoning and action. For example, the system could associate information from multiple uploaded receipts and calculate the total travel cost (\"Multi-Image Reasoning\"). We only highlight key information here and postpone full MM-REACT responses to Figures 4-14."}, "cited_paper_content": {"title": "Deep Learning Face Attributes In The Wild", "abstract": "Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. 
Each attribute can be well explained with a sparse linear combination of these concepts."}, "keywords": ["people", "celebrity model"], "citation_intent": "background"} {"citing_id": "2303.10816v1", "cited_id": "1906.01195", "section_title": "Related Work 5.1 Knowledge Embedding Methods", "citation": "KBAT #REFR employ Graph Neural Networks (GNN) as the encoder to aggregate multi-hop neighborhood information.", "text_before_citation": ["Knowledge embedding methods have been widely used in graph representation learning tasks and have achieved great success on knowledge base completion (a.k.a link prediction).", "Translationbased methods aim at finding the transformation relationships from source to target.", "TransE #OTHEREFR , the most representative translationbased model, projects entities and relations into a unified vector space and minimizes the energy function of triples. Following this route, many translation-based methods have emerged. TransH #OTHEREFR formulates the translating process on relation-specific hyperplanes. TransR #OTHEREFR projects entities and relations into separate spaces.", "Recently, some neural network methods have shown promising results in this task.", "ConvE #OTHEREFR and ConvKB #OTHEREFR utilize Convolutional Neural Network (CNN) to increase parameter interaction between entities and relations."], "text_after_citation": [], "citing_paper_content": {"title": "Imf: Interactive Multimodal Fusion Model For Link Prediction", "abstract": "Link prediction aims to identify potential missing triples in knowledge graphs. To get better results, some recent studies have introduced multimodal information to link prediction. However, these methods utilize multimodal information separately and neglect the complicated interaction between different modalities. In this paper, we aim at better modeling the inter-modality information and thus introduce a novel Interactive Multimodal Fusion (IMF) model to integrate knowledge from different modalities. 
To this end, we propose a two-stage multimodal fusion framework to preserve modality-specific knowledge as well as take advantage of the complementarity between different modalities. Instead of directly projecting different modalities into a unified space, our multimodal fusion module keeps the representations of different modalities independent while leveraging bilinear pooling for fusion and incorporates contrastive learning as additional constraints. Furthermore, the decision fusion module delivers the learned weighted average over the predictions of all modalities to better incorporate the complementarity of different modalities. Our approach has been demonstrated to be effective through empirical evaluations on several real-world datasets. The implementation code"}, "cited_paper_content": {"title": "Learning Attention-Based Embeddings For Relation Prediction In Knowledge Graphs", "abstract": "The recent proliferation of knowledge graphs (KGs) coupled with incomplete or partial information, in the form of missing relations (links) between entities, has fueled a lot of research on knowledge base completion (also known as relation prediction). Several recent works suggest that convolutional neural network (CNN) based models generate richer and more expressive feature embeddings and hence also perform well on relation prediction. However, we observe that these KG embeddings treat triples independently and thus fail to cover the complex and hidden information that is inherently implicit in the local neighborhood surrounding a triple. To this effect, our paper proposes a novel attention based feature embedding that captures both entity and relation features in any given entity's neighborhood. Additionally, we also encapsulate relation clusters and multihop relations in our model.
Our empirical study offers insights into the efficacy of our attention based model and we show marked performance gains in comparison to state of the art methods on all datasets."}, "keywords": ["Graph Neural Networks"], "citation_intent": "method"} {"citing_id": "2304.08283v1", "cited_id": "1802.06993", "section_title": "Blockchain Type", "citation": "Additionally, as distributed ledgers can be publicly accessed, concerns about privacy and anonymity are duly noted #REFR .", "text_before_citation": ["Public blockchains are also termed as permissionless blockchains, while private and consortium blockchains are permissioned.", "\u2022 Public Blockchains: Public blockchains, with Bitcoin and Ethereum as two typical examples, are decentralized, open to the public, and self-governed.", "Anyone can join or leave a public blockchain at any time, and all can participate in the consensus process as validators.", "Public blockchains often use incentives, such as transaction fees and mining rewards, to encourage nodes to maintain the normal operations of the network.", "However, they face challenges from malicious nodes that can launch token stealing #OTHEREFR , selfish mining #OTHEREFR , Sybil #OTHEREFR , eclipse #OTHEREFR , and various other attacks."], "text_after_citation": ["Power consumption is another significant issue in public proof-of-work-based blockchains, especially when they are deployed in a large scale.", "\u2022 Private Blockchains: Private blockchains, with Multichain #OTHEREFR , Monax [136] , and Blockstack [1] as typical examples, are used and governed by single organizations.", "Instead of being open to anyone, the access to a private blockchain is restricted and requires a verified invitation.", "As a result, private blockchains are not decentralized, but permission-based and closed, which makes it easier to manage them and provides better privacy, but sacrifices decentralization and openness.", "The operator of a private blockchain has the ability to 
override, roll back, delete, and edit blocks, which undermines blockchain's trustless property."], "citing_paper_content": {"title": "Exploring Blockchain Technology Through A Modular Lens: A Survey", "abstract": "Blockchain has attracted significant attention in recent years due to its potential to revolutionize various industries by providing trustlessness. To comprehensively examine blockchain systems, this article presents both a macro-level overview on the most popular blockchain systems, and a micro-level analysis on a general blockchain framework and its crucial components. The macro-level exploration provides a big picture on the endeavors made by blockchain professionals over the years to enhance the blockchain performance while the micro-level investigation details the blockchain building blocks for deep technology comprehension. More specifically, this article introduces a general modular blockchain analytic framework that decomposes a blockchain system into interacting modules and then examines the major modules to cover the essential blockchain components of network, consensus, and distributed ledger at the micro-level. The framework as well as the modular analysis jointly build a foundation for designing scalable, flexible, and application-adaptive blockchains that can meet diverse requirements. Additionally, this article explores popular technologies that can be integrated with blockchain to expand functionality and highlights major challenges. Such a study provides critical insights to overcome the obstacles in designing novel blockchain systems and facilitates the further development of blockchain as a digital infrastructure to service new applications. CCS Concepts: \u2022 General and reference \u2192 Surveys and overviews."}, "cited_paper_content": {"title": "A Survey On The Security Of Blockchain Systems", "abstract": "Since its inception, the blockchain technology has shown promising application prospects. 
From the initial cryptocurrency to the current smart contract, blockchain has been applied to many fields. Although there are some studies on the security and privacy issues of blockchain, there lacks a systematic examination on the security of blockchain systems. In this paper, we conduct a systematic study on the security threats to blockchain and survey the corresponding real attacks by examining popular blockchain systems. We also review the security enhancement solutions for blockchain, which could be used in the development of various blockchain systems, and suggest some future directions to stir research efforts into this area."}, "keywords": ["distributed ledgers"], "citation_intent": "background"} {"citing_id": "2304.08466v1", "cited_id": "1905.10887", "section_title": "Model", "citation": "The performance boost remains significant with fine-tuned diffusion models for synthetic data up to a factor of 4 or 5 times the size of the real ImageNet training set, a significant improvement over results reported in #REFR .", "text_before_citation": ["Models trained solely on generated samples perform worse than models trained on real data.", "Nevertheless, augmenting the real data with data generated from the fine-tuned diffusion model provides a substantial boost in performance across many different classifiers.", "creases up to nine times the amount of real data, to a total dataset size of 12M images.", "Performance with higher resolution images, however, does not continue to improve with similarly large amounts of generative data augmentation.", "Table 4 reports performance as the amount of generated data increased over the same range, up to 9\u00d7 the amount of real data, at resolutions 256\u00d7256 and 1024\u00d71024."], "text_after_citation": [], "citing_paper_content": {"title": "Synthetic Data From Diffusion Models Improves Imagenet Classification", "abstract": "Deep generative models are becoming increasingly powerful, now generating diverse high fidelity 
photo-realistic samples given text prompts. Have they reached the point where models of natural images can be used for generative data augmentation, helping to improve challenging discriminative tasks? We show that large-scale text-to-image diffusion models can be fine-tuned to produce class-conditional models with SOTA FID (1.76 at 256\u00d7256 resolution) and Inception Score (239 at 256 \u00d7 256). The model also yields a new SOTA in Classification Accuracy Scores (64.96 for 256\u00d7256 generative samples, improving to 69.24 for 1024\u00d71024 samples). Augmenting the ImageNet training set with samples from the resulting models yields significant improvements in ImageNet classification accuracy over strong ResNet and Vision Transformer baselines."}, "cited_paper_content": {"title": "Classification Accuracy Score For Conditional Generative Models", "abstract": "Deep generative models (DGMs) of images are now sufficiently mature that they produce nearly photorealistic samples and obtain scores similar to the data distribution on heuristics such as Frechet Inception Distance (FID). These results, especially on large-scale datasets such as ImageNet, suggest that DGMs are learning the data distribution in a perceptually meaningful space and can be used in downstream tasks. To test this latter hypothesis, we use class-conditional generative models from a number of model classes\u2014variational autoencoders, autoregressive models, and generative adversarial networks (GANs)\u2014to infer the class labels of real data. We perform this inference by training an image classifier using only synthetic data and using the classifier to predict labels on real data. The performance on this task, which we call Classification Accuracy Score (CAS), reveals some surprising results not identified by traditional metrics and constitute our contributions.
First, when using a state-of-the-art GAN (BigGAN-deep), Top-1 and Top-5 accuracy decrease by 27.9% and 41.6%, respectively, compared to the original data; and conditional generative models from other model classes, such as Vector-Quantized Variational Autoencoder-2 (VQ-VAE-2) and Hierarchical Autoregressive Models (HAMs), substantially outperform GANs on this benchmark. Second, CAS automatically surfaces particular classes for which generative models failed to capture the data distribution, and were previously unknown in the literature. Third, we find traditional GAN metrics such as Inception Score (IS) and FID neither predictive of CAS nor useful when evaluating non-GAN models. Furthermore, in order to facilitate better diagnoses of generative models, we open-source the proposed metric."}, "keywords": ["real ImageNet training"], "citation_intent": "result"} {"citing_id": "2304.13771v1", "cited_id": "1507.07775", "section_title": "Concluding Remarks", "citation": "He used this generalized bound to improve a uniform continuity bound for the entanglement of formation originally given by Winter in #REFR .", "text_before_citation": ["We have presented a proof of a tight uniform continuity bound for the conditional Shannon entropy.", "The bound is independent of the alphabet size of the conditioning system.", "However, we have assumed in the proof that the conditioning system has finite support.", "We proved a conjectured bound for the conditional von Neumann entropy in the special case where the two systems have the same dimension and the state with lower conditional entropy is diagonal in a maximally entangled basis.", "In #OTHEREFR , Wilde generalized our bound Eq. 
3.3.7 to quantum-classical states."], "text_after_citation": ["In #OTHEREFR , using an approach based on our proof techniques,", "Jabbour"], "citing_paper_content": {"title": "Some Problems Concerning Quantum Channels And Entropies", "abstract": "Finally, I do not believe it is possible for me to include everybody who ought to be included. To insulate myself, I declare that I am grateful for people."}, "cited_paper_content": {"title": "Tight Uniform Continuity Bounds For Quantum Entropies: Conditional Entropy, Relative Entropy Distance And Energy Constraints", "abstract": "We present a bouquet of continuity bounds for quantum entropies, falling broadly into two classes: first, a tight analysis of the Alicki\u2013Fannes continuity bounds for the conditional von Neumann entropy, reaching almost the best possible form that depends only on the system dimension and the trace distance of the states. Almost the same proof can be used to derive similar continuity bounds for the relative entropy distance from a convex set of states or positive operators. As applications, we give new proofs, with tighter bounds, of the asymptotic continuity of the relative entropy of entanglement, ER, and its regularization \\({E_R^{\\infty}}\\), as well as of the entanglement of formation, EF. Using a novel \u201cquantum coupling\u201d of density operators, which may be of independent interest, we extend the latter to an asymptotic continuity bound for the regularized entanglement of formation, aka entanglement cost, \\({E_C=E_F^{\\infty}}\\). Second, we derive analogous continuity bounds for the von Neumann entropy and conditional entropy in infinite dimensional systems under an energy constraint, most importantly systems of multiple quantum harmonic oscillators. While without an energy bound the entropy is discontinuous, it is well-known to be continuous on states of bounded energy. However, a quantitative statement to that effect seems not to have been known. 
Here, under some regularity assumptions on the Hamiltonian, we find that, quite intuitively, the Gibbs entropy at the given energy roughly takes the role of the Hilbert space dimension in the finite-dimensional Fannes inequality."}, "keywords": ["entanglement"], "citation_intent": "method"} {"citing_id": "2304.13980v1", "cited_id": "1812.02713", "section_title": "Bottom-Up Strategies", "citation": "Zhang and Wonka (2021) introduce a probabilistic embedding instead of a deterministic one, and also propose a new loss function for the clustering step, with which they achieved good performance on the PartNet dataset #REFR .", "text_before_citation": ["(2019b) introduce a linking module between the two decoder branches to exploit synergies between semantic and instance segmentation. Pham et al.", "(2019) employ a multi-value conditional random field model to jointly optimise instance and semantic labels. #OTHEREFR", "(2019) estimate vectors pointing to the potential instance centers, as in the generalised Hough transform, to support the subsequent clustering step.", "In addition to the 3D offset vector, OccuSeg #OTHEREFR also learns occupancy signals, which can guide the subsequent graph-based clustering towards better instance segmentation. #OTHEREFR", "(2019) integrate 2D birds-eye-view information into a network for joint 3D semantic and instance segmentation, in order to better exploit global context."], "text_after_citation": ["In order to mitigate imbalances in the data, which tend to harm the instance segmentation for rare categories, He et al. (2020) propose a memory-augmented network to memorise representative patterns.", "Recently, a number of studies have applied the bottom-up approach to outdoor dataset #OTHEREFR .", "In general, these are variants of the two-branch architecture described above. 
For instance #OTHEREFR", "(2020) and used spherical projection to implement a real-time, panoptic segmentation algorithm for the autonomous driving setting, while Panoptic-PolarNet #OTHEREFR used a polar bird's eye view. Their instance branch directly regresses the instance's center.", "DS-Net #OTHEREFR utilizes a dynamic shift module that can automatically adjust the kernel function to different point densities and instance sizes."], "citing_paper_content": {"title": "A Review Of Panoptic Segmentation For Mobile Mapping Point Clouds", "abstract": "3D point cloud panoptic segmentation is the combined task to (i) assign each point to a semantic class and (ii) separate the points in each class into object instances. Recently there has been an increased interest in such comprehensive 3D scene understanding, building on the rapid advances of semantic segmentation due to the advent of deep 3D neural networks. Yet, to date there is very little work about panoptic segmentation of outdoor mobile-mapping data, and no systematic comparisons. The present paper tries to close that gap. It reviews the building blocks needed to assemble a panoptic segmentation pipeline and the related literature. Moreover, a modular pipeline is set up to perform comprehensive, systematic experiments to assess the state of panoptic segmentation in the context of street mapping. As a byproduct, we also provide the first public dataset for that task, by extending the NPM3D dataset to include instance labels."}, "cited_paper_content": {"title": "Partnet: A Large-Scale Benchmark For Fine-Grained And Hierarchical Part-Level 3D Object Understanding", "abstract": "We present PartNet: a consistent, large-scale dataset of 3D objects annotated with fine-grained, instance-level, and hierarchical 3D part information. Our dataset consists of 573,585 part instances over 26,671 3D models covering 24 object categories. 
This dataset enables and serves as a catalyst for many tasks such as shape analysis, dynamic 3D scene modeling and simulation, affordance analysis, and others. Using our dataset, we establish three benchmarking tasks for evaluating 3D part recognition: fine-grained semantic segmentation, hierarchical semantic segmentation, and instance segmentation. We benchmark four state-of-the-art 3D deep learning algorithms for fine-grained semantic segmentation and three baseline methods for hierarchical semantic segmentation. We also propose a baseline method for part instance segmentation and demonstrate its superior performance over existing methods."}, "keywords": ["PartNet dataset"], "citation_intent": "background"} {"citing_id": "2304.03580v1", "cited_id": "1512.03385", "section_title": "Experimental Setup", "citation": "We use ResNet50 #REFR as the backbone with an ImageNet-pretrained model from TORCHVISION in all experiments unless specified otherwise. The AdamW [23] optimizer is used.", "text_before_citation": ["Note that we replace the official noisy annotation that contains 10% labels generated semi-automatically using #OTHEREFR with the one released by BigDetection #OTHEREFR .", "Ob-ject365 is another large-scale dataset, and it contains around 1.72M images with more than 22.8M bounding boxes over 365 categories.", "Then we finetune the pre-trained models on the COCO 2017 dataset. Implementation details.", "The classification head of METR is the form of the dot-product layer #OTHEREFR and the detailed algorithm will be summarized in supplementary materials.", "We conduct our experiments using the PyTorch #OTHEREFR deep learning framework."], "text_after_citation": ["The learning rates for the backbone and the transformer are initially set to be 1e\u22125 and 1e\u22124, respectively.", "The learning rate is dropped by a factor of 10 after 11 epochs for 12 training epochs, and after 40 epochs for 50 training epochs. 
The weight decay is set to be 1e\u22124.", "We train METR on COCO using 8 Nvidia A100 40G GPUs, and each GPU has a local batch size of 1 image only.", "For language embeddings, we select CLIP-B/16 #OTHEREFR text encoder throughout this study.", "We adopt most of the default hyper-parameters and the same data augmentation as DINO."], "citing_paper_content": {"title": "Language-Aware Multiple Datasets Detection Pretraining For Detrs", "abstract": "Pretraining on large-scale datasets can boost the performance of object detectors while the annotated datasets for object detection are hard to scale up due to the high labor cost. What we possess are numerous isolated field-specific datasets, thus, it is appealing to jointly pretrain models across an aggregation of datasets to enhance data volume and diversity. In this paper, we propose a strong framework for utilizing Multiple datasets to pretrain DETR-like detectors, termed METR, without the need for manual label spaces integration. It converts the typical multi-classification in object detection into binary classification by introducing a pre-trained language model. Specifically, we design a category extraction module for extracting potential categories involved in an image and assign these categories into different queries by language embeddings. Each query is only responsible for predicting a class-specific object. Besides, to adapt our novel detection paradigm, we propose a group bipartite matching strategy that limits the ground truths to match queries assigned to the same category. Extensive experiments demonstrate that METR achieves extraordinary results on either multi-task joint training or the pretrain & finetune paradigm. Notably, our pre-trained models have high flexible transferability and increase the performance upon various DETR-like detectors on COCO val2017 benchmark.
Codes will be available after this paper is published."}, "cited_paper_content": {"title": "Deep Residual Learning For Image Recognition", "abstract": "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers\u20148\u00d7 deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. 
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."}, "keywords": ["ImageNet-pretrained model"], "citation_intent": "method"} {"citing_id": "2303.07592v1", "cited_id": "1608.03983", "section_title": "Experimental Setup", "citation": "An initial learning rate was 0.001, exponentially decayed with a factor of 0.95 during the distillation step, and varied by SGDR #REFR during the fine-tuning step (T 0 = 2, T mult = 2).", "text_before_citation": ["Analogous to the previous works #OTHEREFR , we trained the model in an end-to-end manner by using a binary target.", "Specifically, we assigned 1 to the n frames around the end-point of wake-up word and 0 to the remainder.", "Here, the end-point was obtained by using a simple energy-based voice activity detector (VAD).", "We decided to use n = 41 (i.e., 20 additional frames each before and after the end-point of the wake-up word) based on the development set result.", "We trained the model during 5 epochs for distillation step and 50 epochs for fine-tuning step with the Adam optimizer #OTHEREFR . A batch size was set to 32 utterances."], "text_after_citation": ["We set the width multiplier \u03b1 = 1/8 and an interpolation coefficient \u03bb = 0.5. Pytorch framework #OTHEREFR was used for all experiments."], "citing_paper_content": {"title": "Lightweight Feature Encoder For Wake-Up Word Detection Based On Self-Supervised Speech Representation", "abstract": "Self-supervised learning method that provides generalized speech representations has recently received increasing attention. Wav2vec 2.0 is the most famous example, showing remarkable performance in numerous downstream speech processing tasks. Despite its success, it is challenging to use it directly for wake-up word detection on mobile devices due to its expensive computational cost. 
In this work, we propose LiteFEW, a lightweight feature encoder for wake-up word detection that preserves the inherent ability of wav2vec 2.0 with a minimum scale. In the method, the knowledge of the pre-trained wav2vec 2.0 is compressed by introducing an autoencoder-based dimensionality reduction technique and distilled to LiteFEW. Experimental results on the open-source \"Hey Snips\" dataset show that the proposed method applied to various model structures significantly improves the performance, achieving over 20% relative improvement with only 64k parameters."}, "cited_paper_content": {"title": "Sgdr: Stochastic Gradient Descent With Warm Restarts", "abstract": "Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset.
Our source code is available at https://github.com/loshchil/SGDR"}, "keywords": ["initial learning rate"], "citation_intent": "method"} {"citing_id": "2304.08811v1", "cited_id": "1803.06978", "section_title": "Experiment Setting", "citation": "In the choice of attack targets, we picked some representative commands as attack targets, #REFR .", "text_before_citation": ["To evaluate the transferability of our proposed combined adversarial attack strategy, we conducted experiments on commercial speech recognition APIs, such as those provided by iFlytek, Alibaba, and Baidu.", "These APIs provide high-level English speech recognition services that directly impact the user experience of millions of people.", "Attacks against these commercial APIs are closer to real-world attack patterns and are more sophisticated."], "text_after_citation": ["In this context, higher transferability implies a greater potential for danger.", "Also, as shown in the previous experiments, the carrier of AEs plays a significant role in transferability.", "The generation and transferability of AEs are directly influenced by the choice of carrier. In CommanderSong #OTHEREFR , Cheng Yuxuan et al.", "first used music segments as the carrier of AEs in attacks, arguing that music has the nature of common consumption, giving it native opportunities in attacks with its popularity and extensive reach.", "Attacks on music segments are likely to raise public concern."], "citing_paper_content": {"title": "Towards The Transferable Audio Adversarial Attack Via Ensemble Methods", "abstract": "In recent years, deep learning (DL) models have achieved significant progress in many domains, such as autonomous driving, facial recognition, and speech recognition. However, the vulnerability of deep learning models to adversarial attacks has raised serious concerns in the community because of their insufficient robustness and generalization. 
Also, transferable attacks have become a prominent method for black-box attacks. In this work, we explore the potential factors that impact adversarial examples (AEs) transferability in DL-based speech recognition. We also discuss the vulnerability of different DL systems and the irregular nature of decision boundaries. Our results show a remarkable difference in the transferability of AEs between speech and images, with the data relevance being low in images but opposite in speech recognition. Motivated by dropout-based ensemble approaches, we propose random gradient ensembles and dynamic gradient-weighted ensembles, and we evaluate the impact of ensembles on the transferability of AEs. The results show that the AEs created by both approaches are valid for transfer to the black box API."}, "cited_paper_content": {"title": "Improving Transferability Of Adversarial Examples With Input Diversity", "abstract": "Though CNNs have achieved the state-of-the-art performance on various vision tasks, they are vulnerable to adversarial examples --- crafted by adding human-imperceptible perturbations to clean images. However, most of the existing adversarial attacks only achieve relatively low success rates under the challenging black-box setting, where the attackers have no knowledge of the model structure and parameters. To this end, we propose to improve the transferability of adversarial examples by creating diverse input patterns. Instead of only using the original images to generate adversarial examples, our method applies random transformations to the input images at each iteration. Extensive experiments on ImageNet show that the proposed attack method can generate adversarial examples that transfer much better to different networks than existing baselines. 
By evaluating our method against top defense solutions and official baselines from NIPS 2017 adversarial competition, the enhanced attack reaches an average success rate of 73.0%, which outperforms the top-1 attack submission in the NIPS competition by a large margin of 6.6%. We hope that our proposed attack strategy can serve as a strong benchmark baseline for evaluating the robustness of networks to adversaries and the effectiveness of different defense methods in the future. Code is available at https://github.com/cihangxie/DI-2-FGSM."}, "keywords": ["attack targets"], "citation_intent": "method"} {"citing_id": "2304.00157v1", "cited_id": "1910.02550", "section_title": "A. Datasets", "citation": "ClearGrasp #REFR dataset includes both a highly realistic synthetic dataset and a real-world benchmark.", "text_before_citation": ["Transparent object reconstruction requires the ground truth of the reconstructed depth or 3D shape for evaluation.", "Therefore, datasets with ground truth of depth or 3D shapes are required for model training and evaluation.", "In this subsection, we thoroughly summarise datasets published since 2020 for transparent object reconstruction, regarding year, place of publication (Pub.), number of objects in the images (#Obj.), dataset size (#Imgs), devices, auto-collection ability and special features."], "text_after_citation": ["The synthetic dataset is rendered by using the ray-tracing Cycles rendering engine integrated into Blender, which can provide important effects for transparent objects, such as refraction and soft shadow.", "To capture the depth of transparent objects in the real world, transparent objects are sprayed with rough stone textures that can reflect light evenly and lead to better depth estimates from RGB-D cameras.", "It should be noted that ClearGrasp is the first large-scale dataset including 50k synthetic images and 286 real images for the depth reconstruction of transparent objects.", "OOD #OTHEREFR : Omniverse Object
Dataset consists of 60k synthetic images of five transparent objects from ClearGrasp #OTHEREFR .", "The Omniverse Platform and NVIDIA PhysX engine are used for rendering those images and getting natural poses of objects."], "citing_paper_content": {"title": "Robotic Perception Of Transparent Objects: A Review", "abstract": "Fig. 1. Typical applications of transparent object perception. (a) Robot assistant [1](\u00a9[2021] IEEE); (b) Autonomous robot navigation [2]; (c) Laboratory automation [3]; (d) Waste sorting and recycling [4]."}, "cited_paper_content": {"title": "Cleargrasp: 3D Shape Estimation Of Transparent Objects For Manipulation", "abstract": "Transparent objects are a common part of everyday life, yet they possess unique visual properties that make them incredibly difficult for standard 3D sensors to produce accurate depth estimates for. In many cases, they often appear as noisy or distorted approximations of the surfaces that lie behind them. To address these challenges, we present ClearGrasp -- a deep learning approach for estimating accurate 3D geometry of transparent objects from a single RGB-D image for robotic manipulation. Given a single RGB-D image of transparent objects, ClearGrasp uses deep convolutional networks to infer surface normals, masks of transparent surfaces, and occlusion boundaries. It then uses these outputs to refine the initial depth estimates for all transparent surfaces in the scene. To train and test ClearGrasp, we construct a large-scale synthetic dataset of over 50,000 RGB-D images, as well as a real-world test benchmark with 286 RGB-D images of transparent objects and their ground truth geometries. The experiments demonstrate that ClearGrasp is substantially better than monocular depth estimation baselines and is capable of generalizing to real-world images and novel objects. We also demonstrate that ClearGrasp can be applied out-of-the-box to improve grasping algorithms' performance on transparent objects. 
Code, data, and benchmarks will be released. Supplementary materials available on the project website: this https URL"}, "keywords": ["ClearGrasp dataset"], "citation_intent": "background"} {"citing_id": "2304.06719v1", "cited_id": "1903.11027", "section_title": "Main Benchmarking Results", "citation": "The performance on nuScenes-C is improved as the performance on the \"clean\" nuScenes #REFR dataset. The relation of absolute performance is close to linear.", "text_before_citation": ["In this study, we conduct a comprehensive benchmarking analysis of 26 existing BEV detectors on the nuScenes-C dataset.", "The main results of our experiments are presented in Tables 2 and 3.", "Our findings indicate that all models exhibit varying degrees of performance declines on the corruption set.", "We observed that Bright, which causes a much larger shift in pixel distribution than Motion Blur, resulted in the smallest performance drop."], "text_after_citation": ["However, when considering the relative performance, the mRR metric is more randomly distributed without a clear trend to increase.", "For most of the models, the resilience rate of Bright remains the highest.", "We notice a strong correlation of the absolute performances between nuScenes-C and the \"clean\" dataset.", "Specifically, BEV detectors that perform well on the standard dataset are also likely to perform better on the out-of-distribution dataset, as illustrated in Figure 3 (a).", "However, a closer examination of the results revealed a more complex situation."], "citing_paper_content": {"title": "Robobev: Towards Robust Bird'S Eye View Perception Under Corruptions", "abstract": "The recent advances in camera-based bird's eye view (BEV) representation exhibit great potential for in-vehicle 3D perception. Despite the substantial progress achieved on standard benchmarks, the robustness of BEV algorithms has not been thoroughly examined, which is critical for safe operations.
To bridge this gap, we introduce RoboBEV, a comprehensive benchmark suite that encompasses eight distinct corruptions, including Bright, Dark, Fog, Snow, Motion Blur, Color Quant, Camera Crash, and Frame Lost. Based on it, we undertake extensive evaluations across a wide range of BEV-based models to understand their resilience and reliability. Our findings indicate a strong correlation between absolute performance on in-distribution and out-of-distribution datasets. Nonetheless, there are considerable variations in relative performance across different approaches. Our experiments further demonstrate that pre-training and depth-free BEV transformation has the potential to enhance out-of-distribution robustness. Additionally, utilizing long and rich temporal information largely helps with robustness. Our findings provide valuable insights for designing future BEV models that can achieve both accuracy and robustness in real-world deployments. 1"}, "cited_paper_content": {"title": "Nuscenes: A Multimodal Dataset For Autonomous Driving", "abstract": "Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first published dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. 
It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online at this http URL."}, "keywords": ["absolute performance", "nuScenes-C"], "citation_intent": "result"} {"citing_id": "2303.07224v1", "cited_id": "2004.01800", "section_title": "Related Works", "citation": "With global attention mechanism, TD-Net #REFR aggregated the features from different time stamps and replaced the deep model with several shallow models distributed across the timeline.", "text_before_citation": ["Noticing the lack of information from nonkeyframes, Li et al.", "#OTHEREFR extracted shallow features from non-keyframes, and fused them into the propagated deep features by spatially variant convolution.", "To compensate for the spatial misalignment between video frames, Zhu et al. #OTHEREFR and Xu et al.", "#OTHEREFR warped the intermediate features from keyframes by optical flow to produce segmentation results for non-keyframes. Jain et al.", "#OTHEREFR fused the shallow features of non-keyframe into the warped features, and decoded them into better results."], "text_after_citation": ["All the above methods mainly reduced the depth of backbone networks, but neglected the factor of input resolution considered in this paper.", "Instead of processing the image frames as a whole, Verelst et al.", "#OTHEREFR split the frame into blocks and chose to copy or process them by a policy network.", "This block-based method reduces computational overhead from the spatial dimension, but lacks global information on nonkeyframes. Kim et al. 
#OTHEREFR attempted to improve efficiency by reducing resolution.", "But they directly used the LR segmentation results, thus suffering from severe performance degradation."], "citing_paper_content": {"title": "Efficient Semantic Segmentation By Altering Resolutions For Compressed Videos", "abstract": "Video semantic segmentation (VSS) is a computationally expensive task due to the per-frame prediction for videos of high frame rates. In recent work, compact models or adaptive network strategies have been proposed for efficient VSS. However, they did not consider a crucial factor that affects the computational cost from the input side: the input resolution. In this paper, we propose an altering resolution framework called AR-Seg for compressed videos to achieve efficient VSS. AR-Seg aims to reduce the computational cost by using low resolution for non-keyframes. To prevent the performance degradation caused by downsampling, we design a Cross Resolution Feature Fusion (CReFF) module, and supervise it with a novel Feature Similarity Training (FST) strategy. Specifically, CReFF first makes use of motion vectors stored in a compressed video to warp features from high-resolution keyframes to low-resolution non-keyframes for better spatial alignment, and then selectively aggregates the warped features with local attention mechanism. Furthermore, the proposed FST supervises the aggregated features with high-resolution features through an explicit similarity loss and an implicit constraint from the shared decoding layer. Extensive experiments on CamVid and Cityscapes show that AR-Seg achieves state-of-the-art performance and is compatible with different segmentation backbones.
On CamVid, AR-Seg saves 67% computational cost (measured in GFLOPs) with the PSPNet18 backbone while maintaining high segmentation accuracy."}, "cited_paper_content": {"title": "Temporally Distributed Networks For Fast Video Semantic Segmentation", "abstract": "We present TDNet, a temporally distributed network designed for fast and accurate video semantic segmentation. We observe that features extracted from a certain high-level layer of a deep CNN can be approximated by composing features extracted from several shallower sub-networks. Leveraging the inherent temporal continuity in videos, we distribute these sub-networks over sequential frames. Therefore, at each time step, we only need to perform a lightweight computation to extract a sub-features group from a single sub-network. The full features used for segmentation are then recomposed by application of a novel attention propagation module that compensates for geometry deformation between frames. A grouped knowledge distillation loss is also introduced to further improve the representation power at both full and sub-feature levels. Experiments on Cityscapes, CamVid, and NYUD-v2 demonstrate that our method achieves state-of-the-art accuracy with significantly faster speed and lower latency."}, "keywords": ["deep model"], "citation_intent": "method"} {"citing_id": "2304.13114v1", "cited_id": "1206.2944", "section_title": "I. 
Introduction", "citation": "Bayesian optimization is a global optimization scheme that has been commonly used to optimize neural network training parameters #REFR .", "text_before_citation": ["Other methods have utilized global optimization techniques such as branch and bound to determine the optimal transform in certain situations such as scan to model matching #OTHEREFR .", "In situations where outliers, dynamic obstacles, or noisy sensor measurements exist, these global methods cannot guarantee an optimal solution #OTHEREFR .", "The lack of consistently reliable methods for global registration motivates research targeted at more robust initialization techniques.", "In this paper, we outline a framework based on Bayesian optimization (BO) #OTHEREFR , a global optimization method, to systematically compute $T_0$, and find this approach produces more accurate alignments than state-of-the-art methods.", "It is fundamentally compatible with variants of ICP that address point correspondences, the weighting of correspondences, and other methods that focus on adjusting how the objective is constructed."], "text_after_citation": ["In practice, BO aims to find the minima or maxima of an objective function and is effective in situations where the objective is either complex, noisy, or expensive to calculate, such as in the case of minimizing the point-to-point correspondence distance.", "BO utilizes a computationally efficient probabilistic model of the objective function (see Figure 1 ) that is inexpensive to evaluate.", "As the point cloud registration problem as formulated by ICP is inherently a non-convex problem that requires expensive iterations for each initial $T_0$ estimate, it is an ideal candidate for BO.", "We demonstrate that our approach outperforms exhaustive searches for this initial estimate, as well as other \"globally optimal\" methods.", "In this work, we present an open-source Bayesian optimization-based method (BO-ICP) for determining the crucial initial
condition for ICP problems."], "citing_paper_content": {"title": "Bo-Icp: Initialization Of Iterative Closest Point Based On Bayesian Optimization", "abstract": "Typical algorithms for point cloud registration such as Iterative Closest Point (ICP) require a favorable initial transform estimate between two point clouds in order to perform a successful registration. State-of-the-art methods for choosing this starting condition rely on stochastic sampling or global optimization techniques such as branch and bound. In this work, we present a new method based on Bayesian optimization for finding the critical initial ICP transform. We provide three different configurations for our method which highlights the versatility of the algorithm to both find rapid results and refine them in situations where more runtime is available such as offline map building. Experiments are run on popular data sets and we show that our approach outperforms state-of-the-art methods when given similar computation time. Furthermore, it is compatible with other improvements to ICP, as it focuses solely on the selection of an initial transform, a starting point for all ICP-based methods."}, "cited_paper_content": {"title": "Practical Bayesian Optimization Of Machine Learning Algorithms", "abstract": "The use of machine learning algorithms frequently involves careful tuning of learning parameters and model hyperparameters. Unfortunately, this tuning is often a \"black art\" requiring expert experience, rules of thumb, or sometimes brute-force search. There is therefore great appeal for automatic approaches that can optimize the performance of any given learning algorithm to the problem at hand. In this work, we consider this problem through the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). 
We show that certain choices for the nature of the GP, such as the type of kernel and the treatment of its hyperparameters, can play a crucial role in obtaining a good optimizer that can achieve expert-level performance. We describe new algorithms that take into account the variable cost (duration) of learning algorithm experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization for many algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks."}, "keywords": ["Bayesian optimization"], "citation_intent": "method"} {"citing_id": "2303.08864v1", "cited_id": "1603.03935", "section_title": "A T E X I T S H A 1 _ B A S E 6 4 = \" Z U D 8 A P B L U T 3 F R + 8 H X N L Y U 7 A S S 3 8 = \" > A", "citation": "Therefore, designing real-time FC search algorithms critically hinges on finding an accurate and efficient solution to Q in #REFR .", "text_before_citation": ["These estimates are widely referred to as Q-values, denoted by", "$Q(S_i, a_i) \\in \\mathbb{R}$,", "where a higher $Q(S_i, a_i)$ indicates that taking an action $a_i$ from an MDP state $S_i$ yields better aggregate future rewards #OTHEREFR .", "Note that, however, when solving Q in a POMDP environment, the ordinary Q-learning algorithm is ineffective as partial observations $O_i$ are typically not reflective of the underlying POMDP state $S_i$.", "As a result, $Q(O_i, a_i) \\neq Q(S_i, a_i)$."], "text_after_citation": ["To this end, we aim to design an approach that judiciously exploits the underlying structure of each POMDP state $S_i$.", "In particular, we develop a graph recurrent Q-network (GRQN) architecture that exploits the following three key elements:"], "citing_paper_content": {"title": "Grnn-Based Real-Time Fault Chain Prediction", "abstract": "This paper proposes a data-driven graphical framework for the
real-time search of risky cascading fault chains (FCs). While identifying risky FCs is pivotal to alleviating cascading failures, the complex spatio-temporal dependencies among the components of the power system render challenges to modeling and analyzing FCs. Furthermore, the real-time search of risky FCs faces an inherent combinatorial complexity that grows exponentially with the size of the system. The proposed framework leverages the recent advances in graph recurrent neural networks to circumvent the computational complexities of the real-time search of FCs. The search process is formalized as a partially observable Markov decision process (POMDP), which is subsequently solved via a time-varying graph recurrent neural network (GRNN) that judiciously accounts for the inherent temporal and spatial structures of the data generated by the system. The key features of this structure include (i) leveraging the spatial structure of the data induced by the system topology, (ii) leveraging the temporal structure of data induced by system dynamics, and (iii) efficiently summarizing the system's history in the latent space of the GRNN. The proposed framework's efficiency is compared to the relevant literature on the IEEE 39-bus New England system and the IEEE 118-bus system."}, "cited_paper_content": {"title": "Risk Assessment Of Multi-Timescale Cascading Outages Based On Markovian Tree Search", "abstract": "In the risk assessment of cascading outages, the rationality of simulation and efficiency of computation are both of great significance. To overcome the drawback of sampling-based methods that huge computation resources are required and the shortcoming of initial contingency selection practices that the dependencies in sequences of outages are omitted, this paper proposes a novel risk assessment approach by searching on Markovian Tree. 
The Markovian tree model is reformulated from the quasi-dynamic multi-timescale simulation model proposed recently to ensure reasonable modeling and simulation of cascading outages. Then, a tree search scheme is established to avoid duplicated simulations on same cascade paths, significantly saving the computation time. To accelerate the convergence of a risk assessment, a risk estimation index is proposed to guide the search for states with major contributions to the risk, and the risk assessment is realized based on the risk estimation index with a forward tree search and backward update algorithm. The effectiveness of the proposed method is illustrated on a four-node power system, and its convergence profile as well as efficiency is demonstrated on the RTS-96 test system."}, "keywords": ["real-time FC search"], "citation_intent": "background"} {"citing_id": "2304.04742v1", "cited_id": "1902.09630", "section_title": "Revisit Detr Losses And Matching Costs", "citation": "Similar to the loss functions, the final cost includes three items, a classification cost $C_{cls}$, a box L1 cost $C_{bbox}$, and a GIOU cost $C_{GIOU}$ #REFR . We focus only on the classification cost as well.", "text_before_citation": ["Typically, a ground truth will be assigned only one prediction as the positive example.", "Predictions with no ground truths assigned will be viewed as negative examples.", "To assign predictions with ground truths, we first calculate a cost matrix $C \\in \\mathbb{R}^{N_{pred} \\times N_{gt}}$ between them.", "$N_{pred}$ and $N_{gt}$ are the numbers of predictions and ground truths.", "Then a Hungarian matching algorithm is performed on the cost matrix to assign each ground truth a prediction by minimizing the sum of costs."], "text_after_citation": ["For the $i$-th prediction and the $j$-th ground truth, the classification cost is:", "$C_{cls}(i, j) = |1 - p_i|^{\\gamma}\\,\\mathrm{BCE}(p_i, 1) - p_i^{\\gamma}\\,\\mathrm{BCE}(1 - p_i, 1)$. (2)", "The formulation is similar to the focal cost but has a little modification #OTHEREFR .", "The focal loss only encourages positive examples to predict 1, while the classification cost adds an additional penalty term to avoid it going to 0."], "citing_paper_content": {"title": "Detection Transformer With Stable Matching", "abstract": "This paper is concerned with the matching stability problem across different decoder layers in DEtection TRansformers (DETR). We point out that the unstable matching in DETR is caused by a multi-optimization path problem, which is highlighted by the one-to-one matching design in DETR. To address this problem, we show that the most important design is to use and only use positional metrics (like IOU) to supervise classification scores of positive examples. Under the principle, we propose two simple yet effective modifications by integrating positional metrics to DETR's classification loss and matching cost, named position-supervised loss and position-modulated cost. We verify our methods on several DETR variants. Our methods show consistent improvements over baselines. By integrating our methods with DINO, we achieve 50.4 and 51.5 AP on the COCO detection benchmark using ResNet-50 backbones under 1\u00d7 (12 epochs) and 2\u00d7 (24 epochs) training settings, achieving a new record under the same setting. We achieve 63.8 AP on COCO detection test-dev with a Swin-Large backbone. Our code will be made available at https://github.com/IDEA-Research/Stable-DINO. * Equal contributions. List order is random. \u2020 This work was done when Shilong Liu, Hao Zhang, Feng Li, and Hongyang Li were interns at IDEA."}, "cited_paper_content": {"title": "Generalized Intersection Over Union: A Metric And A Loss For Bounding Box Regression", "abstract": "Intersection over Union (IoU) is the most popular evaluation metric used in the object detection benchmarks.
However, there is a gap between optimizing the commonly used distance losses for regressing the parameters of a bounding box and maximizing this metric value. The optimal objective for a metric is the metric itself. In the case of axis-aligned 2D bounding boxes, it can be shown that IoU can be directly used as a regression loss. However, IoU has a plateau making it infeasible to optimize in the case of non-overlapping bounding boxes. In this paper, we address this weakness by introducing a generalized version of IoU as both a new loss and a new metric. By incorporating this generalized IoU (GIoU) as a loss into the state-of-the-art object detection frameworks, we show a consistent improvement on their performance using both the standard, IoU based, and new, GIoU based, performance measures on popular object detection benchmarks such as PASCAL VOC and MS COCO."}, "keywords": ["loss functions", "classification cost C"], "citation_intent": "background"} {"citing_id": "2304.07132v1", "cited_id": "1906.04015", "section_title": "Controlled 3D Molecular Generations", "citation": "We use the train/val/test partitions introduced in Anderson et al.
#REFR (train/val/test: 100K/18K/13K samples) for evaluation.", "text_before_citation": ["Dataset QM9 #OTHEREFR is a dataset of 130k stable and synthetically accessible organic molecules with up to 9 heavy atoms (29 atoms including hydrogens).", "In this section, we train diffusion models to generate atoms' (1) 3-dimensional coordinates; (2) types (H, C, N, O, F); and (3) integer-valued atom charges."], "text_after_citation": ["Evaluation Metrics Our goal here is to generate molecules targeting some desired properties while at the same time not harming general generation quality (e.g., molecules' validity (the proportion of atoms with right valency) and stability, etc.).", "In such a scenario, a molecule is represented as a point cloud, in which each point denotes a single atom and has its own (atom) type.", "Following #OTHEREFR , for each pair of atoms, we use the distance between them and the atoms' types to predict bonds (single, double, triple, or none) between atoms.", "In this section, we consider optimizing two desired properties: (1) quantitative estimate of druglikeness (QED) #OTHEREFR (how likely a molecule is a potential drug candidate based on marketed drug molecules) and (2) synthetic accessibility score (SA) (the difficulty of drug synthesis), which are crucial in drug discovery domain.", "A good method should have a high averaged QED and SA."], "citing_paper_content": {"title": "Towards Controllable Diffusion Models Via Reward-Guided Exploration", "abstract": "By formulating data samples' formation as a Markov denoising process, diffusion models achieve state-of-the-art performances in a collection of tasks. Recently, many variants of diffusion models have been proposed to enable controlled sample generation.
Most of these existing methods either formulate the controlling information as an input (i.e., conditional representation) for the noise approximator, or introduce a pre-trained classifier in the test-phase to guide the Langevin dynamic towards the conditional goal. However, the former line of methods only work when the controlling information can be formulated as conditional representations, while the latter requires the pre-trained guidance classifier to be differentiable. In this paper, we propose a novel framework named RGDM (Reward-Guided Diffusion Model) that guides the training-phase of diffusion models via reinforcement learning (RL). The proposed training framework bridges the objective of weighted log-likelihood and maximum entropy RL, which enables calculating policy gradients via samples from a pay-off distribution proportional to exponential scaled rewards, rather than from policies themselves. Such a framework alleviates the high gradient variances and enables diffusion models to explore for highly rewarded samples in the reverse process. Experiments on 3D shape and molecule generation tasks show significant improvements over existing conditional diffusion models."}, "cited_paper_content": {"title": "Cormorant: Covariant Molecular Neural Networks", "abstract": "We propose Cormorant, a rotationally covariant neural network architecture for learning the behavior and properties of complex many-body physical systems. We apply these networks to molecular systems with two goals: learning atomic potential energy surfaces for use in Molecular Dynamics simulations, and learning ground state properties of molecules calculated by Density Functional Theory. Some of the key features of our network are that (a) each neuron explicitly corresponds to a subset of atoms; (b) the activation of each neuron is covariant to rotations, ensuring that overall the network is fully rotationally invariant.
Furthermore, the non-linearity in our network is based upon tensor products and the Clebsch-Gordan decomposition, allowing the network to operate entirely in Fourier space. Cormorant significantly outperforms competing algorithms in learning molecular Potential Energy Surfaces from conformational geometries in the MD-17 dataset, and is competitive with other methods at learning geometric, energetic, electronic, and thermodynamic properties of molecules on the GDB-9 dataset."}, "keywords": ["(train/val/test"], "citation_intent": "method"} {"citing_id": "2303.10615v1", "cited_id": "0812.1345", "section_title": "General Method", "citation": "This implies $J_\\nu(h_\\nu(g), h_\\nu(s)) \\ge \\alpha$, which is equivalent to #REFR .", "text_before_citation": ["To show (1), let $\\alpha$ be the right-hand side of this inequality. This means that", "$\\alpha = \\min_{r \\in R_s} c^{n(s)-n(r)} J_\\nu(h_\\nu(g), h_\\nu(r))$.", "If $\\alpha = 0$ then (1) is trivially true. Otherwise, put $m = h_\\nu(g)/\\alpha$.", "Due to the bilinearity of $J_\\nu$, vector $m$ is a feasible solution to $P$.", "If the objective value of $P$ is at least 1, then (again by bilinearity) we have in particular $J_\\nu(h_\\nu(g)/\\alpha, h_\\nu(s)) \\ge 1$."], "text_after_citation": ["Note that if the objective value of the linear program is less than one but still more than zero then an exponential bound for a smaller $c$ might hold.", "This theorem also holds for any other linear representation which maps gadgets to non-negative vectors.", "The downside of a linear program is that it is usually solved by a numerical method which is not suitable for a theoretical proof.
We circumvent this by solving the dual problem.", "Note that any solution of the dual gives us a lower bound but of course suboptimal solutions will give weaker bounds.", "So for a given solution of the dual we only need to certify that it is indeed a solution which is easy and we do not need to prove optimality."], "citing_paper_content": {"title": "Counting Circuit Double Covers", "abstract": "We study a counting version of Cycle Double Cover Conjecture. We discuss why it is more interesting to count circuits (i.e., graphs isomorphic to C k for some k) instead of cycles (graphs with all degrees even). We give an almost-exponential lower-bound for graphs with a surface embedding of representativity at least 4. We also prove an exponential lower-bound for planar graphs. We conjecture that any bridgeless cubic graph has at least 2 n/2\u22121 circuit double covers and we show an infinite class of graphs for which this bound is tight."}, "cited_paper_content": {"title": "A Unified Approach To Distance-Two Colouring Of Graphs On Surfaces", "abstract": "In this paper we introduce the notion of $\\Sigma$-colouring of a graph $G$: For given subsets $\\Sigma(v)$ of neighbours of $v$, for every $v\\in V(G)$, this is a proper colouring of the vertices of $G$ such that, in addition, vertices that appear together in some $\\Sigma(v)$ receive different colours. This concept generalises the notion of colouring the square of graphs and of cyclic colouring of graphs embedded in a surface. We prove a general result for graphs embeddable in a fixed surface, which implies asymptotic versions of Wegner's and Borodin's Conjecture on the planar version of these two colourings. Using a recent approach of Havet et al., we reduce the problem to edge-colouring of multigraphs, and then use Kahn's result that the list chromatic index is close to the fractional chromatic index. 
Our results are based on a strong structural lemma for graphs embeddable in a fixed surface, which also implies that the size of a clique in the square of a graph of maximum degree $\\Delta$ embeddable in some fixed surface is at most $\\frac32\\,\\Delta$ plus a constant."}, "keywords": ["\u2265", "h \u03bd"], "citation_intent": "background"} {"citing_id": "2304.05635v1", "cited_id": "1602.05629", "section_title": "Methodology", "citation": "As for the aggregation of the global model, we adopt the weighted averaging strategy in FedAvg #REFR . Site Contrastive based Channel Selection.", "text_before_citation": ["At round t, all sites receive the same parameters (\u03c6 t\u22121 g ,\u03b8 t\u22121 g ) from the server.", "The global part is initialized with \u03c6 t\u22121 g , and the personalized part is initialized with\u03b8 t k which is obtained from the AA module based on \u03b8 t\u22121 g and the local parameters from the previous round (e.g., \u03b8 t\u22121 k for site k).", "Each site updates its model by optimizing the local objective with its own data and its site encoding c k utilized in SCR", "EQUATION", "where GRD(\u2022) denotes the local gradient-based update."], "text_after_citation": ["Personalized FL paradigms may suffer from confusion or over-personalization when data heterogeneity is low, performing even worse than traditional FL methods #OTHEREFR .", "Hence, the SCR module is designed to enhance the distance/contrast of inter-site data representations through site-contrastive learning based channel attention, which in turn facilitates personalization.", "Specifically, taking the k-th site as an example, a one-hot site encoding c k (i.e., the k-th position is 1 and others are 0) and the output feature f k from the encoder F e are given.", "c k is expanded to a length of C through two fully connected layers to obtain c * k , which is then concatenated with the global average pooled feature of f k .", "After passing through a fully connected layer 
with Sigmoid activation, the site channel attention value\u0109 k is obtained."], "citing_paper_content": {"title": "Unifying And Personalizing Weakly-Supervised Federated Medical Image Segmentation Via Adaptive Representation And Aggregation", "abstract": "Federated learning (FL) enables multiple sites to collaboratively train powerful deep models without compromising data privacy and security. The statistical heterogeneity (e.g., non-IID data and domain shifts) is a primary obstacle in FL, impairing the generalization performance of the global model. Weakly supervised segmentation, which uses sparsely-grained (i.e., point-, bounding box-, scribble-, block-wise) supervision, is increasingly being paid attention to due to its great potential of reducing annotation costs. However, there may exist label heterogeneity, i.e., different annotation forms across sites. In this paper, we propose a novel personalized FL framework for medical image segmentation, named FedICRA, which uniformly leverages heterogeneous weak supervision via adaptIve Contrastive Representation and Aggregation. Concretely, to facilitate personalized modeling and to avoid confusion, a channel selection based site contrastive representation module is employed to adaptively cluster intra-site embeddings and separate inter-site ones. To effectively integrate the common knowledge from the global model with the unique knowledge from each local model, an adaptive aggregation module is applied for updating and initializing local models at the element level. Additionally, a weakly supervised objective function that leverages a multiscale tree energy loss and a gated CRF loss is employed to generate more precise pseudo-labels and further boost the segmentation performance. Through extensive experiments on two distinct medical image segmentation tasks of different modalities, the proposed FedICRA demonstrates overwhelming performance over other state-ofthe-art personalized FL methods. 
Its performance even approaches that of fully supervised training on centralized data. Our code and data are available at https://github.com/llmir/FedICRA."}, "cited_paper_content": {"title": "Communication-Efficient Learning Of Deep Networks From Decentralized Data", "abstract": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent."}, "keywords": ["Contrastive based Channel", "aggregation"], "citation_intent": "method"} {"citing_id": "2304.05599v1", "cited_id": "1612.00552", "section_title": "I. 
Introduction", "citation": "In this regard, nonorthogonal multiple access (NOMA) is seen as a strong candidate for IoT networks #REFR since it allows multiple devices to share the same resource blocks by splitting them into the power domain.", "text_before_citation": ["For instance, in 5G new radio (NR) standards #OTHEREFR , with a 20 MHz bandwidth (i.e., maximum bandwidth in LTE legacy ), a maximum number of 92 RBs 1 can be allocated to users within a subframe.", "However, in massive IoT applications (e.g., smart agriculture), thousands of sensor or control nodes may require wireless access.", "In these cases, whether we need more bandwidth, that is costly or more complex resource allocation (RB scheduling algorithms in layer 2) solutions are required.", "Therefore, it is not possible to allocate each IoT device to an orthogonal resource to avoid interference, since a massive number of devices need to be served in small, dense areas.", "For this reason, more than one IoT devices should share a resource block to enable mMTC."], "text_after_citation": ["In this way, the spectral efficiency of the network increases and it becomes possible to serve multiple devices with the number of more than the available resource blocks #OTHEREFR .", "Accordingly, NOMA is seen as an enabler for ultra-dense networks, and tremendous efforts have been devoted to integrating NOMA in IoT applications #OTHEREFR - #OTHEREFR ."], "citing_paper_content": {"title": "Bit-Interleaved Multiple Access: Improved Fairness, Reliability, And Latency For Massive Iot Networks", "abstract": "Internet-of-Things (IoT) networks require massive connections in dense areas. Therefore, a resource efficient multiple access scheme seems inevitable to enable immense connectivity where multiple devices have to share the same resource block. Non-orthogonal multiple access (NOMA) has been considered as the strongest candidate in recent years. 
However, in this paper, by considering the practical implementation, we first provide a true power allocation (PA) constraint with finite alphabet inputs for conventional downlink NOMA and demonstrate that it cannot support massive connections in practical systems. To this end, we propose bit-interleaved multiple access (BIMA) scheme in downlink IoT networks. The proposed BIMA scheme implements bitwise multiaccess interleaving and deinterleaving at the transceiver ends and there are no strict PA constraints, unlike conventional NOMA, thus allowing a high number of devices in the same resource block. We provide a comprehensive analytical framework for BIMA by investigating all key performance indicators (KPIs) to present both information-theoretic (i.e., ergodic capacity [EC] and outage probability [OP]) and finite alphabet inputs (i.e., bit error rate [BER]) performance metrics with both instantaneous and statistical channel ordering. In addition, we define Jain's fairness index and proportional fairness index in terms of all KPIs. Based on the extensive computer simulations, we reveal that BIMA outperforms conventional NOMA significantly, with a performance gain of up to 20-30 dB in terms of KPIs in some scenarios. In other words, compared to conventional NOMA schemes, the same KPIs are met in BIMA with 20-30 dB less transmit power, which is quite promising for energy-limited use cases. Moreover, this performance gain becomes greater when more IoT devices are supported. BIMA provides a full diversity order for all IoT devices and enables the implementation of an arbitrary number of devices and modulation orders, which is crucial for IoT networks where a huge number of devices should be supported in a single resource block in dense areas. In addition to the overall performance gain, BIMA guarantees a fairness system where none of the devices gets a severely degraded performance and the sum-rate is shared in a fair manner among devices. 
It guarantees QoS satisfaction for all devices. Finally, we provide an intense complexity and latency analysis for BIMA and demonstrate that it provides lower latency compared to conventional NOMA receivers, since it allows parallel computation at the receivers and no iterative operations are required. We show that compared to conventional NOMA receivers, BIMA reduces latency by up to 350% for specific IoT devices and 170% on average."}, "cited_paper_content": {"title": "Massive Non-Orthogonal Multiple Access For Cellular Iot: Potentials And Limitations", "abstract": "The Internet of Things promises ubiquitous connectivity of everything everywhere, which represents the biggest technology trend in the years to come. It is expected that by 2020 over 25 billion devices will be connected to cellular networks; far beyond the number of devices in current wireless networks. Machine-to-machine communications aims to provide the communication infrastructure for enabling IoT by facilitating the billions of multi-role devices to communicate with each other and with the underlying data transport infrastructure without, or with little, human intervention. Providing this infrastructure will require a dramatic shift from the current protocols mostly designed for human-to-human applications. This article reviews recent 3GPP solutions for enabling massive cellular IoT and investigates the random access strategies for M2M communications, which shows that cellular networks must evolve to handle the new ways in which devices will connect and communicate with the system. 
A massive non-orthogonal multiple access technique is then presented as a promising solution to support a massive number of IoT devices in cellular networks, where we also identify its practical challenges and future research directions."}, "keywords": ["IoT networks", "nonorthogonal multiple access"], "citation_intent": "background"} {"citing_id": "2304.08851v1", "cited_id": "1205.2618", "section_title": "Model Optimization", "citation": "Since a ranked list of top-K items is required in both stages, Bayesian Personalized Ranking (BPR) #REFR pairwise learning is adopted to optimize the parameters of PEGA.", "text_before_citation": ["We leverage a two-stage training strategy to alleviate the sparsity issue of group-item interaction."], "text_after_citation": ["BPR pairwise learning aims to maximize the score difference between positive and negative items.", "Specifically, we obtain the user and the item embeddings by minimizing the user-level BPR pairwise loss L :", "EQUATION", "where O is the set of user training instances.", "Each instance , , contains a positive item that the user has interacted with and a negative item that the user hasn't interacted with yet."], "citing_paper_content": {"title": "Pega: Personality-Guided Preference Aggregator For Ephemeral Group Recommendation", "abstract": "Recently, making recommendations for ephemeral groups which contain dynamic users and few historic interactions have received an increasing number of attention. The main challenge of ephemeral group recommender is how to aggregate individual preferences to represent the group's overall preference. Score aggregation and preference aggregation are two commonly-used methods that adopt hand-craft predefined strategies and data-driven strategies, respectively. However, they neglect to take into account the importance of the individual inherent factors such as personality in the group. In addition, they fail to work well due to a small number of interactive records. 
To address these issues, we propose a Personality-Guided Preference Aggregator (PEGA) for ephemeral group recommendation. Concretely, we first adopt hyper-rectangle to define the concept of Group Personality. We then use the personality attention mechanism to aggregate group preferences. The role of personality in our approach is twofold: (1) To estimate individual users' importance in a group and provide explainability; (2) to alleviate the data sparsity issue that occurred in ephemeral groups. The experimental results demonstrate that our model significantly outperforms the state-of-the-art methods w.r.t. the score of both Recall and NDCG on Amazon and Yelp datasets. CCS CONCEPTS \u2022 Information systems \u2192 Recommender systems; \u2022 Computing methodologies \u2192 Neural networks."}, "cited_paper_content": {"title": "Bpr: Bayesian Personalized Ranking From Implicit Feedback", "abstract": "Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. 
Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion."}, "keywords": ["ranked list"], "citation_intent": "method"} {"citing_id": "2304.04874v1", "cited_id": "1502.01852", "section_title": "Implementation Details", "citation": "We train our prompt-based image captioner for 40 epochs from scratch, using the weight initialization strategy described in #REFR .", "text_before_citation": ["-Network and Training Configuration.", "We explore two different prompt-based image captioning models, i.e., SAT #OTHEREFR and GRIT #OTHEREFR to act as classifiers.", "To train our promptbased image captioner, we split the original validation set, around 10 4 images, into training, validation, and testing sets, i.e., 70%, 10%, and 20%, respectively. These splits are balanced based on the protected attribute.", "All captioning models are trained using the same training configurations mentioned in #OTHEREFR .", "The added template T is \"Therefore, the gender is [Answer]\", \"Therefore, The race is [Answer]\" and \"Therefore, the emotion is [Answer]\" for gender, race, and emotions, respectively."], "text_after_citation": ["Adam optimizer #OTHEREFR and mini-batch size of 128 are used for training all our models.", "-Software and Hardware Details.", "Our metric is implemented in Python using the PyTorch framework. All experiments are conducted using four NVIDIA V100 GPUs.", "#OTHEREFR 8.6 (5) 2.9 (5) 6.29 (8) 2.30 (5) Att2in #OTHEREFR 7.6 (4) 1.1 (3) 6.17 (6) 2.78 (6) UpDn #OTHEREFR 9.0 (7) 4.7 (6) 6.64 (9) 2.82 (7) Trans. #OTHEREFR 8.7 (6) 5.9 (8) 6.19 7"], "citing_paper_content": {"title": "Imagecaptioner 2 : Image Captioner For Image Captioning Bias Amplification Assessment", "abstract": "Most pre-trained learning systems are known to suffer from bias, which typically emerges from the data, the model, or both. 
Measuring and quantifying bias and its sources is a challenging task and has been extensively studied in image captioning. Despite the significant effort in this direction, we observed that existing metrics lack consistency in the inclusion of the visual signal. In this paper, we introduce a new bias assessment metric, dubbed ImageCaptioner 2 , for image captioning. Instead of measuring the absolute bias in the model or the data, ImageCaptioner 2 pay more attention to the bias introduced by the model w.r.t the data bias, termed bias amplification. Unlike the existing methods, which only evaluate the image captioning algorithms based on the generated captions only, ImageCaptioner 2 incorporates the image while measuring the bias. In addition, we design a formulation for measuring the bias of generated captions as promptbased image captioning instead of using language classifiers. Finally, we apply our ImageCaptioner 2 metric across 11 different image captioning architectures on three different datasets, i.e., MS-COCO caption dataset, Artemis V1, and Artemis V2, and on three different protected attributes, i.e., gender, race, and emotions. Consequently, we verify the effectiveness of our ImageCaptioner 2 metric by proposing AnonymousBench, which is a novel human evaluation paradigm for bias metrics. Our metric shows significant superiority over the recent bias metric; LIC, in terms of human alignment, where the correlation scores are 80% and 54% for our metric and LIC, respectively. The code is available at https://eslambakr.github.io/imagecaptioner2.github.io/. Recent efforts focus on estimating model bias, driven by the fact that more than balanced data is needed to create"}, "cited_paper_content": {"title": "Delving Deep Into Rectifiers: Surpassing Human-Level Performance On Imagenet Classification", "abstract": "Rectified activation units (rectifiers) are essential for state-of-the-art neural networks. 
In this work, we study rectifier neural networks for image classification from two aspects. First, we propose a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified unit. PReLU improves model fitting with nearly zero extra computational cost and little overfitting risk. Second, we derive a robust initialization method that particularly considers the rectifier nonlinearities. This method enables us to train extremely deep rectified models directly from scratch and to investigate deeper or wider network architectures. Based on the learnable activation and advanced initialization, we achieve 4.94% top-5 test error on the ImageNet 2012 classification dataset. This is a 26% relative improvement over the ILSVRC 2014 winner (GoogLeNet, 6.66% [33]). To our knowledge, our result is the first to surpass the reported human-level performance (5.1%, [26]) on this dataset."}, "keywords": ["prompt-based image captioner", "40 epochs"], "citation_intent": "method"} {"citing_id": "2303.11934v1", "cited_id": "1803.03635", "section_title": "C.2 Optimized Top-K", "citation": "There is evidence that artificial neural networks are overparameterized at the start of training and functionally sparse, as supported by results from network pruning, most notably the Lottery Ticket Hypothesis #REFR .", "text_before_citation": ["This is because the ReLU MLP is not constrained to the data manifold, having a bias term and no L 2 normalization.", "This allows it to learn weights and biases that better maximize NCL validation accuracy.", "If we perform joint training of the ConvMixer and SDM module then the ConvMixer can learn to create a manifold for SDM to tile that does maximize validation accuracy, performing on par with the ReLU MLP.", "This is what happens in a test where we train on the whole CIFAR10 dataset in the NCL setting, as long as k \u2265 250, but independent of if there are 1,000 or 10,000 neurons in the SDM layer. 
This result is shown in Fig.", "14a and suggests that the ReLU network only needs at least 250 neurons in its final layer to backpropagate gradients and achieve high prediction accuracy."], "text_after_citation": ["However, this result is again manifold dependent whereby training instead on ImageNet32, even with k = 2, 500 and 10,000 neurons still harms performance compared to a ReLU network.", "We believe this is because ImageNet32 has a dramatically more complex data manifold with \u223c 1.2M images from 1,000 different classes, compared to 50,000 images in 10 classes for CIFAR10.", "This means that even a k of 2500 is too small and harms the model's representational capacity.", "Figure 14 : Sufficiently small k values harm network training.", "ConvMixers with an SDM layer at the end trained on CIFAR10."], "citing_paper_content": {"title": "Sparse Distributed Memory Is A Continual Learner", "abstract": "Continual learning is a problem for artificial neural networks that their biological counterparts are adept at solving. Building on work using Sparse Distributed Memory (SDM) to connect a core neural circuit with the powerful Transformer model, we create a modified Multi-Layered Perceptron (MLP) that is a strong continual learner. We find that every component of our MLP variant translated from biology is necessary for continual learning. Our solution is also free from any memory replay or task information, and introduces novel methods to train sparse networks that may be broadly applicable."}, "cited_paper_content": {"title": "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks", "abstract": "Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. 
However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard technique for pruning weights naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the\"lottery ticket hypothesis\": dense, randomly-initialized feed-forward networks contain subnetworks (\"winning tickets\") that - when trained in isolation - arrive at comparable test accuracy in a comparable number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. 
Furthermore, the winning tickets we find above that size learn faster than the original network and exhibit higher test accuracy."}, "keywords": ["artificial neural networks", "network pruning"], "citation_intent": "result"} {"citing_id": "2303.09905v1", "cited_id": "1802.01886", "section_title": "Synthetic Schema Generation", "citation": "Furthermore, self-BLEU #REFR scores indicate EDA is the least effective in ensuring candidate diversity compared to other approaches.", "text_before_citation": ["Our ranking method generates increasingly lexically diverse schemata as shown by the increase in Jaccard distance across schema variants (Table 1) .", "This aspect is much more difficult to achieve with EDA without significantly affecting semantics."], "text_after_citation": ["The BLEU difference between the SGD-X variants v1 and v5 is 15.2 but smaller (0.66) for our approach.", "Hence, the PEGASUS + BART copies n-grams from the input and includes additional information.", "This information is not always meaning-preserving: City where the event is happening is paraphrased as The bustling city where the event is taking place (v5) but End date for the reservation or to find the house is paraphrased as End date for hotel reservation to allow time for a replacement both at the struck and in the run up to the event (v5).", "The self-BLEU of the SGD-X schemas decreases faster compared to the automatically generated paraphrases, suggesting that Jaccard distance increases partly due to hallucination.", "Entailment scores show that backtranslation is effective in preserving semantics."], "citing_paper_content": {"title": "More Robust Schema-Guided Dialogue State Tracking Via Tree-Based Paraphrase Ranking", "abstract": "The schema-guided paradigm overcomes scalability issues inherent in building task-oriented dialogue (TOD) agents with static ontologies. 
Instead of operating on dialogue context alone, agents have access to hierarchical schemas containing task-relevant natural language descriptions. Fine-tuned language models excel at schema-guided dialogue state tracking (DST) but are sensitive to the writing style of the schemas. We explore methods for improving the robustness of DST models. We propose a framework 1 for generating synthetic schemas which uses tree-based ranking to jointly optimise lexical diversity and semantic faithfulness. The generalisation of strong baselines is improved when augmenting their training data with prompts generated by our framework, as demonstrated by marked improvements in average joint goal accuracy (JGA) and schema sensitivity (SS) on the SGD-X benchmark."}, "cited_paper_content": {"title": "Texygen: A Benchmarking Platform For Text Generation Models", "abstract": "We introduce Texygen, a benchmarking platform to support research on open-domain text generation models. Texygen has not only implemented a majority of text generation models, but also covered a set of metrics that evaluate the diversity, the quality and the consistency of the generated texts. The Texygen platform could help standardize the research on text generation and facilitate the sharing of fine-tuned open-source implementations among researchers for their work. 
As a consequence, this would help in improving the reproductivity and reliability of future research work in text generation."}, "keywords": ["candidate diversity"], "citation_intent": "result"} {"citing_id": "2304.11029v1", "cited_id": "1910.01108", "section_title": "Pre-Training", "citation": "Before pre-training, the text encoder was initialized using DistilRoBERTa #REFR , with a maximum length of 128, and the music encoder was initialized using two settings: M3-S/512 and M3-S/1024.", "text_before_citation": [], "text_after_citation": ["Both models were trained for 40 epochs on WebMT with 6 encoder layers and 3 decoder layers, an embedding size of 768, and a mask ratio of 45%.", "Based on these two M3 encoders, we developed CLaMP-S/512 and CLaMP-S/1024.", "Both of them were trained for 20 epochs, using the AdamW optimizer #OTHEREFR with \u03b2 1 = 0.9, \u03b2 2 = 0.999, = 10 \u22128 , and a weight decay coefficient of 0.01.", "The batch size is set to 640, and the temperature \u03c4 = 0.2.", "The training process was accelerated and memory was saved by using mixed precision #OTHEREFR ."], "citing_paper_content": {"title": "Clamp: Contrastive Language-Music Pre-Training For Cross-Modal Symbolic Music Information Retrieval", "abstract": "We introduce CLaMP 1 : Contrastive Language-Music Pre-training, which learns cross-modal representations between natural language and symbolic music using a music encoder and a text encoder trained jointly with a contrastive loss. To pre-train CLaMP, we collected a large dataset of 1.4 million music-text pairs. It employed text dropout as a data augmentation technique and bar patching to efficiently represent music data which reduces sequence length to less than 10%. In addition, we developed a masked music model pre-training objective to enhance the music encoder's comprehension of musical context and structure. 
CLaMP integrates textual information to enable semantic search and zero-shot classification for symbolic music, surpassing the capabilities of previous models. To support the evaluation of semantic search and music classification, we publicly release WikiMusicText (WikiMT), a dataset of 1010 lead sheets in ABC notation, each accompanied by a title, artist, genre, and description. In comparison to state-of-the-art models that require fine-tuning, zero-shot CLaMP demonstrated comparable or superior performance on score-oriented datasets."}, "cited_paper_content": {"title": "Distilbert, A Distilled Version Of Bert: Smaller, Faster, Cheaper And Lighter", "abstract": "As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. 
Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study."}, "keywords": ["music encoder", "text encoder"], "citation_intent": "method"} {"citing_id": "2304.10343v1", "cited_id": "1904.04097", "section_title": "Introduction", "citation": "SOGATs correspond to the class of theories classified by representable map categories, which were introduced by Uemura #REFR .", "text_before_citation": ["This kind of approach does not seem suitable for the coherence theorems that we are interested in.", "The main result of this paper is a coherence theorem for type theories that is analogous to the fact the statement (1) can be deduced from the statement (2).", "It states that to establish the conservativity of the equational extension T \u2192 T E , it suffices to check that the \u221e-congruence over T freely generated by the equations of E exists and is 0-truncated.", "This 0-truncatedness condition encodes the same idea as the fact that every diagram made up of associators and unitors commutes in a freely generated monoidal category.", "In general, we work at the level of second-order generalized algebraic theories (SOGATs) and their classifying (\u03a3, \u03a0 rep )-CwFs."], "text_after_citation": ["First-order generalized algebraic theories are also SOGATs, so our results also apply to theories that are not type theories, such as monoidal categories, etc.", "A SOGAT T is specified by its classifying (\u03a3, \u03a0 rep )-CwF, also denoted by T , which can be seen as a syntactic model of some type theory.", "A SOGAT has functorial semantics in (\u03a3, \u03a0 rep )-CwFs.", "Even if one is only interested in a specific theory, working with SOGATs explicitly is advantageous.", "Indeed, many semantic conditions can instead be stated more concisely and syntactically directly at the level of the (\u03a3, \u03a0 rep )-CwF T ."], 
"citing_paper_content": {"title": "Towards Coherence Theorems For Equational Extensions Of Type Theories", "abstract": "We study the conservativity of extensions by additional strict equalities of dependent type theories (and more general second-order generalized algebraic theories). The conservativity of Extensional Type Theory over Intensional Type Theory was proven by Hofmann. Our goal is to generalize such results to type theories without the Uniqueness of Identity Proofs principles, such as variants of Homotopy Type Theory. For this purpose, we construct what is essentially the \u221e-congruence on the base theory that is freely generated by the considered equations. This induces a factorization of any equational extension, whose two factors can be studied independently. We conjecture that the first factor is always an equivalence when the base theory is well-behaved. We prove that the second factor is an equivalence when the \u221e-congruence is 0-truncated."}, "cited_paper_content": {"title": "A General Framework For The Semantics Of Type Theory", "abstract": "We propose an abstract notion of a type theory to unify the semantics of various type theories including Martin-Lof type theory, two-level type theory and cubical type theory. 
We establish basic results in the semantics of type theory: every type theory has a bi-initial model; every model of a type theory has its internal language; the category of theories over a type theory is bi-equivalent to a full sub-2-category of the 2-category of models of the type theory."}, "keywords": ["representable map categories"], "citation_intent": "background"} {"citing_id": "2303.06840v1", "cited_id": "1401.0166", "section_title": "Introduction", "citation": "While MIF can assist in diagnosis and treatment by fusing multiple medical imaging modalities for precise detection of abnormality locations #REFR .", "text_before_citation": ["Image fusion integrates essential information from multiple source images to create high-quality fused images #OTHEREFR , encompassing various source image types like digital #OTHEREFR , multi-modal #OTHEREFR , and remote sensing #OTHEREFR .", "This technology provides a clearer representation of objects and scenes, and has diverse applications such as saliency detection #OTHEREFR , object detection #OTHEREFR , and semantic segmentation #OTHEREFR .", "Among the different subcategories of image fusion, Infrared-Visible image Fusion (IVF) and Medical Image Fusion (MIF) are particularly challenging in Multi-Modality Image Fusion (MMIF) since they focus on modeling crossmodality features and preserving critical information from Figure 2 : DDFM (marked in yellow) outperforms all other methods on MSRS #OTHEREFR and RoadScene #OTHEREFR across six metrics. 
all sensors and modalities.", "Specifically, in IVF, fused images aim to retain both thermal radiation from infrared images and detailed texture information from visible images, thereby avoiding the limitations of visible images being sensitive to illumination conditions and infrared images being noisy and low-resolution."], "text_after_citation": ["There have been numerous methods devised recently to address the challenges posed by MMIF #OTHEREFR , and generative models #OTHEREFR have been extensively utilized to model the distribution of fused images and achieve satisfactory fusion effects.", "Among them, models based on Generative Adversarial Networks (GANs) #OTHEREFR are dominant. The workflow of GAN-based models, illustrated in Fig.", "1a , involves a generator that creates images containing information from source images, and a discriminator that determines whether the generated images are in a similar manifold to the source images.", "Although GAN-based methods have the ability to generate high-quality fused images, they suffer from unstable training, lack of interpretability and mode collapse, which seriously affect the quality of the generated samples.", "Moreover, as a black-box model, it is difficult to comprehend the internal mechanisms and behaviors of GANs, making it challenging to achieve controllable generation."], "citing_paper_content": {"title": "Ddfm: Denoising Diffusion Model For Multi-Modality Image Fusion", "abstract": "Multi-modality image fusion aims to combine different modalities to produce fused images that retain the complementary features of each modality, such as functional highlights and texture details. To leverage strong generative priors and address challenges such as unstable training and lack of interpretability for GAN-based generative methods, we propose a novel fusion algorithm based on the denoising diffusion probabilistic model (DDPM). 
The fusion task is formulated as a conditional generation problem under the DDPM sampling framework, which is further divided into an unconditional generation subproblem and a maximum likelihood subproblem. The latter is modeled in a hierarchical Bayesian manner with latent variables and inferred by the expectation-maximization algorithm. By integrating the inference solution into the diffusion sampling iteration, our method can generate high-quality fused images with natural image generative priors and cross-modality information from source images. Note that all we required is an unconditional pre-trained generative model, and no fine-tuning is needed. Our extensive experiments indicate that our approach yields promising fusion results in infrared-visible image fusion and medical image fusion. The code will be released."}, "cited_paper_content": {"title": "Medical Image Fusion: A Survey Of The State Of The Art", "abstract": "Medical image fusion is the process of registering and combining multiple images from single or multiple imaging modalities to improve the imaging quality and reduce randomness and redundancy in order to increase the clinical applicability of medical images for diagnosis and assessment of medical problems. Multi-modal medical image fusion algorithms and devices have shown notable achievements in improving clinical accuracy of decisions based on medical images. This review article provides a factual listing of methods and summarizes the broad scientific challenges faced in the field of medical image fusion. We characterize the medical image fusion research based on (1) the widely used image fusion methods, (2) imaging modalities, and (3) imaging of organs that are under study. 
This review concludes that even though there exist several open-ended technological and scientific challenges, the fusion of medical images has proved to be useful for advancing the clinical reliability of using medical imaging for medical diagnostics and analysis, and is a scientific discipline that has the potential to significantly grow in the coming years."}, "keywords": ["multiple medical imaging"], "citation_intent": "background"} {"citing_id": "2305.02398v1", "cited_id": "1911.11763", "section_title": "Ii. Related Work", "citation": "In our approach, inspired by SuperGlue #REFR , we further process individual object embeddings in a graph neural network which relates objects within images and across images to encode image context.", "text_before_citation": ["of object bounding boxes into feature vectors which are used to measure similarity of detected objects between views using cosine similarity.", "In Associative3D #OTHEREFR , also feature embeddings per objects are learned which are used to match objects between views.", "Both #OTHEREFR and #OTHEREFR leverage semantic and spatial information for finding 3D correspondences.", "Recently, #OTHEREFR propose to use human models to estimate correspondences among wide-baseline view changes.", "Different to object associations, keypoint matching methods yield local image correspondences between pairs of images for moderate view point changes #OTHEREFR , #OTHEREFR , #OTHEREFR ."], "text_after_citation": ["Our approach can be an alternative or complementary approach for matching objects across views to geometric or keypoint-based approaches."], "citing_paper_content": {"title": "Learning-Based Relational Object Matching Across Views", "abstract": "Intelligent robots require object-level scene understanding to reason about possible tasks and interactions with the environment.
Moreover, many perception tasks such as scene reconstruction, image retrieval, or place recognition can benefit from reasoning on the level of objects. While keypoint-based matching can yield strong results for finding correspondences for images with small to medium view point changes, for large view point changes, matching semantically on the object-level becomes advantageous. In this paper, we propose a learning-based approach which combines local keypoints with novel object-level features for matching object detections between RGB images. We train our object-level matching features based on appearance and inter-frame and cross-frame spatial relations between objects in an associative graph neural network. We demonstrate our approach in a large variety of views on realistically rendered synthetic images. Our approach compares favorably to previous state-of-the-art object-level matching approaches and achieves improved performance over a pure keypoint-based approach for large viewpoint changes."}, "cited_paper_content": {"title": "Superglue: Learning Feature Matching With Graph Neural Networks", "abstract": "This paper introduces SuperGlue, a neural network that matches two sets of local features by jointly finding correspondences and rejecting non-matchable points. Assignments are estimated by solving a differentiable optimal transport problem, whose costs are predicted by a graph neural network. We introduce a flexible context aggregation mechanism based on attention, enabling SuperGlue to reason about the underlying 3D scene and feature assignments jointly. Compared to traditional, hand-designed heuristics, our technique learns priors over geometric transformations and regularities of the 3D world through end-to-end training from image pairs. SuperGlue outperforms other learned approaches and achieves state-of-the-art results on the task of pose estimation in challenging real-world indoor and outdoor environments.
The proposed method performs matching in real-time on a modern GPU and can be readily integrated into modern SfM or SLAM systems. The code and trained weights are publicly available at https://github.com/magicleap/SuperGluePretrainedNetwork."}, "keywords": ["individual object embeddings", "graph neural network"], "citation_intent": "method"} {"citing_id": "2303.06167v1", "cited_id": "1906.02659", "section_title": "Dollar Street And Coco. Applying This Insight From", "citation": "As expected, finetuning GeoDE achieves a higher accuracy (where a prediction is correct if the object is one of the top-5 predicted labels #REFR ) of 13.1% on Dollar Street compared to finetuning ImageNet with 8.4%.", "text_before_citation": ["CelebA that relatively minor manipulations to the proportion of the dataset from underrepresented subcategories can significantly impact the performance of those subcategories, we now turn to the more complex and realistic object datasets of Dollar Street and COCO.", "We consider the task of recognizing 15 objects in Dollar Street that have corresponding objects in COCO.", "Although the object classes are the same between datasets, their visual distribution is different as COCO images largely come from only higher-income regions #OTHEREFR whereas Dollar Street was collected to be more geographically diverse.", "Nevertheless COCO images are more plentiful; to simulate this, we consider a finetuning dataset of 128 images where 90% are from COCO and 10% from Dollar Street.", "We use two pretrained models, named after the dataset each is trained on: ImageNet #OTHEREFR , where the training data is more similar to the COCO distribution, and GeoDE #OTHEREFR , which is trained on a newer and more geographically diverse dataset."], "text_after_citation": ["However, what we ultimately want to investigate is how much investing in a better finetuning dataset can help overcome the problem (Fig. 
6) .", "Thus, we manipulate the finetuning dataset (simulating the collection of more Dollar Street-like images, while keeping the overall finetuning number the same), and observe that with just 20% rather than 10% of images coming from Dollar Street, ImageNet is able to outperform the GeoDE baseline with an accuracy of 21.7%."], "citing_paper_content": {"title": "Overcoming Bias In Pretrained Models By Manipulating The Finetuning Dataset", "abstract": "Transfer learning is beneficial by allowing the expressive features of models pretrained on large-scale datasets to be finetuned for the target task of smaller, more domain-specific datasets. However, there is a concern that these pretrained models may come with their own biases which would propagate into the finetuned model. In this work, we investigate bias when conceptualized as both spurious correlations between the target task and a sensitive attribute as well as underrepresentation of a particular group in the dataset. Under both notions of bias, we find that (1) models finetuned on top of pretrained models can indeed inherit their biases, but (2) this bias can be corrected for through relatively minor interventions to the finetuning dataset, and often with a negligible impact to performance. Our findings imply that careful curation of the finetuning dataset is important for reducing biases on a downstream task, and doing so can even compensate for bias in the pretrained model."}, "cited_paper_content": {"title": "Does Object Recognition Work For Everyone?", "abstract": "The paper analyzes the accuracy of publicly available object-recognition systems on a geographically diverse dataset. This dataset contains household items and was designed to have a more representative geographical coverage than commonly used image datasets in object recognition. We find that the systems perform relatively poorly on household items that commonly occur in countries with a low household income.
Qualitative analyses suggest the drop in performance is primarily due to appearance differences within an object class (e.g., dish soap) and due to items appearing in a different context (e.g., toothbrushes appearing outside of bathrooms). The results of our study suggest that further work is needed to make object-recognition systems work equally well for people across different countries and income levels."}, "keywords": ["ImageNet"], "citation_intent": "result"} {"citing_id": "2303.02715v1", "cited_id": "1702.00758", "section_title": "Iv. Feature Type Transformations", "citation": "The term deep hashing has been coined as an umbrella term for methods which aim at extracting compact and stable representations with deep learning techniques #REFR .", "text_before_citation": ["A binary vector b of length nm can be transformed back to an integer vector q of length n by mapping consecutive chunks of m bits to their decimal representation.", "Further, b can be transformed to an integer set s of variable length.", "For instance, this feature set can consist of all indexes of 1s in the binary vector, s = {i|b i = 1} with b i \u2208 b.", "This feature type transformation has been successfully applied to feature vectors obtained by deep learning-based feature extractors, e.g. in #OTHEREFR . Compact, e.g.", "binary, representations (which additionally turn out to be beneficial for workload reduction in biometric identification systems #OTHEREFR ) can also be extracted by deep learning techniques."], "text_after_citation": ["If applied to biometric data, such methods would need to overcome intra-class variations.", "Consequently, a reliable extraction of stable feature vectors (deep hashes) would enable a subsequent application of conventional (and provable secure) cryptographic algorithms for the purpose of template protection.", "In the recent past, deep hashes have been extracted from different biometric characteristics in various ways, e.g. 
in #OTHEREFR - #OTHEREFR , including multi-biometrics #OTHEREFR .", "Most of the proposed schemes, however, are focusing on facial images and have been recently surveyed in #OTHEREFR .", "It is well-known that a single biometric characteristic, e.g."], "citing_paper_content": {"title": "Deep Learning In The Field Of Biometric Template Protection: An Overview", "abstract": "Today, deep learning represents the most popular and successful form of machine learning. Deep learning has revolutionised the field of pattern recognition, including biometric recognition. Biometric systems utilising deep learning have been shown to achieve auspicious recognition accuracy, surpassing human performance. Apart from said breakthrough advances in terms of biometric performance, the use of deep learning was reported to impact different covariates of biometrics such as algorithmic fairness, vulnerability to attacks, or template protection. Technologies of biometric template protection are designed to enable a secure and privacy-preserving deployment of biometrics. In the recent past, deep learning techniques have been frequently applied in biometric template protection systems for various purposes. This work provides an overview of how advances in deep learning take influence on the field of biometric template protection. The interrelation between improved biometric performance rates and security in biometric template protection is elaborated. Further, the use of deep learning for obtaining feature representations that are suitable for biometric template protection is discussed. 
Novel methods that apply deep learning to achieve various goals of biometric template protection are surveyed along with deep learning-based attacks."}, "cited_paper_content": {"title": "Hashnet: Deep Learning To Hash By Continuation", "abstract": "Learning to hash has been widely applied to approximate nearest neighbor search for large-scale multimedia retrieval, due to its computation efficiency and retrieval quality. Deep learning to hash, which improves retrieval quality by end-to-end representation learning and hash encoding, has received increasing attention recently. Subject to the ill-posed gradient difficulty in the optimization with sign activations, existing deep learning to hash methods need to first learn continuous representations and then generate binary hash codes in a separated binarization step, which suffer from substantial loss of retrieval quality. This work presents HashNet, a novel deep architecture for deep learning to hash by continuation method with convergence guarantees, which learns exactly binary hash codes from imbalanced similarity data. The key idea is to attack the ill-posed gradient problem in optimizing deep networks with non-smooth binary activations by continuation method, in which we begin from learning an easier network with smoothed activation function and let it evolve during the training, until it eventually goes back to being the original, difficult to optimize, deep network with the sign activation function. 
Comprehensive empirical evidence shows that HashNet can generate exactly binary hash codes and yield state-of-the-art multimedia retrieval performance on standard benchmarks."}, "keywords": ["deep hashing"], "citation_intent": "background"} {"citing_id": "2303.15015v1", "cited_id": "1612.00410", "section_title": "Introduction", "citation": "Based on this assumption, we design a new message passing mechanism by resorting to information bottleneck #REFR to only propagate class-agnostic knowledge between nodes of different classes.", "text_before_citation": ["If simply applying these methods to graph-structured data by individually treating each node, the topological structure and the interaction between nodes will be ignored. Recently, #OTHEREFR", "(2020a) ; #OTHEREFR propose to overcome catastrophic forgetting for graph data.", "However, they focus on static graph snapshots, and utilize static GNN for each snapshot, thus largely ignoring fine-grained temporal topological information.", "In this paper, we put forward the first class-incremental learning approach towards open temporal dynamic graphs, called OTGNet.", "To mitigate the issue of heterophily propagation, we assume the information of a node can be disentangled into class-relevant and class-agnostic one."], "text_after_citation": ["In this way, we can well avoid transferring conflictive information.", "To prevent catastrophic knowledge forgetting over old classes, we propose to select representative sub-graph structures generated from old classes, and incorporate them into the learning process of new classes.", "Previous works #OTHEREFR point out triad structure (triangle-shape structure) is a fundamental element of temporal graph and can capture evolution patterns.", "Motivated by this, we devise a value function to select not only important but also diverse triad structures, and replay them for continual learning.", "Due to the combinational property, optimizing the value function is NP-hard."],
"citing_paper_content": {"title": "Towards Open Temporal Graph Neural Networks", "abstract": "Graph neural networks (GNNs) for temporal graphs have recently attracted increasing attention, where a common assumption is that the class set for nodes is closed. However, in real-world scenarios, it often faces the open set problem with the dynamically increased class set as the time passes by. This will bring two big challenges to the existing temporal GNN methods: (i) How to dynamically propagate appropriate information in an open temporal graph, where new class nodes are often linked to old class nodes. This case will lead to a sharp contradiction. This is because typical GNNs are prone to make the embeddings of connected nodes become similar, while we expect the embeddings of these two interactive nodes to be distinguishable since they belong to different classes. (ii) How to avoid catastrophic knowledge forgetting over old classes when learning new classes occurred in temporal graphs. In this paper, we propose a general and principled learning approach for open temporal graphs, called OTGNet, with the goal of addressing the above two challenges. We assume the knowledge of a node can be disentangled into class-relevant and class-agnostic one, and thus explore a new message passing mechanism by extending the information bottleneck principle to only propagate class-agnostic knowledge between nodes of different classes, avoiding aggregating conflictive information. Moreover, we devise a strategy to select both important and diverse triad sub-graph structures for effective class-incremental learning. Extensive experiments on three real-world datasets of different domains demonstrate the superiority of our method, compared to the baselines."}, "cited_paper_content": {"title": "Deep Variational Information Bottleneck", "abstract": "We present a variational approximation to the information bottleneck of Tishby et al. (1999).
This variational approach allows us to parameterize the information bottleneck model using a neural network and leverage the reparameterization trick for efficient training. We call this method \"Deep Variational Information Bottleneck\", or Deep VIB. We show that models trained with the VIB objective outperform those that are trained with other forms of regularization, in terms of generalization performance and robustness to adversarial attack."}, "keywords": ["class-agnostic knowledge", "information bottleneck"], "citation_intent": "method"} {"citing_id": "2305.01072v1", "cited_id": "1601.04037", "section_title": "D. Re-Timing", "citation": "The re-timing problem #REFR is convex, since the objective is convex, the first constraint is linear, and the second constraint can be expressed as 2(N \u2212 1) linear inequalities.", "text_before_citation": ["Specifically, we limit the maximum change in adjacent scaling factors as |\u03b7 j+1 \u2212 \u03b7 j | \u2264 \u03ba, where \u03ba > 0.", "Overall, our re-timing problem is then", "minimize J 1 + J 2 subject to \u2211 N j=1 \u03b7 j T j = T, |\u03b7 j+1 \u2212 \u03b7 j | \u2264 \u03ba, j = 1, . . . , N \u2212 1. (19)", "The variables are the factors \u03b7 1 , . . 
.", ", \u03b7 N , which are subject to the implicit constraint \u03b7 j > 0."], "text_after_citation": ["More precisely, it can be verified that problem #OTHEREFR is representable as an SOCP [39, \u00a72.3] .", "Given that this problem has only N variables and sparse structure, it can be solved extremely quickly.", "After solving problem #OTHEREFR previous times and path.", "Independently of the success of the iteration, we then decrease the value of \u03ba.", "This process is repeated until \u03ba becomes smaller than a fixed tolerance \u03b5 > 0."], "citing_paper_content": {"title": "Fast Path Planning Through Large Collections Of Safe Boxes", "abstract": "We present a fast algorithm for the design of smooth paths (or trajectories) that are constrained to lie in a collection of axis-aligned boxes. We consider the case where the number of these safe boxes is large, and basic preprocessing of them (such as finding their intersections) can be done offline. At runtime we quickly generate a smooth path between given initial and terminal positions. Our algorithm designs trajectories that are guaranteed to be safe at all times, and it detects infeasibility whenever such a trajectory does not exist. Our algorithm is based on two subproblems that we can solve very efficiently: finding a shortest path in a weighted graph, and solving (multiple) convex optimal control problems. We demonstrate the proposed path planner on large-scale numerical examples, and we provide an efficient open-source software implementation, fastpathplanning."}, "cited_paper_content": {"title": "Funnel Libraries For Real-Time Robust Feedback Motion Planning", "abstract": "We consider the problem of generating motion plans for a robot that are guaranteed to succeed despite uncertainty in the environment, parametric model uncertainty, and disturbances. 
Furthermore, we consider scenarios where these plans must be generated in real-time, because constraints such as obstacles in the environment may not be known until they are perceived (with a noisy sensor) at runtime. Our approach is to pre-compute a library of \"funnels\" along different maneuvers of the system that the state is guaranteed to remain within (despite bounded disturbances) when the feedback controller corresponding to the maneuver is executed. We leverage powerful computational machinery from convex optimization (sums-of-squares programming in particular) to compute these funnels. The resulting funnel library is then used to sequentially compose motion plans at runtime while ensuring the safety of the robot. A major advantage of the work presented here is that by explicitly taking into account the effect of uncertainty, the robot can evaluate motion plans based on how vulnerable they are to disturbances. We demonstrate and validate our method using extensive hardware experiments on a small fixed-wing airplane avoiding obstacles at high speed (~12 mph), along with thorough simulation experiments of ground vehicle and quadrotor models navigating through cluttered environments. 
To our knowledge, these demonstrations constitute one of the first examples of provably safe and robust control for robotic systems with complex nonlinear dynamics that need to plan in real-time in environments with complex geometric constraints."}, "keywords": ["first constraint"], "citation_intent": "background"} {"citing_id": "2303.15222v1", "cited_id": "1612.00337", "section_title": "On The Complex Plane Region", "citation": "We compare the results with the AAA method #REFR , which is based on randomly sampled points within the region of interest and can produce varying performances.", "text_before_citation": ["In this subsection, we present two numerical examples on a square region and an annulus, respectively."], "text_after_citation": ["To account for this variability, we conduct five tests and use the median result for comparison.", "The first example is on the square region [\u22120.5, 0.5] \u00d7 [\u22120.5, 0.5] with f (z) = exp(1/(5.1 2 + (10z) 2 )), which has two essential singularities at \u00b10.51i.", "In Figure 8 , the AAA method converges rapidly and reaches the error tolerance of 10 \u221210 with 10 4 random sample points.", "However, the method does not accurately represent the drastic local changes near the singularities with only 10 4 random sample points.", "In contrast, the rational interpolation with F = D(0.510001i, 10 \u22126 ) \u222a D(\u22120.510001i, 10 \u22126 ) converges exponentially as n increases."], "citing_paper_content": {"title": "Barycentric Interpolation Based On Equilibrium Logarithmic Potential", "abstract": "A novel barycentric interpolation algorithm with a specific exponential convergence rate is designed for analytic functions defined on the complex plane, with singularities located near the interpolation region, where the region is compact and can be disconnected or multiconnected. 
The core of the method is the efficient computation of the interpolation nodes and poles using discrete distributions that approximate the equilibrium logarithmic potential, achieved by solving a Symm's integral equation. It takes different strategies to distribute the poles for isolated singularities and branch points, respectively. In particular, if poles are not considered, it derives a polynomial interpolation with exponential convergence. Numerical experiments illustrate the superior performance of the proposed method."}, "cited_paper_content": {"title": "The Aaa Algorithm For Rational Approximation", "abstract": "We introduce a new algorithm for approximation by rational functions on a real or complex set of points, implementable in 40 lines of Matlab and requiring no user input parameters. Even on a disk or interval the algorithm may outperform existing methods, and on more complicated domains it is especially competitive. The core ideas are (1) representation of the rational approximant in barycentric form with interpolation at certain support points and (2) greedy selection of the support points to avoid exponential instabilities. The name AAA stands for \"adaptive Antoulas--Anderson\" in honor of the authors who introduced a scheme based on (1). We present the core algorithm with a Matlab code and nine applications and describe variants targeted at problems of different kinds. Comparisons are made with vector fitting, RKFIT, and other existing methods for rational approximation."}, "keywords": ["randomly sampled points", "AAA method"], "citation_intent": "result"} {"citing_id": "2303.00968v2", "cited_id": "2002.10764", "section_title": "Related Work", "citation": "However, unlike our work and the work of Patro et al.
#REFR , S\u00fchr et al.", "text_before_citation": ["It is possible for a user with unique tastes to receive low utility recommendations and still not prefer another user's recommendation lists.", "Also, our fairness formulation extends beyond the users receiving recommendations to providers of recommended items and envy-freeness provides no way to compare users who are getting different types of benefits from a system.", "In addition our fairness definitions are dynamic, a case not considered by #OTHEREFR .", "Like Patro et al. #OTHEREFR , the work of S\u00fchr et al.", "#OTHEREFR investigates fairness in two-sided platforms, specifically those like Uber or Lyft where income opportunities are allocated to drivers."], "text_after_citation": ["#OTHEREFR take proportionality as their definition of fairness, specifically proportionality with respect to time in a dynamic setting, and ensure that there is a fair distribution of income to the provider side of the platform.", "Freeman et al.", "#OTHEREFR investigate what they call dynamic social choice functions in settings where a fixed set of agents select a single item to share over a series of time steps.", "The work focuses on overall utility to the agents instead of considering the multiple sides of the recommendation interaction.", "Their problem is fundamentally a voting problem since all agents share the result, whereas we are focused on personalized recommendation."], "citing_paper_content": {"title": "Dynamic Fairness-Aware Recommendation Through Multi-Agent Social Choice", "abstract": "Algorithmic fairness in the context of personalized recommendation presents significantly different challenges to those commonly encountered in classification tasks. Researchers studying classification have generally considered fairness to be a matter of achieving equality of outcomes between a protected and unprotected group, and built algorithmic interventions on this basis. 
We argue that fairness in real-world application settings in general, and especially in the context of personalized recommendation, is much more complex and multi-faceted, requiring a more general approach. We propose a model to formalize multistakeholder fairness in recommender systems as a two stage social choice problem. In particular, we express recommendation fairness as a novel combination of an allocation and an aggregation problem, which integrate both fairness concerns and personalized recommendation provisions, and derive new recommendation techniques based on this formulation. Simulations demonstrate the ability of the framework to integrate multiple fairness concerns in a dynamic way. CCS Concepts: \u2022 Information systems \u2192 Recommender systems; \u2022 Computing methodologies \u2192 Multi-agent systems; \u2022 Social and professional topics \u2192 User characteristics."}, "cited_paper_content": {"title": "Fairrec: Two-Sided Fairness For Personalized Recommendations In Two-Sided Platforms", "abstract": "We investigate the problem of fair recommendation in the context of two-sided online platforms, comprising customers on one side and producers on the other. Traditionally, recommendation services in these platforms have focused on maximizing customer satisfaction by tailoring the results according to the personalized preferences of individual customers. However, our investigation reveals that such customer-centric design may lead to unfair distribution of exposure among the producers, which may adversely impact their well-being. On the other hand, a producer-centric design might become unfair to the customers. Thus, we consider fairness issues that span both customers and producers. Our approach involves a novel mapping of the fair recommendation problem to a constrained version of the problem of fairly allocating indivisible goods. 
Our proposed FairRec algorithm guarantees at least Maximin Share (MMS) of exposure for most of the producers and Envy-Free up to One item (EF1) fairness for every customer. Extensive evaluations over multiple real-world datasets show the effectiveness of FairRec in ensuring two-sided fairness while incurring a marginal loss in the overall recommendation quality."}, "keywords": ["S\u00fchr"], "citation_intent": "result"} {"citing_id": "2303.10770v1", "cited_id": "1507.07629", "section_title": "Model Architecture", "citation": "Each RN in the R in layer processes input spikes asynchronously from a corresponding pixel in the event camera following equation #REFR in real time without preprocessing.", "text_before_citation": ["The network is formed sequentially by an RN layer (R in ) as a feature descriptor for local temporal encoding, Convolution layers (C 1 -C 4 ), a spike conversion (SC) layer, another RN layer (R f ) for global temporal encoding and fully-connected layers (FC, F 1 -F 2 ) for classification.", "Figure 1 illustrates the overall architecture of the proposed network.", "The R in layer has the same size as the inputs."], "text_after_citation": ["Examples of the output generated by R in for DVS Lip inputs are shown in the left bottom insets of Figure 1 .", "Depending on the application, different temporal resolutions can be chosen by adjusting the time constant \u03c4 of equation 2 and the frequency of R in output acquisitions.", "A shorter acquisition interval creates more frequent activations for the following layers.", "A shorter time constant and more frequent acquisitions produce temporally finer outputs, while a longer time constant and less frequent acquisitions produce spatially more detailed outputs owing to the slower decay.", "#OTHEREFR To keep the discussions simple, here we use the same time constant and acquisition frequency for both datasets."], "citing_paper_content": {"title": "Retinanet: Reservoir-Enabled Time Integrated Attention 
Network For Event-Based Video Processing", "abstract": "Event-based cameras are inspired by the sparse and asynchronous spike representation of the biological visual system. However, processing the event data requires either using expensive feature descriptors to transform spikes into frames, or using spiking neural networks that are difficult to train. In this work, we propose a neural network architecture based on simple convolution layers integrated with dynamic temporal encoding reservoirs with low hardware and training costs. The Reservoir-enabled Time Integrated Attention Network (RetinaNet) allows the network to efficiently process asynchronous temporal features, and achieves the highest accuracy of 99.2% for DVS128 Gesture reported to date, and one of the highest accuracies of 67.5% for the DVS Lip dataset at a much smaller network size. By leveraging the internal dynamics of memristors, asynchronous temporal feature encoding can be implemented at very low hardware cost without preprocessing or dedicated memory and arithmetic units. The use of simple DNN blocks and backpropagation based training rules further reduces its implementation cost. Code will be publicly available."}, "cited_paper_content": {"title": "Converting Static Image Datasets To Spiking Neuromorphic Datasets Using Saccades", "abstract": "Creating datasets for Neuromorphic Vision is a challenging task. A lack of available recordings from Neuromorphic Vision sensors means that data must typically be recorded specifically for dataset creation rather than collecting and labelling existing data. The task is further complicated by a desire to simultaneously provide traditional frame-based recordings to allow for direct comparison with traditional Computer Vision algorithms. Here we propose a method for converting existing Computer Vision static image datasets into Neuromorphic Vision datasets using an actuated pan-tilt camera platform.
Moving the sensor rather than the scene or image is a more biologically realistic approach to sensing and eliminates timing artifacts introduced by monitor updates when simulating motion on a computer monitor. We present conversion of two popular image datasets (MNIST and Caltech101) which have played important roles in the development of Computer Vision, and we provide performance metrics on these datasets using spike-based recognition algorithms. This work contributes datasets for future use in the field, as well as results from spike-based algorithms against which future works can compare. Furthermore, by converting datasets already popular in Computer Vision, we enable more direct comparison with frame-based approaches."}, "keywords": ["event camera"], "citation_intent": "method"} {"citing_id": "2303.14256v1", "cited_id": "1602.00602", "section_title": "C. Measurement Configuration", "citation": "Since warmup can not be clearly distinguished from warmed up state by statistical methods #REFR , we always use the same count of warmup and measurement iterations.", "text_before_citation": ["In the following, we exemplarily describe two parts of the configuration: How to choose VM and execution count (part of the measurement parametrization), whether to parallelize the measurements (part of the technical measurement environment) and whether to remove outliers (part of the analysis configuration). Finally, we discuss the generalizability of our configuration. Our dataset is available.", "#OTHEREFR a) Parametrization: To identify performance changes reliably, we need to choose the warmup iteration, measurement iteration, repetition and VM count."], "text_after_citation": ["The same product repetitions * iterations implies the same overall workload executions. 
For every VM, iteration measurement values are taken.", "Figure 5 shows the average F1-score of all workload types for different iteration, repetition and VM counts using the t-test as recommended by #OTHEREFR .", "Since we only want 1 % false positives, we set the significance level of the t-test to 99 %.", "The F1-score is not only influenced by the measurement configuration, but also by the statistical test.", "In addition to the t-test, we used confidence interval comparison as recommended by literature #OTHEREFR , #OTHEREFR , and the Mann-Whitney test as recommended by literature #OTHEREFR , #OTHEREFR ."], "citing_paper_content": {"title": "Automated Identification Of Performance Changes At Code Level", "abstract": "To develop software with optimal performance, even small performance changes need to be identified. Identifying performance changes is challenging since the performance of software is influenced by non-deterministic factors. Therefore, not every performance change is measurable with reasonable effort. In this work, we discuss which performance changes are measurable at code level with reasonable measurement effort and how to identify them. We present (1) an analysis of the boundaries of measuring performance changes, (2) an approach for determining a configuration for reproducible performance change identification, and (3) an evaluation comparing how well our approach is able to identify performance changes in the application server Jetty compared with the usage of Jetty's own performance regression benchmarks.
Thereby, we find (1) that small performance differences are only measurable by fine-grained measurement workloads, (2) that performance changes caused by the change of one operation can be identified using a unit-test-sized workload definition and a suitable configuration, and (3) that using our approach identifies small performance regressions more efficiently than using Jetty's performance regression benchmarks."}, "cited_paper_content": {"title": "Virtual Machine Warmup Blows Hot And Cold", "abstract": "Virtual Machines (VMs) with Just-In-Time (JIT) compilers are traditionally thought to execute programs in two phases: the initial warmup phase determines which parts of a program would most benefit from dynamic compilation, before JIT compiling those parts into machine code; subsequently the program is said to be at a steady state of peak performance. Measurement methodologies almost always discard data collected during the warmup phase such that reported measurements focus entirely on peak performance. We introduce a fully automated statistical approach, based on changepoint analysis, which allows us to determine if a program has reached a steady state and, if so, whether that represents peak performance or not. Using this, we show that even when run in the most controlled of circumstances, small, deterministic, widely studied microbenchmarks often fail to reach a steady state of peak performance on a variety of common VMs. 
Repeating our experiment on 3 different machines, we found that at most 43.5% of pairs consistently reach a steady state of peak performance."}, "keywords": ["measurement iterations", "warmup"], "citation_intent": "method"} {"citing_id": "2303.11502v1", "cited_id": "1704.03477", "section_title": "Sketch Vector Normalisation", "citation": "Therefore, taking inspiration from sketch/handwriting generation literature #REFR , we define our sketch-coordinate in terms of offset values to make it scale-agnostic.", "text_before_citation": ["Absolute coordinate based sketch-vector representation is scale dependent as the user can draw (see Fig. 2 ) the same concept in varying scales."], "text_after_citation": ["In particular, instead of three elements with absolute coordinate (x i , y i ) and stroke token (b i ), now we represent every point as a five-element vector", "(\u2206x i , \u2206y i , p 1 i , p 2 i , p 3 i ), where \u2206x i = (x i+1 \u2212 x i ) and \u2206y i = (y i+1 \u2212 y i ).", "Consequently, (p 1 i , p 2 i , p 3 i )", "represents three pen-state situations: pen touching the paper, pen being lifted and end of drawing.", "Convolutional Encoder Instead of any complicated backbone architectures #OTHEREFR , we use a straight-forward VGG-16 as the backbone convolutional encoder, which takes a photo (image) P \u2208 R H\u00d7W \u00d73 as input and outputs multiscale convolutional feature maps as F \u2208"], "citing_paper_content": {"title": "Sketch2Saliency: Learning To Detect Salient Objects From Human Drawings", "abstract": "Human sketch has already proved its worth in various visual understanding tasks (e.g., retrieval, segmentation, image-captioning, etc). In this paper, we reveal a new trait of sketches-that they are also salient. This is intuitive as sketching is a natural attentive process at its core. More specifically, we aim to study how sketches can be used as a weak label to detect salient objects present in an image. 
To this end, we propose a novel method that emphasises how \"salient object\" could be explained by hand-drawn sketches. To accomplish this, we introduce a photo-to-sketch generation model that aims to generate sequential sketch coordinates corresponding to a given visual photo through a 2D attention mechanism. Attention maps accumulated across the time steps give rise to salient regions in the process. Extensive quantitative and qualitative experiments prove our hypothesis and delineate how our sketch-based saliency detection model gives a competitive performance compared to the state-of-the-art."}, "cited_paper_content": {"title": "A Neural Representation Of Sketch Drawings", "abstract": "We present sketch-rnn, a recurrent neural network (RNN) able to construct stroke-based drawings of common objects. The model is trained on thousands of crude human-drawn images representing hundreds of classes. We outline a framework for conditional and unconditional sketch generation, and describe new robust training methods for generating coherent sketch drawings in a vector format."}, "keywords": ["sketch-coordinate", "sketch/handwriting generation literature"], "citation_intent": "method"} {"citing_id": "2303.13654v1", "cited_id": "2003.08934", "section_title": "Spherical Multi-Resolution Hash Encoding", "citation": "This representation can be interpreted as an omnidirectional extension of Normalized Device Coordinates (NDC) used for novel view synthesis in forward facing scenes in #REFR .", "text_before_citation": ["We convert the input location represented in Cartesian coordinates (x, y, z) \u2208 R^3 to spherical coordinates (\u03c6, \u03b8, 1/(1+r)) \u2208 [0, 1]^3 as follows:", "EQUATION", "This enables at the same time an efficient feature allocation strategy related to the pixel's measurement accuracy and a representation that is not limited by the frustum boundary.", "Feature Allocation An increasing distance to the camera center leads to a larger measurement
uncertainty as disparity is inversely proportional to its depth #OTHEREFR .", "To allocate more capacity in areas of high certainty, it is therefore efficient to allocate the features evenly in inverse depth space."], "text_after_citation": ["While the NDC parameterization contracts the scene along inverse depth, the region is strictly bounded by the model view frustum, which discards the training view information outside of the frustum and also causes artifacts around the frustum boundary as shown in Fig. 4.", "In contrast, our spherical representation can make full use of the training view information without being limited to the field of view.", "Unbounded Scenes Our representation can naturally capture unbounded scenes.", "While other parameterizations such as the recently proposed space contraction from #OTHEREFR can also represent 360\u00b0 unbounded scenes, it explicitly separates foreground and background.", "This is effective only when the camera trajectory is object-centric and inward-looking, and the region of interest needs to be known a priori in world coordinates."], "citing_paper_content": {"title": "Newton: Neural View-Centric Mapping For On-The-Fly Large-Scale Slam", "abstract": "Neural field-based 3D representations have recently been adopted in many areas including SLAM systems. Current neural SLAM or online mapping systems lead to impressive results in the presence of simple captures, but they rely on a world-centric map representation as only a single neural field model is used. To define such a world-centric representation, accurate and static prior information about the scene, such as its boundaries and initial camera poses, are required. However, in real-time and on-the-fly scene capture applications, this prior knowledge cannot be assumed as fixed or static, since it dynamically changes and it is subject to significant updates based on run-time observations.
Particularly in the context of large-scale mapping, significant camera pose drift is inevitable, necessitating the correction via loop closure. To overcome this limitation, we propose NEWTON, a view-centric mapping method that dynamically constructs neural fields based on run-time observation. In contrast to prior works, our method enables camera pose updates using loop closures and scene boundary updates by representing the scene with multiple neural fields, where each is defined in a local coordinate system of a selected keyframe. The experimental results demonstrate the superior performance of our method over existing world-centric neural field-based SLAM systems, in particular for large-scale scenes subject to camera pose updates."}, "cited_paper_content": {"title": "Nerf: Representing Scenes As Neural Radiance Fields For View Synthesis", "abstract": "We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\\theta, \\phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. 
View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons."}, "keywords": ["forward facing scenes", "novel view synthesis"], "citation_intent": "method"} {"citing_id": "2304.12372v1", "cited_id": "1505.04597", "section_title": "Experimental Results", "citation": "We experiment with FOVs of 180\u00b0, 120\u00b0, 60\u00b0, and uniformly random in the #REFR 120] \u2022 interval.", "text_before_citation": ["The mean relative error and RMSE are 4.25 % and 173.0 on the entire test set.", "We observe that the network struggles with larger temperature variations across the image.", "However, temperature is accurately predicted despite color changes in the input.", "Planar illuminance Here, we experiment with three types of inputs: a calibrated HDR image (\"HDR\"), a linear LDR image (reexposed HDR clipped to the [0, 1] interval) with (\"LDR+scale\") and without (\"LDR\") knowledge of the exposure.", "In addition, we also evaluate the impact of the FOV of the input."], "text_after_citation": ["The image is stored in an equirectangular representation for 180\u00b0, and perspective projection for the other, lower FOVs.", "Tab. 2 shows the results of these series of experiments.", "We report the RMSE and R 2 for each combination of input type and FOV.", "First, observe that the experiment with a FOV of 180\u00b0with the HDR image (top-left in tab. 2) amounts to learning the illuminance integration (eq. (1)).", "Unsurprisingly, narrowing the FOV results in decreased performance, due to the hidden lights beyond the FOV which may directly affect the planar illuminance."], "citing_paper_content": {"title": "Beyond The Pixel: A Photometrically Calibrated Hdr Dataset For Luminance And Color Temperature Prediction", "abstract": "Light plays an important role in human well-being. However, most computer vision tasks treat pixels without considering their relationship to physical luminance. 
To address this shortcoming, we present the first large-scale photometrically calibrated dataset of high dynamic range 360\u00b0 panoramas. Our key contribution is the calibration of an existing, uncalibrated HDR Dataset. We do so by accurately capturing RAW bracketed exposures simultaneously with a professional photometric measurement device (chroma meter) for multiple scenes across a variety of lighting conditions. Using the resulting measurements, we establish the calibration coefficients to be applied to the HDR images. The resulting dataset is a rich representation of indoor scenes which displays a wide range of illuminance and color temperature, and varied types of light sources. We exploit the dataset to introduce three novel tasks, where per-pixel luminance, per-pixel temperature and planar illuminance can be predicted from a single input image. Finally, we also capture another smaller calibrated dataset with a commercial 360\u00b0 camera, to experiment on generalization across cameras. We are optimistic that the release of our datasets and associated code will spark interest in physically accurate light estimation within the community."}, "cited_paper_content": {"title": "U-Net: Convolutional Networks For Biomedical Image Segmentation", "abstract": "There is broad consensus that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ."}, "keywords": ["FOVs"], "citation_intent": "method"} {"citing_id": "2304.13649v1", "cited_id": "1906.00067", "section_title": "Experiments 6.1 Datasets", "citation": "Outside-Knowledge Visual Question Answering (OK-VQA) #REFR : This dataset consists of triplets, including an image, a question about the image, and an answer to the mentioned question.", "text_before_citation": [], "text_after_citation": ["Answering most of the questions in this dataset needs a piece of information that is not provided in the image.", "Therefore, accessing an external source of information is required for this task.", "A retrieval dataset based on a Wikipedia dump 7 with 11 million passages was later constructed by Qu et al.", "#OTHEREFR , which we use to train and evaluate our retrievers.", "This dataset contains 9009 questions for training, 2523 questions for validation, and 2523 for testing #OTHEREFR ."], "citing_paper_content": {"title": "A Symmetric Dual Encoding Dense Retrieval Framework For Knowledge-Intensive Visual Question Answering", "abstract": "Knowledge-Intensive Visual Question Answering (KI-VQA) refers to answering a question about an image whose answer does not lie in the image. This paper presents a new pipeline for KI-VQA tasks, consisting of a retriever and a reader. First, we introduce DEDR, a symmetric dual encoding dense retrieval framework in which documents and queries are encoded into a shared embedding space using uni-modal (textual) and multi-modal encoders. 
We introduce an iterative knowledge distillation approach that bridges the gap between the representation spaces in these two encoders. Extensive evaluation on two well-established KI-VQA datasets, i.e., OK-VQA and FVQA, suggests that DEDR outperforms state-of-the-art baselines by 11.6% and 30.9% on OK-VQA and FVQA, respectively. Utilizing the passages retrieved by DEDR, we further introduce MM-FiD, an encoder-decoder multi-modal fusion-in-decoder model, for generating a textual answer for KI-VQA tasks. MM-FiD encodes the question, the image, and each retrieved passage separately and uses all passages jointly in its decoder. Compared to competitive baselines in the literature, this approach leads to 5.5% and 8.5% improvements in terms of question answering accuracy on OK-VQA and FVQA, respectively."}, "cited_paper_content": {"title": "Ok-Vqa: A Visual Question Answering Benchmark Requiring External Knowledge", "abstract": "Visual Question Answering (VQA) in its ideal form lets us study reasoning in the joint space of vision and language and serves as a proxy for the AI task of scene understanding. However, most VQA benchmarks to date are focused on questions such as simple counting, visual attributes, and object detection that do not require reasoning or knowledge beyond what is in the image. In this paper, we address the task of knowledge-based visual question answering and provide a benchmark, called OK-VQA, where the image content is not sufficient to answer the questions, encouraging methods that rely on external knowledge resources. Our new dataset includes more than 14,000 questions that require external knowledge to answer. We show that the performance of the state-of-the-art VQA models degrades drastically in this new setting. Our analysis shows that our knowledge-based VQA task is diverse, difficult, and large compared to previous knowledge-based VQA datasets. 
We hope that this dataset enables researchers to open up new avenues for research in this domain."}, "keywords": ["Knowledge", "dataset"], "citation_intent": "method"} {"citing_id": "2304.06114v1", "cited_id": "2004.01177", "section_title": "Related Work", "citation": "CenterTrack #REFR learns to generate a center heatmap for object detection and an offset vector that represents the displacement from one frame to the next using the two consecutive frames and the center heatmap of the previous frame.", "text_before_citation": ["TubeTK proposes the concept of bounding-tubes to represent the spatiotemporal location of objects detected in a video.", "A bounding-tube is defined as a series of three bounding boxes of the same object from different frames.", "Tracks are broken down into a combination of tubes where the bounding box at each frame is defined as the middle bounding box of a tube in order to link tubes together.", "The three bounding boxes forming a tube do not have to be from consecutive frames, which allows the interpolation of the object location within a tube. 
Tubes are linked using IoU as a distance metric.", "Chained-Tracker takes two adjacent frames as input and generates detection pairs of each object in both frames, then links each consecutive pair using IoU as a distance metric and the Hungarian algorithm #OTHEREFR for matching."], "text_after_citation": ["It then regresses the bounding boxes from the center points of the objects.", "Data association is done using greedy matching between the position of the objects in the previous frame and the position of the predicted offset.", "Tracktor #OTHEREFR exploits the bounding box regression module of Faster R-CNN to perform tracking directly from the information provided by the detector.", "Moreover, the model can be extended with a motion model and a reidentification algorithm to achieve better results.", "FairMOT #OTHEREFR aims to reconcile the inherent bias in favor of the detection task when training a MOT model."], "citing_paper_content": {"title": "Toptrack: Tracking Objects By Their Top", "abstract": "In recent years, the joint detection-and-tracking paradigm has been a very popular way of tackling the multi-object tracking (MOT) task. Many of the methods following this paradigm use the object center keypoint for detection. However, we argue that the center point is not optimal since it is often not visible in crowded scenarios, which results in many missed detections when the objects are partially occluded. We propose TopTrack, a joint detection-and-tracking method that uses the top of the object as a keypoint for detection instead of the center because it is more often visible. Furthermore, TopTrack processes consecutive frames in separate streams in order to facilitate training. We performed experiments to show that using the object top as a keypoint for detection can reduce the number of missed detections, which in turn leads to more complete trajectories and fewer lost trajectories.
TopTrack manages to achieve competitive results with other state-of-the-art trackers on two MOT benchmarks."}, "cited_paper_content": {"title": "Tracking Objects As Points", "abstract": "Tracking has traditionally been the art of following interest points through space and time. This changed with the rise of powerful deep networks. Nowadays, tracking is dominated by pipelines that perform object detection followed by temporal association, also known as tracking-by-detection. In this paper, we present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art. Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame. Given this minimal input, CenterTrack localizes objects and predicts their associations with the previous frame. That's it. CenterTrack is simple, online (no peeking into the future), and real-time. It achieves 67.3% MOTA on the MOT17 challenge at 22 FPS and 89.4% MOTA on the KITTI tracking benchmark at 15 FPS, setting a new state of the art on both datasets. CenterTrack is easily extended to monocular 3D tracking by regressing additional 3D attributes. 
Using monocular video input, it achieves 28.3% AMOTA@0.2 on the newly released nuScenes 3D tracking benchmark, substantially outperforming the monocular baseline on this benchmark while running at 28 FPS."}, "keywords": ["object detection"], "citation_intent": "method"} {"citing_id": "2303.10181v1", "cited_id": "1609.07061", "section_title": "Methods For Resource Efficiency", "citation": "The implication of using quantised or low-precision variables is that it not only reduces memory footprint but can also reduce the compute cost of matrix multiplications #REFR at the expense of loss in precision.", "text_before_citation": ["Thus, for a neural network with |W| parameters, at any point during the training there are an additional \u2248 3 \u22c5 |W| variables stored in memory.", "Further, the intermediate activations at layer l, h l are also stored in memory to efficiently perform backpropagation.", "Note that all the scalar entries of W, g t , m t , v t , h l \u2208 R.", "On most computers, these real numbers are discretised into floating point-32 (FP32) format; wherein each variable requires 32 bits.", "Resource efficiency in this work is primarily addressed by reducing the precision of these variables by quantisation #OTHEREFR ."], "text_after_citation": ["Note, however, that the overhead of performing quantisation in some cases might outweigh the gains in computation.", "In this work, we investigate a combination of the following three quantisation strategies:", "1. Gradients and intermediate activations:", "Drastic quantisation of h l , g t", "have been studied extensively in literature #OTHEREFR ."], "citing_paper_content": {"title": "Operating Critical Machine Learning Models In Resource Constrained Regimes", "abstract": "The accelerated development of machine learning methods, primarily deep learning, are causal to the recent breakthroughs in medical image analysis and computer aided intervention. 
The resource consumption of deep learning models in terms of amount of training data, compute and energy costs are known to be massive. These large resource costs can be barriers in deploying these models in clinics, globally. To address this, there are cogent efforts within the machine learning community to introduce notions of resource efficiency. For instance, using quantisation to alleviate memory consumption. While most of these methods are shown to reduce the resource utilisation, they could come at a cost in performance. In this work, we probe into the trade-off between resource consumption and performance, specifically, when dealing with models that are used in critical settings such as in clinics."}, "cited_paper_content": {"title": "Quantized Neural Networks: Training Neural Networks With Low Precision Weights And Activations", "abstract": "We introduce a method to train Quantized Neural Networks (QNNs) -- neural networks with extremely low precision (e.g., 1-bit) weights and activations, at run-time. At traintime the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations. As a result, power consumption is expected to be drastically reduced. We trained QNNs over the MNIST, CIFAR-10, SVHN and ImageNet datasets. The resulting QNNs achieve prediction accuracy comparable to their 32-bit counterparts. For example, our quantized version of AlexNet with 1-bit weights and 2-bit activations achieves 51% top-1 accuracy. Moreover, we quantize the parameter gradients to 6-bits as well which enables gradients computation using only bit-wise operation. Quantized recurrent neural networks were tested over the Penn Treebank dataset, and achieved comparable accuracy as their 32-bit counterparts using only 4-bits. 
Last but not least, we programmed a binary matrix multiplication GPU kernel with which it is possible to run our MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The QNN code is available online."}, "keywords": ["quantised"], "citation_intent": "background"} {"citing_id": "2303.08816v1", "cited_id": "1803.02349", "section_title": "Introduction", "citation": "For instance, an e-commerce item carries its category as well as many other attributes, and the user might have a preference for a certain category #REFR .", "text_before_citation": ["The Borda winner is intuitively appealing and always well-defined for any set of preferential probabilities.", "The Borda score also does not require the problem instance to obey any consistency or transitivity, and it is considered one of the most general criteria.", "To identify the Borda winner, estimations of the Borda scores are needed.", "Since estimating the Borda score for one item requires comparing it with every other items, the sample complexity is prohibitively high when there are numerous items.", "On the other hand, in many real-world applications, the agent has access to side information that can assist the evaluation of p i,j ."], "text_after_citation": ["For a movie, the genre and the plot as well as the directors and actors can also be taken into consideration when making choices #OTHEREFR .", "Based on the above motivation, we consider Generalized Linear Dueling Bandits.", "At each round, the agent selects two items from a finite set of items and receives a comparison result of the preferred item.", "The comparisons depend on known intrinsic contexts/features associated with each pair of items.", "The contexts can be obtained from upstream tasks, such as topic modeling #OTHEREFR or embedding #OTHEREFR ."], "citing_paper_content": {"title": "Borda Regret Minimization For Generalized Linear Dueling Bandits", "abstract": "Dueling bandits are widely 
used to model preferential feedback that is prevalent in machine learning applications such as recommendation systems and ranking. In this paper, we study the Borda regret minimization problem for dueling bandits, which aims to identify the item with the highest Borda score while minimizing the cumulative regret. We propose a new and highly expressive generalized linear dueling bandits model, which covers many existing models. Surprisingly, the Borda regret minimization problem turns out to be difficult, as we prove a regret lower bound of order Ω(d^{2/3} T^{2/3}), where d is the dimension of contextual vectors and T is the time horizon. To attain the lower bound, we propose an explore-then-commit type algorithm, which has a nearly matching regret upper bound O(d^{2/3} T^{2/3}). When the number of items/arms K is small, our algorithm can achieve a smaller regret O((d log K)^{1/3} T^{2/3}) with proper choices of hyperparameters. We also conduct empirical experiments on both synthetic data and a simulated real-world environment, which corroborate our theoretical analysis."}, "cited_paper_content": {"title": "Billion-Scale Commodity Embedding For E-Commerce Recommendation In Alibaba", "abstract": "Recommender systems (RSs) have been the most important technology for increasing the business in Taobao, the largest online consumer-to-consumer (C2C) platform in China. There are three major challenges facing RS in Taobao: scalability, sparsity and cold start. In this paper, we present our technical solutions to address these three challenges. The methods are based on a well-known graph embedding framework. We first construct an item graph from users' behavior history, and learn the embeddings of all items in the graph. The item embeddings are employed to compute pairwise similarities between all items, which are then used in the recommendation process. To alleviate the sparsity and cold start problems, side information is incorporated into the graph embedding framework.
We propose two aggregation methods to integrate the embeddings of items and the corresponding side information. Experimental results from offline experiments show that methods incorporating side information are superior to those that do not. Further, we describe the platform upon which the embedding methods are deployed and the workflow to process the billion-scale data in Taobao. Using A/B test, we show that the online Click-Through-Rates (CTRs) are improved comparing to the previous collaborative filtering based methods widely used in Taobao, further demonstrating the effectiveness and feasibility of our proposed methods in Taobao's live production environment."}, "keywords": ["preference", "e-commerce item"], "citation_intent": "background"} {"citing_id": "2304.00320v1", "cited_id": "1904.09080", "section_title": "Remark 3 (Escape And Converge). When The Noise \u039e", "citation": "Similar results have been obtained in #REFR when assuming the deep learning algorithms are driven by an Ornstein-Uhlenbeck process.", "text_before_citation": ["1 N N i=1 \u2207 \u03b8 f (x i ,\u03b8) 2", "2 , as the scale of noise \u03be ULN k (\u03b8) is large.", "Reciprocally, we follow #OTHEREFR and suggest that when the SGD with unbiased random label noises converges, the algorithm would converge to a point", "\u03b8 * with small 1 N N i=1 \u2207 \u03b8 f (x i , \u03b8 * ) 2", "2 ."], "text_after_citation": ["Remark 4 (Performance Tuning).", "Considering \u03b7\u03c3 2 /B as the coefficient balancing the implicit regularizer and vanilla SGD, one can regularize/penalize the SGD learning procedure with the fixed \u03b7 and B more fiercely using a larger \u03c3 2 .", "More specifically, we could expect to obtain a regularized solution with #OTHEREFR 2 or higher inference stability of neural networks, as regularization effects become stronger when \u03c3 2 increases.", "lower 1 N N i=1 \u2207 \u03b8 f (x i , \u03b8)"], "citing_paper_content": {"title": "Stochastic Gradient Descent 
With Random Label Noises: Doubly Stochastic Models And Inference Stabilizer", "abstract": "Random label noises (or observational noises) widely exist in practical machine learning settings. While previous studies primarily focus on the effects of label noises on the performance of learning, our work intends to investigate the implicit regularization effects of the label noises, under mini-batch sampling settings of stochastic gradient descent (SGD), with assumptions that label noises are unbiased. Specifically, we analyze the learning dynamics of SGD over the quadratic loss with unbiased label noises, where we model the dynamics of SGD as a stochastic differential equation (SDE) with two diffusion terms (namely a Doubly Stochastic Model). While the first diffusion term is caused by mini-batch sampling over the (label-noiseless) loss gradients as many other works on SGD [1, 2], our model investigates the second noise term of SGD dynamics, which is caused by mini-batch sampling over the label noises, as an implicit regularizer. Our theoretical analysis finds such implicit regularizer would favor some convergence points that could stabilize model outputs against perturbation of parameters (namely inference stability). Though similar phenomena have been investigated in [3], our work"}, "cited_paper_content": {"title": "Implicit Regularization For Deep Neural Networks Driven By An Ornstein-Uhlenbeck Like Process", "abstract": "We consider deep networks, trained via stochastic gradient descent to minimize L2 loss, with the training labels perturbed by independent noise at each iteration. We characterize the behavior of the training dynamics near any parameter vector that achieves zero training error, in terms of an implicit regularization term corresponding to the sum over the data points, of the squared L2 norm of the gradient of the model with respect to the parameter vector, evaluated at each data point.
We then leverage this general characterization, which holds for networks of any connectivity, width, depth, and choice of activation function, to show that for 2-layer ReLU networks of arbitrary width and L2 loss, when trained on one-dimensional labeled data $(x_1,y_1),\\ldots,(x_n,y_n),$ the only stable solutions with zero training error correspond to functions that: 1) are linear over any set of three or more co-linear training points (i.e. the function has no extra \"kinks\"); and 2) change convexity the minimum number of times that is necessary to fit the training data. Additionally, for 2-layer networks of arbitrary width, with tanh or logistic activations, we show that when trained on a single $d$-dimensional point $(x,y)$ the only stable solutions correspond to networks where the activations of all hidden units at the datapoint, and all weights from the hidden units to the output, take at most two distinct values, or are zero. In this sense, we show that when trained on \"simple\" data, models corresponding to stable parameters are also \"simple\"; in short, despite fitting in an over-parameterized regime where the vast majority of expressible functions are complicated and badly behaved, stable parameters reached by training with noise express nearly the \"simplest possible\" hypothesis consistent with the data. These results shed light on the mystery of why deep networks generalize so well in practice."}, "keywords": ["deep learning algorithms"], "citation_intent": "result"} {"citing_id": "2304.03593v1", "cited_id": "1911.03074", "section_title": "B. 
Crowd Navigation", "citation": "While the arrival times of our robot were longer than the results of #REFR , we note that their robot was traveling at a speed (1.5m/s) about 7 times faster than our robot (0.22m/s).", "text_before_citation": ["Likewise, higher crowd density (more dangerous) resulted in longer arrival time.", "Exceptions were seen in the ahead crowd behavior, where the arrival times with fast moving obstacles ahead were shorter than with slow moving obstacles.", "The fast moving obstacles were traveling at a speed close to the speed of the robot.", "In this case, when the obstacles were moving fast ahead of the robot, there was very little chance of the robot being confronted by the obstacles. The ahead environments were quite safe.", "Consequently, there were very few safety violations as seen from the high ego and social scores in the results of ahead crowd behavior in Fig. 4 ."], "text_after_citation": ["Taking into account the speed difference, our approach has performed relatively faster and with higher success rate than the approach of #OTHEREFR .", "In addition, we have observed an interesting policy learned by our model.", "In cases where there were too dense obstacles in the way of the robot, the robot would take a detour and avoid the crowd cluster to reach the goal.", "However, in most cases, the robot navigated through the crowd. Fig. 5 illustrates the robot's behaviors during navigation.", "Egosafety and social-safety violations do not necessary result in collisions."], "citing_paper_content": {"title": "Deep Reinforcement Learning-Based Mapless Crowd Navigation With Perceived Risk Of The Moving Crowd For Mobile Robots", "abstract": "Classical map-based navigation methods are commonly used for robot navigation, but they often struggle in crowded environments due to the Frozen Robot Problem (FRP). Deep reinforcement learning-based methods address the FRP problem, however, suffer from the issues of generalization and scalability. 
To overcome these challenges, we propose a method that uses Collision Probability (CP) to help the robot navigate safely through crowds. The inclusion of CP in the observation space gives the robot a sense of the level of danger of the moving crowd. The robot will navigate through the crowd when it appears safe but will take a detour when the crowd is moving aggressively. By focusing on the most dangerous obstacle, the robot will not be confused when the crowd density is high, ensuring scalability of the model. Our approach was developed using deep reinforcement learning (DRL) and trained using the Gazebo simulator in a non-cooperative crowd environment with obstacles moving at randomized speeds and directions. We then evaluated our model on four different crowd-behavior scenarios with varying densities of crowds. The results show that our method achieved a 100% success rate in all test settings. We compared our approach with a current state-of-the-art DRL-based approach, and our approach has performed significantly better. Importantly, our method is highly generalizable and requires no fine-tuning after being trained once. We further demonstrated the crowd navigation capability of our model in real-world tests."}, "cited_paper_content": {"title": "Mapless Navigation Among Dynamics With Social-Safety-Awareness: A Reinforcement Learning Approach From 2D Laser Scans", "abstract": "We propose a method to tackle the problem of mapless collision-avoidance navigation where humans are present using 2D laser scans. Our proposed method uses ego-safety to measure collision from the robot's perspective while social-safety to measure the impact of our robot's actions on surrounding pedestrians. Specifically, the social-safety part predicts the intrusion impact of our robot's action into the interaction area with surrounding humans. We train the policy using reinforcement learning on a simple simulator and directly evaluate the learned policy in Gazebo and real robot tests.
Experiments show the learned policy can be smoothly transferred without any fine tuning. We observe that our method demonstrates time-efficient path planning behavior with high success rate in mapless navigation tasks. Furthermore, we test our method in a navigation among dynamic crowds task considering both low and high volume traffic. Our learned policy demonstrates cooperative behavior that actively drives our robot into traffic flows while showing respect to nearby pedestrians. Evaluation videos are at this https URL"}, "keywords": ["robot"], "citation_intent": "result"} {"citing_id": "2304.12876v1", "cited_id": "1902.06705", "section_title": "Advanced Guided-Lfi", "citation": "First, we can notice that experimental and simulation results are almost similar, meaning that we can guide our LFI with high reliability and confidence #REFR .", "text_before_citation": ["For that purpose, we need to put to the test that LFI reach (near) identical performance than what expected by simulations.", "We ran a BSCA simulation (in Python) over all the weight columns and bit lines that pointed out the MSB of the second column weight as the most sensitive.", "Therefore, contrary to the previous experiments, the laser source was triggered only when the 20 most sensitive weights were read from the Flash.", "The laser location was set accordingly (X = 760 \u00b5m) and the power increased to 360 mW to ensure a higher success rate on weights stored in distant addresses.", "The blue curve in Fig.5b represents our experimental results (mean accuracy over 100 inferences) while the red one is the BSCA simulations for the MSB."], "text_after_citation": ["For an adversarial budget of only 5 bit-sets (0.1% faulted bits) the embedded model accuracy drops to 39% which represents a significant loss and a strong integrity impact compared to the nominal performance of 92%.", "Moreover, after 10 bit-sets (accuracy to 25%), the most effective faults have been injected and the accuracy did not 
decrease anymore.", "In a security evaluation context, this observation positions the level of robustness of the model according to the adversarial budget."], "citing_paper_content": {"title": "Evaluation Of Parameter-Based Attacks Against Embedded Neural Networks With Laser Injection", "abstract": "Upcoming certification actions related to the security of machine learning (ML) based systems raise major evaluation challenges that are amplified by the large-scale deployment of models in many hardware platforms. Until recently, most of research works focused on API-based attacks that consider a ML model as a pure algorithmic abstraction. However, new implementation-based threats have been revealed, emphasizing the urgency to propose both practical and simulation-based methods to properly evaluate the robustness of models. A major concern is parameter-based attacks (such as the Bit-Flip Attack-BFA) that highlight the lack of robustness of typical deep neural network models when confronted by accurate and optimal alterations of their internal parameters stored in memory. Setting in a security testing purpose, this work practically reports, for the first time, a successful variant of the BFA on a 32-bit Cortex-M microcontroller using laser fault injection. It is a standard fault injection means for security evaluation, that enables to inject spatially and temporally accurate faults. To avoid unrealistic brute-force strategies, we show how simulations help selecting the most sensitive set of bits from the parameters taking into account the laser fault model."}, "cited_paper_content": {"title": "On Evaluating Adversarial Robustness", "abstract": "Correctly evaluating defenses against adversarial examples has proven to be extremely difficult. Despite the significant amount of recent work attempting to design defenses that withstand adaptive attacks, few have succeeded; most papers that propose defenses are quickly shown to be incorrect. 
We believe a large contributing factor is the difficulty of performing security evaluations. In this paper, we discuss the methodological foundations, review commonly accepted best practices, and suggest new methods for evaluating defenses to adversarial examples. We hope that both researchers developing defenses as well as readers and reviewers who wish to understand the completeness of an evaluation consider our advice in order to avoid common pitfalls."}, "keywords": ["LFI", "high reliability"], "citation_intent": "result"} {"citing_id": "2304.00953v1", "cited_id": "1903.05714", "section_title": "I. Introduction", "citation": "State-of-the-Art commercial platforms integrate Intel Optane DC Persistent Memory (DCPM) modules along with DRAM, leading to heterogeneous memory systems #REFR .", "text_before_citation": ["The memory system is one of the main components that limit the scalability and contribute to the energy consumption of supercomputers #OTHEREFR .", "The integration of more DRAM modules to enable more complex simulations, analytics and effective in-memory processing has negative impact on the sustainability and maintenance costs of supercomputing centres.", "In particular, despite the low access latency of traditional DRAM technologies, the increased leakage and refresh power requirements limit DRAM scalability and introduce a significant challenge towards reaching exascale performance.", "In order to overcome DRAM limitations, non-volatile memory (NVM) technologies have been introduced, such as the 3D-XPoint, which is a subclass of the Phase-Change Memories (PCM) #OTHEREFR , Spin-Transfer Torque RAM (STT-RAM) #OTHEREFR and Resistive RAM (ReRAM) #OTHEREFR ."], "text_after_citation": ["For instance, the upcoming Aurora exascale supercomputer employs the DAOS storage architecture, which integrates a complex memory and storage hierarchy, including Intel Optane DCPM modules #OTHEREFR .",
"These emerging memory technologies provide higher density than DRAM, enabling increased aggregate memory capacities with fewer nodes, having positive impact on the energy consumption, resilience and sustainability.", "Additionally, the data persistence features of the NVM technologies can be used to provide fault tolerance support to applications.", "On the other hand, the Optane DCPM provides, in general, higher access latency and lower bandwidth compared to DRAM #OTHEREFR , #OTHEREFR , #OTHEREFR ."], "citing_paper_content": {"title": "Energy Consumption Evaluation Of Optane Dc Persistent Memory For Indexing Data Structures", "abstract": "The Intel Optane DC Persistent Memory (DCPM) is an attractive novel technology for building storage systems for data intensive HPC applications, as it provides lower cost per byte, low standby power and larger capacities than DRAM, with comparable latency. This work provides an in-depth evaluation of the energy consumption of the Optane DCPM, using well-established indexes specifically designed to address the challenges and constraints of the persistent memories. We study the energy efficiency of the Optane DCPM for several indexing data structures and for the LevelDB key-value store, under different types of YCSB workloads. By integrating an Optane DCPM in a memory system, the energy drops by 71.2% and the throughput increases by 37.3% for the LevelDB experiments, compared to a typical SSD storage solution."}, "cited_paper_content": {"title": "Basic Performance Measurements Of The Intel Optane Dc Persistent Memory Module", "abstract": "Scalable nonvolatile memory DIMMs will finally be commercially available with the release of the Intel Optane DC Persistent Memory Module (or just "Optane DC PMM"). This new nonvolatile DIMM supports byte-granularity accesses with access times on the order of DRAM, while also providing data storage that survives power outages.
This work comprises the first in-depth, scholarly, performance review of Intel's Optane DC PMM, exploring its capabilities as a main memory device, and as persistent, byte-addressable memory exposed to user-space applications. This report details the technologies performance under a number of modes and scenarios, and across a wide variety of macro-scale benchmarks. Optane DC PMMs can be used as large memory devices with a DRAM cache to hide their lower bandwidth and higher latency. When used in this Memory (or cached) mode, Optane DC memory has little impact on applications with small memory footprints. Applications with larger memory footprints may experience some slow-down relative to DRAM, but are now able to keep much more data in memory. When used under a file system, Optane DC PMMs can result in significant performance gains, especially when the file system is optimized to use the load/store interface of the Optane DC PMM and the application uses many small, persistent writes. For instance, using the NOVA-relaxed NVMM file system, we can improve the performance of Kyoto Cabinet by almost 2x. Optane DC PMMs can also enable user-space persistence where the application explicitly controls its writes into persistent Optane DC media. In our experiments, modified applications that used user-space Optane DC persistence generally outperformed their file system counterparts. 
For instance, the persistent version of RocksDB performed almost 2x faster than the equivalent program utilizing an NVMM-aware file system."}, "keywords": ["Persistent Memory (DCPM)"], "citation_intent": "background"} {"citing_id": "2303.01241v1", "cited_id": "1908.01843", "section_title": "Evaluation Results", "citation": "In addition, Figure 5 shows that NLI-SAN achieves similar performance with KGAT , while having a simpler architecture for the application, and outperforms GEAR #REFR Table 1 : Document retrieval on the PANACEA dataset.", "text_before_citation": ["Fact-Checking We investigate the performance of our system in document retrieval and veracity assessment in (Arana-Catania et al., 2022) .", "Table 1 shows that combining BM25 and MonoT5 is the most effective approach for document retrieval of the selected techniques."], "text_after_citation": ["Rumour Detection As shown in Figure 6 , our comparison #OTHEREFR among various models, including branchLSTM #OTHEREFR , TD-RvNN #OTHEREFR , BiGCN #OTHEREFR , SAVED (Dougrez-Lewis et al., 2021) and BERT #OTHEREFR for rumour detection evaluated on Twitter15, Twitter16 and PHEME #OTHEREFR , reveals there is no model that always performs the best.", "Although state-of-the-art models can achieve high accuracy on their training datasets, such performance drops quickly while evaluating on a different dataset #OTHEREFR .", "Due to the limitation of existing models in generalisation, users should interpret this result with caution as the system cannot guarantee output correctness.", "This paper introduces a web-based system on factchecking and rumour detection based on novel natural language processing models for COVID-19 misinformation detection.", "Going forward, we will keep updating the data and explore other methods for misinformation identification to improve the current system and introduce more functions to the system as part of our continuing efforts to support the general public to identify misinformation."], 
"citing_paper_content": {"title": "Panacea: An Automated Misinformation Detection System On Covid-19", "abstract": "In this demo, we introduce a web-based misinformation detection system PANACEA on COVID-19 related claims, which has two modules, fact-checking and rumour detection. Our fact-checking module, which is supported by novel natural language inference methods with a self-attention network, outperforms state-ofthe-art approaches. It is also able to give automated veracity assessment and ranked supporting evidence with the stance towards the claim to be checked. In addition, PANACEA adapts the bi-directional graph convolutional networks model, which is able to detect rumours based on comment networks of related tweets, instead of relying on the knowledge base. This rumour detection module assists by warning the users in the early stages when a knowledge base may not be available."}, "cited_paper_content": {"title": "Gear: Graph-Based Evidence Aggregating And Reasoning For Fact Verification", "abstract": "Fact verification (FV) is a challenging task which requires to retrieve relevant evidence from plain text and use the evidence to verify given claims. Many claims require to simultaneously integrate and reason over several pieces of evidence for verification. However, previous work employs simple models to extract information from evidence without letting evidence communicate with each other, e.g., merely concatenate the evidence for processing. Therefore, these methods are unable to grasp sufficient relational and logical information among the evidence. To alleviate this issue, we propose a graph-based evidence aggregating and reasoning (GEAR) framework which enables information to transfer on a fully-connected evidence graph and then utilizes different aggregators to collect multi-evidence information. We further employ BERT, an effective pre-trained language representation model, to improve the performance. 
Experimental results on a large-scale benchmark dataset FEVER have demonstrated that GEAR could leverage multi-evidence information for FV and thus achieves the promising result with a test FEVER score of 67.10%. Our code is available at this https URL."}, "keywords": ["PANACEA dataset", "Document retrieval"], "citation_intent": "result"} {"citing_id": "2304.07567v1", "cited_id": "1607.06215", "section_title": "Introduction", "citation": "To address this problem, many approaches have been designed to minimize the representation divergence using the alignment annotations #REFR .", "text_before_citation": ["In real-world applications, objects can always be represented with multiple modalities.", "For example, articles are with image and text modalities, videos are with image and audio modalities, etc.", "To relate information from multiple modalities, an important task in multimodal machine learning is cross-modal retrieval, which aims to search one modal instances for given other modal instances.", "In this paper, we focus primarily, but not exclusively, on two modalities: visual signals and natural language.", "Actually, the main challenge of vision-language retrieval is the semantic divergence of heterogeneous data."], "text_after_citation": ["Initial approaches are always dual stream models, which typically build independent embedding network for each modality and constrain the consistency of cross-modal output representations with various similarity measures.", "For example, #OTHEREFR constrained the consistency of global representations between two modalities.", "Furthermore, to consider the fine-grained similarity, #OTHEREFR turned to measure the consistency of regional representations, #OTHEREFR developed the graph-level consistency by considering both regions and edges.", "It is notable that vision and language encoders can adopt either shallow or deep models depending on the design.", "With the development of vision-language Transformer, single-stream 
approaches are proposed #OTHEREFR , in which the two modalities interact from the input level."], "citing_paper_content": {"title": "Covlr: Coordinating Cross-Modal Consistency And Intra-Modal Structure For Vision-Language Retrieval", "abstract": "Current vision-language retrieval aims to perform cross-modal instance search, in which the core idea is to learn the consistent vision-language representations. Although the performance of cross-modal retrieval has greatly improved with the development of deep models, we unfortunately find that traditional hard consistency may destroy the original relationships among single-modal instances, leading to performance degradation for single-modal retrieval. To address this challenge, in this paper, we experimentally observe that the vision-language divergence may cause the existence of strong and weak modalities, and the hard cross-modal consistency cannot guarantee that strong modal instances' relationships are not affected by the weak modality, resulting in the strong modal instances' relationships being perturbed despite learned consistent representations. To this end, we propose a novel and directly Coordinated Vision-Language Retrieval method (dubbed CoVLR), which aims to study and alleviate the desynchrony problem between the cross-modal alignment and single-modal cluster-preserving tasks. CoVLR addresses this challenge by developing an effective meta-optimization based strategy, in which the cross-modal consistency objective and the intra-modal relation preserving objective act as the meta-train and meta-test tasks, thereby CoVLR encourages both tasks to be optimized in a coordinated way. Consequently, we can simultaneously ensure cross-modal consistency and intra-modal structure. Experiments on different datasets validate that CoVLR can improve single-modal retrieval accuracy whilst preserving cross-modal retrieval capacity compared with the baselines.
CCS CONCEPTS \u2022 Computing methodologies \u2192 Supervised learning by classification; \u2022 Information systems \u2192 Retrieval tasks and goals."}, "cited_paper_content": {"title": "A Comprehensive Survey On Cross-Modal Retrieval", "abstract": "In recent years, cross-modal retrieval has drawn much attention due to the rapid growth of multimodal data. It takes one type of data as the query to retrieve relevant data of another type. For example, a user can use a text to retrieve relevant pictures or videos. Since the query and its retrieved results can be of different modalities, how to measure the content similarity between different modalities of data remains a challenge. Various methods have been proposed to deal with such a problem. In this paper, we first review a number of representative methods for cross-modal retrieval and classify them into two main groups: 1) real-valued representation learning, and 2) binary representation learning. Real-valued representation learning methods aim to learn real-valued common representations for different modalities of data. To speed up the cross-modal retrieval, a number of binary representation learning methods are proposed to map different modalities of data into a common Hamming space. Then, we introduce several multimodal datasets in the community, and show the experimental results on two commonly used multimodal datasets. The comparison reveals the characteristic of different kinds of cross-modal retrieval methods, which is expected to benefit both practical applications and future research. 
Finally, we discuss open problems and future research directions."}, "keywords": ["representation divergence"], "citation_intent": "method"} {"citing_id": "2303.00147v1", "cited_id": "1209.0194", "section_title": "Upper Bound", "citation": "As with any type of planar straight-line graph, the number of linear forests on O(k) points is singly exponential in k #REFR , so P [K] can be encoded with O(k) bits of information.", "text_before_citation": ["Q can be encoded by specifying its size and the subset of L \u2229 S of that size, out of |Q| possibilities, so by Lemma 11 the number of bits needed to specify it is O(log n + k + k log(n/k)).", "(Both the log n term and the +1 in the statement of the lemma are included to handle the case when k = 0 but |Q| > 0.", "Lemma 11 applies only when k \u2264 n/2 but for larger k the bound to be proven is superlinear and the result is immediate.)", "\u2022 For each point in Q, a specification of whether it has a neighbor in L, and if so in which direction. This takes O(k) bits of information.", "\u2022 The induced subgraph P [K \u222a Q], a linear forest using only the points in K \u222a Q, and omitting the edges of P that lie entirely within L."], "text_after_citation": ["Then P may be recovered by combining the induced subgraph P [K \u222a Q] with segments of L starting and ending at points of Q and continuing in the specified direction from each of these points.", "All pieces of this encoding add up to the stated bound on the number of bits needed to encode the entire path."], "citing_paper_content": {"title": "Non-Crossing Hamiltonian Paths And Cycles In Output-Polynomial Time", "abstract": "We show that, for planar point sets, the number of non-crossing Hamiltonian paths is polynomially bounded in the number of non-crossing paths, and the number of non-crossing Hamiltonian cycles (polygonalizations) is polynomially bounded in the number of surrounding cycles.
As a consequence, we can list the non-crossing Hamiltonian paths or the polygonalizations, in time polynomial in the output size, by filtering the output of simple backtracking algorithms for non-crossing paths or surrounding cycles respectively. To prove these results we relate the numbers of non-crossing structures to two easily-computed parameters of the point set: the minimum number of points whose removal results in a collinear set, and the number of points interior to the convex hull. These relations also lead to polynomial-time approximation algorithms for the numbers of structures of all four types, accurate to within a constant factor of the logarithm of these numbers."}, "cited_paper_content": {"title": "Counting Plane Graphs: Cross-Graph Charging Schemes", "abstract": "We study cross-graph charging schemes for graphs drawn in the plane. These are charging schemes where charge is moved across vertices of different graphs. Such methods have been recently applied to obtain various properties of triangulations that are embedded over a fixed set of points in the plane. We show how this method can be generalized to obtain results for various other types of graphs that are embedded in the plane. Specifically, we obtain a new bound of $O^*(187.53^N)$ (where the $O^*()$ notation hides polynomial factors) for the maximum number of crossing-free straight-edge graphs that can be embedded over any specific set of $N$ points in the plane (improving upon the previous best upper bound $207.85^N$ in Hoffmann et al.). We also derive upper bounds for numbers of several other types of plane graphs (such as connected and bi-connected plane graphs), and obtain various bounds on expected vertex-degrees in graphs that are uniformly chosen from the set of all crossing-free straight-edge graphs that can be embedded over a specific point set. We then show how to apply the cross-graph charging-scheme method for graphs that allow certain types of crossings. 
Specifically, we consider graphs with no set of $k$ pairwise-crossing edges (more commonly known as $k$-quasi-planar graphs). For $k=3$ and $k=4$, we prove that, for any set $S$ of $N$ points in the plane, the number of graphs that have a straight-edge $k$-quasi-planar embedding over $S$ is only exponential in $N$."}, "keywords": ["linear forests", "planar straight-line graph"], "citation_intent": "background"} {"citing_id": "2303.09732v1", "cited_id": "1701.04082", "section_title": "C Omitted Evaluation Results", "citation": "As a result, RIGA has a similar vulnerability to #REFR , as their watermark extraction procedures differ only in the type of extractor, which is also inexecutable due to the incompatible input dimension of the trained extractor for RIGA. Evaluation Results.", "text_before_citation": ["Meanwhile, they replace the watermark extractor, which has been previously implemented with a predefined linear transformation #OTHEREFR , with a learnable fully-connected neural network (FCN), for boosting the encoding capacity of watermarking messages.
Similar to Uchida et al.", "#OTHEREFR , the watermark-related weights are first selected from the target model and then projected to a binary string s via the FCN-based extractor during the ownership verification procedure.", "Discussion.", "Simply replacing the linear transformation matrix in Uchida et al.", "#OTHEREFR with a learnable extractor cannot completely eliminate the removal threats from our attack based on model structural obfuscation."], "text_after_citation": ["We follow their evaluation settings to watermark Inception-V3 trained on CelebA, which achieves 95.90% accuracy and 0% BER [74] .", "We employ the default setups that the watermark is embedded into the third convolutional layer of the target model and the extractor is a multilayer perceptron with one hidden layer.", "With our attack framework, we successfully inhibit the ownership verification of RIGA without any loss to the utility of the victim model.", "Even applying the error-handling mechanisms, the BER of the extracted message is increased to an unacceptable level.", "For example, when we utilize Max-First error-handling to obtain the embedded watermark, the BER is increased to 76.04% when we inject the dummy neurons generated via NeuronSplit. Passport-aware Normalization. Zhang et al."], "citing_paper_content": {"title": "Rethinking White-Box Watermarks On Deep Learning Models Under Neural Structural Obfuscation", "abstract": "Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations. To trace illegally distributed model copies, DNN watermarking is an emerging technique for embedding and verifying secret identity messages in the prediction behaviors or the model internals.
Sacrificing less functionality and involving more knowledge about the target DNN, the latter branch called white-box DNN watermarking is believed to be accurate, credible and secure against most known watermark removal attacks, with emerging research efforts in both academia and industry. In this paper, we present the first systematic study on how the mainstream white-box DNN watermarks are commonly vulnerable to neural structural obfuscation with dummy neurons, a group of neurons which can be added to a target model but leave the model behavior invariant. Devising a comprehensive framework to automatically generate and inject dummy neurons with high stealthiness, our novel attack intensively modifies the architecture of the target model to inhibit the success of watermark verification. With extensive evaluation, our work for the first time shows that nine published watermarking schemes require amendments to their verification procedures."}, "cited_paper_content": {"title": "Embedding Watermarks Into Deep Neural Networks", "abstract": "Significant progress has been made with deep neural networks recently. Sharing trained models of deep neural networks has been very important in the rapid progress of research and development of these systems. At the same time, it is necessary to protect the rights to shared trained models. To this end, we propose to use digital watermarking technology to protect intellectual property and detect intellectual property infringement in the use of trained models. First, we formulate a new problem: embedding watermarks into deep neural networks. Second, we propose a general framework for embedding a watermark in model parameters, using a parameter regularizer. Our approach does not impair the performance of networks into which a watermark is placed because the watermark is embedded while training the host network.
Finally, we perform comprehensive experiments to reveal the potential of watermarking deep neural networks as the basis of this new research effort. We show that our framework can embed a watermark during the training of a deep neural network from scratch, and during fine-tuning and distilling, without impairing its performance. The embedded watermark does not disappear even after fine-tuning or parameter pruning; the watermark remains complete even after 65% of parameters are pruned."}, "keywords": ["watermark extraction procedures"], "citation_intent": "result"} {"citing_id": "2305.02656v1", "cited_id": "1905.00258", "section_title": "A. Previous Work", "citation": "In #REFR , a decentralized adaptive routing scheme has been developed, in which the imperfection of quantum memories is taken into account.", "text_before_citation": ["Quantum entanglement routing has been the focus of many works in the past few years.", "A design of an optimal routing scheme has been presented in #OTHEREFR , with the end-to-end entanglement rate set as the optimality figure of merit.", "In #OTHEREFR , the authors have presented a remote entanglement distribution scheme for linear repeater chains, where an optimal route maximizing the end-to-end entanglement is determined alongside the optimal sequence of the entanglement swaps.", "Distributed entanglement routing algorithms, with latency taken into account, have been the purpose of #OTHEREFR .", "The problem of optimizing end-to-end entanglement in a many source-destination scenario has been studied in detail as a multicommodity flow problem in #OTHEREFR ."], "text_after_citation": ["A multipath routing approach for multiple end-to-end entanglement has been thoroughly investigated in #OTHEREFR .", "In #OTHEREFR , the authors handled the case where quantum repeaters are allowed to perform quantum encoding, where it has been shown that the latter drastically increases the end-to-end entanglement rates with respect to usual
protocols.", "This has been recently experimentally investigated in NV centers in #OTHEREFR .", "The effect of this type of intermediate encoding on end-to-end key rate generation in QKD has been studied in #OTHEREFR .", "Multipartite entanglement generation and distribution in quantum networks relying on a central node connected by EPR pairs to the remote clients has been the focus of #OTHEREFR ."], "citing_paper_content": {"title": "The Quantum Internet: An Efficient Stabilizer States Distribution Scheme", "abstract": "Quantum networks constitute a major part of quantum technologies. They will boost distributed quantum computing drastically by providing a scalable modular architecture of quantum chips, or by establishing an infrastructure for measurement-based quantum computing. Moreover, they will provide the backbone of the future quantum internet, allowing for high margins of security. Interestingly, the advantages that the quantum networks would provide for communications rely on entanglement distribution, which suffers from high latency in protocols based on Bell pair distribution and bipartite entanglement swapping. Moreover, the designed algorithms for multipartite entanglement routing suffer from intractability issues making them unsolvable exactly in polynomial time. In this paper, we investigate a new approach for graph states distribution in quantum networks relying inherently on local quantum coding (LQC) isometries and on multipartite states transfer. Additionally, single-shot bounds for stabilizer states distribution are provided. Analogously to network coding, these bounds are shown to be achievable if appropriate isometries/stabilizer codes in relay nodes are chosen, which induces a lower latency entanglement distribution.
As a matter of fact, the advantages of the protocol for different figures of merit of the network are provided."}, "cited_paper_content": {"title": "Opportunistic Entanglement Distribution For The Quantum Internet", "abstract": "Quantum entanglement is a building block of the entangled quantum networks of the quantum Internet. A fundamental problem of the quantum Internet is entanglement distribution. Since quantum entanglement will be fundamental to any future quantum networking scenarios, the distribution mechanism of quantum entanglement is a critical and emerging issue in quantum networks. Here we define the method of opportunistic entanglement distribution for the quantum Internet. The opportunistic model defines distribution sets that are aimed to select those quantum nodes for which the cost function picks up a local minimum. The cost function utilizes the error patterns of the local quantum memories and the predictability of the evolution of the entanglement fidelities. Our method provides efficient entanglement distributing with respect to the actual statuses of the local quantum memories of the node pairs. The model provides an easily-applicable, moderate-complexity solution for high-fidelity entanglement distribution in experimental quantum Internet scenarios."}, "keywords": ["quantum memories"], "citation_intent": "method"} {"citing_id": "2304.00974v1", "cited_id": "1508.04983", "section_title": "A. Related Works", "citation": "In #REFR , the robust stabilization of the FM algorithm when uncertainty exists in the disturbance is given by the linear matrix inequality (LMI).", "text_before_citation": ["It is widely used in various fields, such as the optimization design in the chemical industry #OTHEREFR , power control #OTHEREFR , and resource allocation #OTHEREFR , #OTHEREFR .", "Since GP contains and optimizes only positive parameters, it plays a crucial role in the positive system #OTHEREFR . 
GP-based algorithms are generally robust and time-efficient.", "As shown in #OTHEREFR , the optimization framework for parameter tuning problems constrained by the H_2 norm, H_\u221e norm, Hankel norm or Schatten p-norm can be established and solved efficiently via GP.", "For the uncertainty of the log-quantized feedback errors in the FM algorithm, the stabilization problem for bounded-input bounded-output systems is investigated in #OTHEREFR .", "The refinement of the quantization level enables the cellular network to obtain a better QoS."], "text_after_citation": ["Compared with LMI, GP allows greater scalability since it can solve more complex and larger-scale networks with higher accuracy.", "Therefore, the robust stabilization and resource allocation problems of the FM algorithm with structured uncertainties are further investigated through GP in #OTHEREFR .", "In this work, we propose a convex optimization framework, specifically GP, for the robust stabilization problem under structured uncertainties of the discrete-time FM algorithm.", "The GP formulation is extended to an iterative algorithm to address the resilient stabilization problem of cellular network QoS under the threat of adding-edge attacks, that is, to determine the precise GNE of the two subnetwork policymakers.", "The attacker can be regarded as the worst-case structured uncertainty in this GP framework with norm constraints."], "citing_paper_content": {"title": "Optimal Resource Allocation Between Two Nonfully Cooperative Wireless Networks Under Malicious Attacks: A Gestalt Game Perspective", "abstract": "In this paper, the problem of seeking optimal distributed resource allocation (DRA) policies on cellular networks in the presence of an unknown malicious adding-edge attacker is investigated. This problem is described as the games of games (GoG) model.
Specifically, two subnetwork policymakers constitute a Nash game, while the confrontation between each subnetwork policymaker and the attacker is captured by a Stackelberg game. First, we show that the communication resource allocation of cellular networks based on the Foschini-Miljanic (FM) algorithm can be transformed into a geometric program and be efficiently solved via convex optimization. Second, the upper limit of attack magnitude that can be tolerated by the network is calculated by the corresponding theory, and it is proved that the above geometric programming (GP) framework is solvable within the attack bound, that is, there exists a Gestalt Nash equilibrium (GNE) in our GoG. Third, a heuristic algorithm that iteratively uses GP is proposed to identify the optimal policy profiles of both subnetworks, for which asymptotic convergence is also confirmed. Fourth, a greedy heuristic adding-edge strategy is developed for the attacker to determine the set of the most vulnerable edges. Finally, simulation examples illustrate that the proposed theoretical results are robust and can achieve the GNE. It is verified that the transmission gains and interference gains of all channels are well tuned within a limited budget, despite the existence of malicious attacks."}, "cited_paper_content": {"title": "A Convex Characterization Of Robust Stability For Positive And Positively Dominated Linear Systems", "abstract": "We provide convex necessary and sufficient conditions for the robust stability of linear positively dominated systems. In particular we show that the structured singular value is always equal to its convex upper bound for nonnegative matrices and we use this result to derive necessary and sufficient Linear Matrix Inequality (LMI) conditions for robust stability that involve only the system's static gain. 
We show how this approach can be applied to test the robust stability of the Foschini-Miljanic algorithm for power control in wireless networks in the presence of uncertain interference."}, "keywords": ["FM algorithm", "robust stabilization"], "citation_intent": "background"} {"citing_id": "2303.01668v1", "cited_id": "1801.01290", "section_title": "Experiments", "citation": "For DMControl, we collect the offline dataset in a similar way to the procedure to collect the Mixed dataset using SAC #REFR .", "text_before_citation": ["The Weak dataset is collected from the first 1M transitions generated by DQN.", "The Mixed dataset is obtained by concatenating multiple checkpoints evenly throughout the training of DQN.", "The quality of the dataset increases from Random to Mixed.", "We also evaluate the algorithms on datasets of different sizes.", "Larger datasets can be obtained by running the above procedure multiple times with different random seeds."], "text_after_citation": ["Baseline algorithms.", "In our experiments, we compare our algorithm with a wide range of previous algorithms including both sample-efficient RL algorithms and pretraining algorithms for RL.", "For Atari games, we incorporate the representation pretrained by RePreM with Rainbow #OTHEREFR in downstream tasks (except for dynamic prediction).", "The baseline algorithms include not only sample-efficient online RL algorithms (such as Rainbow, SimPLe #OTHEREFR", "2019] , data-efficient Rainbow/DER #OTHEREFR , DrQ #OTHEREFR , and SPR #OTHEREFR"
RePreM is simple but effective compared to existing representation pretraining methods in RL. It avoids algorithmic sophistication (such as data augmentation or estimating multiple models) with sequence modeling and generates a representation that captures long-term dynamics well. Empirically, we demonstrate the effectiveness of RePreM in various tasks, including dynamic prediction, transfer learning, and sample-efficient RL with both value-based and actor-critic methods. Moreover, we show that RePreM scales well with dataset size, dataset quality, and the scale of the encoder, which indicates its potential towards big RL models."}, "cited_paper_content": {"title": "Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning With A Stochastic Actor", "abstract": "Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy - that is, succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. 
Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds."}, "keywords": ["Mixed dataset", "offline dataset"], "citation_intent": "method"} {"citing_id": "2303.05798v1", "cited_id": "1806.06823", "section_title": "B.3. Domain Adaptation For Bci", "citation": "First, the data from the BCI Competition IV 2a are preprocessed using the code from #REFR available at https://github.com/MultiScale-BCI/IV-2a.", "text_before_citation": ["For both the optimization over particles and over transformations, we use geoopt #OTHEREFR with the Riemannian gradient descent. We now detail the hyperparameters and the procedure."], "text_after_citation": ["We applied a band-pass filter between 8 and 30 Hz.", "With these hyper-parameters, we get one regularized covariance matrix per subject.", "For all experiments, we report the results averaged over 5 runs.", "For the sliced discrepancies, we always use L = 500 projections, which we draw only once at the beginning.", "When optimizing over particles, we used a learning rate of 1000 for the sliced methods and of 10 for Wasserstein and Sinkhorn."], "citing_paper_content": {"title": "Sliced-Wasserstein On Symmetric Positive Definite Matrices For M/Eeg Signals", "abstract": "When dealing with electro- or magnetoencephalography records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals. Learning with these matrices requires using Riemannian geometry to account for their structure. In this paper, we propose a new method to deal with distributions of covariance matrices and demonstrate its computational efficiency on M/EEG multivariate time series. More specifically, we define a Sliced-Wasserstein distance between measures of symmetric positive definite matrices that comes with strong theoretical guarantees.
Then, we take advantage of its properties and kernel methods to apply this distance to brain-age prediction from MEG data and compare it to state-of-the-art algorithms based on Riemannian geometry. Finally, we show that it is an efficient surrogate to the Wasserstein distance in domain adaptation for Brain Computer Interface applications."}, "cited_paper_content": {"title": "Fast And Accurate Multiclass Inference For Mi-Bcis Using Large Multiscale Temporal And Spectral Features", "abstract": "Accurate, fast, and reliable multiclass classification of electroencephalography (EEG) signals is a challenging task towards the development of motor imagery brain-computer interface (MI-BCI) systems. We propose enhancements to different feature extractors, along with a support vector machine (SVM) classifier, to simultaneously improve classification accuracy and execution time during training and testing. We focus on the well-known common spatial pattern (CSP) and Riemannian covariance methods, and significantly extend these two feature extractors to multiscale temporal and spectral cases. The multiscale CSP features achieve 73.70$\\pm$15.90% (mean$\\pm$ standard deviation across 9 subjects) classification accuracy that surpasses the state-of-the-art method [1], 70.6$\\pm$14.70%, on the 4-class BCI competition IV-2a dataset. The Riemannian covariance features outperform the CSP by achieving 74.27$\\pm$15.5% accuracy and executing 9x faster in training and 4x faster in testing. 
Using more temporal windows for Riemannian features results in 75.47$\pm$12.8% accuracy with 1.6x faster testing than CSP."}, "keywords": ["BCI Competition IV"], "citation_intent": "method"} {"citing_id": "2303.15435v1", "cited_id": "1405.0312", "section_title": "Text-To-Image Watermarking Performance", "citation": "We apply generative models watermarked with 48-bit signatures on prompts of the MS-COCO #REFR validation set.", "text_before_citation": ["This section shows the potential of our method for detection and identification of images generated by a Stable-Diffusion-like model #OTHEREFR 2 ."], "text_after_citation": ["We evaluate detection and identification on the outputs, as illustrated in Figure 1 .", "We evaluate their robustness to different transformations applied to generated images: strong cropping (10% of the image remaining), brightness shift (strength factor 2.0), as well as a combination of crop 50%, brightness shift 1.5 and JPEG 80. This covers typical geometric and photometric edits (see Fig. 5 for visual examples).", "The performance is partly obtained from experiments and partly by extrapolating small-scale measurements."], "citing_paper_content": {"title": "The Stable Signature: Rooting Watermarks In Latent Diffusion Models", "abstract": "Generative image modeling enables a wide range of applications but raises ethical concerns about responsible deployment. This paper introduces an active strategy combining image watermarking and Latent Diffusion Models. The goal is for all generated images to conceal an invisible watermark allowing for future detection and/or identification. The method quickly fine-tunes the latent decoder of the image generator, conditioned on a binary signature. A pre-trained watermark extractor recovers the hidden signature from any generated image and a statistical test then determines whether it comes from the generative model.
We evaluate the invisibility and robustness of the watermarks on a variety of generation tasks, showing that Stable Signature works even after the images are modified. For instance, it detects the origin of an image generated from a text prompt, then cropped to keep 10% of the content, with 90+% accuracy at a false positive rate below 10^-6. 1. Introduction Recent progress in generative modeling and natural language processing has made it easy to create and manipulate images in a photorealistic manner. For instance, DALL\u00b7E 2 [60] or Stable Diffusion [64] generate images from text, which are often indistinguishable from real artworks. They have given birth to many image editing tools like ControlNet [100], Instruct-Pix2Pix [7], and others [13, 27, 67]. They are establishing themselves as creative tools for artists, designers, and the general public. While this is a great step forward for generative AI, it raises new ethical concerns. Indeed, their sophistication is such that it will soon be impossible to distinguish AI generations from real pictures. For example, a generated picture recently won an art competition [28]. Not being able to identify that images are generated by AI makes it difficult to remove them from certain platforms and to ensure their compliance with ethical standards. The lack of traceability also opens the door to new threats such as deep fakes, impersonation or copyright usurpation [8, 17]."}, "cited_paper_content": {"title": "Microsoft Coco: Common Objects In Context", "abstract": "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization.
Our dataset contains photos of 91 object types that would be easily recognizable by a 4-year-old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model."}, "keywords": ["generative models", "MS-COCO validation set"], "citation_intent": "method"} {"citing_id": "2303.17780v1", "cited_id": "1709.06182", "section_title": "Background", "citation": "Pre-trained language models (PTLMs) are neural networks that aim to learn the statistical patterns in programming languages #REFR .", "text_before_citation": [], "text_after_citation": ["PTLMs are first pre-trained with the next token prediction objective on a large-scale unlabeled code corpus.", "Given a code file, PTLMs are trained to predict the next token given some previous tokens.", "write a function to remove first and last occurrence of a given character from the string"], "citing_paper_content": {"title": "Towards Enhancing In-Context Learning For Code Generation", "abstract": "In-context learning (ICL) with pre-trained language models (PTLMs) has shown great success in code generation. ICL does not require training. PTLMs take as the input a prompt consisting of a few requirement-code examples and a new requirement, and output a new program. However, existing studies simply reuse ICL techniques for natural language generation and ignore unique features of code generation. We refer to these studies as standard ICL. Inspired by observations of the human coding process, we propose a novel ICL approach for code generation named AceCoder. Compared to standard ICL, AceCoder has two novelties. (1) Example retrieval.
It retrieves similar programs as examples and learns programming skills (e.g., algorithms, APIs) from them. (2) Guided Code Generation. It encourages PTLMs to output an intermediate preliminary (e.g., test cases, APIs) before generating programs. The preliminary can help PTLMs understand requirements and guide the next code generation. We apply AceCoder to six PTLMs (e.g., Codex) and evaluate it on three public benchmarks using the Pass@k. Results show that AceCoder can significantly improve the performance of PTLMs on code generation. (1) In terms of Pass@1, AceCoder outperforms standard ICL by up to 79.7% and fine-tuned models by up to 171%. (2) AceCoder is effective in PTLMs with different sizes (e.g., 1B to 175B) and different languages (e.g., Python, Java, and JavaScript). (3) We investigate multiple choices of the intermediate preliminary. (4) We manually evaluate generated programs in three aspects and prove the superiority of AceCoder. (5) Finally, we discuss some insights about ICL for practitioners."}, "cited_paper_content": {"title": "A Survey Of Machine Learning For Big Code And Naturalness", "abstract": "Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit the abundance of patterns of code. In this article, we survey this work. We contrast programming languages against natural languages and discuss how these similarities and differences drive the design of probabilistic models. We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature. 
Then, we review how researchers have adapted these models to application areas and discuss cross-cutting and application-specific challenges and opportunities."}, "keywords": ["Pre-trained language models", "programming languages"], "citation_intent": "background"} {"citing_id": "2303.12621v1", "cited_id": "1912.13192", "section_title": "Results On Wod", "citation": "In particular, for pedestrian, we outperform the baseline model PV-RCNN++ #REFR by 2.77%/2.56% in terms of both L1 and L2 mAP, which indicates the effectiveness of the proposed model in handling hard examples.", "text_before_citation": ["The results on the validation set are displayed in Tab.", "1, and it can be seen that we achieve new state-of-the-art performance on all the three classes."], "text_after_citation": ["In comparison with other Transformer-based models, we focus on vehicle since the counterparts only report the performance on it. As Tab.", "2 shows, our OcTr achieves the best mAP among all these convolution-and Transformer-based backbones.", "It also outperforms the Transformer-based detection head network CT3D #OTHEREFR by 2.52% and 1.46% in L1 and L2 mAP.", "Regarding the accuracies at different distances, OcTr ranks the first place in the range of 30m-50m and 50m-inf, which surpasses the previous best by 0.45%, 2.66% in L1 mAP and 1.18%, 2.26% in L2 mAP respectively.", "It clearly illustrates that OcTr has the advantage in capturing long-range fine-grained context, which facilitates dealing with objects far away."], "citing_paper_content": {"title": "Octr: Octree-Based Transformer For 3D Object Detection", "abstract": "A key challenge for LiDAR-based 3D object detection is to capture sufficient features from large scale 3D scenes especially for distant or/and occluded objects. 
Despite recent efforts made by Transformers with the long sequence modeling capability, they fail to properly balance the accuracy and efficiency, suffering from inadequate receptive fields or coarse-grained holistic correlations. In this paper, we propose an Octree-based Transformer, named OcTr, to address this issue. It first constructs a dynamic octree on the hierarchical feature pyramid through conducting self-attention on the top level and then recursively propagates to the level below restricted by the octants, which captures rich global context in a coarse-to-fine manner while maintaining the computational complexity under control. Furthermore, for enhanced foreground perception, we propose a hybrid positional embedding, composed of the semantic-aware positional embedding and attention mask, to fully exploit semantic and geometry clues. Extensive experiments are conducted on the Waymo Open Dataset and KITTI Dataset, and OcTr reaches new state-of-the-art results."}, "cited_paper_content": {"title": "Pv-Rcnn: Point-Voxel Feature Set Abstraction For 3D Object Detection", "abstract": "We present a novel and high-performance 3D object detection framework, named PointVoxel-RCNN (PV-RCNN), for accurate 3D object detection from point clouds. Our proposed method deeply integrates both 3D voxel Convolutional Neural Network (CNN) and PointNet-based set abstraction to learn more discriminative point cloud features. It takes advantage of efficient learning and high-quality proposals of the 3D voxel CNN and the flexible receptive fields of the PointNet-based networks. Specifically, the proposed framework summarizes the 3D scene with a 3D voxel CNN into a small set of keypoints via a novel voxel set abstraction module to save follow-up computations and also to encode representative scene features.
Given the high-quality 3D proposals generated by the voxel CNN, the RoI-grid pooling is proposed to abstract proposal-specific features from the keypoints to the RoI-grid points via keypoint set abstraction with multiple receptive fields. Compared with conventional pooling operations, the RoI-grid feature points encode much richer context information for accurately estimating object confidences and locations. Extensive experiments on both the KITTI dataset and the Waymo Open dataset show that our proposed PV-RCNN surpasses state-of-the-art 3D detection methods with remarkable margins by using only point clouds."}, "keywords": ["L2 mAP", "baseline model"], "citation_intent": "result"} {"citing_id": "2303.02874v1", "cited_id": "1707.07397", "section_title": "C. Current Defense Strategy And Limitation", "citation": "High success had been achieved by successful defense of Randomization-based defense technique against both blackbox and gray-box based attacks, but it is still vulnerable whitebox based attack, for example, the EoT method #REFR can be easily attacked and compromised simply by considering the randomization process during attack.", "text_before_citation": ["The approach is similar to a typical antivirus software, which is constantly being updated on a regular basis.", "As effective as adversarial training may be in defense against adversarial attack, it still requires continuous maintenance or update in order to be effective in combating new threats and it is still suffering from the fundamental problem of the fact that it can only successfully defend against threats or attack that has already happened and is already trained against.", "2) Randomization: Several adversarial defense methods relied on randomization as a technique for mitigating the effects of adversarial Perturbations in the input and/or feature domain #OTHEREFR .", "The idea behind this defense technique is the robustness of deep neural network model to random perturbation.", "The aim of 
randomization-based defense is to diffuse the adversarial effect of an adversarial sample into several random effects, which a wide variety of deep neural network models tolerate as ordinary input noise."}, "text_after_citation": [], "citing_paper_content": {"title": "Adversarial Sampling For Fairness Testing In Deep Neural Network", "abstract": "In this research, we focus on the use of adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes of images in a given dataset. While several frameworks have been proposed to ensure the robustness of machine learning models against adversarial attack, including adversarial training algorithms, there is still the pitfall that adversarial training tends to cause disparities in accuracy and robustness among different groups. Our research aims to use adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes or categories of images in a given dataset. We successfully demonstrate a new method of ensuring fairness across various groups of inputs to a deep neural network classifier. We trained our neural network model on the original images only, without training it on the perturbed or attacked images. When we fed the adversarial samples to our model, it was able to predict the original category/class each adversarial sample belongs to. We also introduced the separation-of-concerns concept from software engineering, whereby an additional standalone filter layer heavily removes the noise or attack from a perturbed image before automatically passing it to the network for classification; with this filter in place, we achieved an accuracy of 93.3%.
The CIFAR-10 dataset has ten categories, so, to account for fairness, we applied our hypothesis across each category and obtained consistent results and accuracy."}, "cited_paper_content": {"title": "Synthesizing Robust Adversarial Examples", "abstract": "Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems. We demonstrate the existence of robust 3D adversarial objects, and we present the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations. We synthesize two-dimensional adversarial images that are robust to noise, distortion, and affine transformation. We apply our algorithm to complex three-dimensional objects, using 3D-printing to manufacture the first physical adversarial objects.
Our results demonstrate the existence of 3D adversarial objects in the physical world."}, "keywords": ["Randomization-based defense technique", "gray-box based attacks"], "citation_intent": "method"} {"citing_id": "2303.10753v1", "cited_id": "0807.4462", "section_title": "Related Works:", "citation": "Bonnabel and Sepulchre #REFR proposed a metric for $S^+(p, n)$ that is invariant with respect to all transformations that preserve angles and derived the geometric mean.", "text_before_citation": ["In contrast to SPD matrices, symmetric positive semidefinite (SPSD) matrices are singular and do not have matrix logarithms.", "There are two main approaches to define metrics on SPSD matrices.", "The first approach #OTHEREFR , #OTHEREFR exploits $S^+(p, n)$, the manifold of rank-$p$ SPSDs of size $n$, which can be identified with the quotient manifold $\\mathbb{R}^{n \\times p}_* / O_p$, where $\\mathbb{R}^{n \\times p}_*$ is the set of full-rank $n \\times p$ matrices and $O_p$ is the orthogonal group of order $p$."], "text_after_citation": ["The second approach involves adding a regularization term to transform the SPSD matrix into an SPD matrix or truncating the spectrum. Dodero et al.", "#OTHEREFR regularized the graph Laplacian to become positive definite by adding a regularization term and used the Log-Euclidean metric for downstream classification tasks. Shnitzer et al.", "#OTHEREFR truncated the full spectrum of diffusion operators to a fixed length and proved that the spectrum truncation preserves the lower bound of the Log-Euclidean metric.", "(2) Fr\u00e9chet Analysis of Graph Laplacians.", "The Fr\u00e9chet mean #OTHEREFR is a concept in statistics that provides a representative center of a set of data objects in a metric space."], "citing_paper_content": {"title": "Fr\u00e9chet Statistics Based Change Point Detection In Dynamic Social Networks", "abstract": "This paper proposes a method to detect change points in dynamic social networks using Fr\u00e9chet statistics.
We address two main questions: (1) what metric can quantify the distances between graph Laplacians in a dynamic network and enable efficient computation, and (2) how can the Fr\u00e9chet statistics be extended to detect multiple change points while maintaining the significance level of the hypothesis test? Our solution defines a metric space for graph Laplacians using the Log-Euclidean metric, enabling a closed-form formula for Fr\u00e9chet mean and variance. We present a framework for change point detection using Fr\u00e9chet statistics and extend it to multiple change points with binary segmentation. The proposed algorithm uses incremental computation for Fr\u00e9chet mean and variance to improve efficiency and is validated on simulated and two real-world datasets, namely the UCI message dataset and the Enron email dataset."}, "cited_paper_content": {"title": "Riemannian Metric And Geometric Mean For Positive Semidefinite Matrices Of Fixed Rank", "abstract": "This paper introduces a new metric and mean on the set of positive semidefinite matrices of fixed-rank. The proposed metric is derived from a well-chosen Riemannian quotient geometry that generalizes the reductive geometry of the positive cone and the associated natural metric. The resulting Riemannian space has strong geometrical properties: it is geodesically complete, and the metric is invariant with respect to all transformations that preserve angles (orthogonal transformations, scalings, and pseudoinversion). A meaningful approximation of the associated Riemannian distance is proposed, that can be efficiently numerically computed via a simple algorithm based on SVD. The induced mean preserves the rank, possesses the most desirable characteristics of a geometric mean, and is easy to compute."}, "keywords": ["metric"], "citation_intent": "background"} {"citing_id": "2303.17958v1", "cited_id": "1801.05039", "section_title": "C. 
Global Convergence Of Deepo", "citation": "In contrast to the existing literature #REFR on PO for the LQR problem, the cost J(G) here is projected gradient dominated, meaning that G is optimal if the projected gradient $\\Pi_{X_-} \\nabla J(G)$ is equal to zero.", "text_before_citation": ["$V \\in \\mathcal{N}(X_-)$ with $\\|V\\|_F = 1$ in the descent cone of $\\mathcal{S}_G(a)$ such that $J'(G)[V] \\leq -c\\alpha(a)(J(G) - J^*)^{1/2}$, where $J'(G)[V]$ denotes the derivative along the direction $V$.", "Let $\\bar{V} = \\Pi_{X_-} \\nabla J(G) / \\|\\Pi_{X_-} \\nabla J(G)\\|_F$ be the normalized projected gradient.", "Then, we have $J'(G)[\\bar{V}] \\leq J'(G)[V]$ since both $\\bar{V}$ and $V$ are in $\\mathcal{N}(X_-)$, and $\\bar{V}$ is the direction of the projection of the gradient.", "The proof is completed by letting $\\mu(a) = 1/(c\\alpha(a))$ #OTHEREFR ."], "text_after_citation": ["It is usually regarded as a weaker condition than strong convexity in nonconvex optimization theory.", "Under Lemma 4, one can show global convergence of projected gradient update #OTHEREFR .", "To further show a linear convergence rate, we require the smoothness of J(G).", "However, since J(G) tends to infinity as G approaches the boundary $\\partial \\mathcal{S}_G$ , we can only show that J(G) is locally smooth over any sublevel set. Define the Hessian acting on the direction $Z \\in \\mathbb{R}^{T \\times n}$ as $\\nabla^2 J(G)[Z, Z] := \\frac{d^2}{dt^2} J(G + tZ)\\big|_{t=0}$"], "citing_paper_content": {"title": "Data-Enabled Policy Optimization For The Linear Quadratic Regulator", "abstract": "Policy optimization (PO), an essential approach of reinforcement learning for a broad range of system classes, requires significantly more system data than indirect (identification-followed-by-control) methods or behavioral-based direct methods even in the simplest linear quadratic regulator (LQR) problem.
In this paper, we take an initial step towards bridging this gap by proposing the data-enabled policy optimization (DeePO) method, which requires only a finite number of sufficiently exciting data to iteratively solve the LQR via PO. Based on a data-driven closed-loop parameterization, we are able to directly compute the policy gradient from a batch of persistently exciting data. Next, we show that the nonconvex PO problem satisfies a projected gradient dominance property by relating it to an equivalent convex program, leading to the global convergence of DeePO. Moreover, we apply regularization methods to enhance certainty-equivalence and robustness of the resulting controller and show an implicit regularization property. Finally, we perform simulations to validate our results."}, "cited_paper_content": {"title": "Global Convergence Of Policy Gradient Methods For The Linear Quadratic Regulator", "abstract": "Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model 2) they are an \"end-to-end\" approach, directly optimizing the performance metric of interest 3) they inherently allow for richly parameterized policies. A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties.
This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities."}, "keywords": ["projected gradient"], "citation_intent": "result"} {"citing_id": "2303.08439v1", "cited_id": "1909.12962", "section_title": "Cross-Domain Performance Evaluation", "citation": "We train our model on the Faceforensics++ dataset and evaluate its performance on the test sets of Celeb-DF #REFR and DFDC [11] .", "text_before_citation": ["We ran the public code of methods marked with \"*\" to produce results under identical settings (HQ for training and single frames for testing), with a maximum improvement of 10.25% (F2F \u2192FSW).", "Meanwhile, our model remains effective under the four intra-domain settings, which are shown in gray.", "The method tends to slightly underperform when trained on NeuralTextures, likely because its manipulation patterns only exist in certain small regions, and may be neglected during our block sampling.", "Nevertheless, compared to existing methods, our deepfake detector yields much better overall performance.", "Cross-dataset evaluations."], "text_after_citation": ["Specifically, following the previous practice in #OTHEREFR , we validate the model on Celeb-DF and use the selected model to test on DFDC.", "We adopt the HQ version of FF for training, and only use one frame per video for testing.", "Under the same setting, we ran the public code of RECCE #OTHEREFR , UIA-ViT #OTHEREFR and SBI #OTHEREFR to produce corresponding results.", "In Table 2 , we show a competitive performance with existing image-based methods, signaling satisfying adaptability of RFFR to different datasets, especially high quality datasets like Celeb-DF.", "SBI #OTHEREFR is a recent powerful deepfake detection method."], "citing_paper_content": {"title": "Real Face Foundation Representation Learning For
Generalized Deepfake Detection", "abstract": "The emergence of deepfake technologies has become a matter of social concern as they pose threats to individual privacy and public security. It is now of great significance to develop reliable deepfake detectors. However, with numerous face manipulation algorithms present, it is almost impossible to collect sufficient representative fake faces, and it is hard for existing detectors to generalize to all types of manipulation. Therefore, we turn to learn the distribution of real faces, and indirectly identify fake images that deviate from the real face distribution. In this study, we propose Real Face Foundation Representation Learning (RFFR), which aims to learn a general representation from large-scale real face datasets and detect potential artifacts outside the distribution of RFFR. Specifically, we train a model on real face datasets by masked image modeling (MIM), which results in a discrepancy between input faces and the reconstructed ones when applying the model on fake samples. This discrepancy reveals the low-level artifacts not contained in RFFR, making it easier to build a deepfake detector sensitive to all kinds of potential artifacts outside the distribution of RFFR. Extensive experiments demonstrate that our method brings about better generalization performance, as it significantly outperforms the state-of-the-art methods in crossmanipulation evaluations, and has the potential to further improve by introducing extra real faces for training RFFR."}, "cited_paper_content": {"title": "Celeb-Df: A Large-Scale Challenging Dataset For Deepfake Forensics", "abstract": "AI-synthesized face-swapping videos, commonly known as DeepFakes, is an emerging problem threatening the trustworthiness of online information. The need to develop and evaluate DeepFake detection algorithms calls for large-scale datasets. 
However, current DeepFake datasets suffer from low visual quality and do not resemble DeepFake videos circulated on the Internet. We present a new large-scale challenging DeepFake video dataset, Celeb-DF, which contains 5,639 high-quality DeepFake videos of celebrities generated using an improved synthesis process. We conduct a comprehensive evaluation of DeepFake detection methods and datasets to demonstrate the escalated level of challenges posed by Celeb-DF."}, "keywords": ["Faceforensics++ dataset"], "citation_intent": "method"} {"citing_id": "2303.12936v1", "cited_id": "1903.05987", "section_title": "Comparing Bert And Distilbert", "citation": "DistilBERT is on par with or exceeding ELMo on a binary text classification task #REFR .", "text_before_citation": ["And this would enable a fairer comparison.", "Recently, it was shown that ELMo and BERT make no significant difference in semantic analysis #OTHEREFR .", "Here it is observed that although they are close-by in the null context, DistilBERT is more robust than ELMo in the cross-context in text classification.", "The findings of this study are in line with prior work.", "The fairly comparable scores of ELMo and the traditional baselines in the null context support the observation of #OTHEREFR that is, when it comes to contextual embeddings, there is only a small improvement in learning semantics over traditional ML methods."], "text_after_citation": ["DistilBERT, as a transformer-based model, is better in capturing long-term dependencies in an input sequence #OTHEREFR .", "DistilBERT is lighter than ELMo and has a shorter training time #OTHEREFR .", "Here it should be noted that the experimental settings of the previous work and this study differ.", "In this study, ELMo and DistilBERT are compared on their fine-tuning performance on two binary text classification tasks.", "The main focus was to see how much these models can benefit in a practical way without any modification to the pretraining outputs.", "But the models were
actually pretrained on entirely different corpora (ELMo on One Billion Words Benchmark #OTHEREFR , DistilBERT on English Wikipedia and Toronto BookCorpus #OTHEREFR )."], "citing_paper_content": {"title": "", "abstract": "I am grateful to my family for their unconditional love and patience. I am grateful to Arzucan \u00d6zg\u00fcr, for being such an inspiring figure by her selfless devotion to research the most righteous way with the passion to contribute to the community. I am grateful to Ali H\u00fcrriyetoglu, for being such a role model, who could somehow always find a way to turn the mist of research questions into a structured path to create practical solutions by combining creativity and technique. I cannot thank enough my dear friends who put up with my whims throughout this journey. I thank fellows from TabiLAB for inspiring me with their brilliance, invaluable insights and recommendations. I thank the Ko\u00e7 University EMW research team for their generosity in sharing the data which was created with blood, sweat and tears. I feel lucky that I got to meet fellows in the EMW project engineering team who invested their precious time and energy to support me in this study from the very beginning. Lastly, I owe the deepest gratitude to our professors and staff members in our department who taught us how to form such a great community and made it feel like the dearest home from day one."}, "cited_paper_content": {"title": "To Tune Or Not To Tune? Adapting Pretrained Representations To Diverse Tasks", "abstract": "While most previous work has focused on different pretraining objectives and architectures for transfer learning, we ask how to best adapt the pretrained model to a given target task. We focus on the two most common forms of adaptation, feature extraction (where the pretrained weights are frozen), and directly fine-tuning the pretrained model.
Our empirical results across diverse NLP tasks with two state-of-the-art models show that the relative performance of fine-tuning vs. feature extraction depends on the similarity of the pretraining and target tasks. We explore possible explanations for this finding and provide a set of adaptation guidelines for the NLP practitioner."}, "keywords": ["binary text classification"], "citation_intent": "background"} {"citing_id": "2303.04274v1", "cited_id": "1911.00222", "section_title": "C. Sensitivity And Privacy Analysis", "citation": "It is consistent with the amplitude of the DP noise in a Gaussian noise perturbation mechanism developed in #REFR .", "text_before_citation": ["given $\\epsilon$, $\\delta$, and $M$, we adjust the DP noise variance to balance privacy preservation and the convergence of FL training.", "Remark 2.", "Given a privacy budget for M global aggregations, more clients involved in the model updates, i.e., a larger q in (8), lead to requirements of stronger perturbation noises being added to the local model of each involved client.", "This indicates less privacy leakage for each client, which is consistent with the conclusion drawn in #OTHEREFR .", "$m = 1, \\cdots, M$, since $\\frac{\\vartheta - \\vartheta^{1-M}}{\\vartheta - 1} \\xrightarrow{\\vartheta \\to 1} M$."], "text_after_citation": [], "citing_paper_content": {"title": "Amplitude-Varying Perturbation For Balancing Privacy And Utility In Federated Learning", "abstract": "While preserving the privacy of federated learning (FL), differential privacy (DP) inevitably degrades the utility (i.e., accuracy) of FL due to model perturbations caused by DP noise added to model updates. Existing studies have considered exclusively noise with persistent root-mean-square amplitude and overlooked an opportunity of adjusting the amplitudes to alleviate the adverse effects of the noise.
This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of FL and retain the capability of adjusting the learning performance. Specifically, we propose a geometric series form for the noise amplitude and reveal analytically the dependence of the series on the number of global aggregations and the (\u03b5, \u03b4)-DP requirement. We derive an online refinement of the series to prevent FL from premature convergence resulting from excessive perturbation noise. Another important aspect is an upper bound developed for the loss function of a multi-layer perceptron (MLP) trained by FL running the new DP mechanism. Accordingly, the optimal number of global aggregations is obtained, balancing the learning and privacy. Extensive experiments are conducted using MLP, support vector machine, and convolutional neural network models on four public datasets. The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated, compared to the state-of-the-art Gaussian noise mechanism with a persistent noise amplitude."}, "cited_paper_content": {"title": "Federated Learning With Differential Privacy: Algorithms And Performance Analysis", "abstract": "In this paper, to effectively prevent information leakage, we propose a novel framework based on the concept of differential privacy (DP), in which artificial noises are added to the parameters at the clients' side before aggregating, namely, noising before model aggregation FL (NbAFL). First, we prove that the NbAFL can satisfy DP under distinct protection levels by properly adapting different variances of artificial noises. Then we develop a theoretical convergence bound of the loss function of the trained FL model in the NbAFL.
Specifically, the theoretical bound reveals the following three key properties: 1) There is a tradeoff between the convergence performance and privacy protection levels, i.e., a better convergence performance leads to a lower protection level; 2) Given a fixed privacy protection level, increasing the number $N$ of overall clients participating in FL can improve the convergence performance; 3) There is an optimal number of maximum aggregation times (communication rounds) in terms of convergence performance for a given protection level. Furthermore, we propose a $K$-random scheduling strategy, where $K$ ($1