{"citing_id": "2305.01064v1", "cited_id": "1910.11333", "section_title": "Summary", "citation": "This finding weakens the claim made in #REFR of achieving a successful sampling task, namely achieving approximate sampling for some specific distribution, and may shed doubt on the potential of NISQ systems to achieve sampling tasks.", "text_before_citation": ["In this paper we studied three main issues, mostly from a statistical point of view.", "The first issue, already demonstrated in #OTHEREFR is the large gap between the Google samples and the Google noise model or any specific noise model we studied."], "text_after_citation": ["The second issue concerns Formula (77) of the Google paper (4) with its simple independence-like form.", "This, and the noise and calibration involved in the experimental process, make it surprising that Google's fidelity estimates appear to be rather close to Formula (77) for hundreds of different experiments.", "Surprises do occur in science, and may lead to important scientific discoveries, but from a statistical point of view, the subsumed independence between components of systems such as quantum computers, which are known to be sensitive to noise and errors caused by interactions with their environment, is striking.", "Related questions arise from the systematic deviation of Formula (77) for patch circuits, and the large difference between the F XEB fidelity of the two patches. The remarkable verification of Kalachev et al. #OTHEREFR and Liu et al.", "#OTHEREFR weakened these concerns by showing that Formula (77) provides a good approximation to hundreds of circuits for which the Google team stated that they had not computed the amplitudes."], "citing_paper_content": {"title": "Questions And Concerns About Google'S Quantum Supremacy Claim", "abstract": "In October 2019, Nature published a paper [6] describing an experimental work that was performed at Google. 
The paper claims to demonstrate quantum (computational) supremacy on a 53-qubit quantum computer. Since then we have been involved in a long-term project to study various statistical aspects of the Google experiment. In [30] we studied Google's statistical framework that we found to be very sound and offered some technical improvements. This document describes three main concerns (based on statistical analysis) about the Google 2019 experiment. The first concern is that the data do not agree with Google's noise model (or any other specific model). The second concern is that a crucial simple formula for a priori estimation of the fidelity seems to involve an unexpected independence assumption, and yet it gives very accurate predictions. The third concern is about statistical properties of the calibration process. 1"}, "cited_paper_content": {"title": "Supplementary Information For\"Quantum Supremacy Using A Programmable Superconducting Processor\"", "abstract": "The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor1. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits2\u20137 to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 253 (about 1016). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times\u2014our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. 
This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy8\u201314 for this specific computational task, heralding a much-anticipated computing paradigm. Quantum supremacy is demonstrated using a programmable superconducting processor known as Sycamore, taking approximately 200 seconds to sample one instance of a quantum circuit a million times, which would take a state-of-the-art supercomputer around ten thousand years to compute."}, "keywords": ["NISQ systems"], "citation_intent": "background"} {"citing_id": "2303.05283v1", "cited_id": "1909.05787", "section_title": "B. Haptic Communication", "citation": "Alternatively, the transmitter can predict its future haptic data and transmit the predicted data to compensate for the transmission delay #REFR .", "text_before_citation": ["In the tele-operated needle insertion, the force/torque feedback from the patient is predicted by inputting the force/torque commands of the surgeon to the hidden Markov model (HMM) #OTHEREFR .", "Audiovisual data collected in the interaction with a surface material are input to a neural network-based semantic learning algorithm to predict the texture of the surface material #OTHEREFR .", "Haptic data can be predicted either at the receiver side or at the transmitter side to compensate for an excessive delay or packet loss.", "The receiver can predict the haptic data from the transmitter when an excessive delay occurs #OTHEREFR .", "For example, digital twin-based prediction can be used by the receiver for low-latency interactions #OTHEREFR ."], "text_after_citation": ["In this case, the prediction of whether haptic interaction is about to occur can assist to determine whether the haptic data prediction and the subsequent transmission are necessary #OTHEREFR .", "Haptic data prediction algorithms, such as AI-based ones, can be computing-intensive.", "To this end, they can be implemented using computing resources in the 
network to satisfy the stringent delay requirements #OTHEREFR , #OTHEREFR .", "In a teleoperation scenario, each of the two interacting haptic devices is associated with one edge server which caches the haptic interaction data, trains and implements the LSTM network-based prediction algorithm, and delivers the predicted haptic data to its associated haptic device #OTHEREFR .", "Furthermore, in close proximity, auxiliary robots can be deployed around haptic devices to implement haptic data prediction and deliver the results to the devices using device-to-device (D2D) communications #OTHEREFR ."], "citing_paper_content": {"title": "Toward Immersive Communications In 6G", "abstract": "The sixth generation (6G) networks are expected to enable immersive communications and bridge the physical and the virtual worlds. Integrating extended reality, holography, and haptics, immersive communications will revolutionize how people work, entertain, and communicate by enabling lifelike interactions. However, the unprecedented demand for data transmission rate and the stringent requirements on latency and reliability create challenges for 6G networks to support immersive communications. In this survey article, we present the prospect of immersive communications and investigate emerging solutions to the corresponding challenges for 6G. First, we introduce use cases of immersive communications, in the fields of entertainment, education, and healthcare. Second, we present the concepts of immersive communications, including extended reality, haptic communication, and holographic communication, their basic implementation procedures, and their requirements on networks in terms of transmission rate, latency, and reliability. Third, we summarize the potential solutions to address the challenges from the aspects of communication, computing, and networking. 
Finally, we discuss future research directions and conclude this study."}, "cited_paper_content": {"title": "Prediction And Communication Co-Design For Ultra-Reliable And Low-Latency Communications", "abstract": "Ultra-reliable and low-latency communications (URLLC) are considered as one of three new application scenarios in the fifth generation cellular networks. In this work, we aim to reduce the user experienced delay through prediction and communication co-design, where each mobile device predicts its future states and sends them to a data center in advance. Since predictions are not error-free, we consider prediction errors and packet losses in communications when evaluating the reliability of the system. Then, we formulate an optimization problem that maximizes the number of URLLC services supported by the system by optimizing time and frequency resources and the prediction horizon. Simulation results verify the effectiveness of the proposed method, and show that the tradeoff between user experienced delay and reliability can be improved significantly via prediction and communication co-design. 
Furthermore, we carried out an experiment on the remote control in a virtual factory, and validated our concept on prediction and communication co-design with the practical mobility data generated by a real tactile device."}, "keywords": ["future haptic data", "transmission delay"], "citation_intent": "background"} {"citing_id": "2303.08610v1", "cited_id": "2002.04745", "section_title": "Blind Estimation System", "citation": "For the graph decoder, we use the 6-layer transformer encoder with the pre-layer normalization #REFR and 16 heads.", "text_before_citation": ["We reuse the prototype decoder for the parameter estimation; we add another projection head, append a task token xT to differentiate the two tasks, and remove the causal attention mask.", "Since each parameter has a different range and scale, we translate and rescale the ground-truth value to fit into [0, 1] range. Architecture Details.", "The FFT size, hop length, and the number of Mel filter banks of the reference Mel spectrogram are 1536, 384, and 256, respectively.", "The convolutional backbone is a VGGish model #OTHEREFR with the following modifications: (i) depthwise separable convolutions #OTHEREFR , (ii) channels divided into four groups with dilations of #OTHEREFR , (iii) and the use of layer normalization #OTHEREFR .", "We used a transformer decoder layer with 1 (singing) and 6 (drum) queries for the pooling."], "text_after_citation": ["While the original paper used eigenvectors of the normalized Laplacian matrix as node id encoding, we use the sinusoidal embeddings since the eigenvectors are intractable during the decoding. 
Training.", "We train the prototype decoding task in a teacher-forcing manner using the cross-entropy losses with label-smoothing of 0.1.", "At the same time, we train the parameter estimation task by feeding an oracle prototype G0 as input and using the l1 distance as an objective.", "We use AdamW #OTHEREFR optimizer, a linear learning rate scheduler with 5e-4 peak learning rate, 50k warmup steps, 200k total training steps, and batch size of 32."], "citing_paper_content": {"title": "Blind Estimation Of Audio Processing Graph", "abstract": "Musicians and audio engineers sculpt and transform their sounds by connecting multiple processors, forming an audio processing graph. However, most deep-learning methods overlook this real-world practice and assume fixed graph settings. To bridge this gap, we develop a system that reconstructs the entire graph from a given reference audio. We first generate a realistic graph-reference pair dataset and train a simple blind estimation system composed of a convolutional reference encoder and a transformer-based graph decoder. We apply our model to singing voice effects and drum mixing estimation tasks. Evaluation results show that our method can reconstruct complex signal routings, including multi-band processing and sidechaining."}, "cited_paper_content": {"title": "On Layer Normalization In The Transformer Architecture", "abstract": "The Transformer is widely used in natural language processing tasks. To train a Transformer however, one usually needs a carefully designed learning rate warm-up stage, which is shown to be crucial to the final performance but will slow down the optimization and bring more hyper-parameter tunings. In this paper, we first study theoretically why the learning rate warm-up stage is essential and show that the location of layer normalization matters. 
Specifically, we prove with mean field theory that at initialization, for the original-designed Post-LN Transformer, which places the layer normalization between the residual blocks, the expected gradients of the parameters near the output layer are large. Therefore, using a large learning rate on those gradients makes the training unstable. The warm-up stage is practically helpful for avoiding this problem. On the other hand, our theory also shows that if the layer normalization is put inside the residual blocks (recently proposed as Pre-LN Transformer), the gradients are well-behaved at initialization. This motivates us to remove the warm-up stage for the training of Pre-LN Transformers. We show in our experiments that Pre-LN Transformers without the warm-up stage can reach comparable results with baselines while requiring significantly less training time and hyper-parameter tuning on a wide range of applications."}, "keywords": ["pre-layer normalization"], "citation_intent": "method"} {"citing_id": "2305.00604v1", "cited_id": "1912.10985", "section_title": "Introduction", "citation": "Inspired by K-FAC, there have been other works discussing approximations of G \u03b8 and its inverse #REFR .", "text_before_citation": ["where \u03bb > 0 is the so-called Tikhonov regularization parameter.", "It is well-known #OTHEREFR , #OTHEREFR , that under the assumption of approximating the model f with its first-order Taylor expansion, the Hessian corresponds with the so-called generalized Gauss-Newton (GGN) matrix G \u03b8 , and hence (4) can be expressed as", "EQUATION", "A major practical limitation of (5) is the computation of the inverse term.", "A method that alleviates this difficulty is known as Kronecker-Factored Approximate Curvature (K-FAC) #OTHEREFR which approximates the block-diagonal (i.e., layer-wise) empirical Hessian or GGN matrix."], "text_after_citation": ["In the following, we discuss a popular approach that allows for (moderately) efficient 
computation.", "The generalized Gauss-Newton matrix G \u03b8 is defined as", "EQUATION", "where J and \u2207 2 denote the Jacobian and Hessian matrices, respectively.", "Correspondingly, the diagonal block of G \u03b8 corresponding to the weights of the ith layer"], "citing_paper_content": {"title": "Isaac Newton: Input-Based Approximate Curvature For Newton'S Method", "abstract": "We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that conditions the gradient using selected second-order information and has an asymptotically vanishing computational overhead, assuming a batch size smaller than the number of neurons. We show that it is possible to compute a good conditioner based on only the input to a respective layer without a substantial computational overhead. The proposed method allows effective training even in small-batch stochastic regimes, which makes it competitive to first-order as well as second-order methods."}, "cited_paper_content": {"title": "Backpack: Packing More Into Backprop", "abstract": "Automatic differentiation frameworks are optimized for exactly one thing: computing the average mini-batch gradient. Yet, other quantities such as the variance of the mini-batch gradients or many approximations to the Hessian can, in theory, be computed efficiently, and at the same time as the gradient. While these quantities are of great interest to researchers and practitioners, current deep learning software does not support their automatic calculation. Manually implementing them is burdensome, inefficient if done naively, and the resulting code is rarely shared. This hampers progress in deep learning, and unnecessarily narrows research to focus on gradient descent and its variants; it also complicates replication studies and comparisons between newly developed methods that require those quantities, to the point of impossibility. 
To address this problem, we introduce BackPACK, an efficient framework built on top of PyTorch, that extends the backpropagation algorithm to extract additional information from first- and second-order derivatives. Its capabilities are illustrated by benchmark reports for computing additional quantities on deep neural networks, and an example application by testing several recent curvature approximations for optimization."}, "keywords": ["approximations"], "citation_intent": "background"} {"citing_id": "2304.01552v1", "cited_id": "1707.09835", "section_title": "Discussion", "citation": "Meta-SGD #REFR is a well-known constant diagonal preconditioner (i.e., diag(a_1, ..., a_n)) that does not need to be positive definite, and we also investigate its modification with a constraint on positive definiteness. The results are shown in Table 8.", "text_before_citation": ["simple constant preconditioners: Approximate GAP is a low-complexity method where SVD [Table 9 data interleaved by extraction: Meta-SGD #OTHEREFR 2.4218 \u00d7 10^5 (100.0%); MC #OTHEREFR 2.7106 \u00d7 10^6 (2140.4%); PAMELA #OTHEREFR 1.6239 \u00d7 10^5 (34.1%); MH #OTHEREFR 7.2196 \u00d7 10^7 (59586.7%); Sparse-MAML #OTHEREFR 2.4218 \u00d7 10^5 (100.0%); GAP 1.2131 \u00d7 10^5 (0.2%)] Table 9. 
Ablation study of PGAP on mini-ImageNet.", "Performance of the GAP-trained model is significantly affected by not applying PGAP.", "operation is avoided by approximating GAP with a constant diagonal preconditioner.", "A natural question to ask is how does Approximate GAP compare with other constant diagonal preconditioners.", "To answer this question, we have compared Approximate GAP with Meta-SGD and a modified Meta-SGD."], "text_after_citation": ["It can be observed that enforcing positive definiteness can improve Meta-SGD.", "Furthermore, an additional improvement can be achieved by Approximate GAP.", "While both modified Meta-SGD and Approximate GAP are positive definite, Approximate GAP is different because it inherits an additional constraint from GAP -a block diagonal structure where a constant diagonal matrix M is repeated (i.e., blkdiag(M, \u2022 \u2022 \u2022 , M )).", "The inherited constraint provides a gain over the modified Meta-SGD.", "Does GAP learn a useful preconditioner: While a Riemannian metric can be helpful, it does not mean any Riemannian metric will result in an improvement."], "citing_paper_content": {"title": "Meta-Learning With A Geometry-Adaptive Preconditioner", "abstract": "Model-agnostic meta-learning (MAML) is one of the most successful meta-learning algorithms. It has a bi-level optimization structure where the outer-loop process learns a shared initialization and the inner-loop process optimizes task-specific weights. Although MAML relies on the standard gradient descent in the inner-loop, recent studies have shown that controlling the inner-loop's gradient descent with a meta-learned preconditioner can be beneficial. Existing preconditioners, however, cannot simultaneously adapt in a task-specific and path-dependent way. Additionally, they do not satisfy the Riemannian metric condition, which can enable the steepest descent learning with preconditioned gradient. 
In this study, we propose Geometry-Adaptive Preconditioned gradient descent (GAP) that can overcome the limitations in MAML; GAP can efficiently meta-learn a preconditioner that is dependent on task-specific parameters, and its preconditioner can be shown to be a Riemannian metric. Thanks to the two properties, the geometryadaptive preconditioner is effective for improving the innerloop optimization. Experiment results show that GAP outperforms the state-of-the-art MAML family and preconditioned gradient descent-MAML (PGD-MAML) family in a variety of few-shot learning tasks. Code is available at: https://github.com/Suhyun777/CVPR23-GAP."}, "cited_paper_content": {"title": "Meta-Sgd: Learning To Learn Quickly For Few-Shot Learning", "abstract": "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. 
Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning."}, "keywords": ["Meta-SGD"], "citation_intent": "result"} {"citing_id": "2304.14106v1", "cited_id": "1907.09190", "section_title": "Opinion Analysis", "citation": "Nonetheless, this study is limited in its scope to the long-form QA task on the ELI5 #REFR dataset, and further research is required to explore more elaborate knowledge analysis for more tasks and to evaluate the generalizability of our findings.", "text_before_citation": ["This suggests that ChatGPT is becoming better at generating responses that contain semantic frames or frames used to represent entities and events.", "Similarly, we can observe a decreasing trend in the frequency of argument role features such as Agent, Patient, and Theme in ChatGPT's responses over time.", "This indicates an improvement in ChatGPT's ability to recognize the argument roles of entities and events in generating responses that convey the meaning correctly.", "Overall, the results presented in Figures 6, 7, and 8 demonstrate that ChatGPT's patterns on knowledge are showing improvement over time.", "The decreasing frequency values of named entity, relation, opinion and frame features suggest that ChatGPT has become more proficient in generating responses that contain less complex knowledge to fit the instruction \"Explain like I am five\"."], "text_after_citation": ["[Figure legend residue removed; recoverable labels: Person, Organization, Location, Work, Time, and Relation features, and Opinion and Sentiment features, each tracked on dates 01-18, 03-05, and 04-09]"], "citing_paper_content": {"title": "Chatlog: Recording And Analyzing Chatgpt Across Time", "abstract": "While there is abundant research about evaluating ChatGPT on natural language understanding and generation tasks, few studies have investigated how ChatGPT's behavior changes over time. In this paper, we collect a coarse-to-fine temporal dataset called ChatLog, consisting of two parts that update monthly and daily: ChatLog-Monthly is a dataset of 38,730 question-answer pairs collected every month, including questions from both the reasoning and classification tasks. ChatLog-Daily, on the other hand, consists of ChatGPT's responses to 1000 identical questions for long-form generation every day. We conduct comprehensive automatic and human evaluation to provide evidence for the existence of ChatGPT evolving patterns. We further analyze the unchanged characteristics of ChatGPT over time by extracting its knowledge and linguistic features. We find some stable features to improve the robustness of a RoBERTa-based detector on new versions of ChatGPT. We will continuously maintain our project at https://github.com/THU-KEG/ChatLog."}, "cited_paper_content": {"title": "Eli5: Long Form Question Answering", "abstract": "We introduce the first large-scale corpus for long-form question answering, a task requiring elaborate and in-depth answers to open-ended questions. The dataset comprises 270K threads from the Reddit forum ``Explain Like I'm Five'' (ELI5) where an online community provides answers to questions which are comprehensible by five year olds. Compared to existing datasets, ELI5 comprises diverse questions requiring multi-sentence answers. We provide a large set of web documents to help answer the question. Automatic and human evaluations show that an abstractive model trained with a multi-task objective outperforms conventional Seq2Seq, language modeling, as well as a strong extractive baseline. 
However, our best model is still far from human performance since raters prefer gold responses in over 86% of cases, leaving ample opportunity for future improvement."}, "keywords": ["long-form QA task"], "citation_intent": "result"} {"citing_id": "2303.07926v1", "cited_id": "1712.01980", "section_title": "Introduction", "citation": "Following this framework, semiring semantics for full first-order logic (FO) were developed in #REFR .", "text_before_citation": ["Prior to this work, these adaptations of team semantics have been studied in isolation from one another.", "Data provenance provides means to describe the origins of data, allowing to give information about the witnesses to a query, or determining how a certain output is derived.", "Provenance semirings were introduced in #OTHEREFR to devise a general framework that allows to uniformly treat extensions of positive relational algebra, where the tuples have annotations that reflect very diverse information.", "Some motivating examples of said relations come from incomplete and probabilistic databases, and bag semantics.", "This semiring framework captures a notion of data provenance called howprovenance, where the semiring operations essentially capture how each output is produced from the source."], "text_after_citation": ["The semiring semantics for FO refines, in particular, the classical Boolean semantics by allowing formulae to be evaluated as values from a semiring.", "This allows for example counting proof trees, or winning strategies in the model checking game for A and \u03c6.", "In databases, dependencies are applied as integrity constraints (ICs) that specify sets of rules that the database needs to satisfy.", "Formal analysis of the rules is facilitated by viewing them as FO sentences that usually follow certain syntactic patterns.", "This approach is sometimes inadequate because query languages such as SQL operate with multisets (i.e., bags) of tuples instead of sets. 
Recently, (Chu et al."], "citing_paper_content": {"title": "Unified Foundations Of Team Semantics Via Semirings", "abstract": "Semiring semantics for first-order logic provides a way to trace how facts represented by a model are used to deduce satisfaction of a formula. Team semantics is a framework for studying logics of dependence and independence in diverse contexts such as databases, quantum mechanics, and statistics by extending first-order logic with atoms that describe dependencies between variables. Combining these two, we propose a unifying approach for analysing the concepts of dependence and independence via a novel semiring team semantics, which subsumes all the previously considered variants for first-order team semantics. In particular, we study the preservation of satisfaction of dependencies and formulae between different semirings. In addition we create links to reasoning tasks such as provenance, counting, and repairs."}, "cited_paper_content": {"title": "Semiring Provenance For First-Order Model Checking", "abstract": "Given a first-order sentence, a model-checking computation tests whether the sentence holds true in a given finite structure. Data provenance extracts from this computation an abstraction of the manner in which its result depends on the data items that describe the model. Previous work on provenance was, to a large extent, restricted to the negation-free fragment of first-order logic and showed how provenance abstractions can be usefully described as elements of commutative semirings --- most generally as multivariate polynomials with positive integer coefficients. ::: In this paper we introduce a novel approach to dealing with negation and a corresponding commutative semiring of polynomials with dual indeterminates. 
These polynomials are used to perform reverse provenance analysis, i.e., finding models that satisfy various properties under given provenance tracking assumptions."}, "keywords": ["full first-order logic"], "citation_intent": "method"} {"citing_id": "2303.07641v1", "cited_id": "1911.10683", "section_title": "Table Structure Recognition", "citation": "These results show that the proposed model works well on both simple and complex tables and outperforms EDD #REFR ) by about 8.1% and the best model in (Nassar et al., 2022) by about 2%.", "text_before_citation": ["In this experiment, we evaluate the effectiveness of WSTabNet for recognizing the structure of the table images on PubTabNet and FinTabNet datasets. Table 5 shows the table structure recognition performance (TEDS-struc.", "scores) of the proposed model and the previous table structure recognition methods on the validation set of the PubTabNet and the test set of the FinTabNet.", "On the test set of the FinTabNet dataset, the proposed model achieved TEDS-struc.", "of 99.06%, 98.33%, and 98.72% on the simple, complex, and all table images, respectively."], "text_after_citation": ["On the validation set of the PubTabNet dataset, the proposed model achieved TEDS-struc.", "of 97.74% on all table images which again improves EDD by about 7.8% and the best model in (Nassar et al., 2022) by about 1%.", "Note that all other methods except EDD are fully supervised approaches that require both HTML and cell bounding boxes annotations in the training step. (FT) Model was trained on PubTabNet and then finetuned."], "citing_paper_content": {"title": "Rethinking Image-Based Table Recognition Using Weakly Supervised Methods", "abstract": "Most of the previous methods for table recognition rely on training datasets containing many richly annotated table images. Detailed table image annotation, e.g., cell or text bounding box annotation, however, is costly and often subjective. 
In this paper, we propose a weakly supervised model named WSTabNet for table recognition that relies only on HTML (or LaTeX) code-level annotations of table images. The proposed model consists of three main parts: an encoder for feature extraction, a structure decoder for generating table structure, and a cell decoder for predicting the content of each cell in the table. Our system is trained end-to-end by stochastic gradient descent algorithms, requiring only table images and their ground-truth HTML (or LaTeX) representations. To facilitate table recognition with deep learning, we create and release WikiTableSet, the largest publicly available image-based table recognition dataset built from Wikipedia. WikiTableSet contains nearly 4 million English table images, 590K Japanese table images, and 640k French table images with corresponding HTML representation and cell bounding boxes. The extensive experiments on WikiTableSet and two large-scale datasets: FinTabNet and PubTabNet demonstrate that the proposed weakly supervised model achieves better, or similar accuracies compared to the state-of-the-art models on all benchmark datasets."}, "cited_paper_content": {"title": "Image-Based Table Recognition: Data, Model, And Evaluation", "abstract": "Important information that relates to a specific topic in a document is often organized in tabular format to assist readers with information retrieval and comparison, which may be difficult to provide in natural language. However, tabular data in unstructured digital documents, e.g., Portable Document Format (PDF) and images, are difficult to parse into structured machine-readable format, due to complexity and diversity in their structure and style. To facilitate image-based table recognition with deep learning, we develop the largest publicly available table recognition dataset PubTabNet (this https URL), containing 568k table images with corresponding structured HTML representation. 
PubTabNet is automatically generated by matching the XML and PDF representations of the scientific articles in PubMed Central Open Access Subset (PMCOA). We also propose a novel attention-based encoder-dual-decoder (EDD) architecture that converts images of tables into HTML code. The model has a structure decoder which reconstructs the table structure and helps the cell decoder to recognize cell content. In addition, we propose a new Tree-Edit-Distance-based Similarity (TEDS) metric for table recognition. The experiments demonstrate that the EDD model can accurately recognize complex tables solely relying on the image representation, outperforming the state-of-the-art by 7.7% absolute TEDS score."}, "keywords": ["complex tables"], "citation_intent": "result"} {"citing_id": "2303.09949v1", "cited_id": "1909.08423", "section_title": "Properties Of Our Ansatz", "citation": "This is in contrast to previous approaches which were based on incorporating ab-initio orbitals #REFR , which could not reach chemical accuracy even for small molecules.", "text_before_citation": ["Therefore during supervised pre-training, the undetermined sign of the reference orbitals becomes irrelevant, leading to faster convergence as demonstrated in Sec. 
2.4.", "\u2022 Locality: When using localized HF-orbitals as input, the resulting TAOs are also localized.", "Localized HF-orbitals are orbitals which have non-zero orbital features $c_{Ik}$ only on some subset of atoms.", "Since we enforce the backflow $f^a_\\theta$ to be antisymmetric (and thus $f^a(0) = 0$), the resulting TAOs have zero contribution from atoms $I$ with $c_{Ik} = 0$.", "\u2022 High expressivity: We empirically find that our ansatz is sufficiently expressive to model ground-state wavefunctions to high accuracy."], "text_after_citation": [], "citing_paper_content": {"title": "Towards A Foundation Model For Neural Network Wavefunctions", "abstract": "Deep neural networks have become a highly accurate and powerful wavefunction ansatz in combination with variational Monte Carlo methods for solving the electronic Schr\u00f6dinger equation. However, despite their success and favorable scaling, these methods are still computationally too costly for wide adoption. A significant obstacle is the requirement to optimize the wavefunction from scratch for each new system, thus requiring long optimization. In this work, we propose a novel neural network ansatz, which effectively maps uncorrelated, computationally cheap Hartree-Fock orbitals to correlated, high-accuracy neural network orbitals. This ansatz is inherently capable of learning a single wavefunction across multiple compounds and geometries, as we demonstrate by successfully transferring a wavefunction model pre-trained on smaller fragments to larger compounds. Furthermore, we provide ample experimental evidence to support the idea that extensive pre-training of such a generalized wavefunction model across different compounds and geometries could lead to a foundation wavefunction model.
Such a model could yield high-accuracy ab-initio energies using only minimal computational effort for fine-tuning and evaluation of observables."}, "cited_paper_content": {"title": "Deep Neural Network Solution Of The Electronic Schr\u00f6dinger Equation.", "abstract": "The electronic Schr\u00f6dinger equation describes fundamental properties of molecules and materials, but can only be solved analytically for the hydrogen atom. The numerically exact full configuration-interaction method is exponentially expensive in the number of electrons. Quantum Monte Carlo is a possible way out: it scales well to large molecules, can be parallelized, and its accuracy has, as yet, only been limited by the flexibility of the used wave function ansatz. Here we propose PauliNet, a deep-learning wave function ansatz that achieves nearly exact solutions of the electronic Schr\u00f6dinger equation. PauliNet has a multireference Hartree-Fock solution built in as a baseline, incorporates the physics of valid wave functions, and is trained using variational quantum Monte Carlo (VMC). PauliNet outperforms comparable state-of-the-art VMC ansatzes for atoms, diatomic molecules and a strongly-correlated hydrogen chain by a margin and is yet computationally efficient. We anticipate that thanks to the favourable scaling with system size, this method may become a new leading method for highly accurate electronic-structure calculations on medium-sized molecular systems."}, "keywords": ["ab-initio orbitals"], "citation_intent": "result"} {"citing_id": "2303.02918v2", "cited_id": "1912.01703", "section_title": "A.1.
Experimental Settings", "citation": "Our code is implemented using PyTorch #REFR and PyTorch-Geometric, and all our experiments are run on Nvidia RTX3090 GPUs with 24GB of memory.", "text_before_citation": [], "text_after_citation": ["Natural baselines.", "RFP is compared with a number of popular and recent methods.", "In particular, we focus on the comparison of RNF (Abboud et al., 2020), Laplacian PE #OTHEREFR and PowerEmbed #OTHEREFR , which is arguably the closest approach to RFP.", "Additionally, we introduce further baselines based on our RFP method, such as including both the RFP and eigenvectors of the propagation operators without a complete RFP trajectory in order to demonstrate the importance of considering the complete trajectory.", "For the baseline case of Laplacian eigenvectors only (denoted by L EIGVECS \u2020) we use the eigenvectors corresponding to the smallest eigenvalues, in order to be consistent with the literature #OTHEREFR ."], "citing_paper_content": {"title": "Graph Positional Encoding Via Random Feature Propagation", "abstract": "Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding. Surprisingly, however, there is still no clear understanding of the relation between these two augmentation schemes. Here we propose a novel family of positional encoding schemes which draws a link between the above two approaches and improves over both. The new approach, named Random Feature Propagation (RFP), is inspired by the power iteration method and its generalizations. It concatenates several intermediate steps of an iterative algorithm for computing the dominant eigenvectors of a propagation matrix, starting from random node features. Notably, these propagation steps are based on graph-dependent propagation operators that can be either predefined or learned. We explore the theoretical and empirical benefits of RFP.
First, we provide theoretical justifications for using random features, for incorporating early propagation steps, and for using multiple random initializations. Then, we empirically demonstrate that RFP significantly outperforms both spectral PE and random features in multiple node classification and graph classification benchmarks."}, "cited_paper_content": {"title": "Pytorch: An Imperative Style, High-Performance Deep Learning Library", "abstract": "Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. ::: In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. 
::: We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks."}, "keywords": ["PyTorch"], "citation_intent": "method"} {"citing_id": "2303.02691v1", "cited_id": "2002.07530", "section_title": "Algorithm And Regret Guarantee", "citation": "At the same time, in near-stationary environments ($P_T$ is small enough), our result can recover the performance of the LogUCB1 algorithm #REFR .", "text_before_citation": ["Theorem 3.", "For all $\\gamma \\in (1/T, 1)$, $\\lambda = d \\log(T)/c_\\mu$, the dynamic regret of SCB-WeightUCB is bounded with probability at least $1 - 1/T$, by", "$R_T \\le O\\big( \\frac{k_\\mu^2}{\\sqrt{c_\\mu}} \\frac{1}{(1-\\gamma)^{3/2}} P_T + \\frac{k_\\mu}{\\sqrt{c_\\mu}} (d(1-\\gamma))^{1/2} T \\big)$.", "By setting $\\gamma = 1 - \\max\\{1/T, \\sqrt{k_\\mu P_T/(dT)}\\}$, we achieve $R_T \\le O\\big(k_\\mu^{5/4} c_\\mu^{-1/2} d^{3/4} P_T^{1/4} T^{3/4}\\big)$ when $P_T \\ge \\frac{d}{k_\\mu T}$, and $R_T \\le O\\big(k_\\mu c_\\mu^{-1/2} d \\sqrt{T}\\big)$ when $0 \\le P_T < \\frac{d}{k_\\mu T}$.", "Compared to GLB, we improve the order of $c_\\mu$ from $c_\\mu^{-1}$ to $c_\\mu^{-1/2}$ by exploiting the self-concordant properties."], "text_after_citation": ["The proof of Theorem 3 is presented in Appendix C.2.", "In addition, for the piecewise-stationary SCB, we propose the SCB-PW-WeightUCB algorithm that gets rid of the influence of $c_\\mu$ and thus directly improves upon .", "Theorem 4.", "For all $\\gamma \\in (1/2, 1)$, $D = \\log(T)/\\log(1/\\gamma)$ and $\\lambda = d \\log(T)/c_\\mu$, the regret of SCB-PW-WeightUCB is bounded with probability at least $1 - 1/T$, by", "$R_T \\le O\\big( \\frac{1}{1-\\gamma} \\Gamma_T + \\frac{1}{\\sqrt{1-\\gamma}} + d(1-\\gamma)T \\big)$."], "citing_paper_content": {"title": "Revisiting Weighted Strategy For Non-Stationary Parametric Bandits", "abstract": "Non-stationary parametric bandits have attracted much attention recently.
There are three principled ways to deal with non-stationarity, including sliding-window, weighted, and restart strategies. As many non-stationary environments exhibit gradual drifting patterns, the weighted strategy is commonly adopted in real-world applications. However, previous theoretical studies show that its analysis is more involved and the algorithms are either computationally less efficient or statistically suboptimal. This paper revisits the weighted strategy for non-stationary parametric bandits. In linear bandits (LB), we discover that this undesirable feature is due to an inadequate regret analysis, which results in an overly complex algorithm design. We propose a refined analysis framework, which simplifies the derivation and importantly produces a simpler weight-based algorithm that is as efficient as window/restart-based algorithms while retaining the same regret as previous studies. Furthermore, our new framework can be used to improve regret bounds of other parametric bandits, including Generalized Linear Bandits (GLB) and Self-Concordant Bandits (SCB). For example, we develop a simple weighted GLB algorithm with an $O(k_\\mu^{5/4} c_\\mu^{-3/4} d^{3/4} P_T^{1/4} T^{3/4})$ regret, improving the $O(k_\\mu^{2} c_\\mu^{-1} d^{9/10} P_T^{1/5} T^{4/5})$ bound in prior work, where $k_\\mu$ and $c_\\mu$ characterize the reward model's nonlinearity, $P_T$ measures the non-stationarity, and $d$ and $T$ denote the dimension and time horizon."}, "cited_paper_content": {"title": "Improved Optimistic Algorithms For Logistic Bandits", "abstract": "The generalized linear bandit framework has attracted a lot of attention in recent years by extending the well-understood linear setting and allowing to model richer reward structures. It notably covers the logistic model, widely used when rewards are binary.
For logistic bandits, the frequentist regret guarantees of existing algorithms are $\\tilde{\\mathcal{O}}(\\kappa \\sqrt{T})$, where $\\kappa$ is a problem-dependent constant. Unfortunately, $\\kappa$ can be arbitrarily large as it scales exponentially with the size of the decision set. This may lead to significantly loose regret bounds and poor empirical performance. In this work, we study the logistic bandit with a focus on the prohibitive dependencies introduced by $\\kappa$. We propose a new optimistic algorithm based on a finer examination of the non-linearities of the reward function. We show that it enjoys a $\\tilde{\\mathcal{O}}(\\sqrt{T})$ regret with no dependency in $\\kappa$, but for a second order term. Our analysis is based on a new tail-inequality for self-normalized martingales, of independent interest."}, "keywords": ["LogUCB1 algorithm"], "citation_intent": "result"} {"citing_id": "2303.17835v1", "cited_id": "1201.0490", "section_title": "Support Vector Classifier", "citation": "The linear kernel SVC implementation LinearSVC from the Scikit-learn library #REFR was used to conduct the experiment.", "text_before_citation": ["The experiments were repeated with the SVC model on the same two datasets."], "text_after_citation": ["Linear kernel SVC was chosen due to the large dataset size.", "Other kernel types were tested; however, they did not scale to the large number of samples.", "The samples were normalized using the Scikit-learn StandardScaler to ease the model convergence.
Table 2 displays the results from the experiments.", "The proposed method is clearly superior to the conventional method in both experiments.", "The performance in the statistical change dataset is considerably worse when compared to the shift change dataset."], "citing_paper_content": {"title": "Improved Difference Images For Change Detection Classifiers In Sar Imagery Using Deep Learning", "abstract": "Satellite-based Synthetic Aperture Radar (SAR) images can be used as a source of remote sensed imagery regardless of cloud cover and day-night cycle. However, the speckle noise and varying image acquisition conditions pose a challenge for change detection classifiers. This paper proposes a new method of improving SAR image processing to produce higher quality difference images for the classification algorithms. The method is built on a neural network-based mapping transformation function that produces artificial SAR images from a location in the requested acquisition conditions. The inputs for the model are: previous SAR images from the location, imaging angle information from the SAR images, digital elevation model, and weather conditions. The method was tested with data from a location in NorthEast Finland by using Sentinel-1 SAR images from European Space Agency, weather data from Finnish Meteorological Institute, and a digital elevation model from National Land Survey of Finland. In order to verify the method, changes to the SAR images were simulated, and the performance of the proposed method was measured using experimentation where it gave substantial improvements to performance when compared to a more conventional method of creating difference images."}, "cited_paper_content": {"title": "Scikit-Learn: Machine Learning In Python", "abstract": "Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. 
This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net."}, "keywords": ["Scikit-learn library"], "citation_intent": "method"} {"citing_id": "2303.00298v1", "cited_id": "1812.01601", "section_title": "Related Work", "citation": "HMMR #REFR learns the human dynamics to predict pose and shape for past and future frames.", "text_before_citation": ["The regression-based scheme has recently received extensive research (#OTHEREFR Lin et al., 2021b,a), due to its directness and effectiveness.", "HMR #OTHEREFR is a representative regression-based method, using an image encoder and regressor to predict the pose, shape and camera parameters.", "To train the model well and ensure the realism of the pose and shape, the reprojection loss and adversarial loss are introduced to leverage unpaired 2D-to-3D supervision.", "In addition, several non-parametric mesh regression methods are proposed to directly regress the mesh vertex coordinates, including Pose2Mesh #OTHEREFR , Convolution Mesh Regression #OTHEREFR , I2L-MeshNet , and the Transformer-based METRO #OTHEREFR and Mesh Graphormer #OTHEREFR .", "Beyond estimating pose and shape from a single image, video-based methods aim to fully exploit the temporal motion information hidden in video data to improve accuracy and robustness."], "text_after_citation": ["VIBE #OTHEREFR encodes temporal features using a GRU and adopts an adversarial learning framework to learn kinematically plausible motion from a large-scale motion capture dataset.", "TCMR #OTHEREFR introduces PoseForecast to forecast additional temporal features from past and future frames
without a current frame.", "MAED #OTHEREFR proposes to use a spatial-temporal encoder to learn temporally enhanced image features and regress the joint rotations following a defined kinematic topology.", "In contrast to these methods, our goal is to encode joint-level features, shape and camera information separately, rather than encoding all the information into a unified image feature vector.", "Since we use independent tokens to encode the rotational information of each joint, we can model the inner temporal patterns when each joint rotates over time."], "citing_paper_content": {"title": "Capturing The Motion Of Every Joint: 3D Human Pose And Shape Estimation With Independent Tokens", "abstract": "In this paper we present a novel method to estimate 3D human pose and shape from monocular videos. This task requires directly recovering pixel-aligned 3D human pose and body shape from monocular images or videos, which is challenging due to its inherent ambiguity. To improve precision, existing methods highly rely on the initialized mean pose and shape as prior estimates and parameter regression with an iterative error feedback manner. In addition, video-based approaches model the overall change over the image-level features to temporally enhance the single-frame feature, but fail to capture the rotational motion at the joint level, and cannot guarantee local temporal consistency. To address these issues, we propose a novel Transformer-based model with a design of independent tokens. First, we introduce three types of tokens independent of the image feature: joint rotation tokens, shape token, and camera token. By progressively interacting with image features through Transformer layers, these tokens learn to encode the prior knowledge of human 3D joint rotations, body shape, and position information from large-scale data, and are updated to estimate SMPL parameters conditioned on a given image.
Second, benefiting from the proposed token-based representation, we further use a temporal model to focus on capturing the rotational temporal information of each joint, which is empirically conducive to preventing large jitters in local parts. Despite being conceptually simple, the proposed method attains superior performances on the 3DPW and Human3.6M datasets. Using ResNet-50 and Transformer architectures, it obtains 42.0 mm error on the PA-MPJPE metric of the challenging 3DPW, outperforming state-of-the-art counterparts by a large margin. Code will be publicly available 1 ."}, "cited_paper_content": {"title": "Learning 3D Human Dynamics From Video", "abstract": "From an image of a person in action, we can easily guess the 3D motion of the person in the immediate past and future. This is because we have a mental model of 3D human dynamics that we have acquired from observing visual sequences of humans in motion. We present a framework that can similarly learn a representation of 3D dynamics of humans from video via a simple but effective temporal encoding of image features. At test time, from video, the learned temporal representation give rise to smooth 3D mesh predictions. From a single image, our model can recover the current 3D mesh as well as its 3D past and future motion. Our approach is designed so it can learn from videos with 2D pose annotations in a semi-supervised manner. Though annotated data is always limited, there are millions of videos uploaded daily on the Internet. In this work, we harvest this Internet-scale source of unlabeled data by training our model on unlabeled video with pseudo-ground truth 2D pose obtained from an off-the-shelf 2D pose detector. Our experiments show that adding more videos with pseudo-ground truth 2D pose monotonically improves 3D prediction performance. 
We evaluate our model on the recent challenging dataset of 3D Poses in the Wild and obtain state-of-the-art performance on the 3D prediction task without any fine-tuning. The project website with video can be found at https://akanazawa.github.io/human_dynamics/."}, "keywords": ["pose", "human dynamics"], "citation_intent": "background"} {"citing_id": "2303.07228v2", "cited_id": "1701.03081", "section_title": "D. Extending The Method To The Two-Way Distillable Entanglement", "citation": "However, $\\hat{E}^{npt}_{rev}(\\cdot)$ is efficiently computable by SDP and tightens the upper bound of the example states illustrated in #REFR , whose details can be found in Appendix C.", "text_before_citation": ["It also follows an efficiently computable relaxation as $\\hat{E}^{npt}_{rev}(\\rho_{AB}) = [1 - 2^{-R_{\\max,PPT}(\\rho_{AB})}] \\sum_i \\lambda_i S(B)_{\\psi_i}$, where $\\omega_{AB} = \\sum_i \\lambda_i |\\psi_i\\rangle\\langle\\psi_i|$ is the spectral decomposition of the PPT-squeezed state $\\omega_{AB}$ of $\\rho_{AB}$.", "In fact, $\\hat{E}^{npt}_{rev}(\\cdot)$ can be interpreted as an easily computable version of the bound $E_{MP}(\\cdot)$ in #OTHEREFR , utilizing the convexity of $E_{D,\\leftrightarrow}(\\rho_{AB})$ on the convex decomposition of $\\rho_{AB}$ into MC states and PPT states.", "Since the set of all MC states is not convex, tracking all possible decompositions to compute $E_{MP}(\\cdot)$ is generally hard."], "text_after_citation": ["We note that $R(\\rho_{AB}) \\le E_{MP}(\\rho_{AB}) \\le E^{npt}_{rev}(\\rho_{AB})$ where $R(\\cdot)$ is the Rains bound for the two-way distillable entanglement.", "Nevertheless, $\\hat{E}^{npt}_{rev}(\\cdot)$ connects the reverse max-relative entropy of NPT entanglement with the entanglement of formation, and we believe such a connection would shed light on the study of other quantum resource theories as well."], "citing_paper_content": {"title": "Estimate Distillable Entanglement And Quantum Capacity By Squeezing Useless Entanglement", "abstract": "Entanglement distillation is crucial in quantum
information processing. But it remains challenging to estimate the distillable entanglement and its closely related essential quantity, the quantum capacity of a noisy quantum channel. In this work, we propose methods for evaluating both quantities by squeezing out useless entanglement within a state or a quantum channel, whose contributions are expected to be ignored for the distillable entanglement or the quantum capacity, respectively. We first consider a general resource measure called the reverse divergence of resources to quantify the minimum divergence between a target state and the set of free states. We then introduce the reverse max-relative entropy of entanglement and apply it to establish efficiently computable upper bounds on the distillable entanglement. We also extend the reverse divergence of resources to quantum channels and derive upper bounds on the quantum capacity. We further apply our method to investigate purifying the maximally entangled states under practical noises, such as depolarizing and amplitude damping noises, and notably establish improvements in estimating the one-way distillable entanglement. Our bounds also offer useful benchmarks for evaluating the quantum capacities of qubit quantum channels of interest, including the Pauli channels and the random mixed unitary channels."}, "cited_paper_content": {"title": "Useful States And Entanglement Distillation", "abstract": "We derive general upper bounds on the distillable entanglement of a mixed state under one-way and two-way local operations and classical communication (LOCC). In both cases, the upper bound is based on a convex decomposition of the state into \u201cuseful\u201d and \u201cuseless\u201d quantum states. By \u201cuseful,\u201d we mean a state whose distillable entanglement is non-negative and equal to its coherent information (and thus given by a single-letter, tractable formula). On the other hand, \u201cuseless\u201d states are undistillable, i.e., their distillable entanglement is zero. We prove that in both settings, the distillable entanglement is convex on such decompositions. Hence, an upper bound on the distillable entanglement is obtained from the contributions of the useful states alone, being equal to the convex combination of their coherent informations. Optimizing over all such decompositions of the input state yields our upper bound. The useful and useless states are given by degradable and antidegradable states in the one-way LOCC setting, and by maximally correlated and positive partial transpose (PPT) states in the two-way LOCC setting, respectively. We also illustrate how our method can be extended to quantum channels. Interpreting our upper bound as a convex roof extension, we show that it reduces to a particularly simple, non-convex optimization problem for the classes of isotropic states and Werner states. In the one-way LOCC setting, this non-convex optimization yields an upper bound on the quantum capacity of the qubit depolarizing channel that is strictly tighter than previously known bounds for large values of the depolarizing parameter.
In the two-way LOCC setting, the non-convex optimization achieves the PPT-relative entropy of entanglement for both isotropic and Werner states."}, "keywords": ["example states"], "citation_intent": "background"} {"citing_id": "2304.13633v1", "cited_id": "1807.03748", "section_title": "Introduction", "citation": "InfoNCE #REFR (Information Noise-Contrastive Estimation) is based on contrastive learning, which uses noise-contrastive estimation (Noise-Contrastive Estimation, NCE) as a bound to approximate mutual information, and uses neural networks to parameterize critic in NCE, so that it can be used more flexibly.", "text_before_citation": ["EQUATION", "Where is a multidimensional variable with ( > 2) dimensions, and ( ) obtains the total correlation among all dimensions of .", "Since the exact computation of mutual information and total correlation is only tractable for discrete variables, or for a limited family of problems where the probability distributions are known #OTHEREFR .", "Therefore, many methods have been proposed to estimate MI based on theoretical upper or lower bounds.", "For instance, MINE #OTHEREFR maximizes an implicit function lower bound obtained by Jensen's inequality to estimate the mutual information between two random variables."], "text_after_citation": ["The CLUB #OTHEREFR algorithm employs unsupervised learning methods of logarithmic comparison and log-linear models to avoid overfitting and improve the accuracy of estimates.", "CLUB uses a technique called Contrastive Log-ratio Upper Bound to calculate the confidence interval and upper bound of the estimated TC value.", "This technique is based on the distribution of log comparison values and some statistical assumptions, which can effectively control the error and confidence of the estimate.", "The same issue of an intractable computation arises when estimating the TC value of multidimensional variables without knowledge of the ( ) distribution.", "In representation learning, a 
straightforward prior distribution assumption of ( ) is frequently made to handle this issue #OTHEREFR ."], "citing_paper_content": {"title": "Understanding The Limitation Of Total Correlation Estimation Based On Mutual Information Bounds", "abstract": "The total correlation(TC) is a crucial index to measure the correlation between marginal distribution in multidimensional random variables, and it is frequently applied as an inductive bias in representation learning. Previous research has shown that the TC value can be estimated using mutual information boundaries through decomposition. However, we found through theoretical derivation and qualitative experiments that due to the use of importance sampling in the decomposition process, the bias of TC value estimated based on MI bounds will be amplified when the proposal distribution in the sampling differs significantly from the target distribution. To reduce estimation bias issues, we propose a TC estimation correction model based on supervised learning, which uses the training iteration loss sequence of the TC estimator based on MI bounds as input features to output the true TC value. Experiments show that our proposed method can improve the accuracy of TC estimation and eliminate the variance generated by the TC estimation process."}, "cited_paper_content": {"title": "Representation Learning With Contrastive Predictive Coding", "abstract": "While supervised learning has enabled great progress in many applications, unsupervised learning has not seen such widespread adoption, and remains an important and challenging endeavor for artificial intelligence. In this work, we propose a universal unsupervised learning approach to extract useful representations from high-dimensional data, which we call Contrastive Predictive Coding. The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models. 
We use a probabilistic contrastive loss which induces the latent space to capture information that is maximally useful to predict future samples. It also makes the model tractable by using negative sampling. While most prior work has focused on evaluating representations for a particular modality, we demonstrate that our approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments."}, "keywords": ["mutual information", "contrastive learning"], "citation_intent": "method"} {"citing_id": "2303.00091v1", "cited_id": "1608.03983", "section_title": "Details Of Model Training", "citation": "The VLP model was pre-trained using the Adam optimizer with decoupled weight decay (AdamW), along with a warmup cosine annealing scheduler #REFR .", "text_before_citation": [], "text_after_citation": ["The initial learning rate for the warm-up phase was set to 1e\u22125, which was gradually increased to a maximum learning rate of 1e\u22124 before being decayed using the cosine annealing scheduler.", "The visual encoder was initialized with self-supervised weights from ImageNet, while the text encoder was trained from scratch.", "The model was trained for 15 epochs with a warm-up epoch of 5, and a batch size of 12.", "All VLP model training was performed on a GeForce RTX 3090.", "For fine-tuning, we used the same optimizer and scheduler, with a learning rate of 1e\u22124 and a warm-up learning rate of 1e\u22125."], "citing_paper_content": {"title": "Improving Medical Speech-To-Text Accuracy With Vision-Language Pre-Training Model", "abstract": "Automatic Speech Recognition (ASR) is a technology that converts spoken words into text, facilitating interaction between humans and machines. One of the most common applications of ASR is Speech-To-Text (STT) technology, which simplifies user workflows by transcribing spoken words into text.
In the medical field, STT has the potential to significantly reduce the workload of clinicians who rely on typists to transcribe their voice recordings. However, developing an STT model for the medical domain is challenging due to the lack of sufficient speech and text datasets. To address this issue, we propose a medical-domain text correction method that modifies the output text of a general STT system using the Vision Language Pre-training (VLP) method. VLP combines textual and visual information to correct text based on image knowledge. Our extensive experiments demonstrate that the proposed method offers quantitatively and clinically significant improvements in STT performance in the medical field. We further show that multi-modal understanding of image and text information outperforms single-modal understanding using only text information."}, "cited_paper_content": {"title": "Sgdr: Stochastic Gradient Descent With Warm Restarts", "abstract": "Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. 
Our source code is available at https://github.com/loshchil/SGDR"}, "keywords": ["VLP model", "Adam optimizer"], "citation_intent": "method"} {"citing_id": "2303.11866v1", "cited_id": "1810.04805", "section_title": "Conclusion & Future Work", "citation": "For the text encoder, we choose between bert-base-uncased #REFR and SimCSE Gao et al. (2021) .", "text_before_citation": ["Our approach presents one simple but effective way to use that knowledge.", "We follow the standard training procedure ( \u00a72.4) and train a CLIP-base model where both of the encoders are initialized randomly, instead of using weights initialized from unimodally pretrained models (DeIT Touvron et al. (2021) and SimCSE Gao et al. (2021) ). We train three models, one for each dataset size. The results can be seen in Fig 6.", "Compared to the randomly initialized model, the pretrained model is substantially better across all three datasets and all 3 model sizes.", "However, it is likely that the benefit of unimodal pretraining will be diminished as the number of training pairs available for multimodal vision-language pretraining increases, although we do not explore this.", "We train LilT-base models with encoders initialized from different kinds of pretraining methods."], "text_after_citation": ["For the image encoder, we choose between DeiTTouvron et al. 2021"], "citing_paper_content": {"title": "Contrastive Alignment Of Vision To Language Through Parameter-Efficient Transfer Learn-Ing", "abstract": "Contrastive vision-language models (e.g. CLIP) are typically created by updating all the parameters of a vision model and language model through contrastive training. Can such models be created by a small number of parameter updates to an already-trained language model and vision model? 
The literature describes techniques that can create vision-language models by updating a small number of parameters in a language model, but these require already aligned visual representations and are non-contrastive, hence unusable for latency-sensitive applications such as neural search. We explore the feasibility and benefits of parameter-efficient contrastive vision-language alignment through transfer learning: creating a model such as CLIP by minimally updating an already-trained vision and language model. We find that a minimal set of parameter updates (<7%) can achieve the same performance as full-model training, and updating specific components (<1% of parameters) can match 75% of full-model training. We describe a series of experiments: we show that existing knowledge is conserved more strongly in parameter-efficient training and that parameter-efficient scaling scales with model and dataset size. Where paired-image text data is scarce but strong multilingual language models exist (e.g. low resource languages), parameter-efficient training is even preferable to full-model training. Given a fixed compute budget, parameter-efficient training allows training larger models on the same hardware, achieving equivalent performance in less time. Parameter-efficient training hence constitutes an energyefficient and effective training strategy for contrastive vision-language models that may be preferable to the full-model training paradigm for common use cases. Code and weights at https://github.com/codezakh/LilT."}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. 
Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["text encoder"], "citation_intent": "method"} {"citing_id": "2303.08694v1", "cited_id": "1204.3476", "section_title": "Introduction", "citation": "In recent decades, multilevel Monte Carlo (MLMC) methods have been applied to a plethora of problems in stochastic modelling and uncertainty quantification (see e.g.
[5, 11, 14, 19, #REFR]).", "text_before_citation": [], "text_after_citation": ["The method relies on a hierarchy of approximations arranged as a telescoping sum, resulting in a variance reduction.", "In fact, in certain situations the multilevel estimator has asymptotically the same computational complexity as one solve of the deterministic problem on the finest discretization of the hierarchy.", "Having a preset finest discretization level, the multilevel estimator is a biased estimator.", "The continuous level Monte Carlo (CLMC) estimator circumvents this issue by defining the estimator as a stochastic process (see [12]).", "The level of refinement is here given by an exponentially distributed random variable, which in turn means that the computational complexity of a CLMC estimator relies heavily on the concrete samples of this random variable in the simulation."], "citing_paper_content": {"title": "Quasi Continuous Level Monte Carlo For Random Elliptic Pdes", "abstract": "This paper provides a framework in which multilevel Monte Carlo and continuous level Monte Carlo can be compared. In continuous level Monte Carlo the level of refinement is determined by an exponentially distributed random variable, which therefore heavily influences the computational complexity. We propose in this paper a variant of the algorithm, where the exponentially distributed random variable is generated by a quasi Monte Carlo sequence, resulting in a significant variance reduction. In the examples presented the quasi continuous level Monte Carlo algorithm outperforms multilevel and continuous level Monte Carlo by a clear margin."}, "cited_paper_content": {"title": "Further Analysis Of Multilevel Monte Carlo Methods For Elliptic Pdes With Random Coefficients", "abstract": "We consider the application of multilevel Monte Carlo methods to elliptic PDEs with random coefficients.
We focus on models of the random coefficient that lack uniform ellipticity and boundedness with respect to the random parameter, and that only have limited spatial regularity. We extend the finite element error analysis for this type of equation, carried out recently by Charrier, Scheichl and Teckentrup, to more difficult problems, posed on non-smooth domains and with discontinuities in the coefficient. For this wider class of model problem, we prove convergence of the multilevel Monte Carlo algorithm for estimating any bounded, linear functional and any continuously Fréchet differentiable non-linear functional of the solution. We further improve the performance of the multilevel estimator by introducing level dependent truncations of the Karhunen-Loève expansion of the random coefficient. Numerical results complete the paper."}, "keywords": ["Monte Carlo (MLMC)"], "citation_intent": "method"} {"citing_id": "2303.16761v1", "cited_id": "1806.00525", "section_title": "Arxiv:2303.16761V1 [Cs.Ir] 23 Mar 2023", "citation": "To validate the effectiveness of our approach, we conduct dialogue-to-video experiments on a benchmark dataset AVSD #REFR .", "text_before_citation": ["Such discussion contains subtle information about the video of interest and thus cannot be treated as a plain-text query.", "Therefore, to incorporate the conversational information from dialogues, we propose a novel dialogue-to-video retrieval approach.", "In our proposed model, we sequentially encode each turn of the dialogue to obtain a dialogue-aware query representation with the purpose of retaining the dialogue information.", "Then we calculate the similarity between this dialogue-aware query representation and individual frames in the video in order to obtain a weighted video representation.", "Finally, we use the video representation to compute an overall similarity score with the dialogue-aware query."], "text_after_citation": ["Experimental results show that our approach achieves
significant improvements over previous state-of-the-art models including FiT and ViReD #OTHEREFR .", "In this section, we describe how our dialogue-to-video retrieval system works.", "Our retrieval system consists of two major components: 1) a temporal-aware video encoder responsible for encoding the image frames in video with temporal information.", "2) a dialogue-query encoder responsible for encoding the dialogue query with conversational information.", "As shown in Figure 1 , our model receives video-query pairs and produces similarity scores."], "citing_paper_content": {"title": "Dialogue-To-Video Retrieval", "abstract": "Recent years have witnessed an increasing amount of dialogue/conversation on the web especially on social media. That inspires the development of dialogue-based retrieval, in which retrieving videos based on dialogue is of increasing interest for recommendation systems. Different from other video retrieval tasks, dialogue-to-video retrieval uses structured queries in the form of user-generated dialogue as the search descriptor. We present a novel dialogue-to-video retrieval system, incorporating structured conversational information. Experiments conducted on the AVSD dataset show that our proposed approach using plain-text queries improves over the previous counterpart model by 15.8% on R@1. Furthermore, our approach using dialogue as a query, improves retrieval performance by 4."}, "cited_paper_content": {"title": "Audio Visual Scene-Aware Dialog (Avsd) Challenge At Dstc7", "abstract": "Scene-aware dialog systems will be able to have conversations with users about the objects and events around them. Progress on such systems can be made by integrating state-of-the-art technologies from multiple research areas including end-to-end dialog systems visual dialog, and video description. We introduce the Audio Visual Scene Aware Dialog (AVSD) challenge and dataset. 
In this challenge, which is one track of the 7th Dialog System Technology Challenges (DSTC7) workshop, the task is to build a system that generates responses in a dialog about an input video"}, "keywords": ["video", "AVSD"], "citation_intent": "method"} {"citing_id": "2304.13085v2", "cited_id": "1412.6980", "section_title": "Implementation Details", "citation": "The Adaptive Moment Estimation (Adam) #REFR is used as our optimizer with a learning rate of 0.0001 and a batch size of 32.", "text_before_citation": ["We utilize the RawNet2 [4] model as the backbone for feature learning."], "text_after_citation": ["The loss weight \u03bb is set to 0.5 in the experiment.", "To report the detection performance, we calculate the Equal Error Rate (EER) following previous studies #OTHEREFR 29, #OTHEREFR . RawNet2 #OTHEREFR .", "The RawNet2 model is based on DNN speaker embedding extraction with the raw waveform as input.", "This powerful model uses a technique named feature map scaling, which scales feature maps similar to squeeze-excitation.", "It performed the best in the ASVspoof 2021 Speech Deepfake track."], "citing_paper_content": {"title": "Ai-Synthesized Voice Detection Using Neural Vocoder Artifacts", "abstract": "Advancements in AI-synthesized human voices have created a growing threat of impersonation and disinformation, making it crucial to develop methods to detect synthetic human voices. This study proposes a new approach to identifying synthetic human voices by detecting artifacts of vocoders in audio signals. Most DeepFake audio synthesis models use a neural vocoder, a neural network that generates waveforms from temporal-frequency representations like mel-spectrograms. By identifying neural vocoder processing in audio, we can determine if a sample is synthesized. To detect synthetic human voices, we introduce a multi-task learning framework for a binary-class RawNet2 model that shares the feature extractor with a vocoder identification module.
By treating vocoder identification as a pretext task, we constrain the feature extractor to focus on vocoder artifacts and provide discriminative features for the final binary classifier. Our experiments show that the improved RawNet2 model based on vocoder identification achieves high classification performance on the binary task overall. Codes and data can be found at https://github.com/csun22/Synthetic-Voice-Detection-Vocoder-Artifacts."}, "cited_paper_content": {"title": "Adam: A Method For Stochastic Optimization", "abstract": "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. 
Finally, we discuss AdaMax, a variant of Adam based on the infinity norm."}, "keywords": ["learning rate", "Adaptive Moment Estimation"], "citation_intent": "method"} {"citing_id": "2304.07971v1", "cited_id": "1808.10523", "section_title": "Related Works 6.1 Collaborative Filtering", "citation": "In recent years, this type of global relationship has gradually received more attention and has been incorporated in CF models in the form of interaction graphs #REFR .", "text_before_citation": ["Since CF can be considered as a task to complete entries in the user-item interaction matrix, Matrix Factorization (MF) #OTHEREFR , as a strategy for matrix completion, naturally becomes the foundation of the mainstream approach in CF.", "MF assumes that the user-item interaction matrix is low-rank and can be recovered by learning the embedding vectors of users and items.", "Most MF models generate predicted entries by the dot product of user and item embedding vectors, while they can be optimized by minimizing the error of individual entries #OTHEREFR or maximizing the difference between positive and negative samples #OTHEREFR .", "They are both widely adopted in subsequently proposed methods which introduce refined structures like neural networks #OTHEREFR .", "These approaches enable a light design by focusing on the modeling of a single user-item entry, while neglecting the synergy between different interactions."], "text_after_citation": ["With the emergence of Graph Convolutional Networks (GCN), GCN models #OTHEREFR quickly became the state-of-the-art in MF-based models and are continuously improved to achieve advances in efficiency and accuracy #OTHEREFR .", "Unlike MF-based models, another class of methods implements CF by treating the user's historical interactions as features, and modeling the relationships between items #OTHEREFR .", "A classical approach is the linear autoencoder #OTHEREFR , which models an item-item relationship matrix to encode user features.", "This
idea is then extended by subsequent studies and applied to nonlinear denoising autoencoders #OTHEREFR and variational autoencoders #OTHEREFR .", "A recent work #OTHEREFR considers CF in terms of graph signal processing and proposes a framework for signal-based models, which can incorporate the linear autoencoders and the ideal case of MF and GCN models."], "citing_paper_content": {"title": "Collaborative Residual Metric Learning", "abstract": "In collaborative filtering, distance metric learning has been applied to matrix factorization techniques with promising results. However, matrix factorization lacks the ability to capture collaborative information, which has been remarked by recent works and improved by interpreting user interactions as signals. This paper aims to find out how metric learning connects to these signal-based models. By adopting a generalized distance metric, we discovered that in signal-based models, it is easier to estimate the residual of distances, which refers to the difference between the distances from a user to a target item and another item, rather than estimating the distances themselves. Further analysis also uncovers a link between the normalization strength of interaction signals and the novelty of recommendation, which has been overlooked by existing studies. Based on the above findings, we propose a novel model to learn a generalized user-item distance metric to capture user preference in interaction signals by modeling the residuals of distance. The proposed CoRML model is then further improved in training efficiency by a newly introduced approximated ranking weight. Extensive experiments conducted on 4 public datasets demonstrate the superior performance of CoRML compared to the state-of-the-art baselines in collaborative filtering, along with high efficiency and the ability of providing novelty-promoted recommendations, shedding new light on the study of metric learning-based recommender systems.
"}, "cited_paper_content": {"title": "Spectral Collaborative Filtering", "abstract": "Despite the popularity of Collaborative Filtering (CF), CF-based methods are haunted by the cold-start problem, which has a significantly negative impact on users' experiences with Recommender Systems (RS). In this paper, to overcome the aforementioned drawback, we first formulate the relationships between users and items as a bipartite graph. Then, we propose a new spectral convolution operation performed directly in the spectral domain, where not only the proximity information of a graph but also the connectivity information hidden in the graph are revealed. With the proposed spectral convolution operation, we build a deep recommendation model called Spectral Collaborative Filtering (SpectralCF). Benefiting from the rich information of connectivity existing in the spectral domain, SpectralCF is capable of discovering deep connections between users and items and therefore, alleviates the cold-start problem for CF. To the best of our knowledge, SpectralCF is the first CF-based method directly learning from the spectral domains of user-item bipartite graphs. We apply our method on several standard datasets. It is shown that SpectralCF significantly outperforms state-of-the-art models.
Code and data are available at https://github.com/lzheng21/SpectralCF."}, "keywords": ["interaction graph"], "citation_intent": "background"} {"citing_id": "2303.02995v1", "cited_id": "1906.02890", "section_title": "Introduction", "citation": "To be specific, for modeling hierarchies in natural language, we share similar intuitions with the previous studies on unsupervised grammar induction, which aim at unsupervised hierarchical mining (#REFR Drozdov et al., 2019).", "text_before_citation": ["To this end, we introduce hierarchy-aware attention into CLIP, denoted as HiCLIP.", "Hierarchy-aware attention applies an attention mask to the conventional attention mechanism to indicate the tendency to merge certain vision patches and language tokens into groups because they are spatially and semantically or visually similar.", "We generalize hierarchy-aware attention to both images and texts, where its mask is obtained by first calculating the neighbouring affinity score among adjacent patches or tokens, and then propagating the scores across any given patch or token pairs.", "In addition, we formulate the affinity score with an increasing trend as the layer gets deeper to ensure the merged groups remain the same.", "In this way, we progressively aggregate hierarchies in a layer-by-layer manner for both images and texts."], "text_after_citation": ["Tree Transformer proposes a similar modified attention mechanism which is essentially a special case of hierarchy-aware attention, where the attention mask is instantiated as a constituent prior to encourage the merging of semantically similar tokens.", "Capturing hierarchies in visual contents is more challenging, because spatial correlation should also be considered in addition to visual similarities.", "Therefore, we extend the hierarchy-aware attention to Vision Transformers #OTHEREFR by creating a Group Transformer to progressively aggregate image patches into semantic groups until all patches are merged into one common group
which is the original image.", "Different from the 1D scenario in Tree Transformer, the neighboring affinity score is computed among the four adjacent neighbors of each image patch (Fig. 1(a)).", "Afterwards, we propagate neighboring affinity scores by comparing two special paths connecting image patches on the 2D grid graph."], "citing_paper_content": {"title": "Hiclip: Contrastive Language-Image Pre-Training With Hierarchy-Aware Attention", "abstract": "The success of large-scale contrastive vision-language pretraining (CLIP) has benefited both visual recognition and multimodal content understanding. The concise design brings CLIP the advantage in inference efficiency against other vision-language models with heavier cross-attention fusion layers, making it a popular choice for a wide spectrum of downstream tasks. However, CLIP does not explicitly capture the hierarchical nature of high-level and fine-grained semantics conveyed in images and texts, which is arguably critical to vision-language understanding and reasoning. To this end, we equip both the visual and language branches in CLIP with hierarchy-aware attentions, namely Hierarchy-aware CLIP (HiCLIP), to progressively discover semantic hierarchies layer-by-layer from both images and texts in an unsupervised manner. As a result, such hierarchical aggregation significantly improves the cross-modal alignment. To demonstrate the advantages of HiCLIP, we conduct qualitative analysis on its unsupervised hierarchy induction during inference, as well as extensive quantitative experiments on both visual recognition and vision-language downstream tasks. This work was conducted while interning at ByteDance."}, "cited_paper_content": {"title": "Visually Grounded Neural Syntax Acquisition", "abstract": "We present the Visually Grounded Neural Syntax Learner (VG-NSL), an approach for learning syntactic representations and structures without any explicit supervision.
The model learns by looking at natural images and reading paired captions. VG-NSL generates constituency parse trees of texts, recursively composes representations for constituents, and matches them with images. We define concreteness of constituents by their matching scores with images, and use it to guide the parsing of text. Experiments on the MSCOCO data set show that VG-NSL outperforms various unsupervised parsing approaches that do not use visual grounding, in terms of F1 scores against gold parse trees. We find that VG-NSL is much more stable with respect to the choice of random initialization and the amount of training data. We also find that the concreteness acquired by VG-NSL correlates well with a similar measure defined by linguists. Finally, we also apply VG-NSL to multiple languages in the Multi30K data set, showing that our model consistently outperforms prior unsupervised approaches."}, "keywords": ["unsupervised grammar induction"], "citation_intent": "result"} {"citing_id": "2304.11928v1", "cited_id": "1712.02560", "section_title": "1) Distribution Divergence", "citation": "The authors propose the sliced Wasserstein discrepancy, which considers the properties of the underlying geometry of probability space and thus improves upon the L1 norm used in #REFR .
Further follow-up work is presented in Li et al.", "text_before_citation": ["In the third step, the feature extractor is then optimized to minimize the MCD for the target domain.", "This causes those samples from the target domain far away from the source domain distribution to move closer to the source domain distribution (here, the support of the source domain is given, and the two classifiers agree).", "The three steps are iterated, which results in an adversarial optimization.", "Lee et al.", "#OTHEREFR advance this work by introducing an improved way of computing the discrepancy between the classifier probability outputs."], "text_after_citation": ["#OTHEREFR where two classifiers are trained on the source domain while also updating the feature generator.", "Then they maximize the classifier discrepancy on the target domain while ensuring the source domain classification stays the same.", "In the final stage, they train the feature generator to minimize the classifier discrepancy, pushing the target domain data to the statistical support of the source domain.", "Other methods for distribution divergence minimization: MMD #OTHEREFR uses the soft paste algorithm, combining two images by a weighted overlay (see Section III-B).", "A reference source domain image is pasted into a target and source domain image."], "citing_paper_content": {"title": "Survey On Unsupervised Domain Adaptation For Semantic Segmentation For Visual Perception In Automated Driving", "abstract": "The research leading to these results is funded by the German Federal Ministry for Economic Affairs and Energy within the project \"KI Delta Learning\" (F\u00f6rderkennzeichen 19A19013C and 19A19013K). 
The authors would like to thank the consortium for the successful cooperation."}, "cited_paper_content": {"title": "Maximum Classifier Discrepancy For Unsupervised Domain Adaptation", "abstract": "In this work, we present a method for unsupervised domain adaptation (UDA), where we aim to transfer knowledge from a label-rich domain (i.e., a source domain) to an unlabeled domain (i.e., a target domain). Many adversarial learning methods have been proposed for this task. These methods train domain classifier networks (i.e., a discriminator) to distinguish the features as either a source or target and train a feature generator network to mimic the discriminator. However, the domain classifier only tries to distinguish the features as a source or target and thus does not consider task-specific decision boundaries between classes. Therefore, a trained generator can generate ambiguous features near class boundaries. To solve the problem, we propose a new approach that attempts to align distributions of source and target by utilizing the task-specific decision boundaries. We propose to utilize task-specific classifiers as discriminators that try to detect target samples that are far from the support of the source. A feature generator learns to generate target features inside the support to fool the classifiers. Since the generator uses feedback from task-specific classifiers, it avoids generating target features near class boundaries. Our method outperforms other methods on several datasets of image classification and semantic segmentation."}, "keywords": ["sliced Wasserstein discrepancy"], "citation_intent": "background"} {"citing_id": "2304.12217v1", "cited_id": "physics/0508025", "section_title": "Introduction", "citation": "First, the standard indicator and list view exemplified by Google Scholar, as shown in Figure 1 (a), provides key bibliographic indicators (# of citations, h-index #REFR , etc.)
as an overview of the scholar.", "text_before_citation": ["Commonly, scholar profiling is composed of two steps: 1) accurate extraction of 360-degree academic demographics of a scholar from the web, which is normally considered an information retrieval task; and 2) appropriate aggregation, analysis, and representation of the extracted academic data, which is largely a data mining task.", "This work focuses on the latter step and formulates the impactoriented scholar profiling problem -how to arrange a scholar's academic data to best represent his/her scientific impact.", "Here the impact is defined as the breadth and depth of one's scientific contribution, as well as their community recognition.", "The problem is closely related to important applications of scholar profiles, such as serving possible references for academic award selection and tenure evaluation #OTHEREFR , and predicting one's future academic output [20] .", "Existing scholar profiling websites generally design two types of views."], "text_after_citation": ["The view is complemented with multiple lists as the context (publications, co-authors, etc.).", "The indicators are believed to be helpful for scholar ranking, though the supplemented paper/coauthor lists are normally unprocessed.", "There is a missed opportunity to exploit the structure within these lists for impact-oriented contextual tasks beyond ranking, such as reasoning and understanding of scholar profiles.", "Second, scholarly websites from academics often exhibit various bibliometric networks.", "These networks somehow mitigate the drawback of indicator and list by illustrating the relational context of a scholar."], "citing_paper_content": {"title": "Impact-Oriented Contextual Scholar Profiling Using Self-Citation Graphs", "abstract": "Quantitatively profiling a scholar's scientific impact is important to modern research society. 
Current practices with bibliometric indicators (e.g., h-index), lists, and networks perform well at scholar ranking, but do not provide structured context for scholar-centric, analytical tasks such as profile reasoning and understanding. This work presents GeneticFlow (GF), a suite of novel graph-based scholar profiles that fulfill three essential requirements: structured-context, scholar-centric, and evolution-rich. We propose a framework to compute GF over large-scale academic data sources with millions of scholars. The framework encompasses a new unsupervised advisoradvisee detection algorithm, a well-engineered citation type classifier using interpretable features, and a fine-tuned graph neural network (GNN) model. Evaluations are conducted on the real-world task of scientific award inference. Experiment outcomes show that the F1 score of best GF profile significantly outperforms alternative methods of impact indicators and bibliometric networks in all the 6 computer science fields considered. Moreover, the core GF profiles, with 63.6%\u223c66.5% nodes and 12.5%\u223c29.9% edges of the full profile, still significantly outrun existing methods in 5 out of 6 fields studied. 
Visualization of GF profiling result also reveals human explainable patterns for high-impact scholars."}, "cited_paper_content": {"title": "An Index To Quantify An Individual'S Scientific Research Output", "abstract": "I propose the index $h$, defined as the number of papers with citation number higher or equal to $h$, as a useful index to characterize the scientific output of a researcher."}, "keywords": ["Google Scholar", "h-index"], "citation_intent": "method"} {"citing_id": "2304.07769v1", "cited_id": "2001.06591", "section_title": "Related Work", "citation": "Usually, generative models attempt to learn the reconstruction of the data and use this reconstruction to identify anomalous samples #REFR .", "text_before_citation": ["Deep support vector data description (DSVDD) finds a hypersphere to enclose the representation of normal samples #OTHEREFR .", "Liu and Gryllias constructed frequency domain features using cyclic spectral analysis and applied them in the support vector data description (SVDD) framework.", "This method has been proven robust against outliers and can achieve a high detection rate for detecting anomalies #OTHEREFR .", "In #OTHEREFR , researchers presented a new approach to identify imagery anomalies by training the model on normal images altered by geometric transformation.", "In this model, the classifier calculates the anomaly score using softmax statistics."], "text_after_citation": ["For instance, auto-encoders model the normal data distribution, and the reconstruction error is used as the anomaly score #OTHEREFR .", "Deep structured energy-based models (DSEBMs) learn an energy-based model and map each sample to an energy score #OTHEREFR .", "Deep autoencoding Gaussian mixture model (DAGMM) estimates a mixed Gaussian distribution by using an encoder for normal samples #OTHEREFR .", "A recent line of work on anomaly detection has focused on adversarial neural networks.", "For example, this structure has been used to identify anomalies in 
medical images #OTHEREFR ."], "citing_paper_content": {"title": "Regularized Complete Cycle Consistent Gan For Anomaly Detection", "abstract": "This study presents an adversarial method for anomaly detection in real-world applications, leveraging the power of generative adversarial neural networks (GANs) through cycle consistency in reconstruction error. Previous methods suffer from the high variance between class-wise accuracy which leads to not being applicable for all types of anomalies. The proposed method named RCALAD tries to solve this problem by introducing a novel discriminator to the structure, which results in a more efficient training process. Additionally, RCALAD employs a supplementary distribution in the input space to steer reconstructions toward the normal data distribution, effectively separating anomalous samples from their reconstructions and facilitating more accurate anomaly detection. To further enhance the performance of the model, two novel anomaly scores are introduced. The proposed model has been thoroughly evaluated through extensive experiments on six various datasets, yielding results that demonstrate its superiority over existing state-of-the-art models. The code is readily available to the research community at https://github.com/zahraDehghanian97/RCALAD."}, "cited_paper_content": {"title": "Regularized Cycle Consistent Generative Adversarial Network For Anomaly Detection", "abstract": "In this paper, we investigate algorithms for anomaly detection. Previous anomaly detection methods focus on modeling the distribution of non-anomalous data provided during training. However, this does not necessarily ensure the correct detection of anomalous data. We propose a new Regularized Cycle Consistent Generative Adversarial Network (RCGAN) in which deep neural networks are adversarially trained to better recognize anomalous samples. 
This approach is based on leveraging a penalty distribution with a new definition of the loss function and novel use of discriminator networks. It is based on a solid mathematical foundation, and proofs show that our approach has stronger guarantees for detecting anomalous examples compared to the current state-of-the-art. Experimental results on both real-world and synthetic data show that our model leads to significant and consistent improvements on previous anomaly detection benchmarks. Notably, RCGAN improves on the state-of-the-art on the KDDCUP, Arrhythmia, Thyroid, Musk and CIFAR10 datasets."}, "keywords": ["anomalous samples"], "citation_intent": "background"} {"citing_id": "2303.05071v1", "cited_id": "1903.11027", "section_title": "Results", "citation": "NuScenes #REFR presents a greater challenge for 3D SOT task than KITTI due to its larger data volumes and lower frequency for annotated frames (2Hz for NuScenes v.s. 10Hz for KITTI and WOD).", "text_before_citation": ["However, only MBP-Track can accurately track the target after the occlusion disappears, owing to the sufficient use of temporal information.", "To demonstrate the generalization ability of our proposed MBPTrack, we evaluate the KITTI pretrained models on WOD #OTHEREFR , following previous work #OTHEREFR .", "The corresponding categories between KITTI and WOD datasets are Car\u2192Vehicle and Pedestrian\u2192Pedestrian. The experimental results, as presented in Tab.", "2, indicate that MBP-Track yields competitive or better tracking results than other methods under different levels of sparsity.", "In conclusion, our proposed method not only precisely tracks targets of all sizes but also generalizes well to unseen scenarios."], "text_after_citation": ["We conduct a comparison of our approach with previous methods on the NuScenes dataset following M2-Track #OTHEREFR . 
As shown in Tab.", "3, our method achieves a consistent and large performance gain compared with the previous state-of-the-art method, M2-Track.", "Leveraging the rich temporal and spatial information contained in the historical frames, MBPTrack exhibits superior performance over methods that only consider two frames when large appearance variation occurs between them. Fig.", "4 shows the model complexity and average inference time of different components in the Car category on KITTI.", "Our experiments are conducted on a single NVIDIA RTX 3090."], "citing_paper_content": {"title": "Mbptrack: Improving 3D Point Cloud Tracking With Memory Networks And Box Priors", "abstract": "3D single object tracking has been a crucial problem for decades with numerous applications such as autonomous driving. Despite its wide-ranging use, this task remains challenging due to the significant appearance variation caused by occlusion and size differences among tracked targets. To address these issues, we present MBPTrack, which adopts a Memory mechanism to utilize past information and formulates localization in a coarse-to-fine scheme using Box Priors given in the first frame. Specifically, past frames with targetness masks serve as an external memory, and a transformer-based module propagates tracked target cues from the memory to the current frame. To precisely localize objects of all sizes, MBPTrack first predicts the target center via Hough voting. By leveraging box priors given in the first frame, we adaptively sample reference points around the target center that roughly cover the target of different sizes. Then, we obtain dense feature maps by aggregating point features into the reference points, where localization can be performed more effectively. 
Extensive experiments demonstrate that MBPTrack achieves state-of-the-art performance on KITTI, nuScenes and Waymo Open Dataset, while running at 50 FPS on a single RTX3090 GPU."}, "cited_paper_content": {"title": "Nuscenes: A Multimodal Dataset For Autonomous Driving", "abstract": "Robust detection and tracking of objects is crucial for the deployment of autonomous vehicle technology. Image-based benchmark datasets have driven development in computer vision tasks such as object detection, tracking and segmentation of agents in the environment. Most autonomous vehicles, however, carry a combination of cameras and range sensors such as lidar and radar. As machine learning based methods for detection and tracking become more prevalent, there is a need to train and evaluate such methods on datasets containing range sensor data along with images. In this work we present nuTonomy scenes (nuScenes), the first published dataset to carry the full autonomous vehicle sensor suite: 6 cameras, 5 radars and 1 lidar, all with full 360 degree field of view. nuScenes comprises 1000 scenes, each 20s long and fully annotated with 3D bounding boxes for 23 classes and 8 attributes. It has 7x as many annotations and 100x as many images as the pioneering KITTI dataset. We define novel 3D detection and tracking metrics. We also provide careful dataset analysis as well as baselines for lidar and image based detection and tracking. Data, development kit and more information are available online at this http URL."}, "keywords": ["annotated frames", "3D SOT task"], "citation_intent": "background"} {"citing_id": "2303.13009v1", "cited_id": "1804.02516", "section_title": "B. 
Dataset Details", "citation": "On the other hand, the 1kB split uses the identical 1,000 test split of 1kA for the test, whereas the train set is a subset of 1kA's containing 6,656 samples #REFR .", "text_before_citation": ["The train dataset contains 1,261 samples, and the test set contains 439 samples, respectively.", "MSRVTT.", "The original MSRVTT-full #OTHEREFR dataset, used on video captioning task, contains 6,513 train, 497 validation, and 2,990 test samples.", "However, we have observed a wide range of dataset split variations throughout research on text-to-video retrieval.", "One split variant randomly samples 1,000 clip-text pairs from the test set for evaluation and uses the rest of the 9,000 samples as train data #OTHEREFR , which is commonly denoted as the 1kA split."], "text_after_citation": ["Another commonly used data split also uses the identical 1,000 test set, while adopting both the train and validation set from the standard MSRVTT for training.", "We evaluated our method on two split protocols most prominently observed in the literature, 1kA, and 7k.", "For convenience, we denote the former as MSRVTT-9k and the latter as MSRVTT-7k. MSVD-QA.", "MSVD-QA #OTHEREFR contains 47k open-ended questions on 2k videos, derived from the original MSVD dataset #OTHEREFR .", "We construct the answer set with 1,000 most frequently appeared answers."], "citing_paper_content": {"title": "Meltr: Meta Loss Transformer For Learning To Fine-Tune Video Foundation Models", "abstract": "Foundation models have shown outstanding performance and generalization capabilities across domains. Since most studies on foundation models mainly focus on the pretraining phase, a naive strategy to minimize a single task-specific loss is adopted for fine-tuning. However, such fine-tuning methods do not fully leverage other losses that are potentially beneficial for the target task. 
Therefore, we propose MEta Loss TRansformer (MELTR), a plug-in module that automatically and non-linearly combines various loss functions to aid learning the target task via auxiliary learning. We formulate the auxiliary learning as a bi-level optimization problem and present an efficient optimization algorithm based on Approximate Implicit Differentiation (AID). For evaluation, we apply our framework to various video foundation models (UniVL, Violet and All-in-one), and show significant performance gain on all four downstream tasks: text-to-video retrieval, video question answering, video captioning, and multimodal sentiment analysis. Our qualitative analyses demonstrate that MELTR adequately 'transforms' individual loss functions and 'melts' them into an effective unified loss."}, "cited_paper_content": {"title": "Learning A Text-Video Embedding From Incomplete And Heterogeneous Data", "abstract": "Joint understanding of video and language is an active research area with many applications. Prior work in this domain typically relies on learning text-video embeddings. One difficulty with this approach, however, is the lack of large-scale annotated video-caption datasets for training. To address this issue, we aim at learning text-video embeddings from heterogeneous data sources. To this end, we propose a Mixture-of-Embedding-Experts (MEE) model with ability to handle missing input modalities during training. As a result, our framework can learn improved text-video embeddings simultaneously from image and video datasets. We also show the generalization of MEE to other input modalities such as face descriptors. We evaluate our method on the task of video retrieval and report results for the MPII Movie Description and MSR-VTT datasets. The proposed MEE model demonstrates significant improvements and outperforms previously reported methods on both text-to-video and video-to-text retrieval tasks. 
Code is available at: https://github.com/antoine77340/Mixture-of-Embedding-Experts"}, "keywords": ["6,656 samples", "train set"], "citation_intent": "method"} {"citing_id": "2303.14012v1", "cited_id": "1612.00593", "section_title": "I. Introduction", "citation": "We then use PointNet #REFR architecture to form a mapping from point-cloud observation of the target object, and pose of the deformable tool to 3D representation of the contact points between the two bodies.", "text_before_citation": ["However, this contact reasoning concept is more suitable for the control aspect than for the planning aspect due to its ability to track the interaction in real time.", "Thus, the question of how to predict the interaction between deformable and rigid objects and exploit such interactions for planning remains open.", "To address the aforementioned open issues, we propose Sequence Planning with deformable-ON-rigid contact prediction from GEometric features (SPONGE), a sequence planning pipeline powered by a contact prediction model that predicts contact between deformable and rigid bodies, with the aim of providing robots with the aforementioned human-like planning skill in order to efficiently automate downstream deformable object manipulation tasks such as cleaning dishes (Fig. 
1 ).", "Instead of contact reasoning, in this paper we tackle the concept of contact prediction of a 3D deformable tool acting on rigid objects, which is better suited for planning purposes.", "We take a data-driven approach with physics-based simulation to model the interactions between 3D deformable objects and rigid objects."], "text_after_citation": ["The trained contact prediction model is then used as the driving force behind the planning of a subsequent task.", "Finally, we deploy SPONGE in the real world to demonstrate that the contact prediction model trained only with synthetic data from physics-based simulation can help to produce an efficient plan for a manipulation task to be successfully executed in the real world.", "In summary, the main contributions of this paper are as follows:", "\u2022 A deep learning-based contact prediction model that predicts the contacts between 3D deformable and rigid objects under interactions.", "\u2022 A planning pipeline powered by the proposed contact prediction model to efficiently automate deformable object manipulation tasks."], "citing_paper_content": {"title": "Sponge: Sequence Planning With Deformable-On-Rigid Contact Prediction From Geometric Features", "abstract": "Planning robotic manipulation tasks, especially those that involve interaction between deformable and rigid objects, is challenging due to the complexity in predicting such interactions. We introduce SPONGE, a sequence planning pipeline powered by a deep learning-based contact prediction model for contacts between deformable and rigid bodies under interactions. The contact prediction model is trained on synthetic data generated by a developed simulation environment to learn the mapping from point-cloud observation of a rigid target object and the pose of a deformable tool, to 3D representation of the contact points between the two bodies. 
We experimentally evaluated the proposed approach for a dish cleaning task both in simulation and on a real Franka Emika Panda with real-world objects. The experimental results demonstrate that in both scenarios the proposed planning pipeline is capable of generating high-quality trajectories that can accomplish the task by achieving more than 90% area coverage on different objects of varying sizes and curvatures while minimizing travel distance. Code and video are available at: https://irobotics.aalto.fi/sponge/."}, "cited_paper_content": {"title": "Pointnet: Deep Learning On Point Sets For 3D Classification And Segmentation", "abstract": "Point cloud is an important type of geometric data structure. Due to its irregular format, most researchers transform such data to regular 3D voxel grids or collections of images. This, however, renders data unnecessarily voluminous and causes issues. In this paper, we design a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input. Our network, named PointNet, provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. Though simple, PointNet is highly efficient and effective. Empirically, it shows strong performance on par or even better than state of the art. Theoretically, we provide analysis towards understanding of what the network has learnt and why the network is robust with respect to input perturbation and corruption."}, "keywords": ["deformable tool", "PointNet architecture"], "citation_intent": "method"} {"citing_id": "2304.06632v1", "cited_id": "1406.2661", "section_title": "B. Necessary Conditions Of Aigc", "citation": "Goodfellow proposed the first generative model, Generative Adversarial Network (GAN), in 2014 #REFR . 
Table II shows the evolution timeline of generative algorithms.", "text_before_citation": ["The algorithms simulate the simple human brain, which improves the learning model through experience accumulation #OTHEREFR .", "Neural network models further emulate the signal processing and thinking mechanisms of human brain nerves #OTHEREFR , #OTHEREFR .", "Furthermore, generative algorithms, such as Google's Transformer architecture #OTHEREFR , draw on human attention mechanisms to enable the completion of multiple tasks by an algorithm.", "They can model the probability distribution of the input data and then generate new data.", "They are trained by processing large amounts of text data, then fine-tuned on specific tasks with labeled data, enabling them to interact according to contextual content and chat in a manner similar to human beings."], "text_after_citation": ["In most cases, the significance of GAN is a source of inspiration for many popular variations and architectures.", "The transformer model has a wide range of applications in various domains (including NLP and CV).", "In addition, several pre-training models, such as BERT, GPT-3, and LaMDA, have been developed based on the Transformer model.", "The diffusion model is currently the most advanced image generation model because of its optimized performance.", "With the development of generative models, language models have also made great progress. For example, Devlin et al."], "citing_paper_content": {"title": "Ai-Generated Content (Aigc): A Survey", "abstract": "To address the challenges of digital intelligence in the digital economy, artificial intelligence-generated content (AIGC) has emerged. AIGC uses artificial intelligence to assist or replace manual content generation by generating content based on user-inputted keywords or requirements. 
The development of large model algorithms has significantly strengthened the capabilities of AIGC, which makes AIGC products a promising generative tool and adds convenience to our lives. As an upstream technology, AIGC has unlimited potential to support different downstream applications. It is important to analyze AIGC's current capabilities and shortcomings to understand how it can be best utilized in future applications. Therefore, this paper provides an extensive overview of AIGC, covering its definition, essential conditions, cutting-edge capabilities, and advanced features. Moreover, it discusses the benefits of large-scale pretrained models and the industrial chain of AIGC. Furthermore, the article explores the distinctions between auxiliary generation and automatic generation within AIGC, providing examples of text generation. The paper also examines the potential integration of AIGC with the Metaverse. Lastly, the article highlights existing issues and suggests some future directions for application. Impact Statement-It is necessary for academia and industry to take an overview of what AIGC is, how AIGC works, how AIGC changes our lifestyles, and what AIGC will be in the future. This article proposes a survey of AIGC from its definition, pros, cons, applications, current challenges, and future directions to answer these urgent questions. We summarize the existing major literature, which helps relevant researchers become familiar with and understand the existing works and unsolved problems. Based on the review of literature and the commercialization of scientific and research findings, we conduct some cutting-edge AIGC research. In particular, the challenges and future directions of AIGC can be helpful for developing AI. 
Relevant technologies of AIGC will boost the development of artificial intelligence, better serve human society, and achieve sustainable development."}, "cited_paper_content": {"title": "Generative Adversarial Networks", "abstract": "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."}, "keywords": ["generative algorithms", "Generative Adversarial Network"], "citation_intent": "background"} {"citing_id": "2304.02714v1", "cited_id": "1505.04597", "section_title": "B. Networks And Algorithms 1) Whistle Extraction Network:", "citation": "The generators follow the U-Net #REFR architecture, which has 6 U-Net blocks with a basic width of 64. 
InstanceNorm layers are used in the U-Net blocks.", "text_before_citation": ["It contains 4 convolutional layers with a stride of 2 and a fully connected layer.", "The networks are optimized by Adam optimizers (initial learning rate = 1 \u00d7 10 \u22124 , betas = [0.5, 0.9], batch size = 64) for 3 \u00d7 10 4 and 5 \u00d7 10 4 iterations on the reduced and full datasets, respectively.", "In each WGAN training iteration, the discriminator is optimized for 5 steps while the generator is optimized for 1 step, where the network parameters are updated by applying the optimizer to one minibatch of data in each step.", "For sample selection, we used T e =70, T c =0.5, T p =64.", "3) CycleGAN: The GAN model that we used to add whistles on synthetic noise employs the CycleGAN architecture of #OTHEREFR ."], "text_after_citation": ["The discriminator is a fully convolutional network with 3 convolutional layers.", "We trained the generators and discriminators with Adam optimizers (learning rate = 2 \u00d7 10 \u22124 , betas = [0.5, 0.999], batch size = 64) for 25,120 iterations (160 epochs for 10,000 real positive samples) for the reduced dataset and 50 epochs for the full dataset.", "We set \u03bb 0 =10, \u03bb 1 =0.5, and \u03bb 2 =0.5 for Eq. 11.", "We apply a random \u03b3 following a unified distribution between (0.5, 1.5) in Eq. 3."], "citing_paper_content": {"title": "Learning Stage-Wise Gans For Whistle Extraction In Time-Frequency Spectrograms", "abstract": "Whistle contour extraction aims to derive animal whistles from time-frequency spectrograms as polylines. For toothed whales, whistle extraction results can serve as the basis for analyzing animal abundance, species identity, and social activities. During the last few decades, as long-term recording systems have become affordable, automated whistle extraction algorithms were proposed to process large volumes of recording data. 
Recently, a deep learning-based method demonstrated superior performance in extracting whistles under varying noise conditions. However, training such networks requires a large amount of labor-intensive annotation, which is not available for many species. To overcome this limitation, we present a framework of stage-wise generative adversarial networks (GANs), which compile new whistle data suitable for deep model training via three stages: generation of background noise in the spectrogram, generation of whistle contours, and generation of whistle signals. By separating the generation of different components in the samples, our framework composes visually promising whistle data and labels even when few expert annotated data are available. Regardless of the amount of human-annotated data, the proposed data augmentation framework leads to a consistent improvement in performance of the whistle extraction model, with a maximum increase of 1.69 in the whistle extraction mean F1-score. Our stage-wise GAN also surpasses one single GAN in improving whistle extraction models with augmented data."}, "cited_paper_content": {"title": "U-Net: Convolutional Networks For Biomedical Image Segmentation", "abstract": "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. 
Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ."}, "keywords": ["InstanceNorm layers"], "citation_intent": "method"} {"citing_id": "2303.11041v1", "cited_id": "1807.08555", "section_title": "Results", "citation": "We compare our loss (editing loss) with the following baselines: #REFR No Editing: the initial segmentation y init is used as the final segmentation\u0177, and the overall error in this case is the distance from the CAS contours to y init . This should serve as an upper bound for error.", "text_before_citation": ["We use the editing evaluation metric D and report the 95 th percentile of the overall editing error, the error near the user input, and far from the user input (mm).", "The near and far regions are defined by thresholding A at 0.5.", "For the CV results, we report the mean and standard deviation over the five folds.", "The statistical significance is computed for the difference with InterCNN.", "\u2020 : p-value < 0.01, \u2021 : p-value < 0.001."], "text_after_citation": ["(2) CE Loss: an editing model trained using the standard CE segmentation loss.", "(3) Dice Loss #OTHEREFR : an editing model trained using Dice segmentation loss.", "4InterCNN #OTHEREFR : for every training sample, simulated user edits based on the prediction are accumulated with any previous edits and re-input to the model for 10 iterations, trained using CE loss.", "We report the results after a single edit (the furthest CAS contour from\u0177) in Table 1 .", "A single training epoch takes \u2248 3 minutes for all the models except for InterCNN, which takes \u2248 14 minutes, on a single NVIDIA Tesla 
V100 GPU."], "citing_paper_content": {"title": "From Sparse To Precise: A Practical Editing Approach For Intracardiac Echocardiography Segmentation", "abstract": "Accurate and safe catheter ablation procedures for patients with atrial fibrillation require precise segmentation of cardiac structures in Intracardiac Echocardiography (ICE) imaging. Prior studies have suggested methods that employ 3D geometry information from the ICE transducer to create a sparse ICE volume by placing 2D frames in a 3D grid, enabling training of 3D segmentation models. However, the resulting 3D masks from these models can be inaccurate and may lead to serious clinical complications due to the sparse sampling in ICE data, frames misalignment, and cardiac motion. To address this issue, we propose an interactive editing framework that allows users to edit segmentation output by drawing scribbles on a 2D frame. The user interaction is mapped to the 3D grid and utilized to execute an editing step that modifies the segmentation in the vicinity of the interaction while preserving the previous segmentation away from the interaction. Furthermore, our framework accommodates multiple edits to the segmentation output in a sequential manner without compromising previous edits. This paper presents a novel loss function and a novel evaluation metric specifically designed for editing. Results from cross-validation and testing indicate that our proposed loss function outperforms standard losses and training strategies in terms of segmentation quality and following user input. Additionally, we show quantitatively and qualitatively that subsequent edits do not compromise previous edits when using our method, as opposed to standard segmentation losses. 
Overall, our approach enhances the accuracy of the segmentation while avoiding undesired changes away from user interactions and without compromising the quality of previously edited regions, leading to better patient outcomes."}, "cited_paper_content": {"title": "Iterative Interaction Training For Segmentation Editing Networks", "abstract": "Automatic segmentation has great potential to facilitate morphological measurements while simultaneously increasing efficiency. Nevertheless often users want to edit the segmentation to their own needs and will need different tools for this. There has been methods developed to edit segmentations of automatic methods based on the user input, primarily for binary segmentations. Here however, we present an unique training strategy for convolutional neural networks (CNNs) trained on top of an automatic method to enable interactive segmentation editing that is not limited to binary segmentation. By utilizing a robot-user during training, we closely mimic realistic use cases to achieve optimal editing performance. In addition, we show that an increase of the iterative interactions during the training process up to ten improves the segmentation editing performance substantially. 
Furthermore, we compare our segmentation editing CNN (interCNN) to state-of-the-art interactive segmentation algorithms and show superior or on-par performance."}, "keywords": ["initial segmentation", "final segmentation\u0177"], "citation_intent": "method"} {"citing_id": "2304.03510v1", "cited_id": "2001.01202", "section_title": "Results And Discussion", "citation": "The detection error with the deep features method #REFR on WL images indicates a performance similar to that of both the VIS and NIR spectral bands.", "text_before_citation": ["The deep features method #OTHEREFR , which is based on facial features, indicates less variation in degraded performance, especially in the NIR bands.", "However, the Hierarchical Deep Residual SLERP #OTHEREFR , which is based on texture features, indicates higher variation in the detection performance, especially in NIR spectral bands, compared to the deep features method #OTHEREFR .", "The possible degradation of the Hierarchical Deep Residual SLERP #OTHEREFR can be attributed to the lack of texture information in the NIR spectral bands.", "Thus, the detection performance of D-MAD in the NIR spectrum depends on the type of feature used for morphing detection.", "\u2022 The use of wholeLight (WL) that is captured without any spectral filtering indicates the varied detection performance with respect to D-MAD techniques."], "text_after_citation": ["Because the deep features method #OTHEREFR is based on facial features, it is less sensitive to different spectral bands, which may be due to the backbone model that is trained only using VIS images.", "However, the performance of Hierarchical Deep Residual SLERP #OTHEREFR on WL images shows degraded performance compared to the VIS spectral bands.", "Table 3 lists the performance of the D-MAD algorithms on the proposed multispectral framework and visible images.", "Both multispectral and visible data are collected using the same data subjects, and this can provide insights into 
the utility of multispectral images for reliable morphing attack detection.", "Furthermore, the performance of the multispectral framework was presented by fusing the results of all individual spectral bands."], "citing_paper_content": {"title": "Multispectral Imaging For Differential Face Morphing Attack Detection: A Preliminary Study", "abstract": "Face morphing attack detection is emerging as an increasingly challenging problem owing to advancements in high-quality and realistic morphing attack generation. Reliable detection of morphing attacks is essential because these attacks are targeted for border control applications. This paper presents a multispectral framework for differential morphing-attack detection (D-MAD). The D-MAD methods are based on using two facial images that are captured from the ePassport (also called the reference image) and the trusted device (for example, Automatic Border Control (ABC) gates) to detect whether the face image presented in the ePassport is morphed. The proposed multispectral D-MAD framework introduces a multispectral image as the trusted capture, covering seven different spectral bands, to detect morphing attacks. Extensive experiments were conducted on the newly created datasets with 143 unique data subjects that were captured using both visible and multispectral cameras in multiple sessions. The results indicate the superior performance of the proposed multispectral framework compared to visible images."}, "cited_paper_content": {"title": "Deep Face Representations For Differential Morphing Attack Detection", "abstract": "The vulnerability of facial recognition systems to face morphing attacks is well known. Many different approaches for morphing attack detection have been proposed in the scientific literature. However, the morphing attack detection algorithms proposed so far have only been trained and tested on datasets whose distributions of image characteristics are either very limited (e.g. 
only created with a single morphing tool) or rather unrealistic (e.g. no print-scan transformation). As a consequence, these methods easily overfit on certain image types and the results presented cannot be expected to apply to real-world scenarios. For example, the results of the latest NIST Face Recognition Vendor Test MORPH show that the submitted MAD algorithms lack robustness and performance when considering unseen and challenging datasets. In this work, subsets of the FERET and FRGCv2 face databases are used to create a large realistic database for training and testing of morphing attack detection algorithms, containing a large number of ICAO-compliant bona fide facial images, corresponding unconstrained probe images, and morphed images created with four different tools. Furthermore, multiple post-processings are applied on the reference images, e.g. print-scan and JPEG2000 compression. On this database, previously proposed differential morphing algorithms are evaluated and compared. In addition, the application of deep face representations for differential morphing attack detection algorithms is investigated. It is shown that algorithms based on deep face representations can achieve very high detection performance (less than 3% D-EER) and robustness with respect to various post-processings. 
Finally, the limitations of the developed methods are analyzed."}, "keywords": ["deep features method"], "citation_intent": "result"} {"citing_id": "2303.05796v2", "cited_id": "1906.04032", "section_title": "A.4 Model Details", "citation": "As seen in Table 5 , we found that a more expressive normalizing flow with resampled base #REFR Stimper et al., 2022) improves significantly the results over a simpler radial normalizing flow (Rezende & Mohamed, 2015) across all the metrics.", "text_before_citation": ["The list of core architectures used across the experiments are: ResNet18 / ResNet50 / EfficientNet / Swin #OTHEREFR Tan & Le, 2021; from the torchvision repository 2 and Wide-ResNet-28-10 (Zagoruyko & Komodakis, 2016) from the original implementation of DUE.", "Except for the experiment on architecture type and size where ResNet18 has output channels for the residual blocks with size [64, 128, 256, 512] , ResNet18 has output channels for the residual blocks with size [32, 64, 128, 256] which causes small differences in final accuracy.", "Uncertainty head.", "For DUE we use the original implementation 3 with by default we use the RBF kernel function.", "For NatPN we use the original implementation 4 but change the uncertainty head with a more expressive density estimator."], "text_after_citation": ["For all the experiments (except toys where we use radial flow) we use NSF-R with 16 layers."], "citing_paper_content": {"title": "Workshop On The Pitfalls Of Limited Data And Computation For Trustworthy Ml, Iclr 2023 Training, Architecture, And Prior For Deterministic Uncertainty Methods", "abstract": "Accurate and efficient uncertainty estimation is crucial to build reliable Machine Learning (ML) models capable to provide calibrated uncertainty estimates, generalize and detect Out-Of-Distribution (OOD) datasets. To this end, Deterministic Uncertainty Methods (DUMs) is a promising model family capable to perform uncertainty estimation in a single forward pass. 
This work investigates important design choices in DUMs: (1) we show that training schemes that decouple the core architecture and the uncertainty head can significantly improve uncertainty performance. (2) we demonstrate that the core architecture expressiveness is crucial for uncertainty performance and that additional architecture constraints to avoid feature collapse can deteriorate the trade-off between OOD generalization and detection. (3) Contrary to other Bayesian models, we show that the prior defined by DUMs does not have a strong effect on the final performance."}, "cited_paper_content": {"title": "Neural Spline Flows", "abstract": "A normalizing flow models a complex probability density as an invertible transformation of a simple base density. Flows based on either coupling or autoregressive transforms both offer exact density evaluation and sampling, but rely on the parameterization of an easily invertible elementwise transformation, whose choice determines the flexibility of these models. Building upon recent work, we propose a fully-differentiable module based on monotonic rational-quadratic splines, which enhances the flexibility of both coupling and autoregressive transforms while retaining analytic invertibility. 
We demonstrate that neural spline flows improve density estimation, variational inference, and generative modeling of images."}, "keywords": ["expressive normalizing flow"], "citation_intent": "result"} {"citing_id": "2303.03319v1", "cited_id": "quant-ph/0401091", "section_title": "Path Finding In Arbitrary Graphs", "citation": "However, we provide an alternative approach that, while not as fast as Algorithm 3, still provides an improvement over the algorithm of #REFR for graphs in which all (self-avoiding) paths from s to t are short.", "text_before_citation": ["For the more general case of st-path G(x) when G(x) is not known to only have one st-path, while it is possible that an algorithm similar to Algorithm 3 would work, we have not been able to bound the running time effectively.", "This is because in the case of a single path, once you find an intermediate edge on the path, the longest paths from s and t to that edge must be shorter than the length of the longest path from s to t.", "This ensures that subproblems take less time than the original problem. With multiple paths, we no longer have that guarantee."], "text_after_citation": ["Our approach does not make use of our path-edge sampling algorithm as a subroutine, and instead uses the path detection algorithm of Lemma 8 to decide whether there are paths through various subgraphs, and then uses that information to find each edge in a path in order from s to t.", "In this way, we avoid the problem of subproblems being larger than the original problem, since if the longest path from s to t has length L, and the first edge we find on the path is (s, u), then the longest path from u to t that doesn't go through s must have length at most L \u2212 1. 
However, we lose the advantage of a divide-and-conquer approach.", "To find the first edge on a path, we use a group testing approach.", "We divide the neighbors of s in G into two sets, S_1 and S_2, and run path detection algorithms in parallel on two subgraphs of G(x), one with edges from s removed, except those to vertices in S_1 (that is, G \u2212 {{s,u}\u2208E:u\u2208S_1}), and one with edges from s removed, except those to vertices in S_2.", "We will detect which of these subgraphs contains a path, and we will know there is a path whose first edge goes from s to a vertex in the corresponding set (S_1 or S_2)."], "citing_paper_content": {"title": "Quantum Algorithm For Path-Edge Sampling", "abstract": "We present a quantum algorithm for sampling an edge on a path between two nodes s and t in an undirected graph given as an adjacency matrix, and show that this can be done in query complexity that is asymptotically the same, up to log factors, as the query complexity of detecting a path between s and t. We use this path sampling algorithm as a subroutine for st-path finding and st-cut-set finding algorithms in some specific cases. Our main technical contribution is an algorithm for generating a quantum state that is proportional to the positive witness vector of a span program."}, "cited_paper_content": {"title": "Quantum Query Complexity Of Some Graph Problems", "abstract": "Quantum algorithms for graph problems are considered, both in the adjacency matrix model and in an adjacency list-like array model. We give almost tight lower and upper bounds for the bounded error quantum query complexity of Connectivity, Strong Connectivity, Minimum Spanning Tree, and Single Source Shortest Paths. 
For example, we show that the query complexity of Minimum Spanning Tree is in \u0398(n^{3/2}) in the matrix model and in \(\Theta(\sqrt{nm})\) in the array model, while the complexity of Connectivity is also in \u0398(n^{3/2}) in the matrix model, but in \u0398(n) in the array model. The upper bounds utilize search procedures for finding minima of functions under various conditions."}, "keywords": ["algorithm"], "citation_intent": "method"} {"citing_id": "2303.06544v1", "cited_id": "1609.04836", "section_title": "Other Hyperparameters", "citation": "In our preliminary experiments, small batches provided models with worse generalization, and sets of 256 and 512 objects produced similar results; hence, to prioritize a flat minimum #REFR , we chose 256 objects in each batch.", "text_before_citation": ["Each of those hyperparameters is relevant and has an impact on the final results, but focusing on generalization, it is essential to highlight the early stopping criteria.", "The patience hyperparameter allows managing the overfitting; when a higher patience value is used, the overfitting is increased, and obviously, the performance in the testing set is decreased.", "Another relevant pattern was observed in the batch size; the impact of the batch size on optimization results has been studied.", "The evidence shows that a small batch size generates flat minima (better generalization), and on the other hand, bigger batches tend to lead to sharp minima #OTHEREFR .", "However, this evidence is based on a representative training set, so each batch is a good sample; in a case considering data shift, that is not true, and because of that, small batches can provide very wrong directions for the optimizer."], "text_after_citation": [], "citing_paper_content": {"title": "Informative Regularization For A Multi-Layer Perceptron Rr Lyrae Classifier Under Data Shift", "abstract": "In recent decades, machine learning has provided valuable models and algorithms for processing and 
extracting knowledge from time-series surveys. Different classifiers have been proposed and performed to an excellent standard. Nevertheless, few papers have tackled the data shift problem in labeled training sets, which occurs when there is a mismatch between the data distribution in the training set and the testing set. This drawback can damage the prediction performance in unseen data. Consequently, we propose a scalable and easily adaptable approach based on an informative regularization and an ad-hoc training procedure to mitigate the shift problem during the training of a multi-layer perceptron for RR Lyrae classification. We collect ranges for characteristic features to construct a symbolic representation of prior knowledge, which was used to model the informative regularizer component. Simultaneously, we design a two-step back-propagation algorithm to integrate this knowledge into the neural network, whereby one step is applied in each epoch to minimize classification error, while another is applied to ensure regularization. Our algorithm defines a subset of parameters (a mask) for each loss function. This approach handles the forgetting effect, which stems from a trade-off between these loss functions (learning from data versus learning expert knowledge) during training. Experiments were conducted using recently proposed shifted benchmark sets for RR Lyrae stars, outperforming baseline models by up to 3% through a more reliable classifier. Our method provides a new path to incorporate knowledge from characteristic features into artificial neural networks to manage the underlying data shift problem."}, "cited_paper_content": {"title": "On Large-Batch Training For Deep Learning: Generalization Gap And Sharp Minima", "abstract": "The stochastic gradient descent (SGD) method and its variants are algorithms of choice for many Deep Learning tasks. 
These methods operate in a small-batch regime wherein a fraction of the training data, say $32$-$512$ data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize. We investigate the cause for this generalization drop in the large-batch regime and present numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions - and as is well known, sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We discuss several strategies to attempt to help large-batch methods eliminate this generalization gap."}, "keywords": ["batch", "small batches"], "citation_intent": "result"} {"citing_id": "2304.11267v1", "cited_id": "1406.2661", "section_title": "Related Work", "citation": "Image generation has garnered significant research interest, particularly in recent years, with the advent of GANs #REFR .", "text_before_citation": [], "text_after_citation": ["GANs consist of two neural networks, a generator and a discriminator, that compete with each other to create realistic images #OTHEREFR .", "While GANs demonstrate the capability to generate high-resolution images with good perceptual quality #OTHEREFR , they are difficult to train effectively #OTHEREFR .", "VAEs #OTHEREFR emerged as a popular generative model, utilizing probabilistic graphical models to generate images through latent space representation, enabling efficient synthesis but with lower sample quality than GANs.", "Denoising Diffusion Probabilistic Model (DDPM) #OTHEREFR marked a significant milestone in the development of diffusion-based generative models.", "DDPM demonstrated the 
potential of these models to generate high-quality images through a series of iterative noise-removal steps."], "citing_paper_content": {"title": "Speed Is All You Need: On-Device Acceleration Of Large Diffusion Models Via Gpu-Aware Optimizations", "abstract": "The rapid development and application of foundation models have revolutionized the field of artificial intelligence. Large diffusion models have gained significant attention for their ability to generate photorealistic images and support various tasks. On-device deployment of these models provides benefits such as lower server costs, offline functionality, and improved user privacy. However, common large diffusion models have over 1 billion parameters and pose challenges due to restricted computational and memory resources on devices. We present a series of implementation optimizations for large diffusion models that achieve the fastest reported inference latency to date (under 12 seconds for Stable Diffusion 1.4 without INT8 quantization for a 512 \u00d7 512 image with 20 iterations) on GPU-equipped mobile devices. These enhancements broaden the applicability of generative AI and improve the overall user experience across a wide range of devices."}, "cited_paper_content": {"title": "Generative Adversarial Networks", "abstract": "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. 
In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples."}, "keywords": ["Image generation", "GANs"], "citation_intent": "background"} {"citing_id": "2303.08977v1", "cited_id": "1811.10180", "section_title": "Motion Deblurring", "citation": "In particular, we point out that our results are sharper than the EDI #REFR reconstruction, which assumes all events correspond to the same amount of intensity change.", "text_before_citation": ["We use three image quality metrics to compare DeblurSR with different baseline approaches: the Mean Squared Error (MSE, lower is better), the Peak Signal-to-Noise Ratio (PSNR, higher is better), and the Structural Similarity Index Measure (SSIM, higher is better).", "Our method demonstrates an impressive ability in motion deblurring.", "Specifically, on the REDS dataset, DeblurSR improves the current best-performing method by 12.3% in MSE, 4.7% in PSNR, and 4.9% in SSIM.", "On HQF, DeblurSR outperforms the state-of-the-art approach by 22.2% in MSE, 10.1% in PSNR, and 14.0% in SSIM.", "Qualitatively, as shown in Figure 4 and Figure 5 , DeblurSR generates smooth and sharp frames."], "text_after_citation": ["Meanwhile, our reconstructed frames are significantly less noisy than the eSL-Net #OTHEREFR reconstruction, which overly emphasizes texture details.", "Compared with E-CIR #OTHEREFR , DeblurSR offers more realistic details around the thin edges.", "The difference between E-CIR and DeblurSR is sometimes subtle and hard to notice from static images.", "We encourage readers to watch the animated visualizations provided in the supplementary material, which additionally demonstrate the temporal smoothness of our results 
in the video format.", "DeblurSR is computationally efficient."], "citing_paper_content": {"title": "Deblursr: Event-Based Motion Deblurring Under The Spiking Representation", "abstract": "We present DeblurSR, a novel motion deblurring approach that converts a blurry image into a sharp video. DeblurSR utilizes event data to compensate for motion ambiguities and exploits the spiking representation to parameterize the sharp output video as a mapping from time to intensity. Our key contribution, the Spiking Representation (SR), is inspired by the neuromorphic principles determining how biological neurons communicate with each other in living organisms. We discuss why the spikes can represent sharp edges and how the spiking parameters are interpreted from the neuromorphic perspective. DeblurSR has higher output quality and requires fewer computing resources than state-of-the-art event-based motion deblurring methods. We additionally show that our approach easily extends to video super-resolution when combined with recent advances in implicit neural representation. The implementation and animated visualization of DeblurSR are available at https://github.com/chensong1995/DeblurSR."}, "cited_paper_content": {"title": "Bringing A Blurry Frame Alive At High Frame-Rate With An Event Camera", "abstract": "Event-based cameras can measure intensity changes (called \u2018events\u2019) with microsecond accuracy under high-speed motion and challenging lighting conditions. With the active pixel sensor (APS), the event camera allows simultaneous output of the intensity frames. However, the output images are captured at a relatively low frame-rate and often suffer from motion blur. A blurry image can be regarded as the integral of a sequence of latent images, while the events indicate the changes between the latent images. Therefore, we are able to model the blur-generation process by associating event data to a latent image. 
In this paper, we propose a simple and effective approach, the Event-based Double Integral (EDI) model, to reconstruct a high frame-rate, sharp video from a single blurry frame and its event data. The video generation is based on solving a simple non-convex optimization problem in a single scalar variable. Experimental results on both synthetic and real images demonstrate the superiority of our EDI model and optimization method in comparison to the state-of-the-art."}, "keywords": ["intensity change", "events"], "citation_intent": "result"} {"citing_id": "2303.01155v2", "cited_id": "1606.05830", "section_title": "I. Introduction", "citation": "Visual SLAM (VSLAM) systems can employ a wide range of vision sensors, such as monocular, stereo, Red Green Blue-Depth (RGB-D), omnidirectional, and event-based cameras to estimate the environmental map while localizing the camera #REFR .", "text_before_citation": [], "text_after_citation": ["The primary advantage of vision sensors is that they need low-cost hardware to supply rich visual and semantic information from surroundings for various tasks #OTHEREFR .", "Semantic data, which refers to high-level information acquired from the environment, make VSLAM tasks more robust and expand the range of applications that can employ the reconstructed maps #OTHEREFR , #OTHEREFR .", "For instance, robots may need to identify objects in the scene (see Fig. 1).", "Fig. 1: The final reconstructed map of the environment using the proposed method in a hierarchical representation.", "Accordingly, the primary map entities are detected 3D points extracted from the environment and visited ArUco markers."], "citing_paper_content": {"title": "Marker-Based Visual Slam Leveraging Hierarchical Representations", "abstract": "Fiducial markers can encode rich information about the environment and can aid Visual SLAM (VSLAM) approaches in reconstructing maps with practical semantic information. 
Current marker-based VSLAM approaches mainly utilize markers for improving feature detections in low-feature environments and/or for incorporating loop closure constraints, generating only low-level geometric maps of the environment prone to inaccuracies in complex environments. To bridge this gap, this paper presents a VSLAM approach utilizing a monocular camera along with fiducial markers to generate hierarchical representations of the environment while improving the camera pose estimate. The proposed approach detects semantic entities from the surroundings, including walls, corridors, and rooms encoded within markers, and appropriately adds topological constraints among them. Experimental results on a real-world dataset collected with a robot demonstrate that the proposed approach outperforms a traditional marker-based VSLAM baseline in terms of accuracy, given the addition of new constraints while creating enhanced map representations. Furthermore, it shows satisfactory results when comparing the reconstructed map quality to the one reconstructed using a LiDAR SLAM approach."}, "cited_paper_content": {"title": "Past, Present, And Future Of Simultaneous Localization And Mapping: Toward The Robust-Perception Age", "abstract": "Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map ), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM and consider future directions. We start by presenting what is now the de-facto standard formulation for SLAM. 
We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial to those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues, that still deserve careful scientific investigation. The paper also contains the authors\u2019 take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?"}, "keywords": ["Visual SLAM (VSLAM)"], "citation_intent": "background"} {"citing_id": "2303.08595v1", "cited_id": "2003.02389", "section_title": "Background And Related Works", "citation": "Its counterpart in structured pruning-Iterative L1-norm-based pruning (ILP) #REFR , which removes filters based on their L1-norm values, cannot effectively prune a model while maintaining its accuracy.", "text_before_citation": ["But these methods have to explore a large search space of all available layer-wise sparsity, which is time consuming when neural networks are large and datasets are complex.", "Therefore, there is a great need for an automatic, iterative, structured pruning solution that can automatically and efficiently generate small, accurate, and hardware-efficient models. The challenges are three-fold.", "First, how to effectively identify the insignificant parameters in a model to prune? 
Existing works have explored different mechanisms, e.g., L2-Norm in Soft Filter Pruning (SFP) #OTHEREFR , Soft Channel Pruning (SCP) [Kang and Han, 2020] and EagleEye #OTHEREFR , geometric median in FPGM #OTHEREFR , Hessian in EigenDamage , Empirical Sensitivity in Provable Filter Pruning (PFP) #OTHEREFR , adversarial knockoff features in SCOP #OTHEREFR , polarization regularizer in Neuron-level Structured Pruning (NSP) #OTHEREFR , LASSO regression in Channel Pruning (CP) #OTHEREFR , and other information considering the relationship between neighboring layers (Gate Batch Normalization (GBN) #OTHEREFR , Sparse Structure Selection (SSS) #OTHEREFR , Hinge #OTHEREFR , Pruning From Scratch (PFS) and Stripe-Wise Pruning (SWP) #OTHEREFR ).", "In comparison, activation-based attention, proposed in this paper, can more effectively capture the importance of filters, and pruning based on attention values can produce much better models, as quantitatively shown in our evaluation (Section 4).", "Second, how to design an effective iterative pruning process to recover the accuracy loss caused by structured pruning? LTH-based iterative pruning is a promising approach, but it has only been shown to work with unstructured pruning such as IMP."], "text_after_citation": ["For example, ILP can prune ResNet-50 by at most 11.5% of parameters when the maximum accuracy loss is limited to 1% on ImageNet.", "So directly applying iterative pruning with existing weightmagnitude-based structured pruning methods does not produce accurate pruned models.", "This paper proposes a novel LTH-based iterative, structured pruning solution using attentions, and it significantly outperforms ILP and other related structured pruning works that involve an iterative process (GDP #OTHEREFR , ACTD , Quantization and Pruning (QP) #OTHEREFR , IMP-Refill and IMP-Regroup #OTHEREFR ).", "The third challenge is how to automate the pruning process so it does not require any human intervention? 
The existing structured pruning works all require difficult hand-tuning of many hyper-parameters, e.g., DCP #OTHEREFR and MDP need multiple hyper-parameters to balance the original taskspecific loss and the additional pruning loss; VCNNP #OTHEREFR requires careful settings of \u03c4 and \u03b8 to decide which filters to prune; DMC #OTHEREFR and DeepHoyer ] require parameters to decide the regularization strength with different settings for different datasets and models.", "To address this challenge, this paper proposes a fully automated pruning solution that can automatically generate pruned models that meet users' diverse model accuracy, size, and speed requirements."], "citing_paper_content": {"title": "Automatic Attention Pruning: Improving And Automating Model Pruning Using Attentions", "abstract": "Pruning is a promising approach to compress deep learning models in order to deploy them on resource-constrained edge devices. However, many existing pruning solutions are based on unstructured pruning, which yields models that cannot efficiently run on commodity hardware; and they often require users to manually explore and tune the pruning process, which is timeconsuming and often leads to sub-optimal results. To address these limitations, this paper presents Automatic Attention Pruning (AAP), an adaptive, attention-based, structured pruning approach to automatically generate small, accurate, and hardware-efficient models that meet user objectives. First, it proposes iterative structured pruning using activation-based attention maps to effectively identify and prune unimportant filters. Then, it proposes adaptive pruning policies for automatically meeting the pruning objectives of accuracy-critical, memory-constrained, and latency-sensitive tasks. A comprehensive evaluation shows that AAP substantially outperforms the state-of-the-art structured pruning works for a variety of model architectures. 
Our code is at: https://github.com/kaiqi123/Automatic-Attention-Pruning.git."}, "cited_paper_content": {"title": "Comparing Rewinding And Fine-Tuning In Neural Network Pruning", "abstract": "Many neural network pruning algorithms proceed in three steps: train the network to completion, remove unwanted structure to compress the network, and retrain the remaining structure to recover lost accuracy. The standard retraining technique, fine-tuning, trains the unpruned weights from their final trained values using a small fixed learning rate. In this paper, we compare fine-tuning to alternative retraining techniques. Weight rewinding (as proposed by Frankle et al. (2019)) rewinds unpruned weights to their values from earlier in training and retrains them from there using the original training schedule. Learning rate rewinding (which we propose) trains the unpruned weights from their final values using the same learning rate schedule as weight rewinding. Both rewinding techniques outperform fine-tuning, forming the basis of a network-agnostic pruning algorithm that matches the accuracy and compression ratios of several more network-specific state-of-the-art techniques."}, "keywords": ["structured pruning-Iterative L1-norm-based", "pruning"], "citation_intent": "background"} {"citing_id": "2303.16053v1", "cited_id": "1902.07891", "section_title": "Benchmark Results On Mpeblink Dataset", "citation": "A similar conclusion has already been made in #REFR but the properties of multi-person and long video in MPEblink make it worse for landmark-based eyeblink detection methods.", "text_before_citation": ["\u2022 For Blink-AP, InstBlink significantly outperforms the others by large margins (i.e., at least 21% of Blink-AP 50 and at least 6% of Blink-AP 75 ), which verifies the superiority of the proposed framework.", "We argue that one essential reason is that our framework can model a better long-term temporal eyeblink representation than the frame-based method #OTHEREFR and
the sliding-window-based methods #OTHEREFR .", "Moreover, under the proposed framework, eyeblink features can be facilitated via the face's global context (e.g., head pose and illumination condition) with joint optimization and interaction, while previous works that operate in a sequential manner cannot.", "From Table 2, it can also be seen that the landmark-based methods #OTHEREFR perform worse than the region-based counterparts.", "We think that one essential reason is that landmark detection is unreliable under unconstrained conditions."], "text_after_citation": ["Our method is region-based, but localizes the eye region in an implicit way where global face context is included and thus becomes more robust.", "\u2022 InstBlink also outperforms others on Inst-AP.", "We think that is because our framework can better model the long-term spatio-temporal instance representations, while the counterparts achieve tracking under a tracking-by-detection framework, which contains limited spatio-temporal modeling and may suffer from heavy occlusion. Inference speed analysis.", "The result is listed in Table 3, assuming the 4 compared methods use InsightFace for face & landmark detection and InstBlink runs inference within a clip length of 36.", "It can be seen that InstBlink also has a high inference speed (i.e., 112 FPS for network forwarding) while the real-time capacity of other methods is not superior."], "citing_paper_content": {"title": "Real-Time Multi-Person Eyeblink Detection In The Wild For Untrimmed Video", "abstract": "Real-time eyeblink detection in the wild can widely serve for fatigue detection, face anti-spoofing, emotion analysis, etc. The existing research efforts generally focus on single-person cases towards trimmed video. However, the multi-person scenario within untrimmed videos is also important for practical applications, but it has not yet been well studied.
To address this, we shed light on this research field for the first time with essential contributions on dataset, theory, and practices. In particular, a large-scale dataset termed MPEblink that involves 686 untrimmed videos with 8748 eyeblink events is proposed under multi-person conditions. The samples are captured from unconstrained films to reveal \"in the wild\" characteristics. Meanwhile, a real-time multi-person eyeblink detection method is also proposed. Different from the existing counterparts, our approach runs in a one-stage spatio-temporal way with end-to-end learning capacity. Specifically, it simultaneously addresses the sub-tasks of face detection, face tracking, and human instance-level eyeblink detection. This paradigm holds 2 main advantages: (1) eyeblink features can be facilitated via the face's global context (e.g., head pose and illumination condition) with joint optimization and interaction, and (2) addressing these sub-tasks in parallel instead of in a sequential manner saves time remarkably to meet the real-time running requirement. Experiments on MPEblink verify the essential challenges of real-time multi-person eyeblink detection in the wild for untrimmed video. Our method also outperforms existing approaches by large margins and with a high inference speed."}, "cited_paper_content": {"title": "Towards Real-Time Eyeblink Detection In The Wild:Dataset,Theory And Practices", "abstract": "Effective and real-time eyeblink detection is of wide-range applications, such as deception detection, drive fatigue detection, face anti-spoofing, etc. Although numerous efforts have already been made, most of them focus on addressing the eyeblink detection problem under constrained indoor conditions with a relatively consistent subject and environment setup. Nevertheless, towards practical applications, eyeblink detection in the wild is more necessary, and of greater challenge.
However, to our knowledge this has not been well studied before. In this paper, we shed light on this research topic. A labelled eyeblink in the wild dataset (i.e., HUST-LEBW) of 673 eyeblink video samples (i.e., 381 positives, and 292 negatives) is first established by us. These samples are captured from unconstrained movies, with dramatic variation in human attribute, human pose, illumination condition, imaging configuration, etc. Then, we formulate the eyeblink detection task as a spatial-temporal pattern recognition problem. After locating and tracking the human eye using the SeetaFace engine and KCF tracker respectively, a modified LSTM model able to capture the multi-scale temporal information is proposed to execute eyeblink verification. A feature extraction approach that reveals appearance and motion characteristics simultaneously is also proposed. The experiments on HUST-LEBW reveal the superiority and efficiency of our approach. It also verifies that existing eyeblink detection methods cannot achieve satisfactory performance in the wild."}, "keywords": ["landmark-based eyeblink detection"], "citation_intent": "result"} {"citing_id": "2304.14570v1", "cited_id": "1803.00188", "section_title": "Design Principles In Kaiaulu", "citation": "We chose to use the R language #REFR due to the authors' familiarity with the language and a preference for its package architecture.", "text_before_citation": ["In this section, we discuss how our design principles are translated into Kaiaulu's specific design decisions.", "In the following section, we fully flesh out Kaiaulu's modules and features.", "Batch Mode, Interactive Mode, and Literate Programming in Kaiaulu."], "text_after_citation": ["Minimally, the structure of an R package consists of the package metadata and its API.", "In addition, the R ecosystem encourages and promotes best practices to include documentation packages called vignettes, which leads R users to expect an API and R Notebooks when
installing packages from CRAN (The Comprehensive R Archive Network).", "#OTHEREFR CRAN treats R Notebooks as first-class citizens in an R package #OTHEREFR , showing any available R Notebooks on each package's website.", "Because of the R package structure, complying with familiar software abstractions (see Section 2.5) automatically brings the benefits of literate programming (see Section 2.3).", "Abstraction Debt in Kaiaulu."], "citing_paper_content": {"title": "Building The Msr Tool Kaiaulu: Design Principles And Experiences", "abstract": "Background: Since Alitheia Core was proposed and subsequently retired, tools that support empirical studies of software projects continue to be proposed, such as Codeface, Codeface4Smells, Grimoire-Lab and SmartSHARK, but they all make different design choices and provide overlapping functionality. Aims: We seek to understand the design decisions adopted by these tools-the good and the bad-along with their consequences, to understand why their authors reinvented functionality already present in other tools, and to help inform the design of future tools. Method: We used action research to evaluate the tools, and to determine a set of principles and anti-patterns to motivate a new tool design. Results: We identified 7 major design choices among the tools: 1) Abstraction Debt, 2) the use of Project Configuration Files, 3) the choice of Batch or Interactive Mode, 4) Minimal Paths to Data, 5) Familiar Software Abstractions, 6) Licensing and 7) the Perils of Code Reuse. Building on the observed good and bad design decisions, we created our own tool architecture and implemented it as an R package. Conclusions: Tools should not require onerous setup for users to obtain data. Authors should consider the conventions and abstractions used by their chosen language and build upon these instead of redefining them.
Tools should encourage best practices in experiment reproducibility by leveraging self-contained and readable schemas that are used for tool automation, and reuse must be done with care to avoid depending on dead code."}, "cited_paper_content": {"title": "Xnmt: The Extensible Neural Machine Translation Toolkit", "abstract": "This paper describes XNMT, the eXtensible Neural Machine Translation toolkit. XNMT distinguishes itself from other open-source NMT toolkits by its focus on modular code design, with the purpose of enabling fast iteration in research and replicable, reliable results. In this paper we describe the design of XNMT and its experiment configuration system, and demonstrate its utility on the tasks of machine translation, speech recognition, and multi-tasked machine translation/parsing. XNMT is available open-source at this https URL"}, "keywords": ["R language"], "citation_intent": "method"} {"citing_id": "2304.03481v1", "cited_id": "1902.10811", "section_title": "-Number Of Branches In The Ladder Self-Attention Block.", "citation": "The top-1 accuracy of our PSLT and DeiT-S on both the ImageNet and ImageNet-V2 validation sets conforms to the linear fit in #REFR .", "text_before_citation": ["-Stacking the proposed modules.", "As described in Section 3.1, our proposed PSLT is composed of the proposed ladder self-attention block, Light FFN and the pixel-adaptive fusion module.", "Table 15 shows the performance of models by stacking the proposed modules one by one, which demonstrates the effectiveness of the modules.", "Compared to the ViT, although applying our ladder self-attention (LSA) and Light FFN (LFFN) does not improve the performance, the number of parameters and FLOPs are largely reduced, where the parameters have been reduced by 9 times and 15 times respectively, and the FLOPs have been reduced by 27 times and 37 times respectively.", "The pixel-adaptive fusion module (PAFM) improves the model performance by integrating information of all
branches in the block."], "text_after_citation": ["Our PSLT (including DeiT-S) achieves smaller improvement over ViT-B on ImageNet-V2 validation (2% and 1% top-1 accuracy improvement on the ImageNet and ImageNet-V2 validation sets, respectively).", "Such a phenomenon has been similarly observed in #OTHEREFR , #OTHEREFR .", "As described in #OTHEREFR , this shows the model generalization ability is challenged on a different validation set."], "citing_paper_content": {"title": "Pslt: A Light-Weight Vision Transformer With Ladder Self-Attention And Progressive Shift", "abstract": "Vision Transformer (ViT) has shown great potential for various visual tasks due to its ability to model long-range dependency. However, ViT requires a large amount of computing resources to compute the global self-attention. In this work, we propose a ladder self-attention block with multiple branches and a progressive shift mechanism to develop a lightweight transformer backbone that requires less computing resources (e.g. a relatively small number of parameters and FLOPs), termed Progressive Shift Ladder Transformer (PSLT). First, the ladder self-attention block reduces the computational cost by modelling local self-attention in each branch. Meanwhile, the progressive shift mechanism is proposed to enlarge the receptive field in the ladder self-attention block by modelling diverse local self-attention for each branch and interacting among these branches. Second, the input feature of the ladder self-attention block is split equally along the channel dimension for each branch, which considerably reduces the computational cost in the ladder self-attention block (with nearly 1/3 the amount of parameters and FLOPs), and the outputs of these branches are then collaborated by a pixel-adaptive fusion. Therefore, the ladder self-attention block with a relatively small number of parameters and FLOPs is capable of modelling long-range interactions.
Based on the ladder self-attention block, PSLT performs well on several vision tasks, including image classification, object detection and person re-identification. On the ImageNet-1k dataset, PSLT achieves a top-1 accuracy of 79.9% with 9.2M parameters and 1.9G FLOPs, which is comparable to several existing models with more than 20M parameters and 4G FLOPs. Code is available at https://isee-ai.cn/wugaojie/PSLT.html."}, "cited_paper_content": {"title": "Do Imagenet Classifiers Generalize To Imagenet?", "abstract": "We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3% - 15% on CIFAR-10 and 11% - 14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly \"harder\" images than those found in the original test sets."}, "keywords": ["top-1 accuracy"], "citation_intent": "result"} {"citing_id": "2303.17857v1", "cited_id": "2002.02445", "section_title": "I.
Introduction", "citation": "A framework used for dataset generation was also proposed for cooperatively exploiting both visual and wireless data #REFR .", "text_before_citation": ["The traditional vision-based object tracking algorithms, such as the sparse representation and correlation filtering, have difficulty in accurately locating high-mobility UEs in real time #OTHEREFR .", "Furthermore, in complex environments, tracking multiple UEs in the face of blockage and uneven light, the localization accuracy of UEs tends to degrade significantly.", "As a result, the traditional CV-related ML algorithms cannot satisfy the high location accuracy required by mmWave communications.", "Finally, gathering abundant labelled data from real-world environments including both visual data and wireless signals to train ML models is still challenging.", "Nevertheless, vision-assisted beam management methods that predict the optimal mmWave beams have been investigated in the last few years."], "text_after_citation": ["However, these methods may still be plagued by a number of issues:", "\u2022 The robustness of the existing ML models for vision-assisted beam management has to be improved #OTHEREFR .", "When the classical image classification models designed for prediction are used for mmWave link blockage prediction and beam prediction, the target accuracy cannot always be satisfied, for example due to over-fitting issues.", "In this context, a preliminary study was conducted in #OTHEREFR by relying on a simple dataset, for investigating vision-assisted beam management in multi-user scenarios.", "\u2022 The scalability of the methods is not guaranteed in practical scenarios."], "citing_paper_content": {"title": "Vision-Assisted Mmwave Beam Management For Next-Generation Wireless Systems: Concepts, Solutions And Open Challenges", "abstract": "Beamforming techniques have been widely used in the millimeter wave (mmWave) bands to mitigate the path loss of mmWave radio links as the
narrow straight beams directionally concentrate the signal energy. However, traditional mmWave beam management algorithms usually require excessive channel state information overhead, leading to extremely high computational and communication costs. This hinders the widespread deployment of mmWave communications. By contrast, the revolutionary vision-assisted beam management system concept employed at base stations (BSs) can select the optimal beam for the target user equipment (UE) based on its location information determined by machine learning (ML) algorithms applied to visual data, without requiring channel information. In this paper, we present a comprehensive framework for a vision-assisted mmWave beam management system, its typical deployment scenarios as well as the specifics of the framework. Then, some of the challenges faced by this system and their efficient solutions are discussed from the perspective of ML. Next, a new simulation platform is conceived to provide both visual and wireless data for model validation and performance evaluation. Our simulation results indicate that the vision-assisted beam management is indeed attractive for next-generation wireless systems."}, "cited_paper_content": {"title": "Viwi Vision-Aided Mmwave Beam Tracking: Dataset, Task, And Baseline Solutions", "abstract": "Vision-aided wireless communication is motivated by the recent advances in deep learning and computer vision as well as the increasing dependence on line-of-sight links in millimeter wave (mmWave) and terahertz systems. By leveraging vision, this new research direction enables an interesting set of new capabilities such as vision-aided mmWave beam and blockage prediction, proactive hand-off, and resource allocation among others. These capabilities have the potential of reliably supporting highly-mobile applications such as vehicular/drone communications and wireless virtual/augmented reality in mmWave and terahertz systems.
Investigating these interesting applications, however, requires the development of a special dataset and machine learning tasks. Based on the Vision-Wireless (ViWi) dataset generation framework [1], this paper develops an advanced and realistic scenario/dataset that features multiple base stations, mobile users, and rich dynamics. Enabled by this dataset, the paper defines the vision-wireless mmWave beam tracking task (ViWi-BT) and proposes a baseline solution that can provide an initial benchmark for the future ViWi-BT algorithms."}, "keywords": ["wireless data"], "citation_intent": "method"} {"citing_id": "2303.07200v1", "cited_id": "1907.12207", "section_title": "Objective Function:", "citation": "Additionally, it is important that the function f can learn a fruitful representation and complex data dependencies #REFR .", "text_before_citation": ["EQUATION", "where F_s^* is the final selected feature set, J is a desired loss function, and f(x^(i)_{F_s}; \u03b8) is a classification function parameterized by \u03b8 aiming at estimating the target for the i-th sample using a subset of features x^(i)_{F_s}.
Solving this optimization problem can be a challenging task.", "As the choice of feature subset F_s grows exponentially with an increasing number of features d, solving Equation 1 is an NP-hard problem."], "text_after_citation": ["We choose artificial neural networks due to their high expressive power; a simple one-hidden-layer feed-forward neural network is known to be a universal approximator #OTHEREFR .", "Finally, as we aim to select features in a computationally efficient manner, in this paper, we choose sparse neural networks to represent the data and perform feature selection."], "citing_paper_content": {"title": "Supervised Feature Selection With Neuron Evolution In Sparse Neural Networks", "abstract": "Feature selection that selects an informative subset of variables from data not only enhances the model interpretability and performance but also alleviates the resource demands. Recently, there has been growing attention on feature selection using neural networks. However, existing methods usually suffer from high computational costs when applied to high-dimensional datasets. In this paper, inspired by evolution processes, we propose a novel resource-efficient supervised feature selection method using sparse neural networks, named \"NeuroFS\". By gradually pruning the uninformative features from the input layer of a sparse neural network trained from scratch, NeuroFS derives an informative subset of features efficiently. By performing several experiments on 11 low and high-dimensional real-world benchmarks of different types, we demonstrate that NeuroFS achieves the highest ranking-based score among the considered state-of-the-art supervised feature selection models.
The code is available on GitHub 1 ."}, "cited_paper_content": {"title": "Lassonet: Neural Networks With Feature Sparsity", "abstract": "We propose a neural network model, with a separate linear (residual) term, that explicitly bounds the input layer weights for a feature by the linear weight for that feature. The model can be seen as a modification of so-called residual neural networks to produce a path of models that are feature-sparse, that is, use only a subset of the features. This is analogous to the solution path from the usual Lasso ($\ell_1$-regularized) linear regression. We call the proposed procedure {\tt LassoNet} and develop a projected proximal gradient algorithm for its optimization. This approach can sometimes give test error as low as or lower than a standard neural network, and its feature selection provides more interpretable solutions. We illustrate the method using both simulated and real data examples, and show that it is often able to achieve competitive performance with a much smaller number of input features"}, "keywords": ["fruitful representation"], "citation_intent": "background"} {"citing_id": "2304.04911v1", "cited_id": "1904.12901", "section_title": "D.
Problem Formulation", "citation": "The assumption of the MDP does not hold on most real-world systems due to delays, and hence those may be only partially observable #REFR .", "text_before_citation": ["In this force trajectory tracking problem, the trajectories are restricted to sinusoidal waveforms between 0.05 and 0.35 Hz.", "This is because (as discussed in Section III-B) the input motor current is saturated to constrain the pendulum within safe position bounds and avoid any damage to the ball screw during learning.", "In addition, the initial random policy with abruptly switching high current values can induce vibration-based damage to the system."], "text_after_citation": ["This problem is often countered by adding more information in the observation space of the agent.", "The observation vector includes the following: the absolute encoder-based angle, q [rad], and the angular velocity, q\u0307", "EQUATION", "where the divisor, D = 10^6, maintains an episodic reward between [-30, 0].", "In the absence of this, the summation of reward per time step was observed to undergo catastrophic forgetting #OTHEREFR , resulting in unlearning and poor performance."], "citing_paper_content": {"title": "Real-Time Model-Free Deep Reinforcement Learning For Force Control Of A Series Elastic Actuator", "abstract": "Many state-of-the-art robotic applications utilize series elastic actuators (SEAs) with closed-loop force control to achieve complex tasks such as walking, lifting, and manipulation. Model-free PID control methods are more prone to instability due to nonlinearities in the SEA, where cascaded model-based robust controllers can remove these effects to achieve stable force control. However, these model-based methods require detailed investigations to characterize the system accurately. Deep reinforcement learning (DRL) has proved to be an effective model-free method for continuous control tasks, where few works deal with hardware learning.
This paper describes the training process of a DRL policy on hardware of an SEA pendulum system for tracking force control trajectories from 0.05-0.35 Hz at 50 N amplitude using the Proximal Policy Optimization (PPO) algorithm. Safety mechanisms are developed and utilized for training the policy for 12 hours (overnight) without an operator present within the full 21 hours training period. The tracking performance is evaluated showing improvements of 25 N in mean absolute error when comparing the first 18 min. of training to the full 21 hours for a 50 N amplitude, 0.1 Hz sinusoid desired force trajectory. Finally, the DRL policy exhibits better tracking and stability margins when compared to a model-free PID controller for a 50 N chirp force trajectory."}, "cited_paper_content": {"title": "Challenges Of Real-World Reinforcement Learning", "abstract": "Reinforcement learning (RL) has proven its worth in a series of artificial domains, and is beginning to show some successes in real-world scenarios. However, much of the research advances in RL are often hard to leverage in real-world systems due to a series of assumptions that are rarely satisfied in practice. We present a set of nine unique challenges that must be addressed to productionize RL to real world problems. For each of these challenges, we specify the exact meaning of the challenge, present some approaches from the literature, and specify some metrics for evaluating that challenge. An approach that addresses all nine challenges would be applicable to a large number of real world problems. 
We also present an example domain that has been modified to present these challenges as a testbed for practical RL research."}, "keywords": ["real-world systems"], "citation_intent": "background"} {"citing_id": "2304.05405v2", "cited_id": "2001.00326", "section_title": "Sdarts.", "citation": "E 2 NAS successfully outperformed previous works on the three datasets available in NAS-Bench-201 #REFR (e.g., a +29.45% top-1 accuracy improvement on ImageNet-16-120).", "text_before_citation": ["EQUATION", "where U is a function updating the hidden state \u210e obtained by aggregating all its predecessors with function G.", "Since", "EQUATION", "where L is the Cross-Entropy loss, R is an \u21132 regularization term, is a value that balances between optimizing the current architecture (exploitation) and preventing other alternatives from vanishing (exploration)."], "text_after_citation": ["Zhang et al.", "#OTHEREFR proposed iDARTS, a solution that reformulates the optimization process of DARTS with a Neumann approximation of the Implicit Function Theorem (IFT) #OTHEREFR . Concretely, the architectural parameter gradients \u0394 L (see Eq. 3) are calculated as follows:", "EQUATION", "where L is the validation loss, L is the training loss, and are the network weights.", "However, it is computationally intensive to compute the inverse of the Hessian matrix"], "citing_paper_content": {"title": "Efficient Automation Of Neural Network Design: A Survey On Differentiable Neural Architecture Search", "abstract": "In the past few years, Differentiable Neural Architecture Search (DNAS) rapidly imposed itself as the trending approach to automate the discovery of deep neural network architectures. This rise is mainly due to the popularity of DARTS, one of the first major DNAS methods. In contrast with previous works based on Reinforcement Learning or Evolutionary Algorithms, DNAS is faster by several orders of magnitude and uses fewer computational resources.
In this comprehensive survey, we focus specifically on DNAS and review recent approaches in this field. Furthermore, we propose a novel challenge-based taxonomy to classify DNAS methods. We also discuss the contributions brought to DNAS in the past few years and its impact on the global NAS field. Finally, we conclude by giving some insights into future research directions for the DNAS field. CCS Concepts: \u2022 General and reference \u2192 Surveys and overviews; \u2022 Computing methodologies \u2192 Search methodologies; Computer vision."}, "cited_paper_content": {"title": "Nas-Bench-201: Extending The Scope Of Reproducible Neural Architecture Search", "abstract": "Neural architecture search (NAS) has achieved breakthrough success in a great number of applications in the past few years. It could be time to take a step back and analyze the good and bad aspects in the field of NAS. A variety of algorithms search architectures under different search space. These searched architectures are trained using different setups, e.g., hyper-parameters, data augmentation, regularization. This raises a comparability problem when comparing the performance of various NAS algorithms. NAS-Bench-101 has shown success to alleviate this problem. In this work, we propose an extension to NAS-Bench-101: NAS-Bench-102 with a different search space, results on multiple datasets, and more diagnostic information. NAS-Bench-102 has a fixed search space and provides a unified benchmark for almost any up-to-date NAS algorithms. The design of our search space is inspired by the one used in the most popular cell-based searching algorithms, where a cell is represented as a directed acyclic graph. Each edge here is associated with an operation selected from a predefined operation set. 
For it to be applicable for all NAS algorithms, the search space defined in NAS-Bench-102 includes all possible architectures generated by 4 nodes and 5 associated operation options, which results in 15,625 neural cell candidates in total. The training log using the same setup and the performance for each architecture candidate are provided for three datasets. This allows researchers to avoid unnecessary repetitive training for selected architecture and focus solely on the search algorithm itself. The training time saved for every architecture also largely improves the efficiency of most NAS algorithms and presents a more computational cost friendly NAS community for a broader range of researchers. We provide additional diagnostic information such as fine-grained loss and accuracy, which can give inspirations to new designs of NAS algorithms. In further support of the proposed NAS-Bench-102, we have analyzed it from many aspects and benchmarked 10 recent NAS algorithms, which verify its applicability."}, "keywords": ["NAS", "E 2 NAS"], "citation_intent": "result"} {"citing_id": "2304.08756v2", "cited_id": "1801.08297", "section_title": "Related Work", "citation": "NDDR-CNN #REFR proposes a novel CNN structure to learn a discriminative feature embedding for each task.", "text_before_citation": ["As we will explain in the next section, directly extending the one-shot weight-sharing strategy from single-task NAS to multi-task NAS incurs interferences among tasks, leading to significant performance degradation.", "AutoTaskFormer tackles this problem using a novel NAS pipeline that involves two search stages instead of one, specifically designed for multi-task learning.", "Multi-task Learning (MTL). 
MTL aims to learn parameter sharing across multiple tasks.", "For example, Cross-Stitch #OTHEREFR manually employs an additional group of shared units to merge multiple task backbones.", "Sluice #OTHEREFR jointly learns a latent multi-task architecture and task-specific models."], "text_after_citation": ["There are several works that design task-sharing architectures through NAS.", "In particular, MTL-NAS #OTHEREFR starts with two task-specific networks and seeks optimal edges between the inter-task branches.", "AdaShare #OTHEREFR learns the sharing weights through a task-specific policy that selectively chooses which layers to execute for a given task.", "AutoMTL #OTHEREFR proposes a source-to-source compiler that transforms the backbone CNN into a supermodel.", "However, due to the bottom-up approach of combining task-specific networks, the sharing between tasks and networks needs to be meticulously designed and hence can only support a small number of tasks."], "citing_paper_content": {"title": "Autotaskformer: Searching Vision Transformers For Multi-Task Learning", "abstract": "Vision Transformers have shown great performance in single tasks such as classification and segmentation. However, real-world problems are not isolated, which calls for vision transformers that can perform multiple tasks concurrently. Existing multi-task vision transformers are handcrafted and heavily rely on human expertise. In this work, we propose a novel one-shot neural architecture search framework, dubbed AutoTaskFormer (Automated Multi-Task Vision TransFormer), to automate this process. AutoTaskFormer not only identifies the weights to share across multiple tasks automatically, but also provides thousands of well-trained vision transformers with a wide range of parameters (e.g., number of heads and network depth) for deployment under various resource constraints. 
Experiments on both small-scale (2-task Cityscapes and 3-task NYUv2) and large-scale (16-task Taskonomy) datasets show that AutoTaskFormer outperforms state-of-the-art handcrafted vision transformers in multi-task learning. The entire code and models will be open-sourced."}, "cited_paper_content": {"title": "Nddr-Cnn: Layerwise Feature Fusing In Multi-Task Cnns By Neural Discriminative Dimensionality Reduction", "abstract": "In this paper, we propose a novel Convolutional Neural Network (CNN) structure for general-purpose multi-task learning (MTL), which enables automatic feature fusing at every layer from different tasks. This is in contrast with the most widely used MTL CNN structures which empirically or heuristically share features on some specific layers (e.g., share all the features except the last convolutional layer). The proposed layerwise feature fusing scheme is formulated by combining existing CNN components in a novel way, with clear mathematical interpretability as discriminative dimensionality reduction, which is referred to as Neural Discriminative Dimensionality Reduction (NDDR). Specifically, we first concatenate features with the same spatial resolution from different tasks according to their channel dimension. Then, we show that the discriminative dimensionality reduction can be fulfilled by 1x1 Convolution, Batch Normalization, and Weight Decay in one CNN. The use of existing CNN components ensures the end-to-end training and the extensibility of the proposed NDDR layer to various state-of-the-art CNN architectures in a "plug-and-play" manner. The detailed ablation analysis shows that the proposed NDDR layer is easy to train and also robust to different hyperparameters. Experiments on different task sets with various base network architectures demonstrate the promising performance and desirable generalizability of our proposed method. 
The code of our paper is available at https://github.com/ethanygao/NDDR-CNN."}, "keywords": ["novel CNN structure"], "citation_intent": "background"} {"citing_id": "2305.00455v1", "cited_id": "1706.03762", "section_title": "Causal Semantics Extractor", "citation": "In #REFR , the vanilla attention matrix is based on the calculation of all the query-key pairs.", "text_before_citation": ["According to #OTHEREFR , the computation of the vanilla attention matrix A \u2208 R^{n\u00d7n} is based on the dot-product.", "It is defined as A = softmax(QK^T / \u221ad); Q = TW_q, K = TW_k, where the query matrix Q \u2208 R^{n\u00d7d} and key matrix K \u2208 R^{n\u00d7d} are generated by the linear projection of the input token matrix T \u2208 R^{n\u00d7d_m} based on the learnable weights matrices W_q \u2208 R^{d_m\u00d7d} and W_k \u2208 R^{d_m\u00d7d}. n indicates the total number of input tokens.", "d represents the embedding dimension and d_m denotes the dimension of an input token.", "The new value matrix V_new \u2208 R^{n\u00d7d} can be obtained via", "V_new = AV; V = TW_v, where the value matrix V \u2208 R^{n\u00d7d} and W_v \u2208 R^{d_m\u00d7d}."], "text_after_citation": ["However, in the proposed Causal Semantics Extractor, only the top \u03ba most similar keys and values for each query are used to compute the causal attention matrix.", "Similar to #OTHEREFR , all the queries and keys are calculated by the dot-product.", "Then, the row-wise top \u03ba elements are used for the softmax calculation.", "In the proposed Causal Semantics Extractor, the value matrix #OTHEREFR , for the input video, and VisualAtten(\u2022) indicates a visual attention mechanism based on the element-wise multiplication.", "X_mul = FC(Z_ta Z_va), where denotes the operation of feature concatenation and FC(\u2022) indicates a fully connected layer."], "citing_paper_content": {"title": "Causalainer: Causal Explainer For Automatic Video Summarization", "abstract": "The goal of video summarization is to automatically 
shorten videos such that it conveys the overall story without losing relevant information. In many application scenarios, improper video summarization can have a large impact. For example in forensics, the quality of the generated video summary will affect an investigator's judgment while in journalism it might yield undesired bias. Because of this, modeling explainability is a key concern. One of the best ways to address the explainability challenge is to uncover the causal relations that steer the process and lead to the result. Current machine learning-based video summarization algorithms learn optimal parameters but do not uncover causal relationships. Hence, they suffer from a relative lack of explainability. In this work, a Causal Explainer, dubbed Causalainer, is proposed to address this issue. Multiple meaningful random variables and their joint distributions are introduced to characterize the behaviors of key components in the problem of video summarization. In addition, helper distributions are introduced to enhance the effectiveness of model training. In visual-textual input scenarios, the extra input can decrease the model performance. A causal semantics extractor is designed to tackle this issue by effectively distilling the mutual information from the visual and textual inputs. Experimental results on commonly used benchmarks demonstrate that the proposed method achieves state-of-the-art performance while being more explainable."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. 
Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["vanilla attention matrix"], "citation_intent": "method"} {"citing_id": "2304.08597v1", "cited_id": "1902.00532", "section_title": "Key Limitations", "citation": "Time-constrained approaches have been proposed that limit the amount of time available for the AutoML system to search through the space; however, this came with a trade-off: balancing between time and pipeline accuracy #REFR. 
Thus the resulting pipeline was mostly sub-optimal.", "text_before_citation": ["While the iterative search (such as grid search or random search) has shown success, the search space explodes when pipelines contain many steps with an even greater number of choices.", "This explosion in the number of candidate pipelines is prohibitive in the compute time and cost of resources needed to support searching through a large hyper-parameter search space.", "The significant time cost of AutoML systems comes from evaluating the end-to-end pipeline, as the quality of the pipeline is only measured after the completion of the last step."], "text_after_citation": [], "citing_paper_content": {"title": "Etop: Early Termination Of Pipelines For Faster Training Of Automl System", "abstract": "Recent advancements in software and hardware technologies have enabled the use of AI/ML models in everyday applications and have significantly improved the quality of service rendered. However, for a given application, finding the right AI/ML model is a complex and costly process that involves the generation, training, and evaluation of multiple interlinked steps (called pipelines), such as data pre-processing, feature engineering, selection, and model tuning. These pipelines are complex (in structure) and costly (both in compute resource and time) to execute end-to-end, with a hyper-parameter associated with each step. AutoML systems automate the search of these hyper-parameters but are slow, as they rely on optimizing the pipelines' end output. We propose "the eTOP Framework" which works on top of any AutoML system and decides whether or not to execute the pipeline to the end or terminate at an intermediate step. 
Experimental evaluation on 26 benchmark datasets and integration of eTOP with MLBox reduces the training time of the AutoML system by up to 40x compared to the baseline MLBox."}, "cited_paper_content": {"title": "Hyper-Parameter Tuning Under A Budget Constraint", "abstract": "We study a budgeted hyper-parameter tuning problem, where we optimize the tuning result under a hard resource constraint. We propose to solve it as a sequential decision making problem, such that we can use the partial training progress of configurations to dynamically allocate the remaining budget. Our algorithm combines a Bayesian belief model which estimates the future performance of configurations, with an action-value function which balances exploration-exploitation tradeoff, to optimize the final output. It automatically adapts the tuning behaviors to different constraints, which is useful in practice. Experiment results demonstrate superior performance over existing algorithms, including the state-of-the-art one, on real-world tuning tasks across a range of different budgets."}, "keywords": ["AutoML system", "trade-off balancing"], "citation_intent": "background"} {"citing_id": "2303.10135v2", "cited_id": "1706.03762", "section_title": "B. 
Assembly Graphs", "citation": "Both the part-id and surface-id fields are encoded with a d-dimensional Sinusoidal Positional Encoding #REFR.", "text_before_citation": ["N, part-id \u2208 R^d].", "There are three atomic part types: long profile, short profile and angle bracket.", "2) Surface Nodes: Different to the one in #OTHEREFR , we associate each surface node", "v^s_i \u2208 V^s with the features \u03c6(v^s_i) = [surface-type \u2208 N, surface-id \u2208 R^d].", "There are two surface types (long and short) for profiles and one (lateral) for brackets."], "text_after_citation": ["3) Surface-to-Surface Edges: We design a fully-connected graph for all surface nodes V^s to capture the relation between untouched surfaces, which is more fine-grained than those in #OTHEREFR which only connect touched surfaces.", "These edges are assigned a feature \u03c6(e_i) \u2208 R, indicating the relation between the two surfaces: \u03c6(e_i) = relative distance (parallel); 1 (belong to the same part); \u22121 (orthogonal); 0 (same-surface loop).", "4) Surface-to-Part Edges: These connect each surface and part node pair", "(v^s_i, v^p_j) \u2208 V^s \u00d7 V^p, where surface v^s_i belongs to the part v^p_j."], "citing_paper_content": {"title": "Efficient And Feasible Robotic Assembly Sequence Planning Via Graph Representation Learning", "abstract": "Automatic Robotic Assembly Sequence Planning (RASP) can significantly improve productivity and resilience in modern manufacturing along with the growing need for greater product customization. One of the main challenges in realizing such automation resides in efficiently finding solutions from a growing number of potential sequences for increasingly complex assemblies. Besides, costly feasibility checks are always required for the robotic system. 
To address this, we propose a holistic graphical approach including a graph representation called Assembly Graph for product assemblies and a policy architecture, Graph Assembly Processing Network, dubbed GRACE for assembly sequence generation. Secondly, we use GRACE to extract meaningful information from the graph input and predict assembly sequences in a step-by-step manner. In experiments, we show that our approach can predict feasible assembly sequences across product variants of aluminum profiles based on data collected in simulation of a dual-armed robotic system. We further demonstrate that our method is capable of detecting infeasible assemblies, substantially alleviating the undesirable impacts from false predictions, and hence facilitating real-world deployment soon. Code and training data will be open-sourced."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. 
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["part-id", "Encoding"], "citation_intent": "method"} {"citing_id": "2303.05183v1", "cited_id": "1912.01703", "section_title": "Implementation Details", "citation": "All models are trained using Python 3.10.4, Pytorch 1.11.0 #REFR , and an Nvidia Tesla V100 GPU.", "text_before_citation": ["The learning rate for the noise estimator decreases by half every 10 epochs with 50 epochs trained, while the learning rate for the denoising network is halved every 20 epochs with 100 epochs trained.", "As for the hyper-parameter in the adaptive re-visible loss, we set \u03bb = 3 as the initial value and progressively increase it to 11.", "The denoising network and noise estimation are jointly optimized during training.", "The patches of size 128 \u00d7 128 are randomly cropped for training.", "Note that the details of the global masker and global mask mapper remain the same as Blind2Unblind #OTHEREFR ."], "text_after_citation": ["Datasets: We consider five types of Poisson-Gaussian noise in synthetic noise estimation: (1) PG_1: \u03b1 = 0.1, \u03c3 = 0.02, (2) PG_2: \u03b1 = 0.1, \u03c3 = 0.0002, (3) PG_3: \u03b1 = 0.05, \u03c3 = 0.02, (4) PG_4: \u03b1 = 0.05, \u03c3 = 0.0002, (5) PG_5: \u03b1 = 0.01, \u03c3 = 0.02.", "For grayscale images, we use BSD400 #OTHEREFR for the training set, and for sRGB images, we use CBSD432 #OTHEREFR .", "The noise levels are estimated on standard BSD68 #OTHEREFR and CBSD68 #OTHEREFR for grayscale and sRGB images, respectively.", "For synthetic denoising, we use ILSVRC2012 #OTHEREFR validation set for sRGB image denoising and BSD400 #OTHEREFR for grayscale image denoising.", "Specifically, following the setting in #OTHEREFR , we select 44328 images with sizes between 256 \u00d7 256 and 512 \u00d7 512 pixels from ILSVRC2012 validation set for training."], 
"citing_paper_content": {"title": "Blind2Sound: Self-Supervised Image Denoising Without Residual Noise", "abstract": "Self-supervised blind denoising for Poisson-Gaussian noise remains a challenging task. Pseudo-supervised pairs constructed from single noisy images re-corrupt the signal and degrade the performance. The visible blindspots solve the information loss in masked inputs. However, without explicit noise sensing, mean square error as an objective function cannot adjust denoising intensities for dynamic noise levels, leading to noticeable residual noise. In this paper, we propose Blind2Sound, a simple yet effective approach to overcome residual noise in denoised images. The proposed adaptive re-visible loss senses noise levels and performs personalized denoising without noise residues while retaining the signal lossless. The theoretical analysis of intermediate medium gradients guarantees stable training, while the Cramer Gaussian loss acts as a regularization to facilitate the accurate perception of noise levels and improve the performance of the denoiser. Experiments on synthetic and real-world datasets show the superior performance of our method, especially for single-channel images. The code is available in supplementary materials."}, "cited_paper_content": {"title": "Pytorch: An Imperative Style, High-Performance Deep Learning Library", "abstract": "Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. 
We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks."}, "keywords": ["Pytorch"], "citation_intent": "method"} {"citing_id": "2303.04654v1", "cited_id": "1608.03983", "section_title": "Implementation Details", "citation": "We utilize the AdamW optimizer [20] and the CosineAnnealing learning rate scheduler #REFR with their default parameters.", "text_before_citation": ["The FlyingThings3D dataset #OTHEREFR is used as the training dataset, which contains 800 pairs of synthetic RGBD images.", "The Middlebury2014 dataset #OTHEREFR is used as the testing dataset, which contains 23 real-world RGBD images.", "There is a significant domain gap between the synthetic and real-world images, making it suitable for evaluating the generalizability of the pre-trained DfF models.", "We simulate focal stacks with the RGBD images for training and testing, using a stack size of #OTHEREFR The lens data comes from lensnet.com #OTHEREFR 10.", "The focus distances are chosen linearly from the minimum (20 cm) and maximum (20 m) depth range of each image, randomly perturbed."], "text_after_citation": ["The training batch size is 16, and the initial learning rate is 1e\u22124.", "Each DfF model is trained for 400 epochs on a single A100 GPU.", "Following training, we evaluate each model directly on the testing focal stacks without any fine-tuning.", "In real-world experiments, we use the Canon EOS R camera and the RF 50mm F/1.8 lens discussed in Section 4.", "The training procedure remains the same as in the previous experiment."], "citing_paper_content": {"title": "Aberration-Aware Depth-From-Focus", "abstract": "Computer vision methods for depth 
estimation usually use simple camera models with idealized optics. For modern machine learning approaches, this creates an issue when attempting to train deep networks with simulated data, especially for focus-sensitive tasks like Depth-from-Focus. In this work, we investigate the domain gap caused by off-axis aberrations that will affect the decision of the best-focused frame in a focal stack. We then explore bridging this domain gap through aberration-aware training (AAT). Our approach involves a lightweight network that models lens aberrations at different positions and focus distances, which is then integrated into the conventional network training pipeline. We evaluate the generality of pretrained models on both synthetic and real-world data. Our experimental results demonstrate that the proposed AAT scheme can improve depth estimation accuracy without fine-tuning the model or modifying the network architecture. Our code and models will be made publicly available."}, "cited_paper_content": {"title": "Sgdr: Stochastic Gradient Descent With Warm Restarts", "abstract": "Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. 
Our source code is available at https://github.com/loshchil/SGDR"}, "keywords": ["CosineAnnealing learning rate"], "citation_intent": "method"} {"citing_id": "2303.08431v2", "cited_id": "1801.05039", "section_title": "Introduction", "citation": "In the seminal work of #REFR , the authors studied an LQR problem with deterministic dynamics over an infinite horizon.", "text_before_citation": ["In order to maximize the accumulated reward over time, the agent learns to select actions based on past experiences (exploitation) and by making new choices (exploration).", "In recent years, we have witnessed successful development of RL systems in various applications, including robotics control #OTHEREFR , AlphaGo and Atari games #OTHEREFR , autonomous driving #OTHEREFR , and stock trading #OTHEREFR .", "Despite its practical success, theoretical understanding of RL is still limited and at its primitive stage.", "To establish a better foundation of RL, there has been a surge of theoretical works in recent years on the Linear Quadratic Regulator (LQR) problem.", "This problem is a special class of control problems with linear dynamics and quadratic cost functions #OTHEREFR ."], "text_after_citation": ["They proved that the simple policy gradient method converges to the globally optimal solution with a linear rate (despite nonconvexity of the objective).", "Their key idea is to utilize the Riccati equation (an algebraic-equation characterization that only works for LQR problems) and show that the cost function enjoys a \"gradient dominance\" property.", "This result has been extended to other settings such as linear dynamics with additive or multiplicative Gaussian noise, finite-time horizon, and modifications of the vanilla policy-gradient method in follow-up works #OTHEREFR .", "Other aspects in the learning of LQR, such as the trade-off between exploration and exploitation, have also been studied recently #OTHEREFR .", "Despite the desirable theoretical properties of LQR, 
this setting is limited in practice due to the nonlinear nature of many real-world dynamic systems."], "citing_paper_content": {"title": "Policy Gradient Converges To The Globally Optimal Policy For Nearly Linear-Quadratic Regulators", "abstract": "Nonlinear control systems with partial information to the decision maker are prevalent in a variety of applications. As a step toward studying such nonlinear systems, this work explores reinforcement learning methods for finding the optimal policy in the nearly linear-quadratic regulator systems. In particular, we consider a dynamic system that combines linear and nonlinear components, and is governed by a policy with the same structure. Assuming that the nonlinear component comprises kernels with small Lipschitz coefficients, we characterize the optimization landscape of the cost function. Although the cost function is nonconvex in general, we establish the local strong convexity and smoothness in the vicinity of the global optimizer. Additionally, we propose an initialization mechanism to leverage these properties. Building on the developments, we design a policy gradient algorithm that is guaranteed to converge to the globally optimal policy with a linear rate."}, "cited_paper_content": {"title": "Global Convergence Of Policy Gradient Methods For The Linear Quadratic Regulator", "abstract": "Direct policy gradient methods for reinforcement learning and continuous control problems are a popular approach for a variety of reasons: 1) they are easy to implement without explicit knowledge of the underlying model 2) they are an \"end-to-end\" approach, directly optimizing the performance metric of interest 3) they inherently allow for richly parameterized policies. 
A notable drawback is that even in the most basic continuous control problem (that of linear quadratic regulators), these methods must solve a non-convex optimization problem, where little is understood about their efficiency from both computational and statistical perspectives. In contrast, system identification and model based planning in optimal control theory have a much more solid theoretical footing, where much is known with regards to their computational and statistical properties. This work bridges this gap showing that (model free) policy gradient methods globally converge to the optimal solution and are efficient (polynomially so in relevant problem dependent quantities) with regards to their sample and computational complexities."}, "keywords": ["LQR problem"], "citation_intent": "background"} {"citing_id": "2304.05561v1", "cited_id": "1506.02753", "section_title": "Comparison With Non-Adversarial Embedding Inversion", "citation": "We have compared our results with the ones obtained using the non-adversarial embedding inversion approach by Dosovitskiy et al. 
in #REFR .", "text_before_citation": [], "text_after_citation": ["With access to the original data and to all layers of the original model (both missing in the adversarial settings), the approach in #OTHEREFR trains an inversion network minimizing the imagespace L2 loss.", "In #OTHEREFR , the lowest reconstruction errors are obtained when the inversion network is trained together with the original model in the auto-encoder setup.", "We adopt this case as our baseline, which provides an upper-bound to the reconstruction quality.", "To enable the comparison with our approach, we implemented the baseline on ResNet50 face recognition model.", "As shown in Figure 8 , despite the adversarial conditions, our attack can craft reconstructions that are close to the baseline ones."], "citing_paper_content": {"title": "On The Adversarial Inversion Of Deep Biometric Representations", "abstract": "Biometric authentication service providers often claim that it is not possible to reverse-engineer a user's raw biometric sample, such as a fingerprint or a face image, from its mathematical (feature-space) representation. This is presented as a security feature of the system against an attacker who may be able to retrieve the template in the feature space. In this paper, we investigate this claim on the specific example of deep neural network (DNN) embeddings. Inversion of DNN embeddings has been investigated for explaining deep image representations or synthesizing normalized images. When setting the inversion, existing studies leverage full access to all layers of the original model, as well as all possible information on the original dataset-including, in some cases, the original image to be reconstructed. For the biometric authentication use case, we need to investigate this under adversarial settings where an attacker has access to a feature-space representation but no direct access to the exact original dataset nor the original learned model. 
Instead, we assume varying degree of attacker's background knowledge about the distribution of the dataset as well as the original learned model (architecture and training process). In the worst case, we assume attacker has no knowledge of the original data distribution nor the original model. In these cases, we show that the attacker can exploit off-the-shelf DNN models and public datasets, to mimic the behaviour of the original learned model to varying degrees of success, based only on the obtained representation and attacker's prior knowledge. We propose a two-pronged attack that first infers the original DNN by exploiting the model footprint on the embedding, and then reconstructs the raw data by using the inferred model. We show the practicality of the attack on popular DNNs trained for two prominent biometric modalities, face and fingerprint recognition. The attack can effectively infer the original recognition model (mean accuracy 83% for faces, 86% for fingerprints), and can craft effective biometric reconstructions that are successfully authenticated with 1-vs-1 authentication accuracy of up to 92% for some models."}, "cited_paper_content": {"title": "Inverting Visual Representations With Convolutional Networks", "abstract": "Feature representations, both hand-designed and learned ones, are often hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study image representations by inverting them with an up-convolutional neural network. We apply the method to shallow representations (HOG, SIFT, LBP), as well as to deep networks. For shallow representations our approach provides significantly better reconstructions than existing methods, revealing that there is surprisingly rich information contained in these features. Inverting a deep network trained on ImageNet provides several insights into the properties of the feature representation learned by the network. 
Most strikingly, the colors and the rough contours of an image can be reconstructed from activations in higher network layers and even from the predicted class probabilities."}, "keywords": ["non-adversarial embedding inversion"], "citation_intent": "result"} {"citing_id": "2303.12187v1", "cited_id": "1510.08484", "section_title": "Pre-Training", "citation": "The noise audio clips in the categories of \"natural\", \"music\" and \"babble\" are sampled from the MUSAN dataset #REFR .", "text_before_citation": ["The implication of baseline and all is the same as in Table 2; CMLR afterward.", "Our model is trained on 4 A100 GPUs with max tokens 2500; one batch of data contains at most 100 seconds of audio per GPU.", "It takes around 32 hours to finish a phase of 400k steps.", "We do not use gradient accumulation; the update frequency is set to 1.", "One thing to note is that, among all pre-training and fine-tuning stages, there is no extra noise added. Random noise is included in the decoding stage."], "text_after_citation": [], "citing_paper_content": {"title": "Practice Of The Conformer Enhanced Audio-Visual Hubert On Mandarin And English", "abstract": "Considering the bimodal nature of human speech perception, lip and teeth movement plays a pivotal role in automatic speech recognition. Benefiting from the correlated and noise-invariant visual information, audiovisual recognition systems enhance robustness in multiple scenarios. In previous work, audiovisual HuBERT appears to be the finest practice incorporating modality knowledge. This paper outlines a mixed methodology, named conformer enhanced AV-HuBERT, boosting the AV-HuBERT system's performance a step further. Compared with baseline AV-HuBERT, our method in the one-phase evaluation of clean and noisy conditions achieves 7% and 16% relative WER reduction on the English AVSR benchmark dataset LRS3. Furthermore, we establish a novel 1000h Mandarin AVSR dataset CSTS. 
On top of the baseline AV-HuBERT, we exceed the WeNet ASR system by 14% and 18% relatively on MISP and CMLR by pre-training with this dataset. The conformer-enhanced AV-HuBERT we proposed brings 7% on MISP and 6% CER reduction on CMLR, compared with the baseline AV-HuBERT system."}, "cited_paper_content": {"title": "Musan: A Music, Speech, And Noise Corpus", "abstract": "This report introduces a new corpus of music, speech, and noise. This dataset is suitable for training models for voice activity detection (VAD) and music/speech discrimination. Our corpus is released under a flexible Creative Commons license. The dataset consists of music from several genres, speech from twelve languages, and a wide assortment of technical and non-technical noises. We demonstrate use of this corpus for music/speech discrimination on Broadcast news and VAD for speaker identification."}, "keywords": ["MUSAN dataset"], "citation_intent": "method"} {"citing_id": "2303.10460v1", "cited_id": "1204.0867", "section_title": "Iv. Choosing A Spanning Tree Which Minimizes The Total Number Of Transmissions Used", "citation": "For a single uniprior ICP represented by the information-flow graph G, since the bandwidth-optimal index code in #REFR gives a separate code for each strongly connected component of G, in the rest of this paper, we consider all information-flow graphs to be strongly connected.", "text_before_citation": [], "text_after_citation": ["Further, for an index code C T obtained from a spanning tree T , both the notations l max (C T ) as well as l max (T ) are used interchangeably to mean the maximum number of transmissions used to decode a single requested message at any receiver while using the index code C T . Example 1.", "Consider the single uniprior ICP represented by the information-flow graph, G shown in Fig. 
1 .", "For this graph, Algorithm 2 in #OTHEREFR forms the connected graph on 4 labeled nodes and finds a spanning tree of diameter two.", "There are four possible spanning trees of diameter two for a labeled K 4 , which are shown in Fig. 2 .", "Corresponding to each of these spanning trees, the bandwidth-optimal index codes satisfying the min-max probability of error criterion are given in Table I ."], "citing_paper_content": {"title": "Average Probability Of Error For Single Uniprior Index Coding Over Rayleigh Fading Channel", "abstract": "Ong and Ho developed optimal linear index codes for single uniprior index coding problems (ICPs) by finding a spanning tree for each of the strongly connected components of the corresponding information-flow graphs, following which Thomas et al. considered the same class of ICPs over Rayleigh fading channel. They developed the min-max probability of error criterion for choosing an index code which minimized the probability of error at the receivers and showed that there always exist optimal linear index codes for which any receiver takes at most two transmissions to decode a requested message. Motivated by the above works, this paper considers single uniprior ICPs over Rayleigh fading channels for which minimizing average probability of error is shown to be a criterion for further selection of index codes. The optimal index code w.r.t this criterion is shown to be one that minimizes the total number of transmissions used for decoding the message requests at all the receivers. An algorithm that generates a spanning tree which has a lower value of this metric as compared to the optimal star graph is also presented. 
For a given set of parameters of single uniprior ICPs, a lower bound for the total number of transmissions used by any optimal index code is derived, and a class of ICPs for which this bound is tight is identified."}, "cited_paper_content": {"title": "Optimal Index Codes For A Class Of Multicast Networks With Receiver Side Information", "abstract": "This paper studies a special class of multicast index coding problems where a sender transmits messages to multiple receivers, each with some side information. Here, each receiver knows a unique message a priori, and there is no restriction on how many messages each receiver requests from the sender. For this class of multicast index coding problems, we obtain the optimal index code, which has the shortest codelength for which the sender needs to send in order for all receivers to obtain their (respective) requested messages. This is the first class of index coding problems where the optimal index codes are found. In addition, linear index codes are shown to be optimal for this class of index coding problems."}, "keywords": ["bandwidth-optimal index code"], "citation_intent": "background"} {"citing_id": "2304.04875v1", "cited_id": "1812.01601", "section_title": "Ablation Study", "citation": "Table 8 shows how 3DPW changes the 3D pseudo-GTs compared to other ITW datasets, such as MPII, LSPET, and InstaVariety #REFR .", "text_before_citation": ["This shows that our recipes are specially designed for the annotation networks f to obtain beneficial 3D pseudo-GTs.", "The reason for the small effect when the recipes are applied to the estimation networks g is that the estimation networks g are trained with 3D pseudo-GTs, while annotation networks f are trained with 2D GTs of ITW datasets without 3D evidence.", "The absence of 3D evidence when training annotation networks f results in severe ambiguities, which can be cured by our recipes.", "On the other hand, as estimation networks g are fully supervised with 3D pseudo-GTs, 
they suffer less from ambiguities.", "For each recipe, None means that both the annotation and estimation networks are trained with the other two recipes. Effect of training annotation network f on 3DPW."], "text_after_citation": ["As the table shows, adding other ITW datasets does not yield a performance gain for g compared to H36M+MI+COCO.", "This is because adding ITW datasets does not contribute to relieving the depth ambiguity as they provide only 2D GTs.", "On the other hand, 3DPW provides 3D GTs, largely helpful to alleviate the depth ambiguity.", "Importantly, the 3D errors of g on MuPoTS decrease as well, which implies that using 3DPW as an additional training set is beneficial for multiple 3D ITW benchmarks.", "It is noticeable that InstaVariety has 95 times more images than 3DPW, while being much less helpful for obtaining beneficial 3D pseudo-GTs."], "citing_paper_content": {"title": "Three Recipes For Better 3D Pseudo-Gts Of 3D Human Mesh Estimation In The Wild", "abstract": "Recovering 3D human mesh in the wild is greatly challenging as in-the-wild (ITW) datasets provide only 2D pose ground truths (GTs). Recently, 3D pseudo-GTs have been widely used to train 3D human mesh estimation networks as the 3D pseudo-GTs enable 3D mesh supervision when training the networks on ITW datasets. However, despite the great potential of the 3D pseudo-GTs, there has been no extensive analysis that investigates which factors are important to make more beneficial 3D pseudo-GTs. In this paper, we provide three recipes to obtain highly beneficial 3D pseudo-GTs of ITW datasets. The main challenge is that only 2D-based weak supervision is allowed when obtaining the 3D pseudo-GTs. Each of our three recipes addresses the challenge in each aspect: depth ambiguity, sub-optimality of weak supervision, and implausible articulation. 
Experimental results show that simply retraining state-of-the-art networks with our new 3D pseudo-GTs elevates their performance to the next level without bells and whistles. The 3D pseudo-GT is publicly available 1 ."}, "cited_paper_content": {"title": "Learning 3D Human Dynamics From Video", "abstract": "From an image of a person in action, we can easily guess the 3D motion of the person in the immediate past and future. This is because we have a mental model of 3D human dynamics that we have acquired from observing visual sequences of humans in motion. We present a framework that can similarly learn a representation of 3D dynamics of humans from video via a simple but effective temporal encoding of image features. At test time, from video, the learned temporal representation give rise to smooth 3D mesh predictions. From a single image, our model can recover the current 3D mesh as well as its 3D past and future motion. Our approach is designed so it can learn from videos with 2D pose annotations in a semi-supervised manner. Though annotated data is always limited, there are millions of videos uploaded daily on the Internet. In this work, we harvest this Internet-scale source of unlabeled data by training our model on unlabeled video with pseudo-ground truth 2D pose obtained from an off-the-shelf 2D pose detector. Our experiments show that adding more videos with pseudo-ground truth 2D pose monotonically improves 3D prediction performance. We evaluate our model on the recent challenging dataset of 3D Poses in the Wild and obtain state-of-the-art performance on the 3D prediction task without any fine-tuning. The project website with video can be found at https://akanazawa.github.io/human_dynamics/."}, "keywords": ["3D pseudo"], "citation_intent": "result"} {"citing_id": "2303.12942v1", "cited_id": "1907.07374", "section_title": "Iii. 
Research Methodology", "citation": "Researchers in #REFR provide a survey of interpretability and explainability related to ML algorithms, categorizing different interpretations proposed by different research works. Their categorization also pertains to medical applications.", "text_before_citation": ["In #OTHEREFR , the authors present a survey that explores the current trends and challenges of using visual analytics to interpret deep learning models based on XAI methods, as well as future directions of research in this area.", "Two perspectives have been taken into consideration, model usage and visual approaches.", "Model usage focuses on the performance of the AI system and how it behaves in different scenarios, while visual approaches examine the behavior of the AI system by looking at its outputs, such as predictions or classifications, over time.", "The researchers identified several research questions in light of their findings, then discussed the research directions that could be pursued in the future.", "By using XAI methods in the field of visual analytics, this survey provides guidance to better interpret neural networks."], "text_after_citation": ["According to the researchers, interpretability research can broadly be classified in many ways.", "It can range from methods that provide clearly interpreted information to analysis of complex patterns.", "The authors present attempts to mathematically formalize interpretability, as well as efforts to visualize it and evaluate its impact on task performance.", "These different approaches aim to better understand and improve the interpretability of algorithms.", "The investigators in #OTHEREFR provide a literature review of the use of XAI in the field of cybersecurity."], "citing_paper_content": {"title": "A Survey On Explainable Artificial Intelligence For Network Cybersecurity", "abstract": "The \"black-box\" nature of artificial intelligence (AI) models has been the source of many concerns in their use for 
critical applications. Explainable Artificial Intelligence (XAI) is a rapidly growing research field that aims to create machine learning models that can provide clear and interpretable explanations for their decisions and actions. In the field of network cybersecurity, XAI has the potential to revolutionize the way we approach network security by enabling us to better understand the behavior of cyber threats and to design more effective defenses. In this survey, we review the state of the art in XAI for cybersecurity in network systems and explore the various approaches that have been proposed to address this important problem. The review follows a systematic classification of network-driven cybersecurity threats and issues. We discuss the challenges and limitations of current XAI methods in the context of cybersecurity and outline promising directions for future research."}, "cited_paper_content": {"title": "A Survey On Explainable Artificial Intelligence (Xai): Towards Medical Xai", "abstract": "Recently, artificial intelligence, especially machine learning has demonstrated remarkable performances in many tasks, from image processing to natural language processing, especially with the advent of deep learning. Along with research progress, machine learning has encroached into many different fields and disciplines. Some of them, such as the medical field, require high level of accountability, and thus transparency, which means we need to be able to explain machine decisions, predictions and justify their reliability. This requires greater interpretability, which often means we need to understand the mechanism underlying the algorithms. Unfortunately, the black-box nature of the deep learning is still unresolved, and many machine decisions are still poorly understood. 
We provide a review on interpretabilities suggested by different research works and categorize them, with the intention of providing an alternative perspective that is hopefully more tractable for future adoption of an interpretability standard. We explore further into interpretability in the medical field, illustrating the complexity of the interpretability issue."}, "keywords": ["explainability", "interpretability"], "citation_intent": "background"} {"citing_id": "2304.09527v1", "cited_id": "1604.03650", "section_title": "Stereo Synthesis Comparison", "citation": "In Figure 5 , the poles generated by Xie et al. #REFR and Godard et al.", "text_before_citation": ["Although they perform well on visible regions, they are inefficient at handling the disoccluded regions.", "Thanks to the two rectification strategies, our method is able to identify the error regions caused by the first stage and inpaint them with reasonable contents.", "Specifically, the pruning-based rectification detects the intractable area of warping while the bidirectional matching rectification finds the inconsistent pixels between the stereo views.", "On the other hand, the exemplar synthesis results are shown in Figure 5 , and we can see that there are some artifacts in their results.", "In particular, their synthesized stereo either produces blurring artifacts or contains severe distortion."], "text_after_citation": ["#OTHEREFR (highlighted in the red rectangles) are distorted, while the ones generated by our method are very close to the ground-truth image.", "Although the pole in the synthesized results produced by Luo et al. #OTHEREFR and Gonzalez et al.", "#OTHEREFR are straight, their image details are not well preserved.", "Also, the shape of traffic lights in Gonzalez et al. 
#OTHEREFR is blurry.", "Moreover, the contour of the van in the green rectangles generated by previous methods is blurry or distorted, while our method can produce a faithful result."], "citing_paper_content": {"title": "Single-View View Synthesis With Self-Rectified Pseudo-Stereo", "abstract": "Synthesizing novel views from a single view image is a highly ill-posed problem. We discover an effective solution to reduce the learning ambiguity by expanding the single-view view synthesis problem to a multi-view setting. Specifically, we leverage the reliable and explicit stereo prior to generate a pseudo-stereo viewpoint, which serves as an auxiliary input to construct the 3D space. In this way, the challenging novel view synthesis process is decoupled into two simpler problems of stereo synthesis and 3D reconstruction. In order to synthesize a structurally correct and detail-preserved stereo image, we propose a self-rectified stereo synthesis to amend erroneous regions in an identify-rectify manner. Hard-to-train and incorrect warping samples are first discovered by two strategies, 1) pruning the network to reveal low-confident predictions; and 2) bidirectionally matching between stereo images to allow the discovery of improper mapping. These regions are then inpainted to form the final pseudo-stereo. With the aid of this extra input, a preferable 3D reconstruction can be easily obtained, and our"}, "cited_paper_content": {"title": "Deep3D: Fully Automatic 2D-To-3D Video Conversion With Deep Convolutional Neural Networks", "abstract": "As 3D movie viewing becomes mainstream and the Virtual Reality (VR) market emerges, the demand for 3D contents is growing rapidly. Producing 3D videos, however, remains challenging. In this paper we propose to use deep neural networks to automatically convert 2D videos and images to a stereoscopic 3D format. 
In contrast to previous automatic 2D-to-3D conversion algorithms, which have separate stages and need ground truth depth map as supervision, our approach is trained end-to-end directly on stereo pairs extracted from existing 3D movies. This novel training scheme makes it possible to exploit orders of magnitude more data and significantly increases performance. Indeed, Deep3D outperforms baselines in both quantitative and human subject evaluations."}, "keywords": ["Figure", "Godard"], "citation_intent": "background"} {"citing_id": "2304.07442v1", "cited_id": "1312.5602", "section_title": "Iv. Our Algorithm", "citation": "To overcome this issue, we develop a technique inspired by work in reinforcement learning #REFR .", "text_before_citation": ["These updates do not cause a significant overall decrease in optimizee's cost function.", "This effect is more pronounced when we operate in parameter space as opposed to gradient space since the LSTM network does not get any information about the curvature of the optimization surface.", "Running optimization in a finite horizon window in parameter space can then cause LSTM to suggest incorrect updates to the optimizee network.", "In the specific case of QNNs, the parameters correspond to a rotation of given input state about a particular axis.", "An incorrect update can very easily cause the quantum state to be rotated incorrectly and therefore lead to an increase in the value of the objective function we're interested in minimizing."], "text_after_citation": ["Throughout training, we keep track of past history of parameters and their corresponding cost function values in a \"replay buffer\".", "At the start of training we instantiate a double ended queue dubbed as a replay-buffer B of a finite capacity R. For metaiteration t = 1 . . 
.", "T we observe a history of parameters \u03b8 t and the corresponding cost C(\u03b8 t ).", "If C(\u03b8 t+1 ) < C(\u03b8 t ) then we add the state s = [\u03b8 t+1 , C(\u03b8 t+1 ), \u2206C(\u03b8), h t+1 ] to the replay buffer.", "Once the meta-iteration ends and L(\u03a6) is computed, if the QNN cost function is diverging, we seed the parameters for the next meta-iteration by performing the following update:"], "citing_paper_content": {"title": "Learning To Optimize Quantum Neural Network Without Gradients", "abstract": "Quantum Machine Learning is an emerging subfield in machine learning where one of the goals is to perform pattern recognition tasks by encoding data into quantum states. This extension from classical to quantum domain has been made possible due to the development of hybrid quantum-classical algorithms that allow a parameterized quantum circuit to be optimized using gradient based algorithms that run on a classical computer. The similarities in training of these hybrid algorithms and classical neural networks have further led to the development of Quantum Neural Networks (QNNs). However, in the current training regime for QNNs, the gradients w.r.t. the objective function have to be computed on the quantum device. This computation is highly non-scalable and is affected by hardware and sampling noise present in the current generation of quantum hardware. In this paper, we propose a training algorithm that does not rely on gradient information. Specifically, we introduce a novel meta-optimization algorithm that trains a meta-optimizer network to output parameters for the quantum circuit such that the objective function is minimized. 
We empirically and theoretically show that we achieve a better quality minima in fewer circuit evaluations than existing gradient based algorithms on different datasets."}, "cited_paper_content": {"title": "Playing Atari With Deep Reinforcement Learning", "abstract": "We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them."}, "keywords": ["reinforcement learning"], "citation_intent": "method"} {"citing_id": "2304.03006v1", "cited_id": "1602.05629", "section_title": "A. 
Federated Learning", "citation": "However, for the federated model, which has seen no training data (ensuring data privacy), these local updates are then collated, in our case via a block to be added to the blockchain, and averaged based on the FedAvg algorithm #REFR .", "text_before_citation": ["The goal for every model in the system, both the local models and the federated (global) model, is to minimize the loss with respect to the model parameter: min", "EQUATION", "Where is a chosen loss function, consistent across all participating models, x, y are the training (input and desired output) vectors and \u03c9 denotes the model's parameters."], "text_after_citation": ["Hence, for each local model index m = 1, 2, ..., M of a participating IoT device, which has performed local training (1) either on the device itself or via a processing node, over its training set T m :", "EQUATION", "Where T = \\sum_{m=1}^{M} |T_m|."], "citing_paper_content": {"title": "Iot Federated Blockchain Learning At The Edge", "abstract": "IoT devices are sorely underutilised in the medical field, especially within machine learning for medicine, yet they offer unrivalled benefits. IoT devices are low-cost, energy-efficient, small and intelligent devices [1]. In this paper, we propose a distributed federated learning framework for IoT devices, more specifically for IoMT (Internet of Medical Things), using blockchain to allow for a decentralised scheme improving privacy and efficiency over a centralised system; this allows us to move from the cloud-based architectures, that are prevalent, to the edge. The system is designed for three paradigms: 1) Training neural networks on IoT devices to allow for collaborative training of a shared model whilst decoupling the learning from the dataset [2] to ensure privacy [3]. 
Training is performed in an online manner simultaneously amongst all participants, allowing for training on actual data that may not have been present in a dataset collected in the traditional way and dynamically adapting the system whilst it is being trained. 2) Training of an IoMT system in a fully private manner so as to mitigate the issue of confidentiality of medical data and to build robust, and potentially bespoke [4], models where not much, if any, data exists. 3) Distribution of the actual network training, something federated learning itself does not do, to allow hospitals, for example, to utilize their spare computing resources to train network models."}, "cited_paper_content": {"title": "Communication-Efficient Learning Of Deep Networks From Decentralized Data", "abstract": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. 
Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent."}, "keywords": ["federated model"], "citation_intent": "method"} {"citing_id": "2305.02468v1", "cited_id": "1909.02027", "section_title": "Datasets", "citation": "CLINC150 #REFR This dataset is multi-domain dataset which contains 23,700 utterances that cover 150 intent classes over 10 domains.", "text_before_citation": ["The dialogues cover a wide range of domains and topics.", "MultiWOZ 2.2 #OTHEREFR is the improved version of MultiWOZ 2.1 #OTHEREFR ) that has corrected annotation errors, inconsistencies, and ontology issues, also added span annotations to standardize.", "MultiWOZ -2.1, 2.2 The MultiWOZ", "Banking 77 #OTHEREFR This dataset is a collection of 77 real-life customer banking service queries. It consists of 13,083 utterances.", "Each query is labeled with a single intent, however, it is hard to differentiate because they correspond to very similar tasks."], "text_after_citation": ["HWU64"], "citing_paper_content": {"title": "Task-Optimized Adapters For An End-To-End Task-Oriented Dialogue System", "abstract": "Task-Oriented Dialogue (TOD) systems are designed to carry out specific tasks by tracking dialogue states and generating appropriate responses to help users achieve defined goals. Recently, end-to-end dialogue models pre-trained based on large datasets have shown promising performance in the conversational system. However, they share the same parameters to train tasks of the dialogue system (NLU, DST, NLG), so debugging each task is challenging. Also, they require a lot of effort to fine-tune large parameters to create a taskoriented chatbot, making it difficult for nonexperts to handle. Therefore, we intend to train relatively lightweight and fast models compared to PLM. 
In this paper, we propose an End-to-end TOD system with Task-Optimized Adapters which learn independently per task, adding only a small number of parameters after fixed layers of the pre-trained network. We also enhance the performance of the DST and NLG modules through reinforcement learning, overcoming the limitations of adapter-only learning and enabling natural, consistent response generation appropriate for the goal. Our method is a model-agnostic approach and does not require prompt-tuning, taking only input data without a prompt. Experimental results show that our method achieves competitive performance on the MultiWOZ benchmark compared to the existing end-to-end models. In particular, we attain state-of-the-art performance on the DST task of the 2.2 dataset."}, "cited_paper_content": {"title": "An Evaluation Dataset For Intent Classification And Out-Of-Scope Prediction", "abstract": "Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope---i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. We evaluate a range of benchmark classifiers on our dataset along with several different out-of-scope identification schemes. We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries. 
Our dataset and evaluation fill an important gap in the field, offering a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems."}, "keywords": ["multi-domain dataset", "150 intent classes"], "citation_intent": "background"} {"citing_id": "2303.11546v1", "cited_id": "1706.02677", "section_title": "Implementation Details", "citation": "A weight decay is set to 0.01, with a linear warmup #REFR over t warm =1k iterations, followed by a linear decay.", "text_before_citation": ["Mapillary #OTHEREFR involves 18,000 training images and 2,000 validation images with diverse resolutions.", "For brevity, we denote GTA, SYNTHIA, Cityscapes, BDD, and Mapillary as G, S, C, B, and M, respectively. Network architecture.", "We conduct experiments using ResNet #OTHEREFR as an encoder architecture and DeepLabV3+ #OTHEREFR as a semantic segmentation decoder architecture.", "In all experiments, encoders are initialized with an ImageNet #OTHEREFR pre-trained model. Training. We adopt an AdamW [38] optimizer.", "An initial learning rate is set to 3\u00d710 \u22125 for the encoder and 3 \u00d7 10 \u22124 for the decoder, 40k training iterations, a batch size of 4."], "text_after_citation": ["We use random scaling in the range [0.5, 2.0] and random cropping with a size of 768\u00d7768.", "We apply additional data augmentation techniques, including random flipping and color jittering.", "We set the texture regularization parameters as u l =5 \u00d7 10 \u2212l\u22122 , and the texture generalization parameters as v l =5 \u00d7 10 \u2212l\u22122 .", "The original task loss and the stylized task loss weights are set to \u03b1 orig =0.5 and \u03b1 styl =0.5, respectively. 
We set the RSM threshold \u03c4 =0.1."], "citing_paper_content": {"title": "Texture Learning Domain Randomization For Domain Generalized Segmentation", "abstract": "Deep Neural Networks (DNNs)-based semantic segmentation models trained on a source domain often struggle to generalize to unseen target domains, i.e., a domain gap problem. Texture often contributes to the domain gap, making DNNs vulnerable to domain shift because they are prone to be texture-biased. Existing Domain Generalized Semantic Segmentation (DGSS) methods have alleviated the domain gap problem by guiding models to prioritize shape over texture. On the other hand, shape and texture are two prominent and complementary cues in semantic segmentation. This paper argues that leveraging texture is crucial for improving performance in DGSS. Specifically, we propose a novel framework, coined Texture Learning Domain Randomization (TLDR). TLDR includes two novel losses to effectively enhance texture learning in DGSS: (1) a texture regularization loss to prevent overfitting to source domain textures by using texture features from an ImageNet pretrained model and (2) a texture generalization loss that utilizes random style images to learn diverse texture representations in a self-supervised manner. Extensive experimental results demonstrate the superiority of the proposed TLDR; e.g., TLDR achieves 46.5 mIoU on GTA\u2192Cityscapes using ResNet-50, which improves the prior state-of-the-art method by 1.9 mIoU."}, "cited_paper_content": {"title": "Accurate, Large Minibatch Sgd: Training Imagenet In 1 Hour", "abstract": "Deep learning thrives with large neural networks and large datasets. However, larger networks and larger datasets result in longer training times that impede research and development progress. Distributed synchronous SGD offers a potential solution to this problem by dividing SGD minibatches over a pool of parallel workers. 
Yet to make this scheme efficient, the per-worker workload must be large, which implies nontrivial growth in the SGD minibatch size. In this paper, we empirically show that on the ImageNet dataset large minibatches cause optimization difficulties, but when these are addressed the trained networks exhibit good generalization. Specifically, we show no loss of accuracy when training with large minibatch sizes up to 8192 images. To achieve this result, we adopt a hyper-parameter-free linear scaling rule for adjusting learning rates as a function of minibatch size and develop a new warmup scheme that overcomes optimization challenges early in training. With these simple techniques, our Caffe2-based system trains ResNet-50 with a minibatch size of 8192 on 256 GPUs in one hour, while matching small minibatch accuracy. Using commodity hardware, our implementation achieves ~90% scaling efficiency when moving from 8 to 256 GPUs. Our findings enable training visual recognition models on internet-scale data with high efficiency."}, "keywords": ["weight decay", "linear warmup"], "citation_intent": "method"} {"citing_id": "2304.09715v1", "cited_id": "1707.03167", "section_title": "IV. 
Experiments", "citation": "Proposed by RegNet #REFR , it is the first KITTI split introduced for this task, and as shown by our results in Table II it remains the most challenging.", "text_before_citation": ["It provides enough samples and an accurate calibration ground truth.", "Using the same dataset as previous works will help us compare our results.", "However, those previous publications used different splits of the KITTI dataset.", "We present those splits in Table I and will refer to them as \u03b1, \u03b2, \u03b3, and \u03b4.", "From the splits presented in Table I , we choose \u03b1 as our reference."], "text_after_citation": ["This is because it uses samples from separate days for training/validation and testing, with different camera intrinsics.", "In comparison, splits \u03b2, \u03b3, and \u03b4 used in other publications include in their testing sets samples recorded the same day and with the same camera intrinsics as their training sets.", "\u03b4 even uses spatially redundant samples (some scenes are captured in the same location in training and testing sets) as acknowledged in #OTHEREFR .", "The split and dataset choice are not trivial as results in Table II show that the same network (here UniCal) trained and tested on different splits will perform differently.", "The splits with more similarities between the training and testing sets will get seemingly better results, whether those similarities are mostly in the camera intrinsics as in \u03b2, or even in the location where the samples were collected, as in \u03b4."], "citing_paper_content": {"title": "Unical: A Single-Branch Transformer-Based Model For Camera-To-Lidar Calibration And Validation", "abstract": "We introduce a novel architecture, UniCal, for Camera-to-LiDAR (C2L) extrinsic calibration which leverages self-attention mechanisms through a Transformer-based backbone network to infer the 6-degree-of-freedom (DoF) relative transformation between the sensors. 
Unlike previous methods, UniCal performs an early fusion of the input camera and LiDAR data by aggregating camera image channels and LiDAR mappings into a multi-channel unified representation before extracting their features jointly with a single-branch architecture. This single-branch architecture makes UniCal lightweight, which is desirable in applications with constrained resources such as autonomous driving. Through experiments, we show that UniCal achieves state-of-the-art results compared to existing methods. We also show that through transfer learning, weights learned on the calibration task can be applied to a calibration validation task without retraining the backbone."}, "cited_paper_content": {"title": "Regnet: Multimodal Sensor Registration Using Deep Neural Networks", "abstract": "In this paper, we present RegNet, the first deep convolutional neural network (CNN) to infer a 6 degrees of freedom (DOF) extrinsic calibration between multimodal sensors, exemplified using a scanning LiDAR and a monocular camera. Compared to existing approaches, RegNet casts all three conventional calibration steps (feature extraction, feature matching and global regression) into a single real-time capable CNN. Our method does not require any human interaction and bridges the gap between classical offline and target-less online calibration approaches as it provides both a stable initial estimation as well as a continuous online correction of the extrinsic parameters. During training we randomly decalibrate our system in order to train RegNet to infer the correspondence between projected depth measurements and the RGB image and finally regress the extrinsic calibration. 
Additionally, with an iterative execution of multiple CNNs that are trained on different magnitudes of decalibration, our approach compares favorably to state-of-the-art methods in terms of a mean calibration error of 0.28 degrees for the rotational and 6 cm for the translation components even for large decalibrations up to 1.5 m and 20 degrees."}, "keywords": ["RegNet"], "citation_intent": "background"} {"citing_id": "2303.00844v1", "cited_id": "1909.09564", "section_title": "Lad-Lasso-Based Omp", "citation": "Finally, we consider the LAD-LASSO loss function G LAD \u2113 1 w defined in #REFR .", "text_before_citation": [], "text_after_citation": ["In this case, we restrict ourselves to the real-valued case for the sake of simplicity.", "In order to formulate the corresponding greedy selection rule, we need to introduce some auxiliary notation.", "First, we define an augmented version A \u2208 R^{(m+1)\u00d7N} of the matrix A \u2208 R^{m\u00d7N} as A := A \u03bbw * or, equivalently, A ij :=", "EQUATION", "In addition, given x \u2208 R^N, we consider N augmentations of the residual vector r = Ax \u2212 y \u2208 R^m as the vectors r_j \u2208 R^{m+1}, defined by
Specifically, we consider \u2113_0- and \u2113_1-based versions of the weighted LASSO (Least Absolute Shrinkage and Selection Operator), the Square-Root LASSO (SR-LASSO) and the Least Absolute Deviations LASSO (LAD-LASSO). Through numerical experiments on Gaussian compressive sensing and high-dimensional function approximation, we demonstrate the effectiveness of the proposed algorithms and empirically show that they inherit desirable characteristics from the corresponding loss functions, such as SR-LASSO's noise-blind optimal parameter tuning and LAD-LASSO's fault tolerance. In doing so, our study sheds new light on the connection between greedy sparse recovery and convex relaxation."}, "cited_paper_content": {"title": "Sparse Harmonic Transforms Ii: Best $S$-Term Approximation Guarantees For Bounded Orthonormal Product Bases In Sublinear-Time", "abstract": "In this paper, we develop a sublinear-time compressive sensing algorithm for approximating functions of many variables which are compressible in a given Bounded Orthonormal Product Basis (BOPB). The resulting algorithm is shown to both have an associated best $s$-term recovery guarantee in the given BOPB, and also to work well numerically for solving sparse approximation problems involving functions contained in the span of fairly general sets of as many as $\\sim10^{230}$ orthonormal basis functions. All code is made publicly available. As part of the proof of the main recovery guarantee new variants of the well-known CoSaMP algorithm are proposed which can utilize any sufficiently accurate support identification procedure satisfying a Support Identification Property (SIP) in order to obtain strong sparse approximation guarantees. These new CoSaMP variants are then proven to have both runtime and recovery error behavior which are largely determined by the associated runtime and error behavior of the chosen support identification method. 
The main theoretical results of the paper are then shown by developing a sublinear-time support identification algorithm for general BOPB sets which is robust to arbitrary additive errors. Using this new support identification method to create a new CoSaMP variant then results in a new robust sublinear-time compressive sensing algorithm for BOPB-compressible functions of many variables."}, "keywords": ["LAD-LASSO loss function"], "citation_intent": "method"} {"citing_id": "2303.09128v1", "cited_id": "1910.10683", "section_title": "Models", "citation": "We have experimented with two large language models for code: (1) CodeT5, which is an encoder-decoder model based on T5 #REFR and (2) Codex, which is a decoder-only model based on GPT-3 (Brown et al., 2020).", "text_before_citation": [], "text_after_citation": ["Both T5 and GPT-3 have strong zero-shot learning #OTHEREFR and transfer learning capabilities, and their versions for programming languages exhibit strong performance across multiple benchmarks.", "The two models are of different sizes: CodeT5 uses the 700M-parameter T5-large architecture, and the Codex model uses a GPT-3 architecture with more than 100B parameters.", "We have provided a more detailed discussion of these models in the Appendix, Section 7.3.", "Next, we describe the methods we have employed to improve out-of-domain generalization for each model."], "citing_paper_content": {"title": "Exploring Distributional Shifts In Large Language Models For Code Analysis", "abstract": "We systematically study the capacity of two large language models for code, CodeT5 and Codex, to generalize to out-of-domain data. In this study, we consider two fundamental applications: code summarization and code generation. We split data into domains following its natural boundaries: by an organization, by a project, and by a module within the software project. This makes recognition of in-domain vs out-of-domain data at the time of deployment trivial. 
We establish that samples from each new domain present both models with a significant challenge of distribution shift. We study how well different established methods can adapt models to better generalize to new domains. Our experiments show that while multitask learning alone is a reasonable baseline, combining it with few-shot finetuning on examples retrieved from training data can achieve very strong performance. In fact, according to our experiments, this solution can outperform direct finetuning for very low-data scenarios. Finally, we consider variations of this approach to create a more broadly applicable method to adapt to multiple domains at once. We find that in the case of code generation, a model adapted to multiple domains simultaneously performs on par with those adapted to each domain individually."}, "cited_paper_content": {"title": "Exploring The Limits Of Transfer Learning With A Unified Text-To-Text Transformer", "abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. 
To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code."}, "keywords": ["two large language", "decoder"], "citation_intent": "method"} {"citing_id": "2303.16206v1", "cited_id": "2003.12039", "section_title": "Related Work", "citation": "Most similar to our approach is arguably RAFT #REFR , which poses optical flow estimation as an optimization problem and uses a recurrent neural network to find an optimal descent direction.", "text_before_citation": ["Learning to Optimize.", "Recent methods have investigated incorporating optimization problems into neural network architectures #OTHEREFR .", "These methods design custom neural network layers that mimic certain canonical optimization problems.", "As a result, parametrized optimization problems can be inserted into neural networks.", "Not all optimization algorithms can easily be written as a layer, and another strategy is to train a network to learn iterative updates from data (Adler & \u00d6ktem, 2018; #OTHEREFR)."], "text_after_citation": ["In a similar vein, we aim to learn the steganography optimization problem with a convolutional neural network without specially designed optimization layers."], "citing_paper_content": {"title": "Learning Iterative Neural Optimizers For Image Steganography", "abstract": "Image steganography is the process of concealing secret information in images through imperceptible changes. Recent work has formulated this task as a classic constrained optimization problem. In this paper, we argue that image steganography is inherently performed on the (elusive) manifold of natural images, and propose an iterative neural network trained to perform the optimization steps. In contrast to classical optimization methods like L-BFGS or projected gradient descent, we train the neural network to also stay close to the manifold of natural images throughout the optimization. 
We show that our learned neural optimization is faster and more reliable than classical optimization approaches. In comparison to previous state-of-the-art encoder-decoder based steganography methods, it reduces the recovery error rate by multiple orders of magnitude and achieves zero error up to 3 bits per pixel (bpp) without the need for error-correcting codes."}, "cited_paper_content": {"title": "Raft: Recurrent All-Pairs Field Transforms For Optical Flow", "abstract": "We introduce Recurrent All-Pairs Field Transforms (RAFT), a new deep network architecture for optical flow. RAFT extracts per-pixel features, builds multi-scale 4D correlation volumes for all pairs of pixels, and iteratively updates a flow field through a recurrent unit that performs lookups on the correlation volumes. RAFT achieves state-of-the-art performance, with strong cross-dataset generalization and high efficiency in inference time, training speed, and parameter count. Code is available \\url{https://github.com/princeton-vl/RAFT}."}, "keywords": ["recurrent neural network", "optical flow estimation"], "citation_intent": "method"} {"citing_id": "2304.12012v1", "cited_id": "1602.05629", "section_title": "Introduction", "citation": "Under certain conditions #REFR , this procedure is guaranteed to converge to a final global model representing an optimal consensus among the hospitals participating in the experiment.", "text_before_citation": ["The need for large amounts of data to develop Artificial Intelligence (AI) in healthcare has motivated a number of national and international initiatives aimed at creating medical data lakes accessible to researchers, such as the French Health Data Hub #OTHEREFR , the UK BioBank #OTHEREFR , the US ADNI #OTHEREFR and TCGA #OTHEREFR , among the many #OTHEREFR .", "In spite of these initiatives, there are still major bottlenecks preventing the widespread availability of large centralized repositories of healthcare information #OTHEREFR .", "To overcome 
these limitations, Federated Learning (FL) has been proposed as a working paradigm to enable the training of ML models on large datasets from diverse sources while guaranteeing respect for data privacy and governance.", "The basic paradigm of FL consists of iterating the following steps: i) model training is performed locally in the hospitals starting from a common initialization, ii) the resulting model parameters are subsequently shared (instead of the data) and aggregated, to define a global model, which is iii) transmitted back to the hospitals to initiate a new local training step."], "text_after_citation": ["FL is particularly suited for applications in sensitive domains, such as healthcare and biomedical research #OTHEREFR .", "The current societal and economic interest in FL for healthcare is paramount [56, #OTHEREFR], as demonstrated by the several large-scale medical research projects based on FL at the national and international level, focusing for example on rare hematological diseases 1 , drug development 2 , blood cancer #OTHEREFR , among the many #OTHEREFR .", "In spite of the current popularity, the real-world implementation of FL is complex and requires research and development actions at the crossroad between different domains spanning data science, software programming, networking, and security.", "Today, several FL software frameworks are currently being proposed to data scientists and users, based on different design spaces, goals, and with varying degrees of software maturity.", "Nevertheless, most of these frameworks are not designed to find seamless application in medical use-cases, due to the specific challenges and requirements of working with medical data and hospital infrastructures."], "citing_paper_content": {"title": "Fed-Biomed: Open, Transparent And Trusted Federated Learning For Real-World Healthcare Applications", "abstract": "The real-world implementation of federated learning is complex and requires research and development actions at 
the crossroad between different domains ranging from data science to software programming, networking, and security. While today several FL libraries are proposed to data scientists and users, most of these frameworks are not designed to find seamless application in medical use-cases, due to the specific challenges and requirements of working with medical data and hospital infrastructures. Moreover, governance, design principles, and security assumptions of these frameworks are generally not clearly illustrated, thus preventing their adoption in sensitive applications. Motivated by the current technological landscape of FL in healthcare, in this document we present"}, "cited_paper_content": {"title": "Communication-Efficient Learning Of Deep Networks From Decentralized Data", "abstract": "Modern mobile devices have access to a wealth of data suitable for learning models, which in turn can greatly improve the user experience on the device. For example, language models can improve speech recognition and text entry, and image models can automatically select good photos. However, this rich data is often privacy sensitive, large in quantity, or both, which may preclude logging to the data center and training there using conventional approaches. We advocate an alternative that leaves the training data distributed on the mobile devices, and learns a shared model by aggregating locally-computed updates. We term this decentralized approach Federated Learning. We present a practical method for the federated learning of deep networks based on iterative model averaging, and conduct an extensive empirical evaluation, considering five different model architectures and four datasets. These experiments demonstrate the approach is robust to the unbalanced and non-IID data distributions that are a defining characteristic of this setting. 
Communication costs are the principal constraint, and we show a reduction in required communication rounds by 10-100x as compared to synchronized stochastic gradient descent."}, "keywords": ["hospitals", "optimal consensus"], "citation_intent": "method"} {"citing_id": "2303.00516v1", "cited_id": "1511.06581", "section_title": "Interaction Task", "citation": "Since M_0 now includes continuous features, we adopt Dueling DQN #REFR , a Deep RL algorithm.", "text_before_citation": ["In addition, we consider a ground MDP M_{0,d} in which the robot movements are modelled using continuous features.", "The state space S_0 now contains continuous vectors (x, y, \u03b8, v), representing the pose and velocity of the agent's mobile base on the plane.", "The discrete set of actions A_0 allows the agent to accelerate, decelerate, and rotate, and a special action denotes the initiation of an interaction.", "There exist goal MDPs M_2, M_1, M_0 that capture both the dynamics and the task defined above, which can be obtained through a suitable composition of each M_{i,d} and D #OTHEREFR .", "Therefore, we can still apply our technique to the composed goal MDP."], "text_after_citation": ["The plot in Figure 6 shows a training comparison between the Dueling DQN agent alone (dot-dashed brown), and Dueling DQN receiving rewards from the grid abstraction (green).", "As we can see, our method allows us to provide useful exploration bias even in the case of extremely sparse goal states, as in this case."], "citing_paper_content": {"title": "Exploiting Multiple Abstractions In Episodic Rl Via Reward Shaping", "abstract": "One major limitation to the applicability of Reinforcement Learning (RL) to many practical domains is the large number of samples required to learn an optimal policy. To address this problem and improve learning efficiency, we consider a linear hierarchy of abstraction layers of the Markov Decision Process (MDP) underlying the target domain. 
Each layer is an MDP representing a coarser model of the one immediately below in the hierarchy. In this work, we propose a novel form of Reward Shaping where the solution obtained at the abstract level is used to offer rewards to the more concrete MDP, in such a way that the abstract solution guides the learning in the more complex domain. In contrast with other works in Hierarchical RL, our technique has few requirements in the design of the abstract models and it is also tolerant to modeling errors, thus making the proposed approach practical. We formally analyze the relationship between the abstract models and the exploration heuristic induced in the lower-level domain. Moreover, we prove that the method guarantees optimal convergence and we demonstrate its effectiveness experimentally."}, "cited_paper_content": {"title": "Dueling Network Architectures For Deep Reinforcement Learning", "abstract": "In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. 
Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain."}, "keywords": ["Deep RL algorithm", "Dueling DQN"], "citation_intent": "method"} {"citing_id": "2303.01068v1", "cited_id": "1312.6199", "section_title": "Introduction", "citation": "In spite of the impressive performance of Deep Neural Networks (DNNs) in different fields, from computer vision to Natural Language Processing, these models are shown to be vulnerable to adversarial attacks, i.e., small perturbations of the input data #REFR .", "text_before_citation": [], "text_after_citation": ["In recent years many works have been proposed to evaluate the robustness of DNN models, and design methods to make them more robust to perturbations of their inputs #OTHEREFR .", "Neural Machine Translation (NMT) models, which take an input sentence and automatically generate its translation, have reached impressive performance by using DNN models such as transformers #OTHEREFR .", "Due to their performance, NMT models are widely used in different applications.", "However, faulty outputs of such models may pose serious threats, especially in security-important applications.", "Adversarial attacks against NMT models have been studied in the recent literature."], "citing_paper_content": {"title": "Targeted Adversarial Attacks Against Neural Machine Translation", "abstract": "Neural Machine Translation (NMT) systems are used in various applications. However, it has been shown that they are vulnerable to very small perturbations of their inputs, known as adversarial attacks. In this paper, we propose a new targeted adversarial attack against NMT models. In particular, our goal is to insert a predefined target keyword into the translation of the adversarial sentence while maintaining similarity between the original sentence and the perturbed one in the source domain. 
To this aim, we propose an optimization problem, including an adversarial loss term and a similarity term. We use gradient projection in the embedding space to craft an adversarial sentence. Experimental results show that our attack outperforms Seq2Sick, the other targeted adversarial attack against NMT models, in terms of success rate and decrease in translation quality. Our attack succeeds in inserting a keyword into the translation for more than 75% of sentences while similarity with the original sentence stays preserved 1."}, "cited_paper_content": {"title": "Intriguing Properties Of Neural Networks", "abstract": "Deep neural networks are highly expressive models that have recently achieved state-of-the-art performance on speech and visual recognition tasks. While their expressiveness is the reason they succeed, it also causes them to learn uninterpretable solutions that could have counter-intuitive properties. In this paper we report two such properties. First, we find that there is no distinction between individual high level units and random linear combinations of high level units, according to various methods of unit analysis. It suggests that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks. Second, we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. We can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. 
In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network that was trained on a different subset of the dataset to misclassify the same input."}, "keywords": ["adversarial attacks", "Deep Neural Networks"], "citation_intent": "background"} {"citing_id": "2303.10385v1", "cited_id": "1706.03762", "section_title": "C. Vector Encoder", "citation": "We use a transformer #REFR encoder composed of 6 layers to model the high-order interactions on these features.", "text_before_citation": ["1) Polyline Encoder: All the three types of polylines are encoded by a shared polyline encoder based on an attention mechanism. As illustrated in Fig.", "3, given a polyline, after an MSA layer, the information of all other tokens is aggregated on the learnable token, which is treated as the polyline feature h_P.", "P = {v_1, v_2, \u2022\u2022\u2022, v", "2) Interaction-Aware Transformer: Social interaction modeling is a standard method to capture the interactions between agents #OTHEREFR , #OTHEREFR .", "We go one step further by jointly modeling the interactions between agents, scene context, and occlusion for the inference task."], "text_after_citation": ["Each transformer layer consists of an MSA block followed by a position-wise fully connected feedforward network (FFN).", "A residual connection is added after each block, followed by an LN."], "citing_paper_content": {"title": "Social Occlusion Inference With Vectorized Representation For Autonomous Driving", "abstract": "Autonomous vehicles must be capable of handling the occlusion of the environment to ensure safe and efficient driving. In urban environments, occlusion often arises due to other vehicles obscuring the perception of the ego vehicle. Since the occlusion condition can impact the trajectories of vehicles, the behavior of other vehicles is helpful in making inferences about the occlusion as a remedy for perceptual deficiencies. 
This paper introduces a novel social occlusion inference approach that learns a mapping from agent trajectories and scene context to an occupancy grid map (OGM) representing the view of the ego vehicle. Specifically, vectorized features are encoded through the polyline encoder to aggregate features of vectors into features of polylines. A transformer module is then utilized to model the high-order interactions of polylines. Importantly, occlusion queries are proposed to fuse polyline features and generate the OGM without the input of the visual modality. To verify the performance of the vectorized representation, we design a baseline based on a fully transformer encoder-decoder architecture mapping the OGM with occlusion and historical trajectory information to the ground truth OGM. We evaluate our approach on an unsignalized intersection in the INTERACTION dataset, which outperforms the state-of-the-art results."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. 
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["transformer encoder"], "citation_intent": "method"} {"citing_id": "2303.00191v1", "cited_id": "1504.04909", "section_title": "Definition.", "citation": "The vanilla MAP-Elites #REFR mutates randomly sampled solutions in the archive with a genetic operator; generated solutions are added to the archive if their objective value exceeds that of the solution currently occupying their corresponding archive cell.", "text_before_citation": ["The QD objective is to find, for each \u2208 , a solution \u2208 R such that ( ) = and ( ) is maximized:", "EQUATION", "However, since is continuous, this objective would require infinite memory to solve, so we relax the QD objective to finding an archive (i.e., a finite set) of representative solutions \u0398 \u2286 R .", "A special case of QD is differentiable quality diversity (DQD) #OTHEREFR , where the objective and measure functions are first-order differentiable with gradients \u2207 and \u2207 .", "Algorithms based on MAP-Elites #OTHEREFR tessellate the measure space into cells, and \u0398 is constrained such that each of its solutions falls into a different cell of the tessellation based on its measure values."], "text_after_citation": ["Since its inception, MAP-Elites extensions have included new genetic operators, such as the Iso+LineDD operator inspired by crossover #OTHEREFR , as well as new methods for tessellating the measure space to create the archive.", "For example, MAP-Elites with Sliding Boundaries (MESB) adapts the size of grid cells online to reflect the distribution of solutions in measure space #OTHEREFR , while CVT-MAP-Elites #OTHEREFR precomputes a centroidal Voronoi tessellation (CVT) #OTHEREFR of the measure space that defines the archive cells.", "Algorithms based on Novelty Search #OTHEREFR maintain an unstructured archive 
where each solution must be novel by being a certain distance away from its nearest neighbors in measure space.", "A genetic algorithm then optimizes a population of solutions to achieve further novelty.", "While Novelty Search itself is a purely diversity-driven approach, many of its successors are designed for QD; for instance, Novelty Search with Local Competition (NSLC) #OTHEREFR balances between optimizing for the objective and novelty via multi-objective evolutionary algorithms. QD algorithms have started to incorporate modern optimization algorithms."], "citing_paper_content": {"title": "Pyribs: A Bare-Bones Python Library For Quality Diversity Optimization", "abstract": "Recent years have seen a rise in the popularity of quality diversity (QD) optimization, a branch of optimization that seeks to find a collection of diverse, high-performing solutions to a given problem. To grow further, we believe the QD community faces two challenges: developing a framework to represent the field's growing array of algorithms, and implementing that framework in software that supports a range of researchers and practitioners. To address these challenges, we have developed pyribs, a library built on a highly modular conceptual QD framework. By replacing components in the conceptual framework, and hence in pyribs, users can compose algorithms from across the QD literature; equally important, they can identify unexplored algorithm variations. Furthermore, pyribs makes this framework simple, flexible, and accessible, with a userfriendly API supported by extensive documentation and tutorials. This paper overviews the creation of pyribs, focusing on the conceptual framework that it implements and the design principles that have guided the library's development. 
Pyribs is available at https://pyribs.org."}, "cited_paper_content": {"title": "Illuminating Search Spaces By Mapping Elites", "abstract": "Many fields use search algorithms, which automatically explore a search space to find high-performing solutions: chemists search through the space of molecules to discover new drugs; engineers search for stronger, cheaper, safer designs, scientists search for models that best explain data, etc. The goal of search algorithms has traditionally been to return the single highest-performing solution in a search space. Here we describe a new, fundamentally different type of algorithm that is more useful because it provides a holistic view of how high-performing solutions are distributed throughout a search space. It creates a map of high-performing solutions at each point in a space defined by dimensions of variation that a user gets to choose. This Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) algorithm illuminates search spaces, allowing researchers to understand how interesting attributes of solutions combine to affect performance, either positively or, equally of interest, negatively. For example, a drug company may wish to understand how performance changes as the size of molecules and their cost-to-produce vary. MAP-Elites produces a large diversity of high-performing, yet qualitatively different solutions, which can be more helpful than a single, high-performing solution. Interestingly, because MAP-Elites explores more of the search space, it also tends to find a better overall solution than state-of-the-art search algorithms. We demonstrate the benefits of this new algorithm in three different problem domains ranging from producing modular neural networks to designing simulated and real soft robots. 
Because MAP-Elites (1) illuminates the relationship between performance and dimensions of interest in solutions, (2) returns a set of high-performing, yet diverse solutions, and (3) improves finding a single, best solution, it will advance science and engineering."}, "keywords": ["objective value", "genetic operator"], "citation_intent": "method"} {"citing_id": "2303.00897v1", "cited_id": "1806.00582", "section_title": "Preliminaries", "citation": "Based on the findings with Non-IID data #REFR , we expect datasets with similar data distributions to provide similar \u03a8(\u2022) values.", "text_before_citation": ["The federated client clustering results, i.e., the subjective term in Equation 1, determine the performance of CFL to some extent.", "Therefore, Equation 1 motivates that the federated client clustering is a vital component in CFL.", "We design a distribution extractor function \u03a8(D) = Normalize(\u2202\u2113(\u03c8;D)/\u2202\u03c8), which indicates the updated direction toward the local minimum corresponding to the input dataset D, anchor model \u03c8 and loss function \u2113.", "We do not optimize the anchor model \u03c8 and maintain the loss function constant across all datasets in our implementations.", "As a result, the \u03a8(\u2022) output can be viewed as a representation of data distribution corresponding to the input dataset."], "text_after_citation": ["Then, we use cosine similarity to evaluate the distribution similarity of the two decentralized datasets, i.e., given any two unknown datasets D_i, D_j, the similarity is determined as:", "cos(\u03a8(D_i), \u03a8(D_j)) = (\u03a8(D_i) \u2022 \u03a8(D_j)) / (\u2016\u03a8(D_i)\u2016 \u2016\u03a8(D_j)\u2016).", "To better support our assumptions, we implement observation experiments on cosine similarity, as shown in Figure 2 , in which we augment the MNIST/Fashion-MNIST datasets and partition them with varying levels of augmentation.", "The results reveal a significant difference in cosine similarity values.", "As a result, we 
conclude that \u03a8(\u2022) could represent a local data distribution, and clients with similar local data distributions (at both label and feature levels) have higher cosine similarity."], "citing_paper_content": {"title": "Stochastic Clustered Federated Learning", "abstract": "Federated learning is a distributed learning framework that takes full advantage of private data samples kept on edge devices. In real-world federated learning systems, these data samples are often decentralized and Non-Independently Identically Distributed (Non-IID), causing divergence and performance degradation in the federated learning process. As a new solution, clustered federated learning groups federated clients with similar data distributions to impair the Non-IID effects and train a better model for every cluster. This paper proposes StoCFL, a novel clustered federated learning approach for generic Non-IID issues. In detail, StoCFL implements a flexible CFL framework that supports an arbitrary proportion of client participation and newly joined clients for a varying FL system, while maintaining a great improvement in model performance. The intensive experiments are conducted by using four basic Non-IID settings and a real-world dataset. The results show that StoCFL could obtain promising cluster results even when the number of clusters is unknown. Based on the client clustering results, models trained with StoCFL outperform baseline approaches in a variety of contexts."}, "cited_paper_content": {"title": "Federated Learning With Non-Iid Data", "abstract": "Federated learning enables resource-constrained edge compute devices, such as mobile phones and IoT devices, to learn a shared model for prediction, while keeping the training data local. This decentralized approach to train models provides privacy, security, regulatory and economic benefits. In this work, we focus on the statistical challenge of federated learning when local data is non-IID. 
We first show that the accuracy of federated learning reduces significantly, by up to 55% for neural networks trained for highly skewed non-IID data, where each client device trains only on a single class of data. We further show that this accuracy reduction can be explained by the weight divergence, which can be quantified by the earth mover's distance (EMD) between the distribution over classes on each device and the population distribution. As a solution, we propose a strategy to improve training on non-IID data by creating a small subset of data which is globally shared between all the edge devices. Experiments show that accuracy can be increased by 30% for the CIFAR-10 dataset with only 5% globally shared data."}, "keywords": ["datasets"], "citation_intent": "result"} {"citing_id": "2303.05245v1", "cited_id": "1907.11346", "section_title": "Prior Work", "citation": "For example, the errors of single-view methods which estimate the position of people relative to the camera are on the order of 120mm, where 100mm is in the depth direction #REFR . 
This is consistent with the presence of scale/depth ambiguity.", "text_before_citation": ["When estimating 3d position from camera data there is an inherent ambiguity due to scale/depth ambiguity.", "Many interesting types of objects have a scale ambiguity, such as people, cars and animals.", "The existence of scale ambiguity is reflected in the performance on competitive datasets."], "text_after_citation": ["In the multi view setting where the depth ambiguity can be eliminated the state of the art is on the order of 17mm #OTHEREFR .", "There are many ways to resolve the scale/depth ambiguity.", "One way to solve the problem is to add a depth sensor such as a lidar #OTHEREFR or structured light #OTHEREFR .", "However, adding additional sensors comes with several drawbacks such as price, complexity, and range, among others.", "For these reasons we will focus on methods which only use camera data."], "citing_paper_content": {"title": "Probabilistic 3D Regression With Projected Huber Distribution", "abstract": "Estimating probability distributions which describe where an object is likely to be from camera data is a task with many applications. In this work we describe properties which we argue such methods should conform to. We also design a method which conforms to these properties. In our experiments we show that our method produces uncertainties which correlate well with empirical errors. We also show that the mode of the predicted distribution outperforms our regression baselines. The code for our implementation is available online. Figure 1: Overview of our method. Our method takes an image as input and the focal length which was used to capture the scene and produces a log concave probability distribution in world coordinates in a way which can model the ambiguities which are inherent for camera sensors. 
On the left we have visualized the level curves in projected coordinates with the projected ground truth center of the cylinder. On the right we show the level curves of the distribution from a bird's-eye view. The estimated mode is shown as a cross and the ground truth position is a dot."}, "cited_paper_content": {"title": "Camera Distance-Aware Top-Down Approach For 3D Multi-Person Pose Estimation From A Single Rgb Image", "abstract": "Although significant improvement has been achieved recently in 3D human pose estimation, most of the previous methods only treat a single-person case. In this work, we firstly propose a fully learning-based, camera distance-aware top-down approach for 3D multi-person pose estimation from a single RGB image. The pipeline of the proposed system consists of human detection, absolute 3D human root localization, and root-relative 3D single-person pose estimation modules. Our system achieves comparable results with the state-of-the-art 3D single-person pose estimation models without any ground-truth information and significantly outperforms previous 3D multi-person pose estimation methods on publicly available datasets. 
The code is available in this https URL, this https URL."}, "keywords": ["scale/depth ambiguity", "position"], "citation_intent": "result"} {"citing_id": "2303.12671v1", "cited_id": "1405.0312", "section_title": "Existing Datasets And Methods For Visual Question Answering", "citation": "The Microsoft COCO dataset #REFR is one of the large-scale datasets that impact many studies in computer vision tasks, including object detection, image classification, image captioning, and visual question answering.", "text_before_citation": ["In computer vision, the research purpose for VQA is to make computers understand the semantic context of images."], "text_after_citation": ["Several VQA datasets are built on the MS-COCO in different languages, such as the VQA #OTHEREFR , VQAv2 #OTHEREFR in English, FM-IQA #OTHEREFR for Chinese, the Japanese VQA #OTHEREFR for Japanese, and the ViVQA #OTHEREFR for Vietnamese.", "There are also two other benchmark datasets for training and fine-tuning VQA methods, including Visual Genome (VG-QA) [15] and GQA #OTHEREFR . VG-QA is a VQA dataset that contains real-world photographs.", "It is designed and constructed to emphasize the interactions and relationship between natural questions and particular regions on the images.", "The creation of VG-QA lays the groundwork for building GQA, another large VQA collection that makes use of Visual Genome scene graph structures to feature compositional question answering and real world reasoning.", "Besides, in the natural language processing field, the SQuAD dataset #OTHEREFR has boosted many studies in question-answering and natural language understanding."], "citing_paper_content": {"title": "Integrating Image Features With Convolutional Sequence-To-Sequence Network For Multilingual Visual Question Answering", "abstract": "Visual Question Answering (VQA) is a task that requires computers to give correct answers for the input questions based on the images. 
This task can be solved by humans with ease but is a challenge for computers. The VLSP2022-EVJVQA shared task carries the Visual Question Answering task in the multilingual domain on a newly released dataset: UIT-EVJVQA, in which the questions and answers are written in three different languages: English, Vietnamese and Japanese. We approached the challenge as a sequence-to-sequence learning task, in which we integrated hints from pre-trained state-of-the-art VQA models and image features with Convolutional Sequence-to-Sequence network to generate the desired answers. Our results obtained up to 0.3442 by F1 score on the public test set, 0.4210 on the private test set, and placed 3rd in the competition."}, "cited_paper_content": {"title": "Microsoft Coco: Common Objects In Context", "abstract": "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 object types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model."}, "keywords": ["Microsoft COCO dataset"], "citation_intent": "background"} {"citing_id": "2303.12304v1", "cited_id": "1812.11703", "section_title": "I. 
Introduction", "citation": "DW-Xcorr #REFR is a common similarity calculation method in the currently popular siamese network based trackers.", "text_before_citation": ["Specifically, in the process of similarity calculation, the generated similarity response map is difficult to focus on the target region, which will directly affect the effectiveness of feature decoding in the subsequent tracking head.", "Secondly, the classification branch and the regression branch in the tracking head are separate in processing the task.", "Specifically, the classification branch is responsible for distinguishing the target from the background, while the regression branch is responsible for locating the bounding box of all positive samples and does not consider the classification information.", "It results in the accuracy misalignment between the output feature maps.", "On the one hand, most of the methods #OTHEREFR [13] #OTHEREFR treat each channel in the feature map equally in the process of channel downscaling, making it difficult to focus the similarity response maps on the target region."], "text_after_citation": ["It convolves the two feature maps extracted by the siamese network channel by channel and outputs the final similarity response map.", "The similarity response map has a feature that objects of the same category have a higher response on the same channel, while the response of other channels is suppressed.", "However, existing siamese network based trackers usually use a modified ResNet-50 as the feature extraction network.", "As a result, the number of channels of the final output feature map is too large, which leads to an elevated computational effort and makes it difficult to meet the real-time requirements of object tracking.", "In order to reduce the computational effort, they #OTHEREFR [13] #OTHEREFR use 1\u00d71 convolution to decrease the feature map's channels."], "citing_paper_content": {"title": "Siamthn: Siamese Target Highlight Network For Visual 
Tracking", "abstract": "Siamese network based trackers develop rapidly in the field of visual object tracking in recent years. The majority of siamese network based trackers now in use treat each channel in the feature maps generated by the backbone network equally, making the similarity response map sensitive to background influence and hence challenging to focus on the target region. Additionally, there are no structural links between the classification and regression branches in these trackers, and the two branches are optimized separately during training. Therefore, there is a misalignment between the classification and regression branches, which results in less accurate tracking results. In this paper, a Target Highlight Module is proposed to help the generated similarity response maps to be more focused on the target region. To reduce the misalignment and produce more precise tracking results, we propose a corrective loss to train the model. The two branches of the model are jointly tuned with the use of corrective loss to produce more reliable prediction results. Experiments on 5 challenging benchmark datasets reveal that the method outperforms current models in terms of performance, and runs at 38 fps, proving its effectiveness and efficiency."}, "cited_paper_content": {"title": "Siamrpn++: Evolution Of Siamese Visual Tracking With Very Deep Networks", "abstract": "Siamese network based trackers formulate tracking as convolutional feature cross-correlation between target template and searching region. However, Siamese trackers still have accuracy gap compared with state-of-the-art algorithms and they cannot take advantage of feature from deep networks, such as ResNet-50 or deeper. In this work we prove the core reason comes from the lack of strict translation invariance. 
By comprehensive theoretical analysis and experimental validations, we break this restriction through a simple yet effective spatial aware sampling strategy and successfully train a ResNet-driven Siamese tracker with significant performance gain. Moreover, we propose a new model architecture to perform depth-wise and layer-wise aggregations, which not only further improves the accuracy but also reduces the model size. We conduct extensive ablation studies to demonstrate the effectiveness of the proposed tracker, which obtains currently the best results on four large tracking benchmarks, including OTB2015, VOT2018, UAV123, and LaSOT. Our model will be released to facilitate further studies based on this problem."}, "keywords": ["network based trackers", "currently popular siamese"], "citation_intent": "method"} {"citing_id": "2303.16406v1", "cited_id": "1906.03327", "section_title": "Dataset Analysis", "citation": "Our videos and text queries are collected from the HowTo100M #REFR dataset, and hence our category labels match theirs. As shown in Fig.", "text_before_citation": ["Task Category Distribution."], "text_after_citation": ["2 , the most frequently occurring categories (for all text-video pairs and just videos with step captions) are \"Hobbies and Crafts\", \"Food and Entertaining\", and \"Home and Garden\".", "While these are the most common categories (similar to HowTo100M's most common categories), other categories still have a presence in our dataset.", "Dataset Statistics.", "We collected a total of 3.4K text-video pairs, which are 287 seconds long on average, with a total duration of 270 hours.", "Out of 3.4K videos, 1.8K videos are clippable to a moment; i.e., only a short clip (<75% of the original video) is relevant to the text query."], "citing_paper_content": {"title": "Hierarchical Video-Moment Retrieval And Step-Captioning", "abstract": "There is growing interest in searching for information from large video corpora. 
Prior works have studied relevant tasks, such as text-based video retrieval, moment retrieval, video summarization, and video captioning in isolation, without an end-to-end setup that can jointly search from video corpora and generate summaries. Such an end-to-end setup would allow for many interesting applications, e.g., a text-based search that finds a relevant video from a video corpus, extracts the most relevant moment from that video, and segments the moment into important steps with captions. To address this, we present the HIREST (HIerarchical REtrieval and STep-captioning) dataset and propose a new benchmark that covers hierarchical information retrieval and visual/textual stepwise summarization from an instructional video corpus. HIREST consists of 3.4K text-video pairs from an instructional video dataset, where 1.1K videos have annotations of moment spans relevant to text query and breakdown of each moment into key instruction steps with caption and timestamps (totaling 8.6K step captions). Our hierarchical benchmark consists of video retrieval, moment retrieval, and two novel moment segmentation and step captioning tasks. In moment segmentation, models break down a video moment into instruction steps and identify start-end boundaries. In step captioning, models generate a textual summary for each step. We also present starting point task-specific and end-to-end joint baseline models for our new benchmark. While the baseline models show some promising results, there still exists large room for future improvement by the community."}, "cited_paper_content": {"title": "Howto100M: Learning A Text-Video Embedding By Watching Hundred Million Narrated Video Clips", "abstract": "Learning text-video embeddings usually requires a dataset of video clips with manually provided captions. However, such datasets are expensive and time consuming to create and therefore difficult to obtain on a large scale. 
In this work, we propose instead to learn such embeddings from video data with readily available natural language annotations in the form of automatically transcribed narrations. The contributions of this work are three-fold. First, we introduce HowTo100M: a large-scale dataset of 136 million video clips sourced from 1.22M narrated instructional web videos depicting humans performing and describing over 23k different visual tasks. Our data collection procedure is fast, scalable and does not require any additional manual annotation. Second, we demonstrate that a text-video embedding trained on this data leads to state-of-the-art results for text-to-video retrieval and action localization on instructional video datasets such as YouCook2 or CrossTask. Finally, we show that this embedding transfers well to other domains: fine-tuning on generic Youtube videos (MSR-VTT dataset) and movies (LSMDC dataset) outperforms models trained on these datasets alone. Our dataset, code and models will be publicly available at: www.di.ens.fr/willow/research/howto100m/."}, "keywords": ["videos"], "citation_intent": "method"} {"citing_id": "2303.16178v1", "cited_id": "1902.01954", "section_title": "C. Baselines", "citation": "For our implementation, we use GRUs instead of LSTMs because they are much faster while providing comparable performance #REFR .", "text_before_citation": ["We use six baselines in this paper.", "We chose each baseline because it represents a family of similar approaches or is a well-cited approach used as a baseline in many papers.", "attendgru This baseline is a simple unidirectional RNNbased attentional neural encoder-decoder architecture.", "It takes only source code tokens as encoder input and English comment as decoder input. 
It was first introduced by Iyer et al.", "#OTHEREFR as an off-the-shelf NMT/NLG approach to generate source code summaries."], "text_after_citation": ["transformer This baseline is another simple encoderdecoder architecture, but it replaces the recurrent layers with stacked muti-head attention layers #OTHEREFR .", "As mentioned in Section II-B, transformers introduce a position embedding layer that captures the sequential order of tokens which allows the multi-head attention layer to process the entire sequence at the same time.", "On the encoder side, the multi-head attention layer computes dot-product based self-attention on the source code tokens.", "On the decoder side, there are two multi-head attention layers: a masked multi-head attention layer that computes self-attention on the comment tokens followed by a regular multi-head attention layer that computes attention between the encoder and the masked attention layer.", "ast-attendgru This baseline is an enhancement over the attendgru model by including AST information on the encoder side along with source code tokens."], "citing_paper_content": {"title": "Label Smoothing Improves Neural Source Code Summarization", "abstract": "Label smoothing is a regularization technique for neural networks. Normally neural models are trained to an output distribution that is a vector with a single 1 for the correct prediction, and 0 for all other elements. Label smoothing converts the correct prediction location to something slightly less than 1, then distributes the remainder to the other elements such that they are slightly greater than 0. A conceptual explanation behind label smoothing is that it helps prevent a neural model from becoming \"overconfident\" by forcing it to consider alternatives, even if only slightly. Label smoothing has been shown to help several areas of language generation, yet typically requires considerable tuning and testing to achieve the optimal results. 
This tuning and testing has not been reported for neural source code summarization, a growing research area in software engineering that seeks to generate natural language descriptions of source code behavior. In this paper, we demonstrate the effect of label smoothing on several baselines in neural code summarization, and conduct an experiment to find good parameters for label smoothing and make recommendations for its use."}, "cited_paper_content": {"title": "A Neural Model For Generating Natural Language Summaries Of Program Subroutines", "abstract": "Source code summarization -- creating natural language descriptions of source code behavior -- is a rapidly-growing research topic with applications to automatic documentation generation, program comprehension, and software maintenance. Traditional techniques relied on heuristics and templates built manually by human experts. Recently, data-driven approaches based on neural machine translation have largely overtaken template-based systems. But nearly all of these techniques rely almost entirely on programs having good internal documentation; without clear identifier names, the models fail to create good summaries. In this paper, we present a neural model that combines words from code with code structure from an AST. Unlike previous approaches, our model processes each data source as a separate input, which allows the model to learn code structure independent of the text in code. This process helps our approach provide coherent summaries in many cases even when zero internal documentation is provided. We evaluate our technique with a dataset we created from 2.1m Java methods. 
We find improvement over two baseline techniques from SE literature and one from NLP literature."}, "keywords": ["LSTMs"], "citation_intent": "method"} {"citing_id": "2304.03730v1", "cited_id": "1909.04054", "section_title": "Dialog Encoder", "citation": "Our method to encode dialog data is inspired by the Sequential Sentence Classification (SCC) model #REFR which also is based on BERT encoder and organizes the dialog data in the hierarchical manner.", "text_before_citation": [", and regard it as the representation of the ith utterance.", "The representation of head token, T [CLS] , would be used in the following Gated Mechanism part as a kind of context information which encodes all the utterances.", "With the utterance representations of T i[SEP ] , the role {r 1 , ..., r i } and intent {e i , ..., e i } information (one-hot vectors) are concatenated to their corresponding utterance representations.", "After a MLP operation, we can obtain the final representation of each utterance by", "T new i[SEP ] = M LP (r i \u2295 e i \u2295 T i[SEP ] )."], "text_after_citation": ["However, our model is different from SCC model in two aspects.", "Firstly, the representation of head token [CLS] is additionally utilized as context information in the following modules.", "Secondly, our encoder can integrate the extra role and intent information."], "citing_paper_content": {"title": "Gated Mechanism Enhanced Multi-Task Learning For Dialog Routing", "abstract": "Currently, human-bot symbiosis dialog systems, e.g., pre-and after-sales in E-commerce, are ubiquitous, and the dialog routing component is essential to improve the overall efficiency, reduce human resource cost, and enhance user experience. Although most existing methods can fulfil this requirement, they can only model single-source dialog data and cannot effectively capture the underlying knowledge of relations among data and subtasks. 
In this paper, we investigate this important problem by thoroughly mining both the data-to-task and task-to-task knowledge among various kinds of dialog data. To achieve the above targets, we propose a Gated Mechanism enhanced Multi-task Model (G3M), specifically including a novel dialog encoder and two tailored gated mechanism modules. The proposed method can play the role of hierarchical information filtering and is non-invasive to existing dialog systems. Based on two datasets collected from real-world applications, extensive experimental results demonstrate the effectiveness of our method, which achieves the state-of-the-art performance by improving 8.7%/11.8% on the RMSE metric and 2.2%/4.4% on the F1 metric."}, "cited_paper_content": {"title": "Pretrained Language Models For Sequential Sentence Classification", "abstract": "As a step toward better document-level understanding, we explore classification of a sequence of sentences into their corresponding categories, a task that requires understanding sentences in context of the document. Recent successful models for this task have used hierarchical models to contextualize sentence representations, and Conditional Random Fields (CRFs) to incorporate dependencies between subsequent labels. In this work, we show that pretrained language models, BERT (Devlin et al., 2018) in particular, can be used for this task to capture contextual dependencies without the need for hierarchical encoding or a CRF. Specifically, we construct a joint sentence representation that allows BERT Transformer layers to directly utilize contextual information from all words in all sentences.
Our approach achieves state-of-the-art results on four datasets, including a new dataset of structured scientific abstracts."}, "keywords": ["dialog data", "Sequential Sentence Classification"], "citation_intent": "method"} {"citing_id": "2303.15472v1", "cited_id": "1805.09662", "section_title": "Computational Overhead And The Number Of Parameters", "citation": "When using our model with the deeper backbone denoted by ours \u2020, the number of model parameters increases, but it does not increase significantly compared to other comparison groups, and is still similar to that of LF-Net #REFR .", "text_before_citation": ["The right table shows the number of parameters in millions, where the first group (top) are descriptor-only models and the second group (bottom) are joint detection and description models.", "Our model in the first row has the second smallest model size among those of descriptor-only models."], "text_after_citation": [], "citing_paper_content": {"title": "Learning Rotation-Equivariant Features For Visual Correspondence", "abstract": "Extracting discriminative local features that are invariant to imaging variations is an integral part of establishing correspondences between images. In this work, we introduce a self-supervised learning framework to extract discriminative rotation-invariant descriptors using group-equivariant CNNs. Thanks to employing group-equivariant CNNs, our method effectively learns to obtain rotation-equivariant features and their orientations explicitly, without having to perform sophisticated data augmentations. The resultant features and their orientations are further processed by group aligning, a novel invariant mapping technique that shifts the group-equivariant features by their orientations along the group dimension. Our group aligning technique achieves rotation-invariance without any collapse of the group dimension and thus eschews loss of discriminability.
The proposed method is trained end-to-end in a self-supervised manner, where we use an orientation alignment loss for the orientation estimation and a contrastive descriptor loss for local descriptors robust to geometric/photometric variations. Our method demonstrates state-of-the-art matching accuracy among existing rotation-invariant descriptors under varying rotation and also shows competitive results when transferred to the task of keypoint matching and camera pose estimation."}, "cited_paper_content": {"title": "Lf-Net: Learning Local Features From Images", "abstract": "We present a novel strategy to learn a local feature pipeline from collections of images with deep networks, without the need for human supervision. To do so we frame the learning problem with a two-branch network. We posit that training both branches with a standard Siamese architecture is not feasible, as solving the correspondence problem jointly with feature learning is too challenging, and does not converge well enough to train from scratch. Instead, we propose to break differentiability on one branch and use ground-truth geometry to remove the burden of solving the correspondence problem while training, while keeping the other fully-differentiable. In order to train this setup with gradient-based methods, we optimize for the differentiable branch while using the parameters from the previous training step for the other, and demonstrate that both converge to a single, optimal solution. Our method can be trained with only the relative camera pose and depth information---furthermore, we show that this ground truth does not need to be perfect and can be easily obtained with off-the-shelf Structure-from-Motion solutions.
Our models outperform the state of the art on sparse feature matching on both indoor and outdoor datasets, while running at 60+ fps for QVGA images."}, "keywords": ["model", "LF-Net"], "citation_intent": "result"} {"citing_id": "2304.00595v1", "cited_id": "1912.01244", "section_title": "Vi. Solving The Conditions Of Optimality Using A Modified Physics Informed Neural Network", "citation": "IV, computing the uncontrolled PDFs for the deterministic dynamics (i.e., \u03b4 = 0) requires inverting #REFR .", "text_before_citation": ["Additionally, to satisfy compute constraints, we uniformly randomly sampled 35,000 samples every 40,000 epochs.", "For computing the Sinkhorn losses at the endpoint boundary conditions, we use the entropic regularization parameter (see (11)) \u03b5 = 0.1. Fig.", "2 depicts fifty optimally controlled state sample paths for this simulation.", "These sample paths are obtained via closed-loop simulation with the optimal control policy u_opt resulting from the training of the PINN. Fig.", "3 shows the snapshots of the univariate marginal PDFs under optimal control and the same without control, for the aforesaid numerical simulation. Following Sec."], "text_after_citation": ["We used the method-of-characteristics #OTHEREFR to solve the corresponding unforced Liouville PDE, thereby obtaining the uncontrolled joint PDF snapshots.", "The marginals \u03c1^{unc}_i , i \u2208 [3] , in Fig. 3 were obtained by numerically integrating these uncontrolled joints."], "citing_paper_content": {"title": "Optimal Mass Transport Over The Euler Equation", "abstract": "We consider the finite horizon optimal steering of the joint state probability distribution subject to the angular velocity dynamics governed by the Euler equation. The problem and its solution amounts to controlling the spin of a rigid body via feedback, and is of practical importance, for example, in angular stabilization of a spacecraft with stochastic initial and terminal states.
We clarify how this problem is an instance of the optimal mass transport (OMT) problem with bilinear prior drift. We deduce both static and dynamic versions of the Eulerian OMT, and provide analytical and numerical results for the synthesis of the optimal controller."}, "cited_paper_content": {"title": "Wasserstein Proximal Algorithms For The Schr\\\"{O}Dinger Bridge Problem: Density Control With Nonlinear Drift", "abstract": "We study the Schrodinger bridge problem (SBP) with nonlinear prior dynamics. In control-theoretic language, this is a problem of minimum effort steering of a given joint state probability density function (PDF) to another over a finite time horizon, subject to a controlled stochastic differential evolution of the state vector. For generic nonlinear drift, we reduce the SBP to solving a system of forward and backward Kolmogorov partial differential equations (PDEs) that are coupled through the boundary conditions, with unknowns being the \"Schrodinger factors\" -- so named since their product at any time yields the optimal controlled joint state PDF at that time. We show that if the drift is a gradient vector field, or is of mixed conservative-dissipative nature, then it is possible to transform these PDEs into a pair of initial value problems (IVPs) involving the same forward Kolmogorov operator. Combined with a recently proposed fixed point recursion that is contractive in the Hilbert metric, this opens up the possibility to numerically solve the SBPs in these cases by computing the Schrodinger factors via a single IVP solver for the corresponding (uncontrolled) forward Kolmogorov PDE. The flows generated by such forward Kolmogorov PDEs, for the two aforementioned types of drift, in turn, enjoy gradient descent structures on the manifold of joint PDFs with respect to suitable distance functionals. 
We employ a proximal algorithm developed in our prior work, which exploits this geometric viewpoint, to solve these IVPs and compute the Schrodinger factors via weighted scattered point cloud evolution in the state space. We provide the algorithmic details and illustrate the proposed framework of solving the SBPs with nonlinear prior dynamics by numerical examples."}, "keywords": ["deterministic dynamics"], "citation_intent": "method"} {"citing_id": "2304.05600v1", "cited_id": "1807.09840", "section_title": "A. External Results", "citation": "Similarly, we show results for the TUT18 task #REFR , sampled from the HARES benchmark, in Table A2 .", "text_before_citation": ["Here, we review results for our primary downstream tasks from external, high-performing models.", "We provide these results to contextualize performance on these tasks further, and compare to our strongest result in each case.", "In Table A1 , we assemble a collection of recent top results on VGGSound #OTHEREFR .", "We restrict ourselves to those with which we share an evaluation metric (Top-1 Accuracy, which is common in recent VGGSound evaluations).", "These methods involve any combination of supervision during pretraining, highly optimized state-of-the-art architectures like multimodal transformers, and larger pretraining sets."], "text_after_citation": ["Here, we compare against all results reported in the original HARES paper #OTHEREFR .", "The difference between the LibriSpeech-pretrained [79] Wav2Vec 2.0 #OTHEREFR performance (which is very low) and the other methods, which are trained on AudioSet #OTHEREFR , illustrates the importance of pretraining data to this particular task.", "Our performance is competitive, but several AudioSet approaches with tuned architectures outperform our result, suggesting that our fully self-supervised pretraining and the relatively modest amount of pretraining data that we use may play a role.
Arch."], "citing_paper_content": {"title": "Looking Similar, Sounding Different: Leveraging Counterfactual Cross-Modal Pairs For Audiovisual Representation Learning", "abstract": "Figure 1. (Left) Audiovisual scenes can be perceptually similar even as the words spoken in them differ, which may be a challenge to self-supervised audiovisual representation learning. (Right) We propose to leverage movie dubs during training and show that it improves the quality of learned representations on a wide range of tasks."}, "cited_paper_content": {"title": "A Multi-Device Dataset For Urban Acoustic Scene Classification", "abstract": "This paper introduces the acoustic scene classification task of DCASE 2018 Challenge and the TUT Urban Acoustic Scenes 2018 dataset provided for the task, and evaluates the performance of a baseline system in the task. As in previous years of the challenge, the task is defined for classification of short audio samples into one of predefined acoustic scene classes, using a supervised, closed-set classification setup. The newly recorded TUT Urban Acoustic Scenes 2018 dataset consists of ten different acoustic scenes and was recorded in six large European cities, therefore it has a higher acoustic variability than the previous datasets used for this task, and in addition to high-quality binaural recordings, it also includes data recorded with mobile devices. We also present the baseline system consisting of a convolutional neural network and its performance in the subtasks using the recommended cross-validation setup."}, "keywords": ["TUT18 task"], "citation_intent": "result"} {"citing_id": "2304.14676v1", "cited_id": "1609.08138", "section_title": "1) From The", "citation": "Now it is easily verified that [0_N I_N] P^{-1}_\u03c0 in (24) is the same as the matrix in #REFR .
V.", "text_before_citation": ["EQUATION", "Therefore, this N-sum box is feasible according to #OTHEREFR .", "For the specified G, H matrices, the transfer matrix M_Q is,", "EQUATION", "Recall that P^{-1}_\u03c0 = P_{\u03c0^{-1}} ."], "text_after_citation": ["CONCLUSION Using the N-sum box abstraction #OTHEREFR , this work translated existing CSA coding schemes for classical MAC settings into Quantum CSA coding schemes, for entanglement-assisted QMACs.", "This leads to immediate applications to Quantum PIR with secure and MDS-coded storage (Q-MDS-XSTPIR), as well as Quantum SDBMM.", "In both cases, the rate achieved with the QCSA scheme can be expressed as R_Q = min{1, 2R_C}, where R_C is the rate achieved by the CSA scheme in the corresponding classical setting.", "Recent results in QPIR, QTPIR, QMDSTPIR can be recovered as special cases.", "An important direction for future work is to explore settings where certain qudits are erased or lost, which would extend the scope of QCSA schemes to allow stragglers."], "citing_paper_content": {"title": "Quantum Cross Subspace Alignment Codes Via The N-Sum Box Abstraction", "abstract": "Cross-subspace alignment (CSA) codes are used in various private information retrieval (PIR) schemes (e.g., with secure storage) and in secure distributed batch matrix multiplication (SDBMM). Using a recently developed N-sum box abstraction of a quantum multiple-access channel (QMAC), we translate CSA schemes over classical multiple-access channels into efficient quantum CSA schemes over a QMAC, achieving maximal superdense coding gain. Because of the N-sum box abstraction, the underlying problem of coding to exploit quantum entanglements for CSA schemes becomes conceptually equivalent to that of designing a channel matrix for a MIMO MAC subject to given structural constraints imposed by the N-sum box abstraction, such that the resulting MIMO MAC is able to implement the functionality of a CSA scheme (encoding/decoding) over-the-air.
Applications include Quantum PIR with secure and MDS-coded storage, as well as Quantum SDBMM."}, "cited_paper_content": {"title": "The Capacity Of Private Information Retrieval From Coded Databases", "abstract": "We consider the problem of private information retrieval (PIR) over a distributed storage system. The storage system consists of $N$ non-colluding databases, each storing a coded version of $M$ messages. In the PIR problem, the user wishes to retrieve one of the available messages without revealing the message identity to any individual database. We derive the information-theoretic capacity of this problem, which is defined as the maximum number of bits of the desired message that can be privately retrieved per one bit of downloaded information. We show that the PIR capacity in this case is $C=\\left(1+\\frac{K}{N}+\\frac{K^2}{N^2}+\\cdots+\\frac{K^{M-1}}{N^{M-1}}\\right)^{-1}=(1+R_c+R_c^2+\\cdots+R_c^{M-1})^{-1}=\\frac{1-R_c}{1-R_c^M}$, where $R_c$ is the rate of the $(N,K)$ code used. The capacity is a function of the code rate and the number of messages only regardless of the explicit structure of the storage code. The result implies a fundamental tradeoff between the optimal retrieval cost and the storage cost. 
The result generalizes the achievability and converse results for the classical PIR with replicating databases to the case of coded databases."}, "keywords": ["matrix", "N ]P \u22121"], "citation_intent": "background"} {"citing_id": "2303.07853v1", "cited_id": "1512.04150", "section_title": "Introduction", "citation": "To the best of our knowledge, the current state-of-the-art in image-level-based WSSS methods use class activation maps (CAMs) #REFR to generate the pixel-level masks of an object from its image-level label.", "text_before_citation": ["Therefore, to reduce the time and resources required for generating pixel-wise masks, a wide range of research works focus on developing approaches that focus on weaker kinds of supervision.", "This is where Weakly Supervised Semantic Segmentation (WSSS) can be highly beneficial.", "WSSS approaches focus on generating the required masks with minimum supervision, such as image-level labels #OTHEREFR , bounding boxes #OTHEREFR , point annotations #OTHEREFR , and scribbles #OTHEREFR .", "This work focuses on generating semantic segmentation masks for medical images using the most straightforward, and least-supervised, image-level labels.", "We limit the scope of our work to just using image-level labels since they are the most inexpensive form of annotation."], "text_after_citation": ["The central idea of CAMs is to use any model trained with classification loss to generate activation maps that highlight the image regions responsible for the prediction decision.", "This results mostly in a rough localization of the objects rather than precise pixel-wise masks.", "The most popular CAM approaches focus on adding regularization loss to improve the quality of the CAM prediction #OTHEREFR or utilizing refinement methods that aim to enhance the CAM afterward #OTHEREFR .", "For example, adversarial erasing #OTHEREFR erases the most discriminative part of the CAM to force the model to consider different parts of an object. 
#OTHEREFR", "(Chang et al., 2020) use clustering to automatically sub-divide every class into sub-classes, implicitly generating distinctive classes for less discriminative parts of the CAM."], "citing_paper_content": {"title": "Boundarycam: A Boundary-Based Refinement Framework For Weakly Supervised Semantic Segmentation Of Medical Images", "abstract": "Weakly Supervised Semantic Segmentation (WSSS) with only image-level supervision is a promising approach to deal with the need for segmentation networks, especially for generating a large number of pixel-wise masks in a given dataset. However, most state-of-the-art image-level WSSS techniques lack an understanding of the geometric features embedded in the images since the network cannot derive any object boundary information from just image-level labels. We define a boundary here as the line separating an object and its background, or two different objects. To address this drawback, we propose our novel BoundaryCAM framework, which deploys state-of-the-art class activation maps combined with various post-processing techniques in order to achieve fine-grained higher-accuracy segmentation masks. To achieve this, we investigate a state-of-the-art unsupervised semantic segmentation network that can be used to construct a boundary map, which enables BoundaryCAM to predict object locations with sharper boundaries. By applying our method to WSSS predictions, we were able to achieve up to 10% improvements even to the benefit of the current state-of-the-art WSSS methods for medical imaging. The framework is open-source and accessible online."}, "cited_paper_content": {"title": "Learning Deep Features For Discriminative Localization", "abstract": "In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability despite being trained on image-level labels.
While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that exposes the implicit attention of CNNs on an image. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014 without training on any bounding box annotation. We demonstrate in a variety of experiments that our network is able to localize the discriminative image regions despite just being trained for solving a classification task."}, "keywords": ["image-level label", "class activation maps"], "citation_intent": "method"} {"citing_id": "2304.03717v1", "cited_id": "1512.03385", "section_title": "Experimental Results", "citation": "For the image part, we use a pre-trained ResNet101 #REFR , followed by the same layers.", "text_before_citation": ["Besides the simulation results reported in Figure 1 , we also conduct experiments on the MSCOCO-2014 dataset #OTHEREFR using more practical models.
See Figure 2 for the results.", "For the text part, we use a pre-trained RoBERTa model #OTHEREFR , followed by a 3-layer fully-connected network with batch norm between layers."], "text_after_citation": ["In both parts, the width of the fully-connected layers and the output dimension are 768.", "We freeze the pre-trained parts of the model and only train the fully-connected parts.", "We measure the quality of the learned representation using its zero-shot performance on the MSCOCO-2014 validation set.", "Unlike common image classification datasets, images in the MSCOCO dataset usually have multiple labels, each corresponding to an object that appears in the image, and there are 80 categories in total.", "We regard a prediction to be correct if it matches one label."], "citing_paper_content": {"title": "On The Importance Of Contrastive Loss In Multimodal Learning", "abstract": "Recently, contrastive learning approaches (e.g., CLIP (Radford et al., 2021)) have received huge success in multimodal learning, where the model tries to minimize the distance between the representations of different views (e.g., image and its caption) of the same data point while keeping the representations of different data points away from each other. However, from a theoretical perspective, it is unclear how contrastive learning can learn the representations from different views efficiently, especially when the data is not isotropic. In this work, we analyze the training dynamics of a simple multimodal contrastive learning model and show that contrastive pairs are important for the model to efficiently balance the learned representations. 
In particular, we show that the positive pairs will drive the model to align the representations at the cost of increasing the condition number, while the negative pairs will reduce the condition number, keeping the learned representations balanced."}, "cited_paper_content": {"title": "Deep Residual Learning For Image Recognition", "abstract": "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers\u20148\u00d7 deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. 
Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation."}, "keywords": ["pre-trained ResNet101"], "citation_intent": "method"} {"citing_id": "2303.15965v1", "cited_id": "2002.08546", "section_title": "Classification: Organamnist", "citation": "SHOT #REFR showed comparable performance to SFHarmony when trained with a batchsize of 500 (85.27%), but was highly dependent on the modified source training.", "text_before_citation": ["Thus, robustness to the batchsize is vital if an SFDA method is to be used for harmonisation.", "We use a single source model to allow fair comparison, trained with a batchsize of 50.", "We wish to maximise performance across all sites: as harmonisation is normally framed as a joint domain adaptation problem #OTHEREFR , the average performance across all sites is reported.", "The results can be seen in Table 1 , alongside the baseline methods.", "It can be seen that SFHarmony outperforms the existing SFDA methods, especially when a small batchsize was used for training (86.22% for batchsize 5)."], "text_after_citation": ["Interestingly, several of the SFDA approaches outperformed the adversarial approaches despite them having access to the source data, possibly due to the instability of such approaches.", "The proposed D_{GMM} loss is clearly able to align the features across sites using only the GMM summary statistics.
This is demonstrated by Fig.", "2 , which shows the source and target features for each site before and after DA.", "Clearly the features overlap much more after DA, which both leads to the clear improvement in performance, and shows that the approach is achieving the harmonisation goals of the model having a shared feature embedding across sites.", "We tried modelling the features with K \u2208 {1, 2, 3} GMM components: visual inspection of the features suggested that at least 2 components would be beneficial."], "citing_paper_content": {"title": "Sfharmony: Source Free Domain Adaptation For Distributed Neuroimaging Analysis", "abstract": "To represent the biological variability of clinical neuroimaging populations, it is vital to be able to combine data across scanners and studies. However, different MRI scanners produce images with different characteristics, resulting in a domain shift known as the 'harmonisation problem'. Additionally, neuroimaging data is inherently personal in nature, leading to data privacy concerns when sharing the data. To overcome these barriers, we propose an Unsupervised Source-Free Domain Adaptation (SFDA) method, SFHarmony. Through modelling the imaging features as a Gaussian Mixture Model and minimising an adapted Bhattacharyya distance between the source and target features, we can create a model that performs well for the target data whilst having a shared feature representation across the data domains, without needing access to the source data for adaptation or target labels. We demonstrate the performance of our method on simulated and real domain shifts, showing that the approach is applicable to classification, segmentation and regression tasks, requiring no changes to the algorithm. Our method outperforms existing SFDA approaches across a range of realistic data scenarios, demonstrating the potential utility of our approach for MRI harmonisation and general SFDA problems. 
Our code is available at https://github.com/nkdinsdale/SFHarmony."}, "cited_paper_content": {"title": "Do We Really Need To Access The Source Data? Source Hypothesis Transfer For Unsupervised Domain Adaptation", "abstract": "Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA methods typically require to access the source data when learning to adapt the model, making them risky and inefficient for decentralized private data. In this work we tackle a novel setting where only a trained source model is available and investigate how we can effectively utilize such a model without source data to solve UDA problems. To this end, we propose a simple yet generic representation learning framework, named \emph{Source HypOthesis Transfer} (SHOT). Specifically, SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labeling to implicitly align representations from the target domains to the source hypothesis. In this way, the learned target model can directly predict the labels of target data. We further investigate several techniques to refine the network architecture to parameterize the source model for better transfer performance. To verify its versatility, we evaluate SHOT in a variety of adaptation cases including closed-set, partial-set, and open-set domain adaptation. Experiments indicate that SHOT yields state-of-the-art results among multiple domain adaptation benchmarks."}, "keywords": ["modified source training"], "citation_intent": "result"} {"citing_id": "2304.02945v1", "cited_id": "1201.0490", "section_title": "Svm And Binary Relevance", "citation": "We used the support vector machines implementation in the \"scikit-learn\" library for Python #REFR .
For binary relevance, we employed the \"one vs.", "text_after_citation": ["rest\" classification approach using support vector machines as the base learner with the usual threshold, 0.5.", "For both a linear and a radial kernel, we performed a grid search to determine suitable values for the hyperparameters C and gamma (gamma applies only to the radial kernel) to minimize 0/1 loss. The x-variables are the TF-IDF variables.", "The models were trained on the training data and evaluated on the validation data.", "For binary relevance, the grid search determined a linear kernel with C = 100."], "citing_paper_content": {"title": "Multi-Label Classification Of Open-Ended Questions With Bert", "abstract": "Open-ended questions in surveys are valuable because they do not constrain the respondent's answer, thereby avoiding biases. However, answers to open-ended questions are text data which are harder to analyze. Traditionally, answers were manually classified as specified in the coding manual. In the last 10 years, researchers have tried to automate coding. Most of the effort has gone into the easier problem of single label prediction, where answers are classified into a single code. However, open-ends that require multi-label classification, i.e., that are assigned multiple codes, occur frequently. In social science surveys, such open-ends are also frequently mildly multi-label. In mildly multi-label classifications, the average number of labels per answer text is relatively low (e.g. < 1.5). For example, the data set we analyze asks \"What do you think is the most important political problem in Germany at the moment?\" Even though the question asks for a single problem, some answers contain multiple problems. Of course, the average number of problems (or labels) per answer is still low. This paper focuses on multi-label classification of text answers to open-ended survey questions in social science surveys.
We evaluate the performance of the transformer-based architecture BERT for the German language in comparison to traditional multi-label algorithms (Binary Relevance, Label Powerset, ECC) in a German social science survey, the GLES Panel (N=17,584, 55 labels). Because our data set requires at least one label per text answer, we also propose a modification in case the classification methods fail to predict any labels. We evaluate the algorithms on 0/1 loss: zero loss occurs only when all labels are predicted correctly; a mistake on one label incurs the full loss (1). This loss corresponds to the reality of manual text classification: you code the whole text answer with all labels, even if only a single suspicious label requires review. We find that classification with BERT (forcing at least one label) has the smallest 0/1 loss (13.1%) among methods considered (18.9%-21.6%). As expected, it is much easier to correctly predict answer texts that correspond to a single label (7.1% loss) than those that correspond to multiple labels (\u223c50% loss). Because BERT predicts zero labels for only 1.5% of the answers, forcing at least one label, while successful and recommended, ultimately does not lower the 0/1 loss by much. Our work has important implications for social scientists: 1) We have shown multi-label classification with BERT works in the German language for open-ends. 2) For mildly multi-label classification tasks, the loss now appears small enough to allow for fully automatic classification. Previously, the loss was more substantial, usually requiring semi-automatic approaches. 3) Multi-label classification with BERT requires only a single model.
The leading competitor, ECC, is an iterative approach that iterates through individual single label predictions."}, "cited_paper_content": {"title": "Scikit-Learn: Machine Learning In Python", "abstract": "Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems. This package focuses on bringing machine learning to non-specialists using a general-purpose high-level language. Emphasis is put on ease of use, performance, documentation, and API consistency. It has minimal dependencies and is distributed under the simplified BSD license, encouraging its use in both academic and commercial settings. Source code, binaries, and documentation can be downloaded from http://scikit-learn.sourceforge.net."}, "keywords": ["binary relevance", "\"scikit-learn\" library"], "citation_intent": "method"} {"citing_id": "2304.11685v1", "cited_id": "1812.00194", "section_title": "V. Discussion", "citation": "In the report, they note several of the same observations regarding race and age. Similar findings were made in #REFR .", "text_before_citation": ["They do see a performance increase by fine-tuning a system on their child database.", "These results are comparable with the results observed in this work, but this dataset has the benefit of being synthetic.", "It was also observed how subjects of Black and Asian race in general perform worse than ones of White and Latino-Hispanic.", "It was further seen that all races have a performance decrease as they get younger. 
In #OTHEREFR , Grother et al.", "have performed a vendor test with a specific focus on the performance and bias of commercial face recognition systems concerning demographics."], "text_after_citation": ["Overall the results indicate that facial recognition systems are not robust to younger subjects and that racial and gender bias is a general problem across age groups.", "In figure 15 , pairs of subjects from ages 7-10 with a high non-mated score can be seen.", "As seen, the pairs of subjects have the same gender and race.", "Similarly, pairs of subjects from the youngest age group (ages 1-4) with a high non-mated score can be seen in figure 16 .", "In this particular age group, false matches across gender and race have also been observed."], "citing_paper_content": {"title": "Child Face Recognition At Scale: Synthetic Data Generation And Performance Benchmark", "abstract": "We address the need for a large-scale database of children's faces by using generative adversarial networks (GANs) and face age progression (FAP) models to synthesize a realistic dataset referred to as HDA-SynChildFaces. To this end, we propose a processing pipeline that initially utilizes StyleGAN3 to sample adult subjects, which are subsequently progressed to children of varying ages using InterFaceGAN. Intra-subject variations, such as facial expression and pose, are created by further manipulating the subjects in their latent space. Additionally, the presented pipeline allows the races of subjects to be evenly distributed, generating a balanced and fair dataset with respect to race distribution. The created HDA-SynChildFaces consists of 1,652 subjects and a total of 188,832 images, each subject being present at various ages and with many different intra-subject variations. Subsequently, we evaluate the performance of various facial recognition systems on the generated database and compare the results of adults and children at different ages.
The study reveals that children consistently perform worse than adults, on all tested systems, and the degradation in performance is proportional to age. Additionally, our study uncovers some biases in the recognition systems, with Asian and Black subjects and females performing worse than White and Latino Hispanic subjects and males."}, "cited_paper_content": {"title": "Racial Faces In The Wild: Reducing Racial Bias By Information Maximization Adaptation Network", "abstract": "Racial bias is an important issue in biometric, but has not been thoroughly studied in deep face recognition. In this paper, we first contribute a dedicated dataset called Racial Faces in-the-Wild (RFW) database, on which we firmly validated the racial bias of four commercial APIs and four state-of-the-art (SOTA) algorithms. Then, we further present the solution using deep unsupervised domain adaptation and propose a deep information maximization adaptation network (IMAN) to alleviate this bias by using Caucasian as source domain and other races as target domains. This unsupervised method simultaneously aligns global distribution to decrease race gap at domain-level, and learns the discriminative target representations at cluster level. A novel mutual information loss is proposed to further enhance the discriminative ability of network output without label information. 
Extensive experiments on RFW, GBU, and IJB-A databases show that IMAN successfully learns features that generalize well across different races and across different databases."}, "keywords": ["age", "race"], "citation_intent": "result"} {"citing_id": "2304.03088v1", "cited_id": "1307.5640", "section_title": "Introduction", "citation": "Approaches that exploit online-sampling of the uncertainties for the scenario approximation of SMPC problems are known as Scenario MPC #REFR .", "text_before_citation": ["By introducing chance constraints, i.e., requiring state or output constraints to be satisfied with a prespecified probability level, SMPC allows for a systematic trade-off between control performance and constraint satisfaction.", "This is particularly important for MPC of uncertain systems when optimal performance requires operation near constraint boundaries in applications where rare or transient constraint violations are acceptable, such as in electric grids #OTHEREFR or finance #OTHEREFR .", "For safety-critical applications, safety guarantees are enabled by employing failsafe or robust backup plans #OTHEREFR .", "A major challenge in SMPC is to reformulate the chance constraint into a deterministic expression for tractability of the OCP.", "Sampling-based approaches provide an appealing remedy, as they are easy to implement, independent of the underlying probability distribution, and also allow for nonlinearity of the uncertainties in the system dynamics (Lorenzen et al., 2017) ."], "text_after_citation": ["While online-sampling comes with reduced sample complexity, offline-sampling allows for a reduced online computational load by removing redundant constraints offline, as well as for a guarantee of recursive feasibility by introducing a constraint on the first predicted step, as proposed by Lorenzen et al. 
(2017) for known system matrices subject to parametric uncertainties.", "On the other hand, sampling-based approaches can only guarantee chance constraint satisfaction with confidence, and the computational load increases drastically with the dimension of the system #OTHEREFR .", "In the data-driven setting, a reformulation of the chance constraint may be achieved by leveraging polynomial chaos expansion #OTHEREFR or employing stochastic tubes #OTHEREFR .", "However, both works consider systems subject to additive stochastic disturbances, and assume the available (measured) data to be exact."], "citing_paper_content": {"title": "Offline Uncertainty Sampling In Data-Driven Stochastic Mpc", "abstract": "In this work, we exploit an offline-sampling based strategy for the constrained data-driven predictive control of an unknown linear system subject to random measurement noise. The strategy uses only past measured, potentially noisy data in a non-parametric system representation and does not require any prior model identification. The approximation of chance constraints using uncertainty sampling leads to efficient constraint tightening. Under mild assumptions, robust recursive feasibility and closed-loop constraint satisfaction is shown. In a simulation example, we provide evidence for the improved control performance of the proposed control scheme in comparison to a purely robust data-driven predictive control approach."}, "cited_paper_content": {"title": "The Scenario Approach For Stochastic Model Predictive Control With Bounds On Closed-Loop Constraint Violations", "abstract": "Many practical applications in control require that constraints on the inputs and states of the system are respected, while some performance criterion is optimized. 
In the presence of model uncertainties or disturbances, it is often sufficient to satisfy the state constraints for at least a prescribed share of the time, such as in building climate control or load mitigation for wind turbines. For such systems, this paper presents a new method of Scenario-Based Model Predictive Control (SCMPC). The basic idea is to optimize the control inputs over a finite horizon, subject to robust constraint satisfaction under a finite number of random scenarios of the uncertainty and/or disturbances. Previous SCMPC approaches have suffered from a substantial gap between the rate of constraint violations specified in the optimal control problem and that actually observed in closed-loop operation of the controlled system. This paper identifies the two theoretical explanations for this gap. First, accounting for the special structure of the optimal control problem leads to a substantial reduction of the problem dimension. Second, the probabilistic constraints have to be interpreted as average-in-time, rather than pointwise-in-time. Based on these insights, a novel SCMPC method can be devised for general linear systems with additive and multiplicative disturbances, for which the number of scenarios is significantly reduced. The presented method retains the essential advantages of the general SCMPC approach, namely a low computational complexity and the ability to handle arbitrary probability distributions. 
Moreover, the computational complexity can be adjusted by a sample-and-remove strategy."}, "keywords": ["Scenario MPC"], "citation_intent": "background"} {"citing_id": "2304.13013v1", "cited_id": "1804.04235", "section_title": "Preliminaries And Related Work", "citation": "While our analysis and methods build directly on Shazeer and Stern #REFR (AdaFactor), there are important differences.", "text_before_citation": ["These instabilities may slow learning, or even destabilize training completely.", "Various solutions have been proposed, including freezing the embedding layer #OTHEREFR , adding additional layer normalization #OTHEREFR , or reparametrizing the weights #OTHEREFR .", "In our work we investigate instabilities which arise during CLIP training.", "Unlike the instabilities observed in #OTHEREFR , we find these are not caused by attention entropy collapse.", "Instead, our results indicate that spikes arise when the second moment estimator is out of date for the network's early layers."], "text_after_citation": ["In contrast with Shazeer and Stern #OTHEREFR , who only observe instabilities without warmup, we observe instabilities despite a long warmup period.", "Moreover, in contrast with Shazeer and Stern #OTHEREFR , we find that an out-of-date second moment estimator is primarily an issue for the (patch) embedding layer, and measure how well loss spikes are predicted by this event.", "Finally, we note that researchers have moved away from AdaFactor in its original formulation for large-scale training #OTHEREFR , finding AdaFactor to under-perform AdamW #OTHEREFR .", "We believe this is due to the factored second moment or absence of first moment.", "This is why our focus is AdamW #OTHEREFR , which is the de facto standard optimizer for transformers."], "citing_paper_content": {"title": "Stable And Low-Precision Training For Large-Scale Vision-Language Models", "abstract": "We introduce new methods for 1) accelerating and 2) stabilizing training for large
language-vision models. 1) Towards accelerating training, we introduce SwitchBack, a linear layer for int8 quantized training which provides a speed-up of 13-25% while matching the performance of bfloat16 training within 0.1 percentage points for the 1B parameter CLIP ViT-Huge, the largest int8 training to date. Our main focus is int8 as GPU support for float8 is rare, though we also analyze float8 training through simulation. While SwitchBack proves effective for float8, we show that standard techniques are also successful if the network is trained and initialized so that large feature magnitudes are discouraged, which we accomplish via layer-scale initialized with zeros. 2) Towards stable training, we analyze loss spikes and find they consistently occur 1-8 iterations after the squared gradients become underestimated by their AdamW second moment estimator. As a result, we recommend an AdamW-Adafactor hybrid, which we refer to as StableAdamW because it avoids loss spikes when training a CLIP ViT-Huge model and outperforms gradient clipping."}, "cited_paper_content": {"title": "Adafactor: Adaptive Learning Rates With Sublinear Memory Cost", "abstract": "In several recently proposed stochastic optimization methods (e.g. RMSProp, Adam, Adadelta), parameter updates are scaled by the inverse square roots of exponential moving averages of squared past gradients. Maintaining these per-parameter second-moment estimators requires memory equal to the number of parameters. For the case of neural network weight matrices, we propose maintaining only the per-row and per-column sums of these moving averages, and estimating the per-parameter second moments based on these sums. We demonstrate empirically that this method produces similar results to the baseline. Secondly, we show that adaptive methods can produce larger-than-desired updates when the decay rate of the second moment accumulator is too slow.
We propose update clipping and a gradually increasing decay rate scheme as remedies. Combining these methods and dropping momentum, we achieve comparable results to the published Adam regime in training the Transformer model on the WMT 2014 English-German machine translation task, while using very little auxiliary storage in the optimizer. Finally, we propose scaling the parameter updates based on the scale of the parameters themselves."}, "keywords": ["AdaFactor"], "citation_intent": "method"} {"citing_id": "2303.09706v1", "cited_id": "1503.02531", "section_title": "Input Image", "citation": "In addition, we also compare the semi-supervised settings following #REFR upon the same network, and the results are reported in Table 6 .", "text_before_citation": ["Ground Truth Ours Full Fully-Supervised APB w/o. Knowledge Enhancement Table 6 .", "Comparing the different training paradigms, i.e., supervised, semi-supervised and unsupervised settings.", "Table 5 , the result indicates that using the operation in Eq. 3 works best. 
Semi-supervised setting."], "text_after_citation": ["Specifically, we conduct two semi-supervised training schemes: 1) Semi-supervised v1 refers to training the APB using 1/4 of randomly sampled labeled data on BDD-A and then training the entire network using pseudo-labels generated from the remaining raw images; 2) Semi-supervised v2 refers to the reversed process.", "However, as is shown in Table 6 , we observe drastic drops in the result of the network in both Semi-supervised v1 and v2 compared with fully-supervised APB, and they are even inferior to our model trained in an unsupervised way.", "The poor performance can be explained by the fact that using only a small portion of the dataset tends to fool the model into learning a more restricted central bias, especially in self-driving.", "Our unsupervised method can leverage the information transferred from natural scenes by uncertainty mining, which is able to include more generalized information from non-traffic scenes to reduce bias.", "Figure 5 shows visual comparisons of our model's variants on the BDD-A test set."], "citing_paper_content": {"title": "Unsupervised Self-Driving Attention Prediction Via Uncertainty Mining And Knowledge Embedding", "abstract": "Predicting attention regions of interest is an important yet challenging task for self-driving systems. Existing methodologies rely on large-scale labeled traffic datasets that are labor-intensive to obtain. Besides, the huge domain gap between natural scenes and traffic scenes in current datasets also limits the potential for model training. To address these challenges, we are the first to introduce an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration. Our approach's Uncertainty Mining Branch (UMB) discovers commonalities and differences from multiple generated pseudo-labels achieved from models pre-trained on natural scenes by actively measuring the uncertainty.
Meanwhile, our Knowledge Embedding Block (KEB) bridges the domain gap by incorporating driving knowledge to adaptively refine the generated pseudo-labels. Quantitative and qualitative results with equivalent or even more impressive performance compared to fully-supervised state-of-the-art approaches across all three public datasets demonstrate the effectiveness of the proposed method and the potential of this direction. The code will be made publicly available."}, "cited_paper_content": {"title": "Distilling The Knowledge In A Neural Network", "abstract": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel."}, "keywords": ["semi-supervised settings"], "citation_intent": "result"} {"citing_id": "2303.15395v1", "cited_id": "1806.00138", "section_title": "Vii. 
Numerical Results", "citation": "As further research, we plan to find theoretical methods of evaluating the expectations in #REFR and (7), and of evaluating the number of samples required to find these expectations reliably.", "text_before_citation": ["One can observe a gap of \u2248 3 dB between the achievability and converse bounds.", "With a further K a increase, this gap becomes smaller, and when K a approaches n, the projection-based achievability bound shows worse results.", "To show this, we compare the ML-based and projection-based achievability for k = 100, n = 1000, L = 64", "and P e = 10 \u22123 . This scenario has been taken from #OTHEREFR .", "We also find that FASURA #OTHEREFR , a state-of-the-art practical scheme, demonstrates energy efficiency very close to both ML- and projection-based bounds for K a \u2264 400."], "text_after_citation": ["Projection-based ML-based Single-user ach, #OTHEREFR Single-user converse, #OTHEREFR Multi-user converse, #OTHEREFR Fig. 2. Same-codebook achievability bound for no-CSI setting.", "Frame length n = 1000, the number of information bits k = 100.", "Base station is equipped with L = 64 antennas, Pe = 10 \u22123 .", "Note that the channel (1) is permutation invariant, thus we can perform this type of decoding.", "Note that the papers are devoted to a single antenna case."], "citing_paper_content": {"title": "Unsourced Random Access With The Mimo Receiver: Projection Decoding Analysis", "abstract": "We consider unsourced random access with a MIMO receiver, a crucial communication scenario for future 5G/6G wireless networks. We perform a projection-based decoder analysis and derive energy efficiency achievability bounds when channel state information is unknown at transmitters and the receiver (no-CSI scenario). The comparison to the maximum-likelihood (ML) achievability bounds by Gao et al. (2023) is performed. We show that there is a region where the new bound outperforms the ML bound.
The latter fact should not surprise the reader as both decoding criteria are suboptimal when considering per-user probability of error (PUPE). Moreover, transition to projection decoding allows for significant dimensionality reduction, which greatly reduces the computation time."}, "cited_paper_content": {"title": "A Coupled Compressive Sensing Scheme For Unsourced Multiple Access", "abstract": "This article introduces a novel paradigm for the unsourced multiple-access communication problem. This divide-and-conquer approach leverages recent advances in compressive sensing and forward error correction to produce a computationally efficient algorithm. Within the proposed framework, every active device first partitions its data into several sub-blocks, and subsequently adds redundancy using a systematic linear block code. Compressive sensing techniques are then employed to recover sub-blocks, and the original messages are obtained by connecting pieces together using a low-complexity tree-based algorithm. Numerical results suggest that the proposed scheme outperforms other existing practical coding schemes. Measured performance lies approximately $4.3$~dB away from the Polyanskiy achievability limit, which is obtained in the absence of complexity constraints."}, "keywords": ["number"], "citation_intent": "method"} {"citing_id": "2304.03856v1", "cited_id": "1903.03063", "section_title": "I. 
Introduction", "citation": "A high number of devices attempting to connect to the network at the same time is a medium access control problem; standard random access (RA) systems are incapable of handling such a huge number of requests #REFR , and pure RA methods such as ALOHA have serious performance limitations.", "text_before_citation": ["In B5G, the base station (BS) needs to simultaneously support devices with a variety of capabilities and deployments as the number of active devices continues to increase, as well as access requests for multiple services, e.g., vehicles, sensors, mobiles, etc., and applications in 5G and B5G, such as massive machine-type communication (mMTC) and crowded mobile broadband (cMBB) #OTHEREFR .", "Hence, to support massive connections, non-orthogonal multiple access (NOMA), a promising candidate for the next generation of multiple access techniques, has the ability to serve more than one device in the same resource element (RE).", "In comparison to standard orthogonal multiple access (OMA), NOMA increases the system throughput, improves user fairness, reduces latency, and enables large connections #OTHEREFR , sharing the same orthogonal resource (time and frequency) by superposition coding at the transmitter side and successive interference cancellation (SIC) at the receiver side #OTHEREFR ."], "text_after_citation": ["The adoption of NOMA and SIC considerably enhances performance, allowing two or more users per time slot #OTHEREFR , #OTHEREFR .", "In (over-)crowded scenarios, the number of user connection attempts considerably outnumbers the number of available pilot sequences.", "As a result, establishing collision resolution techniques became crucial for enabling efficient communication.", "The strongest user collision resolution (SUCRe) protocol is a well-known decentralized grant-based random access (RA) protocol for crowded massive multiple-input multiple-output (MIMO) systems, which takes advantage of MIMO properties #OTHEREFR ,
giving preference to users with good channel conditions while harming edge users.", "Hence, the SUCRe protocol has undergone several evolutions; e.g., in #OTHEREFR a graph-based pilot access (SUCR-GBPA) protocol is proposed, enabling all users who lost contention resolution to choose a new pilot at random."], "citing_paper_content": {"title": "Improving Random Access With Noma In Mmtc Xl-Mimo", "abstract": "The extra-large multiple-input multiple-output (XL-MIMO) architecture has been recognized as a technology for supporting massive MTC (mMTC), providing very high data rates in high-user-density scenarios. However, the large dimension of the array increases the Rayleigh distance (d Rayl), in addition to obstacles and scatterers causing spatial non-stationarities and distinct visibility regions (VRs) across the XL array extension. We investigate the random access (RA) problem in crowded XL-MIMO scenarios; the proposed grant-based random access (GB-RA) protocol combining the advantage of non-orthogonal multiple access (NOMA) and strongest user collision resolutions in extra-large arrays (SUCRe-XL), named NOMA-XL, can allow access of two or three colliding users in the same XL subarray (SA) selecting the same pilot sequence. The received signal processing on a SA basis changes the d Rayl , enabling the far-field planar wavefront propagation condition, while improving the system performance. The proposed NOMA-XL GB-RA protocol is able to provide a reduction in the number of attempts to access the mMTC network while improving the average sum rate, as the number of SA increases."}, "cited_paper_content": {"title": "From 5G To 6G: Has The Time For Modern Random Access Come?", "abstract": "This short paper proposes the use of modern random access for IoT applications in 6G.
A short overview of recent advances in uncoordinated medium access is provided, highlighting the gains that can be achieved by leveraging smart protocol design intertwined with advanced signal processing techniques at the receiver. The authors\u2019 vision on the benefits such schemes can yield for beyond-5G systems is presented, with the aim to trigger further discussion."}, "keywords": ["standard random access"], "citation_intent": "background"} {"citing_id": "2305.00104v1", "cited_id": "1904.08779", "section_title": "Data Augmentations", "citation": "We also utilize the Specaugment #REFR , which involves applying spectrogram masking with a maximum time mask length of 192 frames and a maximum frequency mask length of 48 bins.", "text_before_citation": ["Our audio experiments include various augmentation techniques including Mixup #OTHEREFR and Cutmix #OTHEREFR , which are com- Figure 3 : Audio-specific Cutmix which contains only a temporal axis cut.", "monly used in audio data augmentation #OTHEREFR .", "Additionally, we introduce an audio-specific data augmentation method called audio Cutmix as shown in Figure 3 .", "Audio cutmix is similar to image Cutmix #OTHEREFR , but has an temporal axis cut because the frequency axis contains quantitative information that is not invariant as in images.", "Throughout the entire experiment training process, including the process for training the baseline models, half of the audio clips are augmented using Mixup and the other half are augmented using our audio Cutmix technique."], "text_after_citation": ["Another augmentation technique we use is random rolling, which randomly shifts each audio clip in time by a certain number of samples.", "This random shift causes the entire waveform to appear as if it was played earlier or later in time, which prevents the model from overfitting to the training data.", "We adopted the identical data augmentation pipeline as MViT #OTHEREFR for our image classification experiment, which 
encompasses Mixup #OTHEREFR , Cutmix #OTHEREFR , Random erasing #OTHEREFR , and Randaugment #OTHEREFR ."], "citing_paper_content": {"title": "Mmvit: Multiscale Multiview Vision Transformers", "abstract": "We present Multiscale Multiview Vision Transformers (MMViT), which introduces multiscale feature maps and multiview encodings to transformer models. Our model encodes different views of the input signal and builds several channelresolution feature stages to process the multiple views of the input at different resolutions in parallel. At each scale stage, we use a cross-attention block to fuse information across different views. This enables the MMViT model to acquire complex high-dimensional representations of the input at different resolutions. The proposed model can serve as a backbone model in multiple domains. We demonstrate the effectiveness of MMViT on audio and image classification tasks, achieving state-of-theart results."}, "cited_paper_content": {"title": "Specaugment: A Simple Data Augmentation Method For Automatic Speech Recognition", "abstract": "We present SpecAugment, a simple data augmentation method for speech recognition. SpecAugment is applied directly to the feature inputs of a neural network (i.e., filter bank coefficients). The augmentation policy consists of warping the features, masking blocks of frequency channels, and masking blocks of time steps. We apply SpecAugment on Listen, Attend and Spell networks for end-to-end speech recognition tasks. We achieve state-of-the-art performance on the LibriSpeech 960h and Swichboard 300h tasks, outperforming all prior work. On LibriSpeech, we achieve 6.8% WER on test-other without the use of a language model, and 5.8% WER with shallow fusion with a language model. This compares to the previous state-of-the-art hybrid system of 7.5% WER. 
For Switchboard, we achieve 7.2%/14.6% on the Switchboard/CallHome portion of the Hub5'00 test set without the use of a language model, and 6.8%/14.1% with shallow fusion, which compares to the previous state-of-the-art hybrid system at 8.3%/17.3% WER."}, "keywords": ["spectrogram masking", "Specaugment"], "citation_intent": "method"} {"citing_id": "2303.03278v1", "cited_id": "1712.01765", "section_title": "Human Evaluation Setup", "citation": "We use best-worst-scaling (BWS) for evaluating the informativeness of the generated summaries, as this method is \"a less labor-intensive alternative to paired comparisons that has been shown to produce more reliable results than rating scales\" #REFR .", "text_before_citation": ["the percentage of summaries rated as 3-star) as the faithfulness score, and also report the distribution of summaries rated as 1, 2, and 3 stars.", "Details on qualification, payment and other aspects of the evaluation can be found in Appendix A.4.", "Informativeness.", "We also evaluate the generated summaries in terms of informativeness.", "We consider a summary to be informative if its content is important and relevant, but it does not necessarily need to be long."], "text_after_citation": ["Accordingly, for each dataset, we select 200 random articles with the corresponding summaries from five systems in random order.", "We ask three annotators to select the most informative (\"best\") and the least informative (\"worst\") among the five.", "A rating per system is computed as the percentage of times it is chosen as best minus the percentage of times it is selected as worst.", "A value of 100 means that the system has been unanimously picked as \"best\", whereas a value of -100 means that the system has been unanimously picked as \"worst\".", "Additional details, as well as the screenshot of the annotation interface, are in Appendix A.4."], "citing_paper_content": {"title": "Faithfulness-Aware Decoding Strategies For Abstractive Summarization",
"abstract": "Despite significant progress in understanding and improving faithfulness in abstractive summarization, the question of how decoding strategies affect faithfulness is less studied. We present a systematic study of the effect of generation techniques such as beam search and nucleus sampling on faithfulness in abstractive summarization. We find a consistent trend where beam search with large beam sizes produces the most faithful summaries while nucleus sampling generates the least faithful ones. We propose two faithfulness-aware generation methods to further improve faithfulness over current generation techniques: (1) ranking candidates generated by beam search using automatic faithfulness metrics and (2) incorporating lookahead heuristics that produce a faithfulness score on the future summary. We show that both generation methods significantly improve faithfulness across two datasets as evaluated by four automatic faithfulness metrics and human evaluation. To reduce computational cost, we demonstrate a simple distillation approach that allows the model to generate faithful summaries with just greedy decoding. (Work conducted during an internship at Amazon.)"}, "cited_paper_content": {"title": "Best-Worst Scaling More Reliable Than Rating Scales: A Case Study On Sentiment Intensity Annotation", "abstract": "Rating scales are a widely used method for data annotation; however, they present several challenges, such as difficulty in maintaining inter- and intra-annotator consistency. Best-worst scaling (BWS) is an alternative method of annotation that is claimed to produce high-quality annotations while keeping the required number of annotations similar to that of rating scales. However, the veracity of this claim has never been systematically established. Here for the first time, we set up an experiment that directly compares the rating scale method with BWS.
We show that with the same total number of annotations, BWS produces significantly more reliable results than the rating scale."}, "keywords": ["generated summaries", "rating scales"], "citation_intent": "method"} {"citing_id": "2303.13186v1", "cited_id": "1706.03762", "section_title": "Overview", "citation": "The multi-modal fusion module leverages attention mechanism #REFR to fuse proposal features F p , gestural features F g , and word features F l , thereby generating the confidence score of each bounding-box.", "text_before_citation": ["To better leverage the features among language, gesture, and the scene point cloud, an attention mechanism #OTHEREFR is employed in our work.", "The proposal generation module is the same as that used in 3DVG-Transformer #OTHEREFR and it generates a bounding-box from object proposal while extracting context-aware features.", "The proposal features are represented by F p \u2208 R M \u00d7H , for M proposals with H-dimensional features.", "The gestural encoding module uses a PointNet++ #OTHEREFR to extract the features F g \u2208 R M \u00d7H of the human agent's point cloud.", "Similar to ScanRefer #OTHEREFR and 3DVG-Transformer #OTHEREFR , the language encoding module aggregates the word embeddings into the language features F l \u2208 R L\u00d7H and global language features using a GRU #OTHEREFR cell and a self-attention module."], "text_after_citation": ["Specifically, this study centers on the combination of gestural information with the proposal and word information, aiming to disambiguate referring expressions and accurately identify the referred object."], "citing_paper_content": {"title": "Scaneru: Interactive 3D Visual Grounding Based On Embodied Reference Understanding", "abstract": "Aiming to link natural language descriptions to specific regions in a 3D scene represented as 3D point clouds, 3D visual grounding is a very fundamental task for human-robot interaction. 
The recognition errors can significantly impact the overall accuracy and then degrade the operation of AI systems. Despite their effectiveness, existing methods suffer from the difficulty of low recognition accuracy in cases of multiple adjacent objects with similar appearances. To address this issue, this work intuitively introduces the human-robot interaction as a cue to facilitate the development of 3D visual grounding. Specifically, a new task termed Embodied Reference Understanding (ERU) is first designed for this concern. Then a new dataset called ScanERU is constructed to evaluate the effectiveness of this idea. Different from existing datasets, our ScanERU is the first to cover semi-synthetic scene integration with textual, real-world visual, and synthetic gestural information. Additionally, this paper formulates a heuristic framework based on attention mechanisms and human body movements to enlighten the research of ERU. Experimental results demonstrate the superiority of the proposed method, especially in the recognition of multiple identical objects. Our codes and dataset 1 are ready to be available publicly."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. 
On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["proposal features", "multi-modal fusion module"], "citation_intent": "method"} {"citing_id": "2304.09374v1", "cited_id": "2002.05709", "section_title": "I. Introduction", "citation": "Especially, contrastive learning method has shown great advances in image classification with massive unlabeled data. It aims to use similarities and differences between images. SimCLR #REFR is a prominent work for this approach.", "text_before_citation": ["Up until now, however, its success is mainly for upstream tasks, not for the downstream.", "It requires yet another labeled data to finetune the pre-trained models.", "In other words, it still depends on supervised learning that requires much manual effort.", "NLP downstream tasks and their performances are limited by disadvantages of supervised learning.", "In image processing, on the other hand, numerous approaches to utilize SSL have been fruitful."], "text_after_citation": ["As its pre-trained model applies to downstream task of classification, it outperforms even supervised learning models.", "But, it also need two stage process of upstream and downstream task.", "To take one step further, some works make downstream task also self-supervised, to minimize human intervention.", "They include SCAN #OTHEREFR , RUC #OTHEREFR , SelfMatch #OTHEREFR , yielding tangible results.", "There is not much work to extend SSL or unsupervised learning method to downstream task in natural language processing field. 
A few has tried contrastive learning in the field."], "citing_paper_content": {"title": "Shuffle & Divide: Contrastive Learning For Long Text", "abstract": "We propose a self-supervised learning method for long text documents based on contrastive learning. A key to our method is Shuffle and Divide (SaD), a simple text augmentation algorithm that sets up a pretext task required for contrastive updates to BERT-based document embedding. SaD splits a document into two sub-documents containing randomly shuffled words in the entire documents. The sub-documents are considered positive examples, leaving all other documents in the corpus as negatives. After SaD, we repeat the contrastive update and clustering phases until convergence. It is naturally a time-consuming, cumbersome task to label text documents, and our method can help alleviate human efforts, which are most expensive resources in AI. We have empirically evaluated our method by performing unsupervised text classification on the 20 Newsgroups, Reuters-21578, BBC, and BBCSport datasets. In particular, our method pushes the current state-of-the-art, SS-SB-MT, on 20 Newsgroups by 20.94% in accuracy. We also achieve the state-of-the-art performance on Reuters-21578 and exceptionally-high accuracy performances (over 95%) for unsupervised classification on the BBC and BBCSport datasets."}, "cited_paper_content": {"title": "A Simple Framework For Contrastive Learning Of Visual Representations", "abstract": "This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. 
We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels."}, "keywords": ["contrastive learning method"], "citation_intent": "method"} {"citing_id": "2304.05360v1", "cited_id": "1401.6848", "section_title": "Explicit De Finetti-Style Theorems", "citation": "Compared with the earlier information-theoretic results #REFR and 3, the bound in Corollary 2.4 is both more general and stronger.", "text_before_citation": ["I(X_1^k; Y) = H(X_1^k) − H(X_1^k | Y) ≤ H(X_1^k) ≤ k log |A|,", "where H(X) = −∑_{x∈B} P(x) log P(x) denotes the entropy of a random variable with probability mass function P on a discrete alphabet B.", "Therefore, Theorem 2.1 immediately yields: is an exchangeable vector of random variables X_i taking values in a discrete alphabet A.", "For every 1 ≤ k ≤ n − 1 there exists a probability measure µ = µ_{k,n} on M_1(A), such that:", "D(P_{X_1^k} ‖ M_{k,µ}) ≤ [k(k − 1) / (2(n − k + 1))] H(X_1) ≤ [k(k − 1) / (2(n − k + 1))] log |A|."], "text_after_citation": ["Moreover, it can be used to recover the classical infinite version of de Finetti's theorem for compact 
spaces, under some conditions.", "Corollary 2.5 (Classical de Finetti theorem for compact spaces) Let G be a compact metrisable space equipped with its Baire \u03c3-algebra G.", "Suppose the process {X k ; k \u2265 1} is exchangeable and the X k take values in G and are G-measurable. If for every k we have I(X k\u22121"], "citing_paper_content": {"title": "A Third Information-Theoretic Approach To Finite De Finetti Theorems", "abstract": "A new finite form of de Finetti's representation theorem is established using elementary information-theoretic tools. The distribution of the first k random variables in an exchangeable vector of n \u2265 k random variables is close to a mixture of product distributions. Closeness is measured in terms of the relative entropy and an explicit bound is provided. This bound is tighter than those obtained via earlier information-theoretic proofs, and its utility extends to random variables taking values in general spaces. The core argument employed has its origins in the quantum information-theoretic literature."}, "cited_paper_content": {"title": "Am With Multiple Merlins", "abstract": "We introduce and study a new model of interactive proofs: AM(k), or Arthur-Merlin with k non-communicating Merlins. Unlike with the better-known MIP, here the assumption is that each Merlin receives an independent random challenge from Arthur. One motivation for this model (which we explore in detail) comes from the close analogies between it and the quantum complexity class QMA(k), but the AM(k) model is also natural in its own right. We illustrate the power of multiple Merlins by giving an AM(2) protocol for 3SAT, in which the Merlins' challenges and responses consist of only n^{1/2+o(1)} bits each. Our protocol has the consequence that, assuming the Exponential Time Hypothesis (ETH), any algorithm for approximating a dense CSP with a polynomial-size alphabet must take n^{(log n)^{1-o(1)}} time. 
Algorithms nearly matching this lower bound are known, but their running times had never been previously explained. Brandao and Harrow have also recently used our 3SAT protocol to show quasipolynomial hardness for approximating the values of certain entangled games. In the other direction, we give a simple quasipolynomial-time approximation algorithm for free games, and use it to prove that, assuming the ETH, our 3SAT protocol is essentially optimal. More generally, we show that multiple Merlins never provide more than a polynomial advantage over one: that is, AM(k)=AM for all k=poly(n). The key to this result is a subsampling theorem for free games, which follows from powerful results by Alon et al. and Barak et al. on subsampling dense CSPs, and which says that the value of any free game can be closely approximated by the value of a logarithmic-sized random subgame."}, "keywords": ["earlier information-theoretic results"], "citation_intent": "result"} {"citing_id": "2303.17764v1", "cited_id": "1611.07725", "section_title": "Revisiting Distillation For Catastrophic Forgetting", "citation": "It, however, is observed in #REFR that there is tendency of classifying test samples to new classes by LwF.", "text_before_citation": ["EQUATION", "where C = C o \u222a C n , y i is the i th value of the one-hot ground truth y, and p i is the i th value of predicted class probability p.", "The goal of L dis is to preserve knowledge obtained from previous data, which is expressed as", "EQUATION", "where p * is the soft label of x generated by the old model."], "text_after_citation": ["Thus, iCaRL utilized herd selection to better approximate the class mean vector of old classes, where samples that are close to the center of old classes are selected.", "Recall that our goal is to obtain a robust model trained in the continual learning manner.", "To gain robustness, adversarial training is inevitable, which requires augmenting datasets with adversarial examples in every training 
iteration.", "Following the definition of continual learning, we can derive the loss function of Robust Continual Learning (RCL).", "With adversarial training, we should replace the input x in Equations 1 and 2 with its adversarial counterpart x adv , which is solved by"], "citing_paper_content": {"title": "Towards Adversarially Robust Continual Learning", "abstract": "Recent studies show that models trained by continual learning can achieve the comparable performances as the standard supervised learning and the learning flexibility of continual learning models enables their wide applications in the real world. Deep learning models, however, are shown to be vulnerable to adversarial attacks. Though there are many studies on the model robustness in the context of standard supervised learning, protecting continual learning from adversarial attacks has not yet been investigated. To fill in this research gap, we are the first to study adversarial robustness in continual learning and propose a novel method called Task-Aware Boundary Augmentation (TABA) to boost the robustness of continual learning models. With extensive experiments on CIFAR-10 and CIFAR-100, we show the efficacy of adversarial training and TABA in defending adversarial attacks."}, "cited_paper_content": {"title": "Icarl: Incremental Classifier And Representation Learning", "abstract": "A major open problem on the road to artificial intelligence is the development of incrementally learning systems that learn about more and more concepts over time from a stream of data. In this work, we introduce a new training strategy, iCaRL, that allows learning in such a class-incremental way: only the training data for a small number of classes has to be present at the same time and new classes can be added progressively. iCaRL learns strong classifiers and a data representation simultaneously. 
This distinguishes it from earlier works that were fundamentally limited to fixed data representations and therefore incompatible with deep learning architectures. We show by experiments on CIFAR-100 and ImageNet ILSVRC 2012 data that iCaRL can learn many classes incrementally over a long period of time where other strategies quickly fail."}, "keywords": ["new classes"], "citation_intent": "background"} {"citing_id": "2304.09730v1", "cited_id": "2003.09504", "section_title": "Conclusion", "citation": "In the future, we first aim to focus on embedding the graph information #REFR in the optimization process of the proposed approach.", "text_before_citation": ["In conclusion, the high-dimensional nature and imbalanced classes of hyperspectral images pose challenges for traditional machine learning algorithms. One-class classifiers are useful in cases", "where the training data is from a single class only, but they still face challenges in handling the curse of dimensionality.", "To address these challenges, we leverage S-SVDD for one-class classification of hyperspectral images.", "Our experiments on two benchmark HSI datasets show that the proposed approach can effectively tackle the curse of dimensionality and the imbalanced nature of HSI data."], "text_after_citation": ["We also consider utilizing neighbouring pixels' spectral features for a future study, since they are usually correlated and give better scattering information, allowing enhanced performance."], "citing_paper_content": {"title": "Hyperspectral Image Analysis With Subspace Learning-Based One-Class Classification", "abstract": "Hyperspectral image (HSI) classification is an important task in many applications, such as environmental monitoring, medical imaging, and land use/land cover (LULC) classification. Due to the significant amount of spectral information from recent HSI sensors, analyzing the acquired images is challenging using traditional Machine Learning (ML) methods. 
As the number of frequency bands increases, the required number of training samples increases exponentially to achieve a reasonable classification accuracy, also known as the curse of dimensionality. Therefore, separate band selection or dimensionality reduction techniques are often applied before performing any classification task over HSI data. In this study, we investigate recently proposed subspace learning methods for one-class classification (OCC). These methods map high-dimensional data to a lower-dimensional feature space that is optimized for one-class classification. In this way, there is no separate dimensionality reduction or feature selection procedure needed in the proposed classification framework. Moreover, one-class classifiers have the ability to learn a data description from the category of a single class only. Considering the imbalanced labels of the LULC classification problem and rich spectral information (high number of dimensions), the proposed classification approach is well-suited for HSI data. Overall, this is a pioneer study focusing on subspace learning-based one-class classification for HSI data. We analyze the performance of the proposed subspace learning one-class classifiers in the proposed pipeline. Our experiments validate that the proposed approach helps tackle the curse of dimensionality along with the imbalanced nature of HSI data."}, "cited_paper_content": {"title": "Ellipsoidal Subspace Support Vector Data Description", "abstract": "In this paper, we propose a novel method for transforming data into a low-dimensional space optimized for one-class classification. The proposed method iteratively transforms data into a new subspace optimized for ellipsoidal encapsulation of target class data. We provide both linear and non-linear formulations for the proposed method. 
The method takes into account the covariance of the data in the subspace; hence, it yields a more generalized solution as compared to Subspace Support Vector Data Description for a hypersphere. We propose different regularization terms expressing the class variance in the projected space. We compare the results with classic and recently proposed one-class classification methods and achieve better results in the majority of cases. The proposed method is also noticed to converge much faster than recently proposed Subspace Support Vector Data Description."}, "keywords": ["graph information"], "citation_intent": "method"} {"citing_id": "2304.12202v1", "cited_id": "1706.03741", "section_title": "Introduction", "citation": "OpenAI's latest conversational agent, Chat-GPT 1 (OpenAI, 2022), a successor of Instruct-GPT (Ouyang et al., 2022) - also known as GPT-3.5 models, is an instruction-following transformer-based language model, which has been further trained (aligned) with reinforcement learning from human feedback (RLHF) #REFR .", "text_before_citation": ["Recent advances in Large Language Models (LLMs) #OTHEREFR Chowdhery et al., 2022) , also known as Foundation Models (Bommasani et al., 2021) , have challenged the traditional supervised learning paradigm of fine-tuning by demonstrating emergent zero-shot Natural Language Understanding (NLU) capabilities #OTHEREFR ) through scaling the model's size in billions of parameters #OTHEREFR ."], "text_after_citation": ["ChatGPT demonstrates unprecedented emergent capabilities in zero-shot Question-Answering (QA) 1 https://chat.openai.com/chat capabilities that cover common sense knowledge, but also extend to specialized domains such as problem solving, programming/debugging, and law, as presented by many users in the web.", "Recently, Bommarito and Katz (2022) audited several variants of OpenAI's GPT 2/3/3.5 models in legal bar exam questions, and found that the most advanced - at the time - model ('text-davinci-003') achieves an 
accuracy of 50.3% on a complete practice exam, significantly in excess of the 25% baseline guessing rate, while it performs at a passing rate in two legal areas (Evidence and Torts). In a follow-up work, #OTHEREFR", "(2023) assessed the model's performance in accounting certification exams, where the model significantly under-performs human capabilities with a correct rate of 14.4%.", "Following the work of Bommarito and Katz, we evaluate the latest OpenAI's GPT-3.5 model (Ouyang et al., 2022 ) ('gpt-3.5-turbo', v.", "March 2023 , the first available ChatGPT, on legal text classification tasks from the LexGLUE #OTHEREFR benchmark in a zeroshot fashion providing examples in a templated instruction-following format, similar to those used by Chung et al. (2022) ."], "citing_paper_content": {"title": "Chatgpt May Pass The Bar Exam Soon, But Has A Long Way To Go For The Lexglue Benchmark", "abstract": "Following the hype around OpenAI's Chat-GPT conversational agent, the last straw in the recent development of Large Language Models (LLMs) that demonstrate emergent unprecedented zero-shot capabilities, we audit the latest OpenAI's GPT-3.5 model, 'gpt-3.5-turbo', the first available ChatGPT model, in the LexGLUE benchmark in a zeroshot fashion providing examples in a templated instruction-following format. The results indicate that ChatGPT achieves an average micro-F1 score of 49.0% across LexGLUE tasks, surpassing the baseline guessing rates. Notably, the model performs exceptionally well in some datasets, achieving micro-F1 scores of 62.8% and 70.1% in the ECtHR B and LEDGAR datasets, respectively. The code base and model predictions are available for review on https://github.com/coastalcph/ zeroshot_lexglue."}, "cited_paper_content": {"title": "Deep Reinforcement Learning From Human Preferences", "abstract": "For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. 
In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback."}, "keywords": ["OpenAI's latest conversational", "reinforcement learning"], "citation_intent": "method"} {"citing_id": "2303.02400v1", "cited_id": "1902.10811", "section_title": "Related Work", "citation": "Robustness under distribution shifts Generalization capabilities of existing image classifiers have been a crucial problem #REFR , currently addressed from a few different viewpoints.", "text_before_citation": ["In general, transformer-based models rely on an abundance of training data to ensure proper generalization.", "This requirement was relaxed in DeiT #OTHEREFR , enabling learning on medium-sized datasets.", "Further development introduced novel transformer-based architectures, such as BeiT #OTHEREFR , Swin #OTHEREFR and RegNets #OTHEREFR , which realize specific refinements to boost performance.", "Overall, it has been proven that ViTs are more robust compared to classic CNN image classifiers #OTHEREFR .", "In our work, we verify the degree this claim holds by testing CNN and transformer-based classifiers on the uncurated fine-grained setting."], "text_after_citation": ["Artificial corruptions #OTHEREFR or natural shifts #OTHEREFR on curated data have already exposed biases and architectural 
vulnerabilities.", "Adversarial robustness #OTHEREFR ] is a related field where models are tested against adversarial examples, which introduce imperceptible though influential perturbations on images.", "Contrary to such attempts, we concentrated around naturally occurring distribution shifts stemming from uncurated image data.", "Regarding architectural choices, many studies perform robustness tests attempting to resolve the CNN vs Transformer contest #OTHEREFR , while other ventures focus on interpreting and understanding model robustness #OTHEREFR .", "In our approach, by experimenting with both CNN and transformer-based architectures we adopt such research attempts to the uncurated setting."], "citing_paper_content": {"title": "Fine-Grained Imagenet Classification In The Wild", "abstract": "Image classification has been one of the most popular tasks in Deep Learning, seeing an abundance of impressive implementations each year. However, there is a lot of criticism tied to promoting complex architectures that continuously push performance metrics higher and higher. Robustness tests can uncover several vulnerabilities and biases which go unnoticed during the typical model evaluation stage. So far, model robustness under distribution shifts has mainly been examined within carefully curated datasets. Nevertheless, such approaches do not test the real response of classifiers in the wild, e.g. when uncurated web-crawled image data of corresponding classes are provided. In our work, we perform fine-grained classification on closely related categories, which are identified with the help of hierarchical knowledge. Extensive experimentation on a variety of convolutional and transformer-based architectures reveals model robustness in this novel setting. 
Finally, hierarchical knowledge is again employed to evaluate and explain misclassifications, providing an information-rich evaluation scheme adaptable to any classifier."}, "cited_paper_content": {"title": "Do Imagenet Classifiers Generalize To Imagenet?", "abstract": "We build new test sets for the CIFAR-10 and ImageNet datasets. Both benchmarks have been the focus of intense research for almost a decade, raising the danger of overfitting to excessively re-used test sets. By closely following the original dataset creation processes, we test to what extent current classification models generalize to new data. We evaluate a broad range of models and find accuracy drops of 3% - 15% on CIFAR-10 and 11% - 14% on ImageNet. However, accuracy gains on the original test sets translate to larger gains on the new test sets. Our results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly \"harder\" images than those found in the original test sets."}, "keywords": ["existing image classifiers"], "citation_intent": "background"} {"citing_id": "2303.17228v1", "cited_id": "1603.00831", "section_title": "Experimental Setup", "citation": "MOT17 #REFR is a multiple object tracking dataset that contains 7 training sequences and 7 test sequences.", "text_before_citation": ["Something-Something V2 (SSv2) #OTHEREFR is another largescale action recognition dataset which focus more on temporal modeling.", "The labels are like \"Pulling something from left to right\", so it is crucial to learn motion information.", "The training set contains 168.9K training videos and the validation set contains 24.7K validation videos.", "We use segment-based sampling from #OTHEREFR to sample 32 frames with 224 \u00d7 224 resolution.", "The augmentation and regularization in SSv2 include random augmentation #OTHEREFR , repeated augmentation #OTHEREFR , random erasing #OTHEREFR , Mixup #OTHEREFR , and CutMix #OTHEREFR , which follow the 
practice in MViT #OTHEREFR ."], "text_after_citation": ["The total frame number is only 11k, so it is not enough to train our S-ViT model.", "We use the CrowdHuman #OTHEREFR dataset and the MOTSynth #OTHEREFR dataset to expand the training data.", "CrowdHuman contains 19.4k images in crowd human scenarios, and MOTSynth contains 764 synthetic video sequences with 1.3m frames generated from Grand Theft Auto V.", "We conduct our experiments with combinations of different data sources and discuss the influence in Sec. #OTHEREFR"], "citing_paper_content": {"title": "Streaming Video Model", "abstract": "Figure 1. Illustration of the proposed streaming video model with a comparison to conventional frame-based architecture and clip-based architecture. (a) The two-stage streaming video model gracefully serves different types of video tasks through a unified architecture. The output of the temporal-aware (T-aware) spatial encoder serves the frame-based tasks, such as MOT, while the output of the temporal decoder serves the sequence-based tasks, such as action recognition. (b) Frame-based architecture, which uses single image model to independently extract spatial features for each frame, is widely used in the frame-based video tasks. (c) Clip-based architecture, which uses video model to produce the spatiotemporal features for an entire clip, is widely used in the sequence-based video tasks."}, "cited_paper_content": {"title": "Mot16: A Benchmark For Multi-Object Tracking", "abstract": "Standardized benchmarks are crucial for the majority of computer vision applications. Although leaderboards and ranking tables should not be over-claimed, benchmarks often provide the most objective measure of performance and are therefore important guides for research. 
Recently, a new benchmark for Multiple Object Tracking, MOTChallenge, was launched with the goal of collecting existing and new data and creating a framework for the standardized evaluation of multiple object tracking methods. The first release of the benchmark focuses on multiple people tracking, since pedestrians are by far the most studied object in the tracking community. This paper accompanies a new release of the MOTChallenge benchmark. Unlike the initial release, all videos of MOT16 have been carefully annotated following a consistent protocol. Moreover, it not only offers a significant increase in the number of labeled boxes, but also provides multiple object classes beside pedestrians and the level of visibility for every single object of interest."}, "keywords": ["7 training sequences", "multiple object"], "citation_intent": "background"} {"citing_id": "2304.04932v1", "cited_id": "1807.04271", "section_title": "Motivation: Approximate Sampling-And-Query Access", "citation": "Consider for instance the inner product estimation from #REFR , a technique also used in several other \"quantum-inspired\" algorithms, which dequantizes the SWAP test (a basic quantum algorithm that can be used to estimate the inner product between two pure states).", "text_before_citation": [", n} we can query the entry u(i)), for the sampling operation we can only get one sample from a distribution p̃ u : {1, . . .", ", n} \u2192 [0, 1] that is close to p u .", "More precisely, the only guarantee is that the total variation distance between p̃ u and p u is at most \u03b5.", "(Note that standard sampling-and-query access corresponds to the case \u03b5 = 0.) 
This variant corresponds to the setting where query access to u can be easily implemented (e.g., when the input is stored in RAM) but perfect sampling access is not available or simply too costly to be implemented.", "It happens that this seemingly minor change is problematic for known dequantization techniques."], "text_after_citation": ["Given two unit-norm vectors u, v \u2208 R n , where u is given via sampling-and-query access, the approach typically works as follows: sample an index i \u2208 {1, . . .", ", n} according to the distribution p u and output the value v (i) u (i) . Note that the expectation of this value is", "EQUATION", "It is easy to show that the variance of this estimator is small, and thus taking the mean for a few samples gives a good approximation of the inner product (u, v) .", "Note that this test, however, is not \"robust\" due to the division by u(i)."], "citing_paper_content": {"title": "Robust Dequantization Of The Quantum Singular Value Transformation And Quantum Machine Learning Algorithms", "abstract": "Several quantum algorithms for linear algebra problems, and in particular quantum machine learning problems, have been \"dequantized\" in the past few years. These dequantization results typically hold when classical algorithms can access the data via lengthsquared sampling. This assumption, which is standard in the field of randomized linear algebra, means that for a unit-norm vector u \u2208 C n , we can sample from the distribution p u : {1,. .. , n} \u2192 [0, 1] defined as p u (i) = |u(i)| 2 for each i \u2208 {1,. .. , n}. Since this distribution corresponds to the distribution obtained by measuring the quantum state |u in the computational basis, length-squared sampling access gives a reasonable classical analogue to the kind of quantum access considered in many quantum algorithms for linear algebra problems. In this work we investigate how robust these dequantization results are. 
We introduce the notion of approximate length-squared sampling, where classical algorithms are only able to sample from a distribution close to the ideal distribution in total variation distance. While quantum algorithms are natively robust against small perturbations, current techniques in dequantization are not. Our main technical contribution is showing how many techniques from randomized linear algebra can be adapted to work under this weaker assumption as well. We then use these techniques to show that the recent low-rank dequantization framework by Chia, Gily\u00e9n, Li, Lin, Tang and Wang (JACM 2022) and the dequantization framework for sparse matrices by Gharibian and Le Gall (STOC 2022), which are both based on the Quantum Singular Value Transformation, can be generalized to the case of approximate length-squared sampling access to the input. We also apply these results to obtain a robust dequantization of many quantum machine learning algorithms, including quantum algorithms for recommendation systems, supervised clustering and low-rank matrix inversion."}, "cited_paper_content": {"title": "A Quantum-Inspired Classical Algorithm For Recommendation Systems", "abstract": "We give a classical analogue to Kerenidis and Prakash\u2019s quantum recommendation system, previously believed to be one of the strongest candidates for provably exponential speedups in quantum machine learning. Our main result is an algorithm that, given an m \u00d7 n matrix in a data structure supporting certain l2-norm sampling operations, outputs an l2-norm sample from a rank-k approximation of that matrix in time O(poly(k)log(mn)), only polynomially slower than the quantum algorithm. As a consequence, Kerenidis and Prakash\u2019s algorithm does not in fact give an exponential speedup over classical algorithms. 
Further, under strong input assumptions, the classical recommendation system resulting from our algorithm produces recommendations exponentially faster than previous classical systems, which run in time linear in m and n. The main insight of this work is the use of simple routines to manipulate l2-norm sampling distributions, which play the role of quantum superpositions in the classical setting. This correspondence indicates a potentially fruitful framework for formally comparing quantum machine learning algorithms to classical machine learning algorithms."}, "keywords": ["\"quantum-inspired\" algorithms"], "citation_intent": "method"} {"citing_id": "2304.12406v1", "cited_id": "1604.01685", "section_title": "Segmentation", "citation": "Cityscapes #REFR is a street-view dataset with high quality annotations, containing 2975 training images and 500 validation images, with a total of 19 object classes.", "text_before_citation": ["Datasets.", "We evaluate on semantic, instance, and panoptic segmentation using 3 datasets: ADE-20K #OTHEREFR is a semantic segmentation dataset containing 150 categories across 20K training images and 2K validation images."], "text_after_citation": ["COCO 2017 #OTHEREFR is an instance segmentation dataset, containing 118K training and 5K validation images."], "citing_paper_content": {"title": "Autofocusformer: Image Segmentation Off The Grid", "abstract": "Real world images often have highly imbalanced content density. Some areas are very uniform, e.g., large patches of blue sky, while other areas are scattered with many small objects. Yet, the commonly used successive grid downsampling strategy in convolutional deep networks treats all areas equally. Hence, small objects are represented in very few spatial locations, leading to worse results in tasks such as segmentation. Intuitively, retaining more pixels representing small objects during downsampling helps to preserve important information. 
To achieve this, we propose AutoFocusFormer (AFF), a local-attention transformer image recognition backbone, which performs adaptive downsampling by learning to retain the most important pixels for the task. Since adaptive downsampling generates a set of pixels irregularly distributed on the image plane, we abandon the classic grid structure. Instead, we develop a novel point-based local attention block, facilitated by a balanced clustering module and a learnable neighborhood merging module, which yields representations for our point-based versions of state-of-the-art segmentation heads. Experiments show that our AutoFocusFormer (AFF) improves significantly over baseline models of similar sizes. * Work done while Chen Ziwen was an intern at Apple Inc. Figure caption: Comparison between on-grid model Swin and off-grid model AFF. AFF downsamples non-uniformly, automatically focusing on more textured, important image regions, and successfully captures the background. The red pixels indicate the locations of the remaining tokens."}, "cited_paper_content": {"title": "The Cityscapes Dataset For Semantic Urban Scene Understanding", "abstract": "Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities.
5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark."}, "keywords": ["street-view dataset"], "citation_intent": "background"} {"citing_id": "2305.02474v1", "cited_id": "1711.05225", "section_title": "Introduction", "citation": "Similarly, deep convolutional neural networks have been shown to achieve superior performance in detecting pneumonia and other pathologies from chest X-rays, compared to practicing radiologists #REFR .", "text_before_citation": ["Academic hospitals are increasingly dedicating resources to bring machine learning (ML) to the bedside and to addressing issues encountered by clinical staff.", "These resources are being utilized across a range of applications including clinical decision support, early warning, treatment recommendation, risk prediction, image informatics, telediagnosis, drug discovery, and intelligent health knowledge systems.", "There are various examples of ML being applied to medical data, including prediction of sepsis #OTHEREFR , in-hospital mortality, prolonged length-of-stay, patient deterioration, and unplanned readmission #OTHEREFR .", "In particular, sepsis is one of the leading causes of in-hospital deaths.", "A large-scale study demonstrated the impact of an early warning system to reduce the lead time for detecting the onset of sepsis, and hence allowing more time for clinicians to prescribe antibiotics #OTHEREFR ."], "text_after_citation": ["These results highlight the potential of ML models when they are strongly integrated into clinical workflows.", "When deployed 
successfully, data-driven models can free time for clinicians #OTHEREFR , improve clinical outcomes #OTHEREFR , reduce costs #OTHEREFR , and provide improved quality care for patients.", "However, most studies remain preliminary, limited to small datasets, and/or implemented in select health sub-systems.", "Integrating with clinical workflows remains crucial #OTHEREFR but, despite recent computational advances and an explosion of health data, deploying ML in healthcare responsibly and reliably faces several operational and engineering challenges, including:", "\u2022 Standardizing data formats,"], "citing_paper_content": {"title": "Mlhops: Machine Learning For Healthcare Operations", "abstract": "Machine Learning Health Operations (MLHOps) is the combination of processes for reliable, efficient, usable, and ethical deployment and maintenance of machine learning models in healthcare settings. This paper provides both a survey of work in this area and guidelines for developers and clinicians to deploy and maintain their own models in clinical practice. We cover the foundational concepts of general machine learning operations, describe the initial setup of MLHOps pipelines (including data sources, preparation, engineering, and tools). We then describe long-term monitoring and updating (including data distribution shifts and model updating) and ethical considerations (including bias, fairness, interpretability, and privacy). This work therefore provides guidance across the full pipeline of MLHOps from conception to initial and ongoing deployment."}, "cited_paper_content": {"title": "Chexnet: Radiologist-Level Pneumonia Detection On Chest X-Rays With Deep Learning", "abstract": "We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. 
Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest X-ray dataset, containing over 100,000 frontal-view X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases."}, "keywords": ["pneumonia", "deep convolutional neural"], "citation_intent": "background"} {"citing_id": "2304.09718v1", "cited_id": "1910.05513", "section_title": "B. Model-Based Reinforcement Learning", "citation": "Both (a) and (b) essentially imply a sort of intrinsic robustness of the ODE flow \u03c6_t(z_0) to perturbations on z_0 #REFR .", "text_before_citation": ["\u2192 E(T) generated by H_\u03b6 for some time t \u2208 [0, T] and propagator E.", "The ansatz is a good choice because of the following two properties of ODE paths: (a) they do not intersect and (b) if paths \u03c6_0^{(A)}, \u03c6_0^{(B)} start close compared to path \u03c6_0^{(C)}, then paths \u03c6_t^{(A)}, \u03c6_t^{(B)} remain close compared to path \u03c6_t^{(C)}.", "Both properties are well known #OTHEREFR for ODEs and become very useful when we try to predict the trajectories from noisy quantum data by imposing strong priors on the space of learnable Hamiltonians.", "Property (b) is a consequence of Gronwall's inequality #OTHEREFR and essentially can be interpreted as: ODE flows that start off closer (w.r.t. the initial condition) stay closer (w.r.t.
the final condition)."], "text_after_citation": ["They constrain the trajectories predicted by the model M_\u03b6 to be intrinsically robust to small noise in the states s_k and inaccuracies in the learned system Hamiltonian H_0^{(L)}(\u03b6).", "We call the SAC equipped with this differentiable ODE model the learnable Hamiltonian model-based SAC (LH-MBSAC) as listed in Algorithm 2.", "Crucially, we note that LH-MBSAC generalizes the SAC by allowing the policy to interact with the ODE model and the physical system.", "LH-MBSAC gracefully falls back to the model-free SAC in the absence of a model with low prediction error that is measured from the performance of the model's predictions on an unseen validation set of interaction data."], "citing_paper_content": {"title": "Sample-Efficient Model-Based Reinforcement Learning For Quantum Control", "abstract": "We propose a model-based reinforcement learning (RL) approach for noisy time-dependent gate optimization with improved sample complexity over model-free RL. Sample complexity is the number of controller interactions with the physical system. Leveraging an inductive bias, inspired by recent advances in neural ordinary differential equations (ODEs), we use an auto-differentiable ODE parametrised by a learnable Hamiltonian ansatz to represent the model approximating the environment whose time-dependent part, including the control, is fully known. Control alongside Hamiltonian learning of continuous time-independent parameters is addressed through interactions with the system. We demonstrate an order of magnitude advantage in the sample complexity of our method over standard model-free RL in preparing some standard unitary gates with closed and open system dynamics, in realistic numerical experiments incorporating single shot measurements, arbitrary Hilbert space truncations and uncertainty in Hamiltonian parameters.
Also, the learned Hamiltonian can be leveraged by existing control methods like GRAPE for further gradient-based optimization with the controllers found by RL as initializations. Our algorithm that we apply on nitrogen vacancy (NV) centers and transmons in this paper is well suited for controlling partially characterised one and two qubit systems."}, "cited_paper_content": {"title": "On Robustness Of Neural Ordinary Differential Equations", "abstract": "Neural ordinary differential equations (ODEs) have been attracting increasing attention in various research domains recently. There have been some works studying optimization issues and approximation capabilities of neural ODEs, but their robustness is still yet unclear. In this work, we fill this important gap by exploring robustness properties of neural ODEs both empirically and theoretically. We first present an empirical study on the robustness of the neural ODE-based networks (ODENets) by exposing them to inputs with various types of perturbations and subsequently investigating the changes of the corresponding outputs. In contrast to conventional convolutional neural networks (CNNs), we find that the ODENets are more robust against both random Gaussian perturbations and adversarial attack examples. We then provide an insightful understanding of this phenomenon by exploiting a certain desirable property of the flow of a continuous-time ODE, namely that integral curves are non-intersecting. Our work suggests that, due to their intrinsic robustness, it is promising to use neural ODEs as a basic block for building robust deep network models. To further enhance the robustness of vanilla neural ODEs, we propose the time-invariant steady neural ODE (TisODE), which regularizes the flow on perturbed data via the time-invariant property and the imposition of a steady-state constraint. 
We show that the TisODE method outperforms vanilla neural ODEs and also can work in conjunction with other state-of-the-art architectural methods to build more robust deep networks."}, "keywords": ["perturbations", "ODE flow"], "citation_intent": "background"} {"citing_id": "2305.01165v1", "cited_id": "1708.09832", "section_title": "Differentiation Of Forged And Authentic Paa Images", "citation": "The fool rate of the iterative refinement-based PAA image generation is 49.375%, which is much higher than the fool rate of ESRGAN, which is only 20% #REFR .", "text_before_citation": ["We employ a random cropping and shuffling method to create a dataset consisting of 40 PAA images, both authentic and forged.", "We then sent this dataset to 14 experts and asked them to differentiate between real and forged images. As shown in Fig.", "3(c) , the labels Y and N indicate whether an image is classified as a real and forged PAA image, respectively.", "Each red point or black cross represents a result from an expert, and the y-axis refers to the number of samples classified as either Y or N.", "Our results showed that the probability of authentic PAA images of human lips being classified as real is 48.04%, while the probability of placing the forged images into the real pool is 46.79%."], "text_after_citation": ["To quantitatively compare the forged PAA images and other images with high-resolution PAA images as the standard, we utilized the Fr\u00e9chet Inception Distance #OTHEREFR , which measures the distance between two images based on the mean and covariance matrices from the Inception V3 model.", "Despite not being trained on authentic PAA images, the Inception V3 model can still differentiate between blood vessel image sets using the Fr\u00e9chet Inception Distance.", "The Fr\u00e9chet Inception Distance between forged and authentic PAA images is", "EQUATION", "where x, y stand for the input image and the ground-truth, \u03bc_x, \u03bc_y are mean matrices of the input and the ground-truth, Tr()
is the trace of a matrix, and \u03a3_x , \u03a3_y are covariance matrices of the input and ground-truth, respectively."], "citing_paper_content": {"title": "Self-Similarity-Based Super-Resolution Of Photoacoustic Angiography From Hand-Drawn Doodles", "abstract": "Deep-learning-based super-resolution photoacoustic angiography (PAA) is a powerful tool that restores blood vessel images from under-sampled images to facilitate disease diagnosis. Nonetheless, due to the scarcity of training samples, PAA super-resolution models often exhibit inadequate generalization capabilities, particularly in the context of continuous monitoring tasks. To address this challenge, we propose a novel approach that employs a super-resolution PAA method trained with forged PAA images. We start by generating realistic PAA images of human lips from hand-drawn curves using a diffusion-based image generation model. Subsequently, we train a self-similarity-based super-resolution model with these forged PAA images. Experimental results show that our method outperforms the super-resolution model trained with authentic PAA images in both original-domain and cross-domain tests. Specifically, our approach boosts the quality of super-resolution reconstruction using the images forged by the deep learning model, indicating that the collaboration between deep learning models can facilitate generalization, despite the limited initial dataset. This approach shows promising potential for exploring zero-shot learning neural networks for vision tasks."}, "cited_paper_content": {"title": "Model-Based Learning For Accelerated, Limited-View 3-D Photoacoustic Tomography", "abstract": "Recent advances in deep learning for tomographic reconstructions have shown great potential to create accurate and high quality images with a considerable speed up. In this paper, we present a deep neural network that is specifically designed to provide high resolution 3-D images from restricted photoacoustic measurements.
The network is designed to represent an iterative scheme and incorporates gradient information of the data fit to compensate for limited view artifacts. Due to the high complexity of the photoacoustic forward operator, we separate training and computation of the gradient information. A suitable prior for the desired image structures is learned as part of the training. The resulting network is trained and tested on a set of segmented vessels from lung computed tomography scans and then applied to in-vivo photoacoustic measurement data."}, "keywords": ["image generation"], "citation_intent": "result"} {"citing_id": "2304.11033v1", "cited_id": "1801.10228", "section_title": "Blockchain Implementation.", "citation": "The widely used Hyperledger Fabric #REFR was not considered, as it is based on PoA-based consensus.", "text_before_citation": ["We do not depend on smart contracts, which made our selection of a concrete blockchain more flexible. Potential choices included Bitcoin #OTHEREFR and Ethereum #OTHEREFR ."], "text_after_citation": ["Between the two, Ethereum offers multiple advantages for our use case, namely a configurable hash difficulty and support for the creation of private chains [31] . Therefore, we use it for our implementation.", "As the client, we use the official implementation Go Ethereum (Geth).", "Note that, at the time of implementation, Ethereum still used PoW consensus. Recently, Ethereum switched to PoS consensus #OTHEREFR .", "When setting up a private network, it would therefore have to be configured to use PoW instead. Alternatively, a different PoW-based blockchain can be used."], "citing_paper_content": {"title": "Decentralized Inverse Transparency With Blockchain", "abstract": "Employee data can be used to facilitate work, but their misusage may pose risks for individuals. Inverse transparency therefore aims to track all usages of personal data, allowing individuals to monitor them to ensure accountability for potential misusage. 
This necessitates a trusted log to establish an agreed-upon and non-repudiable timeline of events. The unique properties of blockchain facilitate this by providing immutability and availability. For power asymmetric environments such as the workplace, permissionless blockchain is especially beneficial as no trusted third party is required. Yet, two issues remain: (1) In a decentralized environment, no arbiter can facilitate and attest to data exchanges. Simple peer-to-peer sharing of data, conversely, lacks the required non-repudiation. (2) With data governed by privacy legislation such as the GDPR, the core advantage of immutability becomes a liability. After a rightful request, an individual's personal data need to be rectified or deleted, which is impossible in an immutable blockchain. To solve these issues, we present Kovacs, a decentralized data exchange and usage logging system for inverse transparency built on blockchain. Its new-usage protocol ensures non-repudiation, and therefore accountability, for inverse transparency. Its one-time pseudonym generation algorithm guarantees unlinkability and enables proof of ownership, which allows data subjects to exercise their legal rights regarding their personal data. With our implementation, we show the viability of our solution. The decentralized communication impacts performance and scalability, but exchange duration and storage size are still reasonable. More importantly, the provided information security meets high requirements. We conclude that Kovacs realizes decentralized inverse transparency through secure and GDPR-compliant use of permissionless blockchain. 
CCS Concepts: \u2022 Computer systems organization \u2192 Peer-to-peer architectures; \u2022 Security and privacy \u2192 Distributed systems security; Privacy-preserving protocols; Cryptography."}, "cited_paper_content": {"title": "Hyperledger Fabric: A Distributed Operating System For Permissioned Blockchains", "abstract": "Hyperledger Fabric is a modular and extensible open-source system for deploying and operating permissioned blockchains. Fabric is currently used in more than 400 prototypes and proofs-of-concept of distributed ledger technology, as well as several production systems, across different industries and use cases. Starting from the premise that there are no \"one-size-fits-all\" solutions, Fabric is the first truly extensible blockchain system for running distributed applications. It supports modular consensus protocols, which allows the system to be tailored to particular use cases and trust models. Fabric is also the first blockchain system that runs distributed applications written in general-purpose programming languages, without systemic dependency on a native cryptocurrency. This stands in sharp contrast to existing blockchain platforms for running smart contracts that require code to be written in domain-specific languages or rely on a cryptocurrency. Furthermore, it uses a portable notion of membership for realizing the permissioned model, which may be integrated with industry-standard identity management. To support such flexibility, Fabric takes a novel approach to the design of a permissioned blockchain and revamps the way blockchains cope with non-determinism, resource exhaustion, and performance attacks. This paper describes Fabric, its architecture, the rationale behind various design decisions, its security model and guarantees, its most prominent implementation aspects, as well as its distributed application programming model. We further evaluate Fabric by implementing and benchmarking a Bitcoin-inspired digital currency.
We show that Fabric achieves end-to-end throughput of more than 3500 transactions per second in certain popular deployment configurations, with sub-second latency."}, "keywords": ["PoA-based consensus", "Hyperledger Fabric"], "citation_intent": "method"} {"citing_id": "2304.03931v1", "cited_id": "1911.05076", "section_title": "Introduction", "citation": "Overall, distortions produced when using Euclidean geometry for non-Euclidean geometric structures are overwhelming, causing the loss of semantic information, and hence resulting in inferior performance #REFR .", "text_before_citation": ["In fact, data in countless applications intrinsically has non-Euclidean geometric structures #OTHEREFR .", "Several studies show that non-Euclidean geometric structures can be better captured by particular forms of Riemannian geometry #OTHEREFR .", "For example, the hyperbolic geometry has a natural expressive ability for the hierarchical structure and is hence used successfully for fine-grained images #OTHEREFR .", "The spherical geometry is shown as a suitable choice for face images that have the cyclical structure #OTHEREFR .", "In addition to the geometric structures discussed above, natural data may be diverse and irregular in structure, e.g., data exhibits hierarchical forms in some regions and cyclical forms in others #OTHEREFR ."], "text_after_citation": ["In this paper, we study how to attain suitable non-Euclidean geometry to capture the intrinsic geometric structures of data during continual learning.", "To achieve our goal, we have to face two challenges (see Fig. 
1 ).", "(1) Non-stationary stream of data will inevitably increase the complexity of intrinsic geometric structures.", "In other words, fixing the geometry of the underlying space cannot always match new and unseen data in continual learning.", "For example, more and more complex hierarchies in a data stream bring more leaf nodes, requiring a faster growing space volume with the radius, which conflicts with a fixed geometry #OTHEREFR ."], "citing_paper_content": {"title": "Exploring Data Geometry For Continual Learning", "abstract": "Continual learning aims to efficiently learn from a nonstationary stream of data while avoiding forgetting the knowledge of old data. In many practical applications, data complies with non-Euclidean geometry. As such, the commonly used Euclidean space cannot gracefully capture non-Euclidean geometric structures of data, leading to inferior results. In this paper, we study continual learning from a novel perspective by exploring data geometry for the non-stationary stream of data. Our method dynamically expands the geometry of the underlying space to match growing geometric structures induced by new data, and prevents forgetting by keeping geometric structures of old data into account. In doing so, making use of the mixed curvature space, we propose an incremental search scheme, through which the growing geometric structures are encoded. Then, we introduce an angular-regularization loss and a neighbor-robustness loss to train the model, capable of penalizing the change of global geometric structures and local geometric structures. Experiments show that our method achieves better performance than baseline methods designed in Euclidean space."}, "cited_paper_content": {"title": "Constant Curvature Graph Convolutional Networks", "abstract": "Interest has been rising lately towards methods representing data in non-Euclidean spaces, e.g. 
hyperbolic or spherical, that provide specific inductive biases useful for certain real-world data properties, e.g. scale-free, hierarchical or cyclical. However, the popular graph neural networks are currently limited in modeling data only via Euclidean geometry and associated vector space operations. Here, we bridge this gap by proposing mathematically grounded generalizations of graph convolutional networks (GCN) to (products of) constant curvature spaces. We do this by i) introducing a unified formalism that can interpolate smoothly between all geometries of constant curvature, ii) leveraging gyro-barycentric coordinates that generalize the classic Euclidean concept of the center of mass. Our class of models smoothly recover their Euclidean counterparts when the curvature goes to zero from either side. Empirically, we outperform Euclidean GCNs in the tasks of node classification and distortion minimization for symbolic data exhibiting non-Euclidean behavior, according to their discrete curvature."}, "keywords": ["non-Euclidean geometric structures"], "citation_intent": "background"} {"citing_id": "2304.07699v1", "cited_id": "1909.02027", "section_title": "Related Works", "citation": "First, it enhances representations by incorporating strong prior knowledge from a network pre-trained on external data in the intention domain (i.e., CLINC dataset #REFR ) and adding a masked language modeling (MLM) task.", "text_before_citation": ["We have conducted a pilot study using the CDAC+ #OTHEREFR algorithm, which first captures pairwise sentence relationships with the guidance of labeled data and then refines cluster assignments with the DEC loss.", "Another of our works, DeepAligned #OTHEREFR , initializes intent representations under the supervision of labeled data and then iteratively performs clustering and representation learning, aligning cluster centroids between adjacent iterations to obtain consistent self-supervised signals.", "DCSC #OTHEREFR improves the 
pretraining stage by applying contrastive losses to both labeled and unlabeled data.", "It mainly uses the SwAV #OTHEREFR algorithm for unsupervised learning, which requires each sample to predict the swapped view and uses Sinkhorn-Knopp #OTHEREFR to produce soft cluster assignments.", "MTP-CLNN #OTHEREFR is the current state-of-the-art method, which has two key features."], "text_after_citation": ["Second, it adapts the SCAN algorithm #OTHEREFR to the semi-supervised setting, creating positive pairs with K-nearest neighbors or samples with the same labels for contrastive learning.", "However, this method relies heavily on the selected external data, and its performance drops dramatically in a purely unsupervised scenario #OTHEREFR .", "Open set recognition (OSR) #OTHEREFR and open intent detection #OTHEREFR , #OTHEREFR are two similar fields in computer vision and natural language processing, respectively.", "Both fields focus on detecting unseen image-based or text-based classes during testing. However, they have limitations in distinguishing fine-grained new classes.", "CD-OSR #OTHEREFR extends OSR to new class discovery, but it doesn't make use of unlabeled data during training."], "citing_paper_content": {"title": "Usnid: A Framework For Unsupervised And Semi-Supervised New Intent Discovery", "abstract": "New intent discovery is of great value to natural language processing, allowing for a better understanding of user needs and providing friendly services. However, most existing methods struggle to capture the complicated semantics of discrete text representations when limited or no prior knowledge of labeled data is available. To tackle this problem, we propose a novel framework called USNID for unsupervised and semi-supervised new intent discovery, which has three key technologies. First, it takes full use of unsupervised or semi-supervised data to mine shallow semantic similarity relations and provide well-initialized representations for clustering. 
Second, it designs a centroid-guided clustering mechanism to address the issue of cluster allocation inconsistency and provide high-quality self-supervised targets for representation learning. Third, it captures high-level semantics in unsupervised or semi-supervised data to discover fine-grained intent-wise clusters by optimizing both cluster-level and instance-level objectives. We also propose an effective method for estimating the cluster number in open-world scenarios without knowing the number of new intents beforehand. USNID performs exceptionally well on several intent benchmark datasets, achieving new state-of-the-art results in unsupervised and semi-supervised new intent discovery and demonstrating robust performance with different cluster numbers. Index Terms-new intent discovery, unsupervised and semi-supervised clustering, self-supervised learning, deep neural network."}, "cited_paper_content": {"title": "An Evaluation Dataset For Intent Classification And Out-Of-Scope Prediction", "abstract": "Task-oriented dialog systems need to know when a query falls outside their range of supported intents, but current text classification corpora only define label sets that cover every example. We introduce a new dataset that includes queries that are out-of-scope---i.e., queries that do not fall into any of the system's supported intents. This poses a new challenge because models cannot assume that every query at inference time belongs to a system-supported intent class. Our dataset also covers 150 intent classes over 10 domains, capturing the breadth that a production task-oriented agent must handle. We evaluate a range of benchmark classifiers on our dataset along with several different out-of-scope identification schemes. We find that while the classifiers perform well on in-scope intent classification, they struggle to identify out-of-scope queries. 
Our dataset and evaluation fill an important gap in the field, offering a way of more rigorously and realistically benchmarking text classification in task-driven dialog systems."}, "keywords": ["intention domain", "masked language modeling"], "citation_intent": "method"} {"citing_id": "2303.12068v1", "cited_id": "1409.0473", "section_title": "The History Of Attention", "citation": "This notion of context is the motivation behind the introduction of the attention mechanism in 2015 #REFR .", "text_before_citation": [], "text_after_citation": ["Before this, language translation mostly relied on encoder-decoder architectures: recurrent neural networks (RNNs) #OTHEREFR and in particular long short-term memory (LSTM) networks were used to model the relationship among words #OTHEREFR .", "Specifically, each word of an input sentence is processed by the encoder sequentially.", "At each step, the past and present information are summarized and encoded into a fixed-length vector.", "In the end, the encoder has processed every word, and outputs a final fixed-length vector, which summarizes all input information.", "This final vector is then decoded, and finally translates the input information into the target language."], "citing_paper_content": {"title": "Transformers And Visual Transformers", "abstract": "Transformers were initially introduced for natural language processing (NLP) tasks, but they were quickly adopted by most deep learning fields, including computer vision. They measure the relationships between pairs of input tokens (words in the case of text strings, parts of images for visual Transformers), termed attention. The cost is exponential with the number of tokens. For image classification, the most common Transformer Architecture uses only the Transformer Encoder in order to transform the various input tokens. However, there are also numerous other applications in which the decoder part of the traditional Transformer Architecture is also used. 
Here, we first introduce the Attention mechanism (Section 1), and then the Basic Transformer Block including the Vision Transformer (Section 2). Next, we discuss some improvements of visual Transformers to account for small datasets or less computation (Section 3). Finally, we introduce Visual Transformers applied to tasks other than image classification, such as detection, segmentation, generation and training without labels (Section 4) and other domains, such as video or multimodality using text or audio data (Section 5)."}, "cited_paper_content": {"title": "Neural Machine Translation By Jointly Learning To Align And Translate", "abstract": "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. 
Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition."}, "keywords": ["attention mechanism"], "citation_intent": "background"} {"citing_id": "2303.17573v1", "cited_id": "1603.02754", "section_title": "Model Training, Evaluation And Explanation", "citation": "To model our dataset, we applied a standard set of regressor models like XGBoost, CatBoost, SVM, etc., and found XGBoost #REFR to be the best-performing one.", "text_before_citation": [], "text_after_citation": ["XGBoost, a widely used algorithm for regression, classification, and ranking problems, stands for eXtreme Gradient Boosting, and it implements a gradient boosting decision tree algorithm.", "XGBoost-regressor is an implementation of the XGBoost algorithm for regression problems.", "It works by building a series of decision trees where each tree tries to correct the errors made by the previous tree.", "In the end, the algorithm combines the results of all trees to make the final prediction.", "We conducted an extensive hyperparameter search of the XGBoost regressor by experimenting with different learning rates, max depth of the tree, number of estimators to use, etc."], "citing_paper_content": {"title": "Using Ai To Measure Parkinson'S Disease Severity At Home", "abstract": "We present an artificial intelligence system to remotely assess the motor performance of individuals with Parkinson's disease (PD). Participants performed a motor task (i.e., tapping fingers) in front of a webcam, and data from 250 global participants were rated by three expert neurologists following the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS). The neurologists' ratings were highly reliable, with an intra-class correlation coefficient (ICC) of 0.88. We developed computer algorithms to obtain objective measurements that align with the MDS-UPDRS guideline and are strongly correlated with the neurologists' ratings. 
Our machine learning model trained on these measures outperformed an MDS-UPDRS certified rater, with a mean absolute error (MAE) of 0.59 compared to the rater's MAE of 0.79. However, the model performed slightly worse than the expert neurologists (0.53 MAE). The methodology can be replicated for similar motor tasks, providing the possibility of evaluating individuals with PD and other movement disorders remotely, objectively, and in areas with limited access to neurological care."}, "cited_paper_content": {"title": "Xgboost: A Scalable Tree Boosting System", "abstract": "Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges. We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems."}, "keywords": ["SVM", "XGBoost"], "citation_intent": "method"} {"citing_id": "2305.01275v1", "cited_id": "1606.00915", "section_title": "Scribbles", "citation": "Using the best pseudo labels from scribble prompts, DeepLab-v2 #REFR can reach 75.9% and 76.6% mIoU scores on the validation and test sets, as shown in Tab. 
3.", "text_before_citation": ["2, we find that sampling 20% scribble pixels outperforms inputting all scribble pixels in an object by 6.4%.", "Besides, iteratively inputting scribble pixels of a class can further improve the performance by 3.3%.", "We analyze that iterative input is more effective for scribbles and points than image-level labels due to accurate point locations.", "Finally, when inputting the scribble pixels of one class, the scribble pixels of other classes can be regarded as negative points.", "We can see that adding negative points can further improve the quality of pseudo labels."], "text_after_citation": [], "citing_paper_content": {"title": "Segment Anything Is A Good Pseudo-Label Generator For Weakly Supervised Semantic Segmentation", "abstract": "Weakly supervised semantic segmentation with weak labels is a long-lived illposed problem. Mainstream methods mainly focus on improving the quality of pseudo labels. In this report, we attempt to explore the potential of 'prompt to masks' from the powerful class-agnostic large segmentation model, i.e., segmentanything. Specifically, different weak labels are used as prompts to the segmentanything model, generating precise class masks. The class masks are utilized to generate pseudo labels to train the segmentation networks. We have conducted extensive experiments on PASCAL VOC 2012 dataset. Experiments demonstrate that segment-anything can serve as a good pseudo-label generator. The code will be made publicly available. * * denotes equal contribution. Preprint. Under review."}, "cited_paper_content": {"title": "Deeplab: Semantic Image Segmentation With Deep Convolutional Nets, Atrous Convolution, And Fully Connected Crfs", "abstract": "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. 
First, we highlight convolution with upsampled filters, or \u2018atrous convolution\u2019, as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-view, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \u201cDeepLab\u201d system sets the new state-of-the-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. 
All of our code is made publicly available online."}, "keywords": ["best pseudo labels", "DeepLab-v2"], "citation_intent": "method"} {"citing_id": "2303.16372v2", "cited_id": "1808.06651", "section_title": "Definition 6.3 (Input Lipschitzness)", "citation": "We extend the analysis of one such algorithm, the Projected Noisy Stochastic Gradient Descent (PNSGD) #REFR , restated in Algorithm 1, and establish that the algorithm satisfies (\u03b1, \u03b5)-R\u00e9nyi mDP with a lower noise magnitude than that required for (\u03b1, \u03b5)-R\u00e9nyi DP. Proposition 6.5.", "text_before_citation": ["Standard arguments on the composition of privacy mechanisms render the differentially private variant of SGD (q\u03b5, q\u03b4)\u2212differentially private at each step, where q = L/N is the lot size.", "We note that, by replacing the global sensitivity assumption with input Lipschitzness, the Gaussian mechanism ensures (\u03b5_L, \u03b4)-mDP for appropriately scaled noise, as in Proposition 6.4.", "Thus, DP-SGD inherently incorporates metric differential privacy, with privacy accounting based on standard composition theorems #OTHEREFR .", "Metric Differential Privacy in PN-SGD When analysing the privacy of learning algorithms that require iterative updates on an intermediate solution, it is common practice to ensure privacy at each iteration and argue about the cumulative loss of privacy via composition theorems.", "Another popular direction is the theoretical analysis of Noisy Stochastic Gradient Descent to formalize privacy amplifications under certain assumptions and obtain bounds on the degradation of privacy across iterations."], "text_after_citation": ["Let K \u2282 R^d be a convex set and let {f(\u00b7, x)}_{x\u2208X} be a family of convex, \u03b2-smooth functions over K, where the gradients are L_input-input Lipschitz. 
Furthermore, assume X is a bounded set.", "Then, for any \u03b7 \u2264 2/\u03b2 and \u03b1 > 1, initializing w_0 \u2208 K and dataset S \u2208 X^n , PNSGD run with", "\u03c3^2 \u2265 2\u03b1 L_input^2 diam(X) / (\u03b5_L (n\u2212t+1))", "satisfies (\u03b5_L, \u03b1) metric differential privacy.", "Proof. See Appendix 9.8 Claim 6.6."], "citing_paper_content": {"title": "Non-Asymptotic Lower Bounds For Training Data Reconstruction", "abstract": "We investigate semantic guarantees of private learning algorithms for their resilience to training Data Reconstruction Attacks (DRAs) by informed adversaries. To this end, we derive non-asymptotic minimax lower bounds on the adversary's reconstruction error against learners that satisfy differential privacy (DP) and metric differential privacy (mDP). Furthermore, we demonstrate that our lower bound analysis for the latter also covers the high dimensional regime, wherein, the input data dimensionality may be larger than the adversary's query budget. Motivated by the theoretical improvements conferred by metric DP, we extend the privacy analysis of popular deep learning algorithms such as DP-SGD[1] and Projected Noisy SGD[2] to cover the broader notion of metric differential privacy."}, "cited_paper_content": {"title": "Privacy Amplification By Iteration", "abstract": "Many commonly used learning algorithms work by iteratively updating an intermediate solution using one or a few data points in each iteration. Analysis of differential privacy for such algorithms often involves ensuring privacy of each step and then reasoning about the cumulative privacy cost of the algorithm. This is enabled by composition theorems for differential privacy that allow releasing of all the intermediate results. In this work, we demonstrate that for contractive iterations, not releasing the intermediate results strongly amplifies the privacy guarantees. 
We describe several applications of this new analysis technique to solving convex optimization problems via noisy stochastic gradient descent. For example, we demonstrate that a relatively small number of non-private data points from the same distribution can be used to close the gap between private and non-private convex optimization. In addition, we demonstrate that we can achieve guarantees similar to those obtainable using the privacy-amplification-by-sampling technique in several natural settings where that technique cannot be applied."}, "keywords": ["lower noise magnitude"], "citation_intent": "background"} {"citing_id": "2304.11072v1", "cited_id": "1807.04320", "section_title": "Rq2: Can Our Classifier Learn Vulnerability In A Biased Setting?", "citation": "Nevertheless, using either weighted loss or Focal Loss, the accuracy of our model drops by almost 1% compared to #REFR , indicating that these previous models are highly biased with an assumption of non-vulnerability.", "text_before_citation": ["However, \u03b3 is used as an exponent in Equation 7, so when we run RoBERTa-GCN w/ WL, \u03b3 is set to 0 in order to ignore its effect.", "Observing the last two rows in Table 5 , we find that, initially, we tested our model using Weighted Loss (WL) only, and later we tested with Focal Loss, which is a combination of weighted loss \u03b1 with the hyperparameter \u03b3.", "In both cases, we observe that our model has surpassed previous models in terms of Precision, Recall, and F1 score, indicating lower false positive and false negative rates with WL and with FL.", "However, these numbers improved slightly when compared with performance between weighted loss and Focal Loss, with the exception of the precision metric.", "From the results, it is observed that with Focal Loss, we achieved an improvement of 11.18% in Precision, 1.06% in Recall, and 0.61% on our F1 score compared to the weighted loss."], "text_after_citation": ["Compared to the next best model 
#OTHEREFR , our model with Focal Loss shows an improvement of 22.91% in Precision, 23.53% in Recall, and 18.04% on F1 score.", "Thus, Focal Loss improves precision, recall and F1 on imbalanced data while not causing a negative impact on a balanced dataset only by adjusting the parameters \u03b1 and \u03b3. RQ3: Is our classifier generalized enough to detect vulnerabilities in N-day and zero-day program samples? We evaluated our classifier's performance based on its ability to accurately predict vulnerability with 273 N-day real-world sample programs. These sample programs are never used during training.", "We also used 4 zero-day examples in order to evaluate our classifier on predicting zero-day vulnerabilities as well.", "The classifier predicts the vulnerability class if the vulnerability exists in the code and predicts non-vulnerable when the vulnerability does not exist.", "Out of these 273 N-day and 4 zero-day code samples, some vulnerability classes exist that are not part of our VulF dataset from Table 1."], "citing_paper_content": {"title": "An Unbiased Transformer Source Code Learning With Semantic Vulnerability Graph", "abstract": "Over the years, open-source software systems have become prey to threat actors. Even highly-adopted software has been crippled by unforeseeable attacks, leaving millions of devices exposed. Even as open-source communities act quickly to patch the breach, code vulnerability screening should be an integral part of agile software development from the beginning. Unfortunately, current vulnerability screening techniques are ineffective at identifying novel vulnerabilities or providing developers with code vulnerability and classification. Furthermore, the datasets used for vulnerability learning often exhibit distribution shifts from the real-world testing distribution due to novel attack strategies deployed by adversaries and as a result, the machine learning model's performance may be hindered or biased. 
To address these issues, we propose a joint interpolated multitasked unbiased vulnerability classifier comprising a transformer \"RoBERTa\" and graph convolution neural network (GCN). We present a training process utilizing a semantic vulnerability graph (SVG) representation from source code, created by integrating edges from a sequential flow, control flow, and data flow, as well as a novel flow dubbed Poacher Flow (PF). Poacher flow edges reduce the gap between dynamic and static program analysis and handle complex long-range dependencies. Moreover, our approach reduces biases of classifiers regarding unbalanced datasets by integrating Focal Loss objective function along with SVG. Remarkably, experimental results show that our classifier outperforms state-of-the-art results on vulnerability detection with fewer false negatives and false positives. After testing our model across multiple datasets, it shows an improvement of at least 2.41% and 18.75% in the best-case scenario. Evaluations using N-day program samples demonstrate that our proposed approach achieves a 93% accuracy and was able to detect 4 zero-day vulnerabilities from popular GitHub repositories. Our code and data are"}, "cited_paper_content": {"title": "Automated Vulnerability Detection In Source Code Using Deep Representation Learning", "abstract": "Increasing numbers of software vulnerabilities are discovered every year whether they are reported publicly or discovered internally in proprietary code. These vulnerabilities can pose serious risk of exploit and result in system compromise, information leaks, or denial of service. We leveraged the wealth of C and C++ open-source code available to develop a large-scale function-level vulnerability detection system using machine learning. 
To supplement existing labeled vulnerability datasets, we compiled a vast dataset of millions of open-source functions and labeled it with carefully-selected findings from three different static analyzers that indicate potential exploits. The labeled dataset is available at: this https URL. Using these datasets, we developed a fast and scalable vulnerability detection tool based on deep feature representation learning that directly interprets lexed source code. We evaluated our tool on code from both real software packages and the NIST SATE IV benchmark dataset. Our results demonstrate that deep feature representation learning on source code is a promising approach for automated software vulnerability detection."}, "keywords": ["vulnerability"], "citation_intent": "result"} {"citing_id": "2303.18190v1", "cited_id": "2004.01670", "section_title": "Qualitative Language Model Risk Assessment", "citation": "How harms present is often highly language-dependent, and so each language needs its own dataset, but the distribution of languages represented in harm detection data is skewed #REFR .", "text_before_citation": ["Further, automated systems project an unknown set of values onto the result. 
How their creators define e.g.", "\"toxicity\" and represent it through data is often not transparent.", "Thus, not only is it hard to discover when novel forms of harm slip past undetected, it is also uncertain how well their classifications match the goal of an assessment.", "Automated systems are frequently limited to well-resourced languages.", "The efficacy of harm detection classifiers is limited by the amount of language-specific data."], "text_after_citation": ["Automated systems degrade over time.", "Forms of linguistic expression evolve, but a classifier is frozen in time when it is trained (or, specifically, when its training data was gathered).", "For example, some APIs would consistently mark any message containing the term \"toot\" as profane, causing errors first apparent when applied to Mastodon.", "Automating evaluation stops assessors from learning.", "A way to become better at assessing LM risks is to granularly understand their data, and output behaviours."], "citing_paper_content": {"title": "Assessing Language Model Deployment With Risk Cards", "abstract": "This paper introduces RiskCards, a framework for structured assessment and documentation of risks associated with an application of language models. As with all language, text generated by language models can be harmful, or used to bring about harm. Automating language generation adds both an element of scale and also more subtle or emergent undesirable tendencies to the generated text. Prior work establishes a wide variety of language model harms to many different actors: existing taxonomies identify categories of harms posed by language models; benchmarks establish automated tests of these harms; and documentation standards for models, tasks and datasets encourage transparent reporting. 
However, there is no risk-centric framework for documenting the complexity of a landscape in which some risks are shared across models and contexts, while others are specific, and where certain conditions may be required for risks to manifest as harms. RiskCards address this methodological gap by providing a generic framework for assessing the use of a given language model in a given scenario. Each RiskCard makes clear the routes for the risk to manifest harm, their placement in harm taxonomies, and example prompt-output pairs. While RiskCards are designed to be open-source, dynamic and participatory, we present a \"starter set\" of RiskCards taken from a broad literature survey, each of which details a concrete risk presentation. Language model RiskCards initiate a community knowledge base which permits the mapping of risks and harms to a specific model or its application scenario, ultimately contributing to a better, safer and shared understanding of the risk landscape. CCS Concepts: \u2022 Computing methodologies \u2192 Natural language processing; \u2022 Security and privacy \u2192 Human and societal aspects of security and privacy."}, "cited_paper_content": {"title": "Directions In Abusive Language Training Data: Garbage In, Garbage Out", "abstract": "Data-driven analysis and detection of abusive online content covers many different tasks, phenomena, contexts, and methodologies. This paper systematically reviews abusive language dataset creation and content in conjunction with an open website for cataloguing abusive language data. This collection of knowledge leads to a synthesis providing evidence-based recommendations for practitioners working with this complex and highly diverse data."}, "keywords": ["language", "harm detection data"], "citation_intent": "background"} {"citing_id": "2304.09915v1", "cited_id": "1908.07919", "section_title": "C. 
Dual Context Network With Transformer", "citation": "We remove the fifth stage and the last three pooling layers to preserve a high-resolution feature map, which has been proven to be effective in segmentation #REFR . Therefore, H_1 = H/4, W_1 = W/4.", "text_before_citation": ["In this section, we first introduce the architecture of the proposed DCN-T.", "Since the classification results are from the derived tri-spectral image set, the final prediction can be obtained by voting. The details will be presented later.", "1) Architecture: Combining the proposed DCM with an ImageNet pretrained backbone network, we obtain a complete segmentation network named DCN-T, where \"T\" represents the transformer used in DCM.", "After reviewing the literature in the remote sensing community #OTHEREFR , it has been found that VGG-16 #OTHEREFR is one of the most frequently utilized backbones.", "As a result, we adopt the VGG-16 network as the primary backbone for our study."], "text_after_citation": ["We add a 3 \u00d7 3 standard convolution to reduce the dimension to C = 256 in the feature F.", "In the DCM, we use one transformer encoder to obtain RACs, and then one transformer encoder and a single-layer transformer decoder to separately encode the relationships between homogeneous areas and aggregate global contexts to form 2-D features; therefore, N_D = N_E = 1.", "In addition, the MLP ratio is always set to 2 for all transformer layers.", "Finally, the input and output of the DCM, i.e., F and F_GAC , are concatenated together.", "The prediction is obtained after a convolutional layer and a bilinear upsampling layer."], "citing_paper_content": {"title": "Dcn-T: Dual Context Network With Transformer For Hyperspectral Image Classification", "abstract": "Hyperspectral image (HSI) classification is challenging due to spatial variability caused by complex imaging conditions. 
Prior methods suffer from limited representation ability, as they train specially designed networks from scratch on limited annotated data. We propose a tri-spectral image generation pipeline that transforms HSI into high-quality tri-spectral images, enabling the use of off-the-shelf ImageNet pretrained backbone networks for feature extraction. Motivated by the observation that there are many homogeneous areas with distinguished semantic and geometric properties in HSIs, which can be used to extract useful contexts, we propose an end-to-end segmentation network named DCN-T. It adopts transformers to effectively encode regional adaptation and global aggregation spatial contexts within and between the homogeneous areas discovered by similarity-based clustering. To fully exploit the rich spectra of the HSI, we adopt an ensemble approach where all segmentation results of the tri-spectral images are integrated into the final prediction through a voting scheme. Extensive experiments on three public benchmarks show that our proposed method outperforms state-of-the-art methods for HSI classification. The code will be released at https://github.com/DotWang/DCN-T."}, "cited_paper_content": {"title": "Deep High-Resolution Representation Learning For Visual Recognition", "abstract": "High-resolution representations are essential for position-sensitive vision problems, such as human pose estimation, semantic segmentation, and object detection. Existing state-of-the-art frameworks first encode the input image as a low-resolution representation through a subnetwork that is formed by connecting high-to-low resolution convolutions \\emph{in series} (e.g., ResNet, VGGNet), and then recover the high-resolution representation from the encoded low-resolution representation. Instead, our proposed network, named as High-Resolution Network (HRNet), maintains high-resolution representations through the whole process. 
There are two key characteristics: (i) Connect the high-to-low resolution convolution streams \\emph{in parallel}; (ii) Repeatedly exchange the information across resolutions. The benefit is that the resulting representation is semantically richer and spatially more precise. We show the superiority of the proposed HRNet in a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that the HRNet is a stronger backbone for computer vision problems. All the codes are available at~{\\url{this https URL}}."}, "keywords": ["segmentation", "high-resolution feature map"], "citation_intent": "method"} {"citing_id": "2304.03833v1", "cited_id": "1804.02717", "section_title": "Ablate Reference State Initialization In Dmfd", "citation": "An improvement we made over the original DMfD algorithm is to disable matching with expert states, known as RSI-IR, first proposed in #REFR .", "text_before_citation": [], "text_after_citation": ["We justify this improvement in this ablation, labeled ABL5 in Fig. 7 . As shown in Fig.", "10b , removing RSI and IR has a net positive effect throughout training, and around 10% on the final policy performance.", "This means that matching expert states exactly via imitation reward does not help, even during the initial stages of training when the policy is randomly initialized.", "We believe this is because RSI helps when there are hard-to-reach intermediate states that the policy cannot reach during the initial stages of training.", "This is true for dynamic or long-horizon tasks, such as karate chops and roundhouse kicks."], "citing_paper_content": {"title": "Bridging Action Space Mismatch In Learning From Demonstrations", "abstract": "Learning from demonstrations (LfD) methods guide learning agents to a desired solution using demonstrations from a teacher. 
While some LfD methods can handle small mismatches in the action spaces of the teacher and student, here we address the case where the teacher demonstrates the task in an action space that can be substantially different from that of the student, thereby inducing a large action space mismatch. We bridge this gap with a framework, Morphological Adaptation in Imitation Learning (MAIL), that allows training an agent from demonstrations by other agents with significantly different morphologies (from the student or each other). MAIL is able to learn from suboptimal demonstrations, so long as they provide some guidance towards a desired solution. We demonstrate MAIL on challenging household cloth manipulation tasks and introduce a new DRY CLOTH task (cloth manipulation in 3D with obstacles). In these tasks, we train a visual control policy for a robot with one end-effector using demonstrations from a simulated agent with two end-effectors. MAIL shows up to 27% improvement over LfD and non-LfD baselines. It is deployed to a real Franka Panda robot, and can handle multiple variations in cloth properties (color, thickness, size, material) and pose (rotation and translation). We further show generalizability to transfers from n-to-m end-effectors, in the context of a simple rearrangement task."}, "cited_paper_content": {"title": "Deepmimic: Example-Guided Deep Reinforcement Learning Of Physics-Based Character Skills", "abstract": "A longstanding goal in character animation is to combine data-driven specification of behavior with a system that can execute a similar behavior in a physical simulation, thus enabling realistic responses to perturbations and environmental variation. We show that well-known reinforcement learning (RL) methods can be adapted to learn robust control policies capable of imitating a broad range of example motion clips, while also learning complex recoveries, adapting to changes in morphology, and accomplishing user-specified goals.
Our method handles keyframed motions, highly-dynamic actions such as motion-captured flips and spins, and retargeted motions. By combining a motion-imitation objective with a task objective, we can train characters that react intelligently in interactive settings, e.g., by walking in a desired direction or throwing a ball at a user-specified target. This approach thus combines the convenience and motion quality of using motion clips to define the desired style and appearance, with the flexibility and generality afforded by RL methods and physics-based animation. We further explore a number of methods for integrating multiple clips into the learning process to develop multi-skilled agents capable of performing a rich repertoire of diverse skills. We demonstrate results using multiple characters (human, Atlas robot, bipedal dinosaur, dragon) and a large variety of skills, including locomotion, acrobatics, and martial arts."}, "keywords": ["improvement", "expert states"], "citation_intent": "method"} {"citing_id": "2303.16447v1", "cited_id": "2001.06659", "section_title": "Mvas Versus Mvps", "citation": "B-MVPS #REFR achieves the best scores in 4 objects due to the usage of calibrated light information.", "text_before_citation": ["We use 15-view azimuth maps for optimization and leave out 5 views for testing, following PS-NeRF #OTHEREFR .", "The azimuth maps are computed from the normal maps estimated by the self-calibrated photometric stereo method SDPS #OTHEREFR .", "Evaluation metrics We use Chamfer distance (CD) and F-score for geometry accuracy #OTHEREFR , and mean angular error (MAE) for normal accuracy #OTHEREFR .", "For CD and F-score, we only consider visible points by casting rays for all pixels and finding the first ray-mesh intersections #OTHEREFR .", "Table 2 reports the geometry accuracy of the recovered DiLiGenT-MV surfaces."], "text_after_citation": ["UA-MVPS #OTHEREFR distorts the surface reconstruction by not considering the multi-view consistency.", 
"MVAS outperforms PS-NeRF #OTHEREFR in 3 objects without modeling the rendering process. Figure 7 visually compares recovered \"Buddha\" and \"Reading\" objects.", "Despite not having the best numerical scores, our method produces comparable results.", "Lower scores for these objects are mainly due to our method's sensitivity to inaccurate silhouette masks provided by DiLiGenT-MV #OTHEREFR .", "We project the GT surface onto the image plane and find up to 10-pixel inconsistency between the projected region and the GT mask. Thus, the silhouette loss Eq."], "citing_paper_content": {"title": "Multi-View Azimuth Stereo Via Tangent Space Consistency", "abstract": "Figure 1. 3D reconstruction from calibrated multi-view azimuth maps (3 out of 31 are shown). An azimuth angle indicates the surface normal's orientation in the image plane, and an azimuth map records the azimuth angles across the entire surface. We show that azimuth maps can be effectively used for shape and normal recovery. Color images are for reference only and are not used in shape optimization."}, "cited_paper_content": {"title": "Multi-View Photometric Stereo: A Robust Solution And Benchmark Dataset For Spatially Varying Isotropic Materials", "abstract": "We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo (MVPS) technique that works for general isotropic materials. Our algorithm is suitable for perspective cameras and nearby point light sources. Our data capture setup is simple, which consists of only a digital camera, some LED lights, and an optional automatic turntable. From a single viewpoint, we use a set of photometric stereo images to identify surface points with the same distance to the camera. We collect this information from multiple viewpoints and combine it with structure-from-motion to obtain a precise reconstruction of the complete 3D shape. 
The spatially varying isotropic bidirectional reflectance distribution function (BRDF) is captured by simultaneously inferring a set of basis BRDFs and their mixing weights at each surface point. In experiments, we demonstrate our algorithm with two different setups: a studio setup for highest precision and a desktop setup for best usability. According to our experiments, under the studio setting, the captured shapes are accurate to 0.5 millimeters and the captured reflectance has a relative root-mean-square error (RMSE) of 9%. We also quantitatively evaluate state-of-the-art MVPS on a newly collected benchmark dataset, which is publicly available for inspiring future research."}, "keywords": ["calibrated light information"], "citation_intent": "background"} {"citing_id": "2303.12993v1", "cited_id": "1905.02249", "section_title": "Ablation Study On Defense Settings", "citation": "As shown in Table 4 , UDA and ReMixMatch can still have similar robustness against backdoor attacks compared with MixMatch #REFR under our proposed ASD.", "text_before_citation": ["However, it can wrongly introduce a large number of poisoned samples into D_C and result in the failure of our ASD, especially under WaNet and CLB.", "Hence, it is necessary to control the speed to build D_C and constrain the number of samples in D_C during stage 1.
Different semi-supervised learning methods.", "We treat the samples in D_P as unlabeled and apply semi-supervised learning to learn from both data pools.", "In this experiment, we show our ASD can work well with various semi-supervised learning methods, e.g., UDA #OTHEREFR and ReMixMatch #OTHEREFR .", "We keep all settings unchanged."], "text_after_citation": ["More details about these three semi-supervised learning methods are in Appendix J."], "citing_paper_content": {"title": "Backdoor Defense Via Adaptively Splitting Poisoned Dataset", "abstract": "Backdoor defenses have been studied to alleviate the threat of deep neural networks (DNNs) being backdoor attacked and thus maliciously altered. Since DNNs usually adopt some external training data from an untrusted third party, a robust backdoor defense strategy during the training stage is of importance. We argue that the core of training-time defense is to select poisoned samples and to handle them properly. In this work, we summarize the training-time defenses from a unified framework as splitting the poisoned dataset into two data pools. Under our framework, we propose an adaptively splitting dataset-based defense (ASD). Concretely, we apply loss-guided split and meta-learning-inspired split to dynamically update two data pools. With the split clean data pool and polluted data pool, ASD successfully defends against backdoor attacks during training. Extensive experiments on multiple benchmark datasets and DNN models against six state-of-the-art backdoor attacks demonstrate the superiority of our ASD. Our code is available at https://github.com/KuofengGao/ASD."}, "cited_paper_content": {"title": "Mixmatch: A Holistic Approach To Semi-Supervised Learning", "abstract": "Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets.
In this work, we unify the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, that works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp. We show that MixMatch obtains state-of-the-art results by a large margin across many datasets and labeled data amounts. For example, on CIFAR-10 with 250 labels, we reduce error rate by a factor of 4 (from 38% to 11%) and by a factor of 2 on STL-10. We also demonstrate how MixMatch can help achieve a dramatically better accuracy-privacy trade-off for differential privacy. Finally, we perform an ablation study to tease apart which components of MixMatch are most important for its success."}, "keywords": ["backdoor attacks", "MixMatch"], "citation_intent": "result"} {"citing_id": "2303.03951v1", "cited_id": "1610.01644", "section_title": "Probing Strategies", "citation": "The advocates of linear probing #REFR argue that the probe model should be simple, e.g.", "text_before_citation": ["The goal of probing is to evaluate the \"extractability\" or \"readability\" of a property from the representation.", "The standard approach is to train a separate model, called a probe, to predict the property p i given the fixed representations z i .", "Specifically, the probing dataset D is split into a train and test set, the probe is trained on the train set, and its performance is evaluated on the test set.", "Good test performance is taken as evidence that the representation contains information about the property.", "Low performance indicates that the property is either not present in the representations or not usable. The idea of usability is prominent in the literature."], "text_after_citation": ["a logistic regression (or linear regression for continuous properties), since this means that the information can be easily extracted and used in subsequent processing (e.g. in subsequent layers). 
The advocates for more complex probes #OTHEREFR argue they are better since the information about the property may be non-linearly encoded in the representation.", "Yet, the good performance of non-linear probes may come from overfitting (memorization of spurious correlations).", "To overcome these limitations, various control tasks have been proposed: comparing the performance to a majority baseline, random representations, randomization of the properties, or the use of minimum description length.", "Despite its limitations, in this paper we use linear probing since the results are more interpretable."], "citing_paper_content": {"title": "Probing Graph Representations", "abstract": "Today we have a good theoretical understanding of the representational power of Graph Neural Networks (GNNs). For example, their limitations have been characterized in relation to a hierarchy of Weisfeiler-Lehman (WL) isomorphism tests. However, we do not know what is encoded in the learned representations. This is our main question. We answer it using a probing framework to quantify the amount of meaningful information captured in graph representations. Our findings on molecular datasets show the potential of probing for understanding the inductive biases of graph-based models. We compare different families of models and show that transformer-based models capture more chemically relevant information compared to models based on message passing. We also study the effect of different design choices such as skip connections and virtual nodes. We advocate for probing as a useful diagnostic tool for evaluating graph-based models."}, "cited_paper_content": {"title": "Understanding Intermediate Layers Using Linear Classifier Probes", "abstract": "Neural network models have a reputation for being black boxes. We propose a new method to better understand the roles and dynamics of the intermediate layers.
This has direct consequences on the design of such models and it enables the expert to be able to justify certain heuristics (such as adding auxiliary losses in middle layers). Our method uses linear classifiers, referred to as ``probes'', where a probe can only use the hidden units of a given intermediate layer as discriminating features. Moreover, these probes cannot affect the training phase of a model, and they are generally added after training. They allow the user to visualize the state of the model at multiple steps of training. We demonstrate how this can be used to develop a better intuition about models and to diagnose potential problems."}, "keywords": ["probe model"], "citation_intent": "background"} {"citing_id": "2303.04839v1", "cited_id": "1512.00567", "section_title": "Introduction", "citation": "A large dataset with approximately 129,450 images has been utilized to develop the skin cancer classification model with Inception v3 #REFR .", "text_before_citation": ["Computer-aided diagnosis of skin diseases has become more popular since the introduction of Inception v3 #OTHEREFR that achieved a performance accuracy of 93.3% #OTHEREFR in classifying various cancerous skin conditions."], "text_after_citation": ["However, gathering such a large amount of data is not feasible for some skin conditions such as Rosacea.", "Although many skin conditions can lead to fatal consequences, cancer has been considered the most serious of all and has motivated the gathering of more data over time.", "As a result, many Teledermatology #OTHEREFR websites have a substantial amount of skin cancer images.", "On the other hand, there is very limited data for non-fatal chronic skin conditions such as Rosacea. 
Deep Convolutional Neural Networks (DCNNs), e.g. Inception v3, perform relatively well provided with a large training dataset #OTHEREFR ."], "citing_paper_content": {"title": "High Fidelity Synthetic Face Generation For Rosacea Skin Condition From Limited Data", "abstract": "Similar to the majority of deep learning applications, diagnosing skin diseases using computer vision and deep learning often requires a large volume of data. However, obtaining sufficient data for particular types of facial skin conditions can be difficult due to privacy concerns. As a result, conditions like Rosacea are often understudied in computer-aided diagnosis. The limited availability of data for facial skin conditions has led to the investigation of alternative methods for computer-aided diagnosis. In recent years, Generative Adversarial Networks (GANs), mainly variants of StyleGANs, have demonstrated promising results in generating synthetic facial images. In this study, for the first time, a small dataset of Rosacea with 300 full-face images is utilized to further investigate the possibility of generating synthetic data. The preliminary experiments show how fine-tuning the model and varying experimental settings significantly affect the fidelity of the Rosacea features. It is demonstrated that R_1 regularization strength helps achieve high-fidelity details. Additionally, this study presents qualitative evaluations of synthetic/generated faces by expert dermatologists and non-specialist participants. The quantitative evaluation is presented using a few validation metric(s). Furthermore, a number of limitations and future directions are discussed. Code and generated dataset are"}, "cited_paper_content": {"title": "Rethinking The Inception Architecture For Computer Vision", "abstract": "Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks.
Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we are exploring ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set, demonstrating substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error on the validation set and 3.6% top-5 error on the official test set."}, "keywords": ["skin cancer classification", "Inception v3"], "citation_intent": "method"} {"citing_id": "2304.07911v1", "cited_id": "1904.08030", "section_title": "Study Of M2Gnn (Rq2)", "citation": "Secondly, the increase of interest capsules indeed improves the recall list since local businesses contain multiple aspects, while there is a saturation limit, which agrees with the conclusion in #REFR . Thirdly, the optimal exponent is 6~8.", "text_before_citation": ["The lower part of Table 3 shows that M2GNN w/o L_1 (without regularizing tag embeddings via L_1) sharply underperforms M2GNN, which proves the necessity of the skip-gram regularization.
Impact of hyperparameters.", "We investigate the influence of three hyperparameters: layer numbers, capsule numbers, and exponent before softmax, all of which greatly affect multi-interest extraction and transfer.", "The layer number, maximum capsule number, and exponent are searched in {1, 2, 3}, {2, 4, 6, 8} and {2, 4, 6, 8}, respectively.", "Figure 6 shows the results in the DPBJ dataset and it has a similar conclusion in the Amazon dataset, which is omitted. From the observations above, we make several conclusions.", "Firstly, a layer number of 2 results in the best performance, which means 2-order connections integrate more similar tags into users, while stacking more layers continually brings in noisy tags, which is harmful to the recommendation."], "text_after_citation": ["The reason is that a proper exponent helps the softmax function to identify more important interests and assign bigger scores."], "citing_paper_content": {"title": "M2Gnn: Metapath And Multi-Interest Aggregated Graph Neural Network For Tag-Based Cross-Domain Recommendation", "abstract": "Cross-domain recommendation (CDR) is an effective way to alleviate the data sparsity problem. Content-based CDR is one of the most promising branches since most kinds of products can be described by a piece of text, especially when cold-start users or items have few interactions. However, two vital issues are still under-explored: (1) From the content modeling perspective, sufficient long-text descriptions are usually scarce in a real recommender system; more often, lightweight textual features, such as a few keywords or tags, are more accessible, which is improperly modeled by existing methods. (2) From the CDR perspective, not all inter-domain interests are helpful to infer intra-domain interests. Owing to domain-specific features, some signals benefit recommendation in the source domain but are harmful in the target domain. Therefore, how to distill useful interests is crucial.
To tackle the above two problems, we propose a metapath and multi-interest aggregated graph neural network (M2GNN). Specifically, to model the tag-based contents, we construct a heterogeneous information network to hold the semantic relatedness between users, items, and tags in all domains. The metapath schema is predefined according to domain-specific knowledge, with one metapath for one domain. User representations are learned by GNN with a hierarchical aggregation framework, where the intra-metapath aggregation firstly filters out trivial tags and the inter-metapath aggregation further filters out useless interests. Offline experiments and online A/B tests demonstrate that M2GNN achieves significant improvements over the state-of-the-art methods and current industrial recommender system in Dianping, respectively. Further analysis shows that M2GNN offers an interpretable recommendation."}, "cited_paper_content": {"title": "Multi-Interest Network With Dynamic Routing For Recommendation At Tmall", "abstract": "Industrial recommender systems usually consist of the matching stage and the ranking stage, in order to handle the billion-scale of users and items. The matching stage retrieves candidate items relevant to user interests, while the ranking stage sorts candidate items by user interests. Thus, the most critical ability is to model and represent user interests for either stage. Most of the existing deep learning-based models represent one user as a single vector which is insufficient to capture the varying nature of user's interests. In this paper, we approach this problem from a different view, to represent one user with multiple vectors encoding the different aspects of the user's interests. We propose the Multi-Interest Network with Dynamic routing (MIND) for dealing with user's diverse interests in the matching stage. 
Specifically, we design a multi-interest extractor layer based on capsule routing mechanism, which is applicable for clustering historical behaviors and extracting diverse interests. Furthermore, we develop a technique named label-aware attention to help learn a user representation with multiple vectors. Through extensive experiments on several public benchmarks and one large-scale industrial dataset from Tmall, we demonstrate that MIND can achieve superior performance than state-of-the-art methods for recommendation. Currently, MIND has been deployed for handling major online traffic at the homepage on Mobile Tmall App."}, "keywords": ["interest capsules"], "citation_intent": "result"} {"citing_id": "2304.11697v1", "cited_id": "1711.08488", "section_title": "B. Multi-Modal Fusion For Object Detection 1) Multi-Modal Object Detection:", "citation": "Frustum PointNets #REFR extract the 3D bounding frustum of an object by extruding 2D bounding boxes from image detectors.", "text_before_citation": ["To date, several studies have investigated multi-modal fusion for 2D and 3D object detection."], "text_after_citation": ["PointFusion #OTHEREFR combines a CNN and a PointNet #OTHEREFR architecture respectively to process images and raw point clouds then predict 3D boxes.", "PointPainting #OTHEREFR projects LiDAR points into the output of an image-only semantic segmentation network, and appends the class scores to each point.", "All these fusion methods of RGB and LiDAR achieve high average precision on the benchmarks, however, the coupling or interrelation of two modalities will cause the whole system to fail easily once part of the sensors break down.", "Besides, the methods above only provide a deterministic predict result, making it risky to carry out in the real application.", "2) Adaptive fusion: Several new studies have proposed self-adaptive techniques in computer vision."], "citing_paper_content": {"title": "Informative Data Selection With Uncertainty For 
Multi-Modal Object Detection", "abstract": "Noise has always been nonnegligible trouble in object detection by creating confusion in model reasoning, thereby reducing the informativeness of the data. It can lead to inaccurate recognition due to the shift in the observed pattern, that requires a robust generalization of the models. To implement a general vision model, we need to develop deep learning models that can adaptively select valid information from multi-modal data. This is mainly based on two reasons. Multi-modal learning can break through the inherent defects of single-modal data, and adaptive information selection can reduce chaos in multi-modal data. To tackle this problem, we propose a universal uncertainty-aware multi-modal fusion model. It adopts a multi-pipeline loosely coupled architecture to combine the features and results from point clouds and images. To quantify the correlation in multimodal information, we model the uncertainty, as the inverse of data information, in different modalities and embed it in the bounding box generation. In this way, our model reduces the randomness in fusion and generates reliable output. Moreover, we conducted a completed investigation on the KITTI 2D object detection dataset and its derived dirty data. Our fusion model is proven to resist severe noise interference like Gaussian, motion blur, and frost, with only slight degradation. The experiment results demonstrate the benefits of our adaptive fusion. Our analysis on the robustness of multi-modal fusion will provide further insights for future research."}, "cited_paper_content": {"title": "Frustum Pointnets For 3D Object Detection From Rgb-D Data", "abstract": "In this work, we study 3D object detection from RGB-D data in both indoor and outdoor scenes. While previous methods focus on images or 3D voxels, often obscuring natural 3D patterns and invariances of 3D data, we directly operate on raw point clouds by popping up RGB-D scans. 
However, a key challenge of this approach is how to efficiently localize objects in point clouds of large-scale scenes (region proposal). Instead of solely relying on 3D proposals, our method leverages both mature 2D object detectors and advanced 3D deep learning for object localization, achieving efficiency as well as high recall for even small objects. Benefited from learning directly in raw point clouds, our method is also able to precisely estimate 3D bounding boxes even under strong occlusion or with very sparse points. Evaluated on KITTI and SUN RGB-D 3D detection benchmarks, our method outperforms the state of the art by remarkable margins while having real-time capability."}, "keywords": ["image detectors", "Frustum PointNets"], "citation_intent": "background"} {"citing_id": "2304.05147v1", "cited_id": "1911.01547", "section_title": "Intelligence", "citation": "In general, there are two different interpretations: intelligence as either a collection of task-specific skills or a general learning ability #REFR , which reflect the distinction between crystallised and fluid abilities, respectively.", "text_before_citation": ["Intelligence is a controversial and elusive concept subject to philosophical debate #OTHEREFR , best understood as a nomological network of constructs #OTHEREFR .", "Etymologically, intelligence comes from Latin \"intelligere\", which means \"to understand\".", "It can be defined as \"the global capacity of the individual to act purposefully, to think rationally, and to deal effectively with the environment\" #OTHEREFR , or the property that \"measures an agent's ability to achieve goals in a wide range of environments\" #OTHEREFR ."], "text_after_citation": ["Problems about intelligence include, for instance, its definition and modelling, such as devising the structure of intelligence #OTHEREFR ; its relation with action; its measurement and evaluation; its analysis; and its construction and development.", "Concerning
the theories of intelligence, there are two main traditions #OTHEREFR : the psychometric tradition, based on the number and nature of basic cognitive abilities or factors; and the developmental or holistic perspective, based on acquired intellect.", "The problem of the measure of intelligence #OTHEREFR is of course related to what representation or model of intelligence is considered, and is complicated by the need to distinguish between causality and correlation, selecting a representative set of environments for evaluation, etc. Carroll defines an ability (i.e."], "citing_paper_content": {"title": "Artificial Collective Intelligence Engineering: A Survey Of Concepts And Perspectives", "abstract": "Collectiveness is an important property of many systems, both natural and artificial. By exploiting a large number of individuals, it is often possible to produce effects that go far beyond the capabilities of the smartest individuals, or even to produce intelligent collective behaviour out of not-so-intelligent individuals. Indeed, collective intelligence, namely the capability of a group to act collectively in a seemingly intelligent way, is increasingly often a design goal of engineered computational systems, motivated by recent techno-scientific trends like the Internet of Things, swarm robotics, and crowd computing, just to name a few. For several years, the collective intelligence observed in natural and artificial systems has served as a source of inspiration for engineering ideas, models, and mechanisms. Today, artificial and computational collective intelligence are recognised research topics, spanning various techniques, kinds of target systems, and application domains. However, there is still a lot of fragmentation in the research panorama of the topic within computer science, and the verticality of most communities and contributions makes it difficult to extract the core underlying ideas and frames of reference.
The challenge is to identify, place in a common structure, and ultimately connect the different areas and methods addressing intelligent collectives. To address this gap, this paper considers a set of broad scoping questions providing a map of collective intelligence research, mostly by the point of view of computer scientists and engineers. Accordingly, it covers preliminary notions, fundamental concepts, and the main research perspectives, identifying opportunities and challenges for researchers on artificial and computational collective intelligence engineering."}, "cited_paper_content": {"title": "On The Measure Of Intelligence", "abstract": "To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to \"buy\" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. 
We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans."}, "keywords": ["intelligence"], "citation_intent": "background"} {"citing_id": "2304.08868v1", "cited_id": "1802.04741", "section_title": "I. Introduction", "citation": "We note that the syndrome does not depend on the codeword and, therefore, we do not require the NN to have a special structure, it can be arbitrary, but the best results were obtained with recurrent NNs #REFR .", "text_before_citation": ["The next idea was to replace the activation functions, the architecture is called a hyper-network #OTHEREFR , #OTHEREFR . 
Later, Cammerer et al.", "proposed to replace node and edge message updates with trainable functions, thus allowing NN to learn a generalized message passing algorithm #OTHEREFR .", "Another approach proposed in #OTHEREFR is to consider the syndrome-based decoding algorithm that is suitable for any linear codes.", "The basic syndrome-based decoding algorithm implies the use of the mapping (syndrome to the coset leader), which has the exponential (in the number of parity-check bits) size.", "The idea of #OTHEREFR is to approximate this table with a NN."], "text_after_citation": ["Later a syndrome-based approach was adapted to the transformer and denoising diffusion architectures #OTHEREFR , #OTHEREFR .", "We also note the papers (see, e.g. #OTHEREFR ) devoted to DNN-based code construction.", "For additional literature and a more detailed overview, we refer the reader to #OTHEREFR .", "The papers above focus on the performance of hard-output decoding, i.e.", "the decoder is required to return the estimated information word."], "citing_paper_content": {"title": "Soft-Output Deep Neural Network-Based Decoding", "abstract": "Deep neural network (DNN)-based channel decoding is widely considered in the literature. The existing solutions are investigated for the case of hard output, i.e. when the decoder returns the estimated information word. At the same time, soft-output decoding is of critical importance for iterative receivers and decoders. In this paper, we focus on the soft-output DNN-based decoding problem. We start with the syndrome-based approach proposed by Bennatan et al. (2018) and modify it to provide soft output in the AWGN channel. The new decoder can be considered as an approximation of the MAP decoder with smaller computation complexity. We discuss various regularization functions for joint DNN-MAP training and compare the resulting distributions for [64, 45] BCH code. 
Finally, to demonstrate the soft-output quality we consider the turbo product code with [64, 45] BCH codes as row and column codes. We show that the resulting DNN-based scheme is very close to the MAP-based performance and significantly outperforms the solution based on the Chase decoder. We come to the conclusion that the new method is prospective for the challenging problem of DNN-based decoding of long codes consisting of short component codes."}, "cited_paper_content": {"title": "Deep Learning For Decoding Of Linear Codes - A Syndrome-Based Approach", "abstract": "We present a novel framework for applying deep neural networks (DNN) to soft decoding of linear codes at arbitrary block lengths. Unlike other approaches, our framework allows unconstrained DNN design, enabling the free application of powerful designs that were developed in other contexts. Our method is robust to overfitting that inhibits many competing methods, which follows from the exponentially large number of codewords required for their training. We achieve this by transforming the channel output before feeding it to the network, extracting only the syndrome of the hard decisions and the channel output reliabilities. We prove analytically that this approach does not involve any intrinsic performance penalty, and guarantees the generalization of performance obtained during training. Our best results are obtained using a recurrent neural network (RNN) architecture combined with simple preprocessing by permutation. We provide simulation results that demonstrate performance that sometimes approaches that of the ordered statistics decoding (OSD) algorithm."}, "keywords": ["recurrent NNs"], "citation_intent": "result"} {"citing_id": "2305.01310v1", "cited_id": "1705.10253", "section_title": "Continuization Results", "citation": "If c_1 \u2265 2, then d_{c_1} \u2264 1/2, i.e., for size 1, the solution has value d_{c_1} \u2264 1/2 #REFR , which implies that the solution has a competitive ratio of at least 2.
Thus assume that c_1 = 1.", "text_before_citation": ["There are even discrete instances where every continuization of the instance has a competitive ratio smaller than the initial instance.", "For i \u2208 {2, 5, 6, 7, 8, 9, 10, 11}, the density is chosen such that i \u2022 d_i = v_i = v_{i\u22121} = (i \u2212 1)d_{i\u22121}.", "We show that every incremental solution (c_1, c_2, . . . ) has a competitive ratio of at least 1.446 for this problem instance."], "text_after_citation": ["If c_2 \u2265 5, we can, without loss of generality, assume that c_2 \u2265 12.", "Otherwise we can improve the solution by choosing c_2 = 4 instead.", "Then, the value of the solution for size 4 is max{1, 3 \u2022 d_{c_2}} = max{1, 3 \u2022 16473/107200} = 1, while the optimal solution has value 4d_4 = 17/10, i.e., the competitive ratio of the solution is at least 1.7."], "citing_paper_content": {"title": "Incremental Maximization Via Continuization", "abstract": "We consider the problem of finding an incremental solution to a cardinality-constrained maximization problem that not only captures the solution for a fixed cardinality, but also describes how to gradually grow the solution as the cardinality bound increases. The goal is to find an incremental solution that guarantees a good competitive ratio against the optimum solution for all cardinalities simultaneously. The central challenge is to characterize maximization problems where this is possible, and to determine the best-possible competitive ratio that can be attained. A lower bound of 2.18 and an upper bound of \u03d5+1 \u2248 2.618 are known on the competitive ratio for monotone and accountable objectives [Bernstein et al., Math. Prog., 2022], which capture a wide range of maximization problems. We introduce a continuization technique and identify an optimal incremental algorithm that provides strong evidence that \u03d5 + 1 is the best-possible competitive ratio.
Using this continuization, we obtain an improved lower bound of 2.246 by studying a particular recurrence relation whose characteristic polynomial has complex roots exactly beyond the lower bound. Based on the optimal continuous algorithm combined with a scaling approach, we also provide a 1.772-competitive randomized algorithm. We complement this by a randomized lower bound of 1.447 via Yao's principle."}, "cited_paper_content": {"title": "General Bounds For Incremental Maximization", "abstract": "We propose a theoretical framework to capture incremental solutions to cardinality constrained maximization problems. The defining characteristic of our framework is that the cardinality/support of the solution is bounded by a value $k\\in\\mathbb{N}$ that grows over time, and we allow the solution to be extended one element at a time. We investigate the best-possible competitive ratio of such an incremental solution, i.e., the worst ratio over all $k$ between the incremental solution after $k$ steps and an optimum solution of cardinality $k$. We define a large class of problems that contains many important cardinality constrained maximization problems like maximum matching, knapsack, and packing/covering problems. We provide a general $2.618$-competitive incremental algorithm for this class of problems, and show that no algorithm can have competitive ratio below $2.18$ in general. In the second part of the paper, we focus on the inherently incremental greedy algorithm that increases the objective value as much as possible in each step. This algorithm is known to be $1.58$-competitive for submodular objective functions, but it has unbounded competitive ratio for the class of incremental problems mentioned above. We define a relaxed submodularity condition for the objective function, capturing problems like maximum (weighted) ($b$-)matching and a variant of the maximum flow problem. 
We show that the greedy algorithm has competitive ratio (exactly) $2.313$ for the class of problems that satisfy this relaxed submodularity condition. Note that our upper bounds on the competitive ratios translate to approximation ratios for the underlying cardinality constrained problems."}, "keywords": ["competitive ratio"], "citation_intent": "background"} {"citing_id": "2303.04603v1", "cited_id": "1505.04597", "section_title": "Evaluation Metrics", "citation": "We first train a U-Net #REFR on GAMMA and then evaluate on the testing set of the degraded REFUGE dataset. VSD.", "text_before_citation": ["3 , higher values of PSNR or SSIM do not necessarily indicate better preservation of structural details, as already pointed out elsewhere #OTHEREFR .", "To assess the fundus image enhancement performance with respect to structural details and clinically-relevant applications, we propose the following evaluation metrics: FIQA.", "Similar to #OTHEREFR , we train a ResNet101 #OTHEREFR on EyeQ for quality assessment and use the predictions to calculate a fundus image quality assessment score (FIQA).", "We evaluate on the original testing set of EyeQ labeled as \"Usable\". OCSD.", "We use the OC segmentation Dice (OCSD) as a metric of assessing the capability of preserving anatomical structures."], "text_after_citation": ["We use the vessel segmentation Dice (VSD) as another metric of assessing the capability of preserving anatomical structures.", "Iter-Net #OTHEREFR is used to segment vessels on the testing set of the degraded DRIVE dataset. FCNR.", "Inspired by the contrast noise ratio #OTHEREFR , we propose a fundus contrast noise ratio (FCNR) to evaluate the contrast quality of the vessel areas.", "We first obtain a region of interest (ROI) R using a disk-shaped dilation operation with a radius of 3 pixels based on the vessel segmentation mask.", "We formulate the vessel area as V while the background area as B. 
The FCNR is defined as"], "citing_paper_content": {"title": "Learning Enhancement From Degradation: A Diffusion Model For Fundus Image Enhancement", "abstract": "The quality of a fundus image can be compromised by numerous factors, many of which are challenging to be appropriately and mathematically modeled. In this paper, we introduce a novel diffusion model based framework, named Learning Enhancement from Degradation (LED), for enhancing fundus images. Specifically, we first adopt a data-driven degradation framework to learn degradation mappings from unpaired high-quality to low-quality images. We then apply a conditional diffusion model to learn the inverse enhancement process in a paired manner. The proposed LED is able to output enhancement results that maintain clinically important features with better clarity. Moreover, in the inference phase, LED can be easily and effectively integrated with any existing fundus image enhancement framework. We evaluate the proposed LED on several downstream tasks with respect to various clinically-relevant metrics, successfully demonstrating its superiority over existing state-of-the-art methods both quantitatively and qualitatively."}, "cited_paper_content": {"title": "U-Net: Convolutional Networks For Biomedical Image Segmentation", "abstract": "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. 
Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ."}, "keywords": ["degraded REFUGE dataset", "U-Net"], "citation_intent": "method"} {"citing_id": "2304.06708v1", "cited_id": "1906.03327", "section_title": "Implementation Details", "citation": "We find that the baseline already performs competitively on our benchmarks, despite the relatively small size of SMiT compared to other datasets such as HowTo100M #REFR , due to the quality and diversity of the manually annotated captions.", "text_before_citation": ["Spoken Moments in Time (SMiT) pretraining dataset.", "The SMiT #OTHEREFR training set consists of 481K pairs of 3-second video clips with corresponding captions.", "It is a subset of Moments in Time (MiT) #OTHEREFR .", "Our work falls under the umbrella of transfer learning: we pretrain on SMiT and then use the resulting features to solve different downstream tasks in a zero-shot or fine-tuned manner.", "Pretraining is either done as in regular contrastive learning ('baseline') or with our VFC framework."], "text_after_citation": ["We encourage the community to consider SMiT as a powerful pretraining dataset. PaLM.", "We use PaLM-540B #OTHEREFR with beam size 4, output sequence length 512, and temperature of 0.7.", "The negative captions are generated in an autoregressive way and are therefore of arbitrary length.", "We post-process them by removing text after any newline character and by filtering out candidates which contain the same verbs as the original caption.", "Training details.
Most hyper-parameters follow CLIP4CLIP #OTHEREFR ."], "citing_paper_content": {"title": "Verbs In Action: Improving Verb Understanding In Video-Language Models", "abstract": "two brown horses eating grass LLM Original caption Video Add prompt two brown horses running on the grass two brown horses fighting on the grass two brown horses lying on the grass eating grass Verb phrase Hard verb negative captions two brown horses sleeping on the grass two brown horses playing on the grass eating grass cleaning camera standing squatting Verb phrase loss two brown horses eating grass two brown horses lying on the grass a woman standing in a post office person squatting at the gym a man cleaning his camera Hard negative loss Generated hard negative caption Negative captions in the batch Negative verb phrases in the batch Batch Figure 1. Verb-Focused Contrastive (VFC) learning: (Left): Given a video and its corresponding caption, we leverage a Large Language Model (LLM) to output (1) hard negative captions, where only the verb has been changed while keeping the remaining context, and (2) verb phrases which succinctly describe the action in the video. (Right): To encourage better verb reasoning, we subsequently enforce (1) a calibrated hard negative loss, using our generated hard negative captions and other captions in the batch, and (2) a fine-grained, verb phrase loss. We show that VFC improves verb understanding of video-language models compared to the standard contrastive loss."}, "cited_paper_content": {"title": "Howto100M: Learning A Text-Video Embedding By Watching Hundred Million Narrated Video Clips", "abstract": "Learning text-video embeddings usually requires a dataset of video clips with manually provided captions. However, such datasets are expensive and time consuming to create and therefore difficult to obtain on a large scale. 
In this work, we propose instead to learn such embeddings from video data with readily available natural language annotations in the form of automatically transcribed narrations. The contributions of this work are three-fold. First, we introduce HowTo100M: a large-scale dataset of 136 million video clips sourced from 1.22M narrated instructional web videos depicting humans performing and describing over 23k different visual tasks. Our data collection procedure is fast, scalable and does not require any additional manual annotation. Second, we demonstrate that a text-video embedding trained on this data leads to state-of-the-art results for text-to-video retrieval and action localization on instructional video datasets such as YouCook2 or CrossTask. Finally, we show that this embedding transfers well to other domains: fine-tuning on generic Youtube videos (MSR-VTT dataset) and movies (LSMDC dataset) outperforms models trained on these datasets alone. Our dataset, code and models will be publicly available at: www.di.ens.fr/willow/research/howto100m/."}, "keywords": ["manually annotated captions"], "citation_intent": "result"} {"citing_id": "2303.01276v3", "cited_id": "1604.01685", "section_title": "Datasets", "citation": "Cityscapes dataset #REFR is another benchmark dataset for SSS, which focuses on urban scenarios and it consists of 2,975 annotated training images, 500 validation images and 1,525 testing images from 19 classes.", "text_before_citation": ["Pascal VOC 2012 dataset #OTHEREFR is a standard semisupervised semantic segmentation (SSS) benchmark dataset, which consists of over 13,000 images from 21 classes.", "It contains 1,464 fully annotated images for training, 1,449 images for validation and 1,456 images for testing.", "Previous works use SBD #OTHEREFR to render the labelled images and extend the number of labelled data to 10,582.", "The rendered labelled images are of low quality and some of them are accompanied by noise.", "Therefore, most of the 
previous works validate their SSS methods with sampled labelled images from the high-quality training images and rendered training images, respectively."], "text_after_citation": [], "citing_paper_content": {"title": "Conflict-Based Cross-View Consistency For Semi-Supervised Semantic Segmentation", "abstract": "Semi-supervised semantic segmentation (SSS) has recently gained increasing research interest as it can reduce the requirement for large-scale fully-annotated training data. The current methods often suffer from the confirmation bias from the pseudo-labelling process, which can be alleviated by the co-training framework. The current co-training-based SSS methods rely on hand-crafted perturbations to prevent the different sub-nets from collapsing into each other, but these artificial perturbations cannot lead to the optimal solution. In this work, we propose a new conflict-based cross-view consistency (CCVC) method based on a two-branch co-training framework which aims at enforcing the two sub-nets to learn informative features from irrelevant views. In particular, we first propose a new cross-view consistency (CVC) strategy that encourages the two sub-nets to learn distinct features from the same input by introducing a feature discrepancy loss, while these distinct features are expected to generate consistent prediction scores of the input. The CVC strategy helps to prevent the two sub-nets from stepping into the collapse. In addition, we further propose a conflict-based pseudo-labelling (CPL) method to guarantee the model will learn more useful information from conflicting predictions, which will lead to a stable training process. We validate our new CCVC approach on the SSS benchmark datasets where our method achieves new state-of-the-art performance. 
Our code is available at https://github.com/xiaoyao3302/CCVC."}, "cited_paper_content": {"title": "The Cityscapes Dataset For Semantic Urban Scene Understanding", "abstract": "Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes is comprised of a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high quality pixel-level annotations, 20 000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark."}, "keywords": ["2,975 annotated training", "Cityscapes"], "citation_intent": "background"} {"citing_id": "2304.07358v1", "cited_id": "1905.08750", "section_title": "Performance", "citation": "It coincides with the performance of centralized benchmark (44) and the asymptotic performance of #REFR .", "text_before_citation": ["Proof.", "The proof revolves around introducing a long-term model, analogous to #OTHEREFR , which can be shown to be accurate for small step-sizes and under conditions 3-4. Details are omitted due to space limitations. 
We verify the accuracy numerically in Section 5.", "We note that for moderately large step-sizes \u00b5, expression (59) will yield more accurate estimates of the steady-state performance than (62). Relation (62) on the other hand is more tractable.", "We can interpret U^T H U in (62) as the projection of the Hessian onto the space spanned by U, while U^T R_s U is the projection of the noise covariance onto the same space.", "In this sense, Tr (U^T H U)^{-1} U^T R_s U is a measure of the inverse signal-to-noise ratio after restricting the signal to the space of feasible solutions in #OTHEREFR ."], "text_after_citation": ["We will verify in Section 5 that (59) is accurate for finite step-sizes \u00b5, and that the proposed bias-corrected algorithm outperforms the approximate solution of #OTHEREFR ."], "citing_paper_content": {"title": "Exact Subspace Diffusion For Decentralized Multitask Learning", "abstract": "Classical paradigms for distributed learning, such as federated or decentralized gradient descent, employ consensus mechanisms to enforce homogeneity among agents. While these strategies have proven effective in i.i.d. scenarios, they can result in significant performance degradation when agents follow heterogeneous objectives or data. Distributed strategies for multitask learning, on the other hand, induce relationships between agents in a more nuanced manner, and encourage collaboration without enforcing consensus. We develop a generalization of the exact diffusion algorithm for subspace constrained multitask learning over networks, and derive an accurate expression for its mean-squared deviation when utilizing noisy gradient approximations.
We verify numerically the accuracy of the predicted performance expressions, as well as the improved performance of the proposed approach over alternatives based on approximate projections."}, "cited_paper_content": {"title": "Adaptation And Learning Over Networks Under Subspace Constraints\u2014Part I: Stability Analysis", "abstract": "This paper considers optimization problems over networks where agents have individual objectives to meet, or individual parameter vectors to estimate, subject to subspace constraints that require the objectives across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus optimization as a special case, and allows for more general task relatedness models such as smoothness. While such formulations can be solved via projected gradient descent, the resulting algorithm is not distributed. Starting from the centralized solution, we propose an iterative and distributed implementation of the projection step, which runs in parallel with the stochastic gradient descent update. We establish that, for small step-sizes $\\mu$, the proposed distributed adaptive strategy leads to small estimation errors on the order of $\\mu$. We also examine steady-state performance. The results reveal explicitly the influence of the gradient noise, data characteristics, and subspace constraints, on the network performance. The results also show that in the small step-size regime, the iterates generated by the distributed algorithm achieve the centralized steady-state performance. 
Finally, we apply the proposed strategy to distributed adaptive beamforming."}, "keywords": ["centralized benchmark"], "citation_intent": "result"} {"citing_id": "2303.17294v1", "cited_id": "1807.10418", "section_title": "Optimizing Process", "citation": "Besides, we apply the co-activity similarity loss L cas 1 #REFR on the enhanced feature E a and the suppressed coarse T-CAS S coarse to model better definitephase feature representation.", "text_before_citation": ["The loss L supp mil is defined as the cross-entropy loss between the video label and the prediction p(j),", "EQUATION", "where y f (j) contains only foreground activities, i.e., the background class y f (C + 1) = 0 for suppressing the background activity.", "And the final suppressed MIL loss can be formulated as,", "EQUATION"], "text_after_citation": ["Following the work #OTHEREFR , we utilize a L1-norm loss L norm to make foreground weights more polarized, denoted as", "L norm = T i=1 |A ness i |,", "where |\u2022| is a L1-norm function.", "In addition, we introduce a guidance loss L guide to make the distribution of action-ness scores A ness opposite to the background class probability in S, i.e.,", "L guide = T i=1 |1 \u2212 A ness i \u2212 s c+1 |,"], "citing_paper_content": {"title": "Jcdnet: Joint Of Common And Definite Phases Network For Weakly Supervised Temporal Action Localization", "abstract": "Weakly-supervised temporal action localization aims to localize action instances in untrimmed videos with only video-level supervision. We witness that different actions record common phases, e.g., the run-up in the HighJump and LongJump. These different actions are defined as conjoint actions, whose rest parts are definite phases, e.g., leaping over the bar in a HighJump. Compared with the common phases, the definite phases are more easily localized in existing researches. 
Most of them formulate this task as a Multiple Instance Learning paradigm, in which the common phases tend to be confused with the background, which affects the localization completeness of the conjoint actions. To tackle this challenge, we propose a Joint of Common and Definite phases Network (JCDNet) by improving feature discriminability of the conjoint actions. Specifically, we design a Class-Aware Discriminative module to enhance the contribution of the common phases in classification by the guidance of the coarse definite-phase features. Besides, we introduce a temporal attention module to learn robust action-ness scores via modeling temporal dependencies, distinguishing the common phases from the background. Extensive experiments on three datasets (THUMOS14, ActivityNetv1.2, and a conjoint-action subset) demonstrate that JCDNet achieves competitive performance against the state-of-the-art methods."}, "cited_paper_content": {"title": "W-Talc: Weakly-Supervised Temporal Activity Localization And Classification", "abstract": "Most activity localization methods in the literature suffer from the burden of frame-wise annotation requirement. Learning from weak labels may be a potential solution towards reducing such manual labeling effort. Recent years have witnessed a substantial influx of tagged videos on the Internet, which can serve as a rich source of weakly-supervised training data. Specifically, the correlations between videos with similar tags can be utilized to temporally localize the activities. Towards this goal, we present W-TALC, a Weakly-supervised Temporal Activity Localization and Classification framework using only video-level labels. The proposed network can be divided into two sub-networks, namely the Two-Stream based feature extractor network and a weakly-supervised module, which we learn by optimizing two complementary loss functions.
Qualitative and quantitative results on two challenging datasets - Thumos14 and ActivityNet1.2, demonstrate that the proposed method is able to detect activities at a fine granularity and achieve better performance than current state-of-the-art methods."}, "keywords": ["co-activity similarity loss"], "citation_intent": "method"} {"citing_id": "2304.03977v1", "cited_id": "1806.08887", "section_title": "More Related Works", "citation": "In 2018, a work called sparse manifold transform #REFR builds upon the above two areas.", "text_before_citation": ["SSL Methods Not Based on Deep Learning.", "Our work has also been inspired by the classical approaches before deep learning, especially sparse modeling and manifold learning.", "Some earlier works approach unsupervised learning mainly from the perspective of sparsity #OTHEREFR .", "In particular, a work focusing on lossy coding #OTHEREFR has inspired many of the recent SSL learning methods #OTHEREFR , as well as our work to promote covariance in the representation of data through maximizing the coding rate.", "Manifold learning #OTHEREFR and spectral clustering #OTHEREFR propose to model the geometric structure of high dimensional objects in the signal space."], "citing_paper_content": {"title": "Emp-Ssl: Towards Self-Supervised Learning In One Training Epoch Under Review", "abstract": "Recently, self-supervised learning (SSL) has achieved tremendous success in learning image representation. Despite the empirical success, most self-supervised learning methods are rather \"inefficient\" learners, typically taking hundreds of training epochs to fully converge.
In this work, we show that the key towards efficient self-supervised learning is to increase the number of crops from each image instance. Leveraging one of the state-of-the-art SSL methods, we introduce a simplistic form of self-supervised learning method called Extreme-Multi-Patch Self-Supervised-Learning (EMP-SSL) that does not rely on many heuristic techniques for SSL such as weight sharing between the branches, feature-wise normalization, output quantization, and stop gradient, etc., and reduces the training epochs by two orders of magnitude. We show that the proposed method is able to converge to 85.1% on CIFAR-10, 58.5% on CIFAR-100, 38.1% on Tiny ImageNet and 58.5% on ImageNet-100 in just one epoch. Furthermore, the proposed method achieves 91.5% on CIFAR-10, 70.1% on CIFAR-100, 51.5% on Tiny ImageNet and 78.9% on ImageNet-100 with linear probing in less than ten training epochs. In addition, we show that EMP-SSL shows significantly better transferability to out-of-domain datasets compared to baseline SSL methods. We will release the code in https://github.com/tsb0601/EMP-SSL."}, "cited_paper_content": {"title": "The Sparse Manifold Transform", "abstract": "We present a signal representation framework called the sparse manifold transform that combines key ideas from sparse coding, manifold learning, and slow feature analysis. It turns non-linear transformations in the primary sensory signal space into linear interpolations in a representational embedding space while maintaining approximate invertibility. The sparse manifold transform is an unsupervised and generative framework that explicitly and simultaneously models the sparse discreteness and low-dimensional manifold structure found in natural scenes. When stacked, it also models hierarchical composition.
We provide a theoretical description of the transform and demonstrate properties of the learned representation on both synthetic data and natural videos."}, "keywords": ["sparse manifold transform"], "citation_intent": "background"} {"citing_id": "2305.02224v2", "cited_id": "1702.06491", "section_title": "Conclusions And Future Work", "citation": "It is also evident that there is diversity in fact-checking practices and that the robustness of the workflow depends on the collaborative and ongoing efforts of the team members, confirming findings from our earlier study #REFR .", "text_before_citation": ["In this paper we have reported findings from a series of semi-structured interviews with news professionals working in fact-checking.", "We have seen how the fact-checking workflow consists of a series of stages.", "Each stage has its particular challenges for fact-checkers working under pressure to determine quickly the veracity of a claim and publish the result."], "text_after_citation": ["Interviewees identified ways in which computational tools could assist throughout the fact-checking workflow: surfacing claims, deciding on which are check-worthy and assembling the evidence.", "Interviewees were skeptical about the prospects for automating the process.", "They take for granted that the fact-checker is the human-in-the-loop and that they must be the final arbiter when a claim is assessed.", "As with the introduction of advanced decision-making tools in other fields, there are important professional and organisational reasons why this should remain the case in fact-checking #OTHEREFR .", "Hence, like several recent studies (e.g., #OTHEREFR , #OTHEREFR and #OTHEREFR"], "citing_paper_content": {"title": "Some Observations On Fact-Checking Work With Implications For Computational Support", "abstract": "Social media and user-generated content (UGC) have become increasingly important features of journalistic work in a number of different ways.
However, the growth of misinformation means that news organisations have had to devote more and more resources to determining its veracity and to publishing corrections if it is found to be misleading. In this work, we present the results of interviews with eight members of fact-checking teams from two organisations. Team members described their fact-checking processes and the challenges they currently face in completing a fact-check in a robust and timely way. The former reveals, inter alia, significant differences in fact-checking practices and the role played by collaboration between team members. We conclude with a discussion of the implications for the development and application of computational tools, including where computational tool support is currently lacking and the importance of being able to accommodate different fact-checking practices."}, "cited_paper_content": {"title": "Supporting The Use Of User Generated Content In Journalistic Practice", "abstract": "Social media and user-generated content (UGC) are increasingly important features of journalistic work in a number of different ways. However, their use presents major challenges, not least because information posted on social media is not always reliable and therefore its veracity needs to be checked before it can be considered as fit for use in the reporting of news. We report on the results of a series of in-depth ethnographic studies of journalist work practices undertaken as part of the requirements gathering for a prototype of a social media verification 'dashboard' and its subsequent evaluation.
We conclude with some reflections upon the broader implications of our findings for the design of tools to support journalistic work."}, "keywords": ["fact-checking practices"], "citation_intent": "result"} {"citing_id": "2304.14463v1", "cited_id": "1910.02653", "section_title": "Formulation", "citation": "In addition to the methods of CHECKMATE and MOCCASIN, we include results for the rounding algorithm proposed in #REFR under the \"LP+rounding\" column of the table.", "text_before_citation": ["Table 2 provides numerical results for a range of different computation graphs.", "In Table 2 we have selected the memory budget values for each graph to be the 80% and 90% values used in #OTHEREFR to evaluate CHECKMATE, where CM1 is FCN with VGG layers and CM2 is the ResNet50 model, n: number of nodes, m: number of edges, M: memory budget, TDI: total duration increase in percentage, peak mem: peak memory of the resulting rematerialization sequence.", "The column 'Time (s)' indicates the elapsed time in seconds until the best solution.
Dashes \"-\" indicate that no solution is found.", "The best solution in each row is shown in bold font.", "Memory budgets are percentages of the initial peak memory without rematerialization."], "text_after_citation": ["This method consists of relaxing the MILP into a linear program (LP) and then rounding the solution (see #OTHEREFR for further details on this algorithm).", "Note that the solution produced by the rounding algorithm is not guaranteed to satisfy the memory budget constraint.", "This can be seen in Table 2 where in most cases the peak memory for the relaxation and rounding approach is higher than the memory budget M.", "Table 2 shows that the random layered graphs and real-world graphs are the most challenging ones among the graph set.", "The solve times for these graphs are higher than the CM graphs, which is consistent with higher edge densities and more complex edge connectivities of the RL and RW graphs."], "citing_paper_content": {"title": "Moccasin: Efficient Tensor Rematerialization For Neural Networks", "abstract": "The deployment and training of neural networks on edge computing devices pose many challenges. The low memory nature of edge devices is often one of the biggest limiting factors encountered in the deployment of large neural network models. Tensor rematerialization or recompute is a way to address high memory requirements for neural network training and inference. In this paper we consider the problem of execution time minimization of compute graphs subject to a memory budget. In particular, we develop a new constraint programming formulation called MOCCASIN with only O(n) integer variables, where n is the number of nodes in the compute graph. This is a significant improvement over the works in the recent literature that propose formulations with O(n^2) Boolean variables.
We present numerical studies that show that our approach is up to an order of magnitude faster than recent work especially for large-scale graphs."}, "cited_paper_content": {"title": "Checkmate: Breaking The Memory Wall With Optimal Tensor Rematerialization", "abstract": "Modern neural networks are increasingly bottlenecked by the limited capacity of on-device GPU memory. Prior work explores dropping activations as a strategy to scale to larger neural networks under memory constraints. However, these heuristics assume uniform per-layer costs and are limited to simple architectures with linear graphs, limiting their usability. In this paper, we formalize the problem of trading-off DNN training time and memory requirements as the tensor rematerialization optimization problem, a generalization of prior checkpointing strategies. We introduce Checkmate, a system that solves for optimal schedules in reasonable times (under an hour) using off-the-shelf MILP solvers, then uses these schedules to accelerate millions of training iterations. Our method scales to complex, realistic architectures and is hardware-aware through the use of accelerator-specific, profile-based cost models. In addition to reducing training cost, Checkmate enables real-world networks to be trained with up to 5.1$\\times$ larger input sizes."}, "keywords": ["CHECKMATE"], "citation_intent": "method"} {"citing_id": "2304.06036v1", "cited_id": "2002.00538", "section_title": "I. 
Introduction", "citation": "It has been found that movement execution (ME) generates stronger amplitude correlates in EEG signals, hence decoding ME gave promising results when compared to decoding movement imagination (MI) #REFR .", "text_before_citation": ["For instance, it does not require surgical intervention, it is painless, cheap, portable, and accurate #OTHEREFR .", "Hence EEG is widely used in various biomedical applications including seizure detection #OTHEREFR , stress assessment #OTHEREFR , depression disorder detection #OTHEREFR , and schizophrenia #OTHEREFR .", "The interaction of the human body with its environment depends significantly on controlled movements of the upper limbs.", "However, spinal cord injury (SCI) and other neuro-muscular diseases can affect this control and limb movement.", "It is crucial to restore upper limb movements for people with SCI so they can independently take care of their daily activities."], "text_after_citation": ["Deep learning (DL) methods are getting a lot of attention recently in a diverse set of data processing tasks.", "The same holds for upper limb movement classification using EEG data.", "Features like minimal pre-processing of data, automatic low- and high-level feature extraction, and superior learning capabilities make DL an attractive tool in the machine learning domain #OTHEREFR .", "Classification of the MI of two movements was performed using a convolutional neural network (CNN), along with common spatial patterns (CSP) as a feature extraction mechanism beforehand.", "The strategy was able to classify palm extension and hand grasp movements #OTHEREFR ."], "citing_paper_content": {"title": "Upper Limb Movement Execution Classification Using Electroencephalography For Brain Computer Interface", "abstract": "An accurate classification of upper limb movements using electroencephalography (EEG) signals is gaining significant importance in recent years due to the prevalence of brain-computer interfaces.
The upper limbs in the human body are crucial since different skeletal segments combine to make a range of motion that helps us in our trivial daily tasks. Decoding EEG-based upper limb movements can be of great help to people with spinal cord injury (SCI) or other neuro-muscular diseases such as amyotrophic lateral sclerosis (ALS), primary lateral sclerosis, and periodic paralysis. This can manifest in a loss of sensory and motor function, which could make a person reliant on others to provide care in day-to-day activities. We can detect and classify upper limb movement activities, whether they be executed or imagined, using an EEG-based brain-computer interface (BCI). Toward this goal, we focus our attention on decoding movement execution (ME) of the upper limb in this study. For this purpose, we utilize a publicly available EEG dataset that contains EEG signal recordings from fifteen subjects acquired using a 61-channel EEG device. We propose a method to classify four ME classes for different subjects using spectrograms of the EEG data through pre-trained deep learning (DL) models. Our proposed method of using EEG spectrograms for the classification of ME has shown significant results, where the highest average classification accuracy (for four ME classes) obtained is 87.36%, with one subject achieving the best classification accuracy of 97.03%. Clinical relevance-This research shows that movement execution of upper limbs is classified with significant accuracy by employing a spectrogram of the EEG signals and a pre-trained deep learning model which is fine-tuned for the downstream task."}, "cited_paper_content": {"title": "Decoding Movement Imagination And Execution From Eeg Signals Using Bci-Transfer Learning Method Based On Relation Network", "abstract": "A brain-computer interface (BCI) is used not only to control external devices for healthy people but also to rehabilitate motor functions for motor-disabled patients.
Decoding movement intention is one of the most significant aspects for performing arm movement tasks using brain signals. Decoding movement execution (ME) from electroencephalogram (EEG) signals has shown high performance in previous works, however movement imagination (MI) paradigm-based intention decoding has so far failed to achieve sufficient accuracy. In this study, we focused on a robust MI decoding method with transfer learning for the ME and MI paradigm. We acquired EEG data related to arm reaching for 3D directions. We proposed a BCI-transfer learning method based on a Relation network (BTRN) architecture. Decoding performances showed the highest performance compared to conventional works. We confirmed the possibility of the BTRN architecture to contribute to continuous decoding of MI using ME datasets."}, "keywords": ["EEG signals"], "citation_intent": "result"} {"citing_id": "2304.04151v1", "cited_id": "1706.03762", "section_title": "History Encoder", "citation": "To better capture long range spatial-temporal dependencies in users' historical check-in sequences, we stack Transformer encoder layers #REFR for constructing the history encoder.", "text_before_citation": ["All these vectors are then linearly projected into -dimensional embeddings \u2208 R , \u2208 R , and \u2208 R .", "In this way, for the user , the historical check-in sequence { } =1 can be further denoted as", "({ } =1 , { } =1 , { } =1 , { } =1 ).", "Note that since check-in data requires a certain order of precedence, learnable positional embedding is also added into the inputs for the history encoder.", "Compared with previous RNN-based methods, Transformer architecture #OTHEREFR can not only avoid recurrence, allowing parallel computing to reduce training time, but also mitigate the performance degradation problem with regard to long-term dependencies in RNNs."], "text_after_citation": ["Each Transformer encoder layer involves a multi-head self-attention module and a point-wise feed-forward
network.", "We also keep the residual connection and layer normalization employed in Transformer encoder layers.", "Dividing the attention mechanism into multiple heads to form multiple sub-spaces allows the model to focus on different aspects of information.", "For each attention head, self-attention result for a check-in can be computed as", "EQUATION"], "citing_paper_content": {"title": "Timestamps As Prompts For Geography-Aware Location Recommendation", "abstract": "Figure 1: An illustration of how TPG performs the next location recommendation (denoted by the purple line) and interval predictions (denoted by red lines) by using temporal prompts. Different colored markers denote different categories of POIs."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. 
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["history encoder", "Transformer encoder layers"], "citation_intent": "method"} {"citing_id": "2303.16079v1", "cited_id": "1802.06132", "section_title": "Limitations Of Existing Approaches", "citation": "A similar limitation has been reported for the simultaneous gradient descent-ascent (SGDA) approach #REFR .", "text_before_citation": ["First, we discuss the slow convergence issue on smooth strongly convex-concave problems highlighted in #OTHEREFR .", "For instance, consider a convex-concave quadratic problem ( , ) = ( /2) 2 + \u2212 ( /2) 2 . The worst-case scenario is\u02c6( ) = ( / )", "for each and the optimal solution is\u02c6( ) = \u2212( / ) for each .", "It is intuitive that both\u02c6( ) and\u02c6( ) should not be too sensitive to follow their change by (3).", "In fact, it has been theoretically derived that, for linear convergence, the learning rate must be set as , \u2208 ( /( + 2 )) and the required number of iterations to find near-optimal solution is \u03a9(1 + 2 /( )); refer to #OTHEREFR for details."], "text_after_citation": ["The same limitation is expected to exist in ZO-Min-Max because it is regarded as an approximation of the SGDA approach.", "The adaptation of the learning rates in ADV-CMA-ES can mitigate the difficulty in tuning learning rates. 
However, it cannot avoid the slow convergence problem.", "The situation is worse if the objective function is convex-concave but not strongly convex-concave.", "For example, consider 7 with = = 1 and = .", "This objective function is similar to , but the coefficients are regarded as = (1/2) 2 and = (1/2) 2 , i.e., decreasing as the solution approaches the global min-max saddle point ( * , * =\u02c6( * ))."], "citing_paper_content": {"title": "Covariance Matrix Adaptation Evolutionary Strategy With Worst-Case Ranking Approximation For Min-Max Optimization And Its Application To Berthing Control Tasks", "abstract": "In this study, we consider a continuous min-max optimization problem min \u2208X max \u2208Y (,) whose objective function is a black-box. We propose a novel approach to minimize the worst-case objective function () = max (,) directly using a covariance matrix adaptation evolution strategy (CMA-ES) in which the rankings of solution candidates are approximated by our proposed worst-case ranking approximation (WRA) mechanism. We develop two variants of WRA combined with CMA-ES and approximate gradient ascent as numerical solvers for the inner maximization problem. Numerical experiments show that our proposed approach outperforms several existing approaches when the objective function is a smooth strongly convex-concave function and the interaction between and is strong. We investigate the advantages of the proposed approach for problems where the objective function is not limited to smooth strongly convex-concave functions. The effectiveness of the proposed approach is demonstrated in the robust berthing control problem with uncertainty. 
CCS Concepts: \u2022 Mathematics of computing \u2192 Continuous optimization."}, "cited_paper_content": {"title": "Interaction Matters: A Note On Non-Asymptotic Local Convergence Of Generative Adversarial Networks", "abstract": "Motivated by the pursuit of a systematic computational and algorithmic understanding of Generative Adversarial Networks (GANs), we present a simple yet unified non-asymptotic local convergence theory for smooth two-player games, which subsumes several discrete-time gradient-based saddle point dynamics. The analysis reveals the surprising nature of the off-diagonal interaction term as both a blessing and a curse. On the one hand, this interaction term explains the origin of the slow-down effect in the convergence of Simultaneous Gradient Ascent (SGA) to stable Nash equilibria. On the other hand, for the unstable equilibria, exponential convergence can be proved thanks to the interaction term, for four modified dynamics proposed to stabilize GAN training: Optimistic Mirror Descent (OMD), Consensus Optimization (CO), Implicit Updates (IU) and Predictive Method (PM). The analysis uncovers the intimate connections among these stabilizing techniques, and provides detailed characterization on the choice of learning rate. 
As a by-product, we present a new analysis for OMD proposed in Daskalakis, Ilyas, Syrgkanis, and Zeng [2017] with improved rates."}, "keywords": ["simultaneous gradient descent-ascent"], "citation_intent": "result"} {"citing_id": "2305.01289v1", "cited_id": "1903.00868", "section_title": "Cubature Rule For Unitary Jacobi Ensembles", "citation": "The corresponding specialization of the cubature rule in Theorem 2 was presented in turn in #REFR Section 9] .", "text_before_citation": ["\u03ba \u00b1 := 1 2 1\u2264r\u2264d 1 \u2212 |a r | 1 + |a r | \u00b11 + 1 2 1\u2264r\u2264d 1 \u2212 |\u00e3 r| 1 + |\u00e3 r| \u00b11 .", "Moreover, it is clear from Eqs.", "(3.1d), (3.1e) that the boundary value \u03be (m+n) 0 = 0 is attained iff \u01eb \u2212 =\u01eb \u2212 = 0 and the boundary value \u03be (m+n) m+n\u22121 = \u03c0 is attained iff \u01eb + =\u01eb + = 0 (since \u03c0 0 u a (\u03b8)d\u03b8 = \u03c0 for |a| < 1). Remark 4.", "In the above integration formulas the degree of exactness is optimal if D (3.2b) reaches the Gaussian value 2m+1, which is achieved whend = 0 and\u01eb \u00b1 = 1. This special case of the quadrature rule in Eqs.", "(3.1a)-(3.1g) can be inferred from #OTHEREFR for \u01eb \u00b1 = 0, and from #OTHEREFR for general \u01eb \u00b1 \u2208 {0, 1} (cf. also #OTHEREFR Section 8] )."], "text_after_citation": ["(2\u03c0) n n! \u03c0 0 \u2022 \u2022 \u2022 \u03c0 0 f cos(\u03be) \u03c1 \u01eb (\u03be)d\u03be 1 \u2022 \u2022 \u2022 d\u03be n = (3.4a) 1 N (m,n) \u01eb \u03bb\u2208\u039b (m,n) 1 2 (1\u2212\u01eb+)(1\u2212\u01eb+)\u03b4 m\u2212\u03bb 1 +(1\u2212\u01eb\u2212)(1\u2212\u01eb\u2212)\u03b4 \u03bbn f cos \u03be (m,n) \u03bb \u03c1 \u01eb \u03be (m,n) \u03bb", "for f cos(\u03be) := f cos(\u03be 1 ), . . .", ", cos(\u03be n ) , where f (x 1 , . . .", ", x n ) = f (x) \u2208 P (D,n) with D = 2m +\u01eb + +\u01eb \u2212 \u2212 1 and", "N (m,n) \u01eb := 2(m + n \u2212 1) + \u01eb + + \u01eb \u2212 +\u01eb + +\u01eb \u2212 n . 
(3.4b)"], "citing_paper_content": {"title": "Cubature Rules For Unitary Jacobi Ensembles", "abstract": "We present Chebyshev type cubature rules for the exact integration of rational symmetric functions with poles on prescribed coordinate hyperplanes. Here the integration is with respect to the densities of unitary Jacobi ensembles stemming from the Haar measures of the orthogonal and the compact symplectic Lie groups."}, "cited_paper_content": {"title": "Exact Cubature Rules For Symmetric Functions", "abstract": "We employ a multivariate extension of the Gauss quadrature formula, originally due to Berens, Schmid and Xu [BSX95], so as to derive cubature rules for the integration of symmetric functions over hypercubes (or infinite limiting degenerations thereof) with respect to the densities of unitary random matrix ensembles. Our main application concerns the explicit implementation of a class of cubature rules associated with the Bernstein-Szego polynomials, which permit the exact integration of symmetric rational functions with prescribed poles at coordinate hyperplanes against unitary circular Jacobi distributions stemming from the Haar measures on the symplectic and the orthogonal groups."}, "keywords": ["cubature rule"], "citation_intent": "background"} {"citing_id": "2304.04017v2", "cited_id": "1707.02880", "section_title": "A. 
Automatic Photo Retouching", "citation": "HDRNet #REFR obtains transformations on the low-resolution input and applies upsampled transformations to the full-resolution image with bilateral grid processing.", "text_before_citation": ["StarEnhancer #OTHEREFR introduces multiple-style enhancement with the ability to transform inputs to an unseen style.", "CSRNet #OTHEREFR designs a lightweight framework containing a base network and a conditioning network for extracting global features and performing photo retouching, respectively.", "Pienet #OTHEREFR constructs preference vectors with metric learning and adaptively enhances images according to user-provided preferable styles.", "Operator Prediction Methods.", "DeepLPF #OTHEREFR regresses the parameters of spatially localized filters and automatically applies those filters to enhance inputs."], "text_after_citation": ["Enhancement curve learning methods #OTHEREFR , #OTHEREFR , #OTHEREFR estimate retouching curves to tone global properties of inputs rather than directly mapping.", "RCTNet #OTHEREFR first estimates the transformation of the representative colors and then enhances inputs according to the similarity between inputs and representative colors.", "3D LUT based methods #OTHEREFR , #OTHEREFR utilize variants of 3D lookup tables (3D LUTs) with deep learning, achieving real-time and flexible photo enhancement performance.", "In contrast to most of the methods above, 3D LUT HRP [2] focuses on portrait retouching and achieves region awareness through the human-region priority strategy.", "Our work is distinguished from 3D LUT HRP #OTHEREFR by emphasizing the significance of the user guidance and investigating interactive region-aware portrait retouching, which provides the flexibility to retouch different instances according to users' intents."], "citing_paper_content": {"title": "Region-Aware Portrait Retouching With Sparse Interactive Guidance", "abstract": "Portrait retouching aims to improve the aesthetic quality
of input portrait photos and especially requires human-region priority. The deep learning-based methods largely elevate the retouching efficiency and provide promising retouched results. However, existing portrait retouching methods focus on automatic retouching, which treats all human-regions equally and ignores users' preferences for specific individuals, thus suffering from limited flexibility in interactive scenarios. In this work, we emphasize the importance of users' intents and explore the interactive portrait retouching task. Specifically, we propose a region-aware retouching framework with two branches: an automatic branch and an interactive branch. The automatic branch involves an encoding-decoding process, which searches region candidates and performs automatic region-aware retouching without user guidance. The interactive branch encodes sparse user guidance into a priority condition vector and modulates latent features with a region selection module to further emphasize the user-specified regions. Experimental results show that our interactive branch effectively captures users' intents and generalizes well to unseen scenes with sparse user guidance, while our automatic branch also outperforms the state-of-the-art retouching methods due to improved region-awareness."}, "cited_paper_content": {"title": "Deep Bilateral Learning For Real-Time Image Enhancement", "abstract": "Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to reproduce the enhancements and enable real-time evaluation. For this, we introduce a new neural network architecture inspired by bilateral grid processing and local affine color transforms. Using pairs of input/output images, we train a convolutional neural network to predict the coefficients of a locally-affine model in bilateral space.
Our architecture learns to make local, global, and content-dependent decisions to approximate the desired image transformation. At runtime, the neural network consumes a low-resolution version of the input image, produces a set of affine transformations in bilateral space, upsamples those transformations in an edge-preserving fashion using a new slicing node, and then applies those upsampled transformations to the full-resolution image. Our algorithm processes high-resolution images on a smartphone in milliseconds, provides a real-time viewfinder at 1080p resolution, and matches the quality of state-of-the-art approximation techniques on a large class of image operators. Unlike previous work, our model is trained off-line from data and therefore does not require access to the original operator at runtime. This allows our model to learn complex, scene-dependent transformations for which no reference implementation is available, such as the photographic edits of a human retoucher."}, "keywords": ["upsampled transformations"], "citation_intent": "method"} {"citing_id": "2304.05015v1", "cited_id": "1907.13372", "section_title": "Introduction", "citation": "For example, #REFR selects the most common samples with the lowest diversity for replay, believing that the most representative samples will elevate the effectiveness of replay.", "text_before_citation": ["By doing so, the model can be trained with samples from both current and previous classes, resulting in better generalization.", "However, since the number of selected samples in the memory is much smaller than those within the new classes, the selected samples are easily ignored or cause overfitting during training due to their small number.", "Careful selection of the samples is required, which naturally brings the question: How to select the best samples for replay?", "Some attempts have been made to answer the question, aiming to seek the most effective samples for replay.", "Researchers propose different
criteria that are mostly manually designed based on some heuristic factors like diversity #OTHEREFR ."], "text_after_citation": ["However, the most common samples may not always be the samples being forgotten in later stages.", "#OTHEREFR proposes to save both the low-diversity samples near the distribution center and high-diversity samples near the classification boundaries.", "However, new challenges arise since the memory length is limited, so it is challenging to find the optimal quotas for the two kinds of samples to promote replay effectiveness to the greatest extent.", "Moreover, most of the existing methods are designed based on a single factor; the selection performance, however, can be influenced by many factors with complicated relationships.", "For example, besides diversity, memory sample selection should also be class-dependent because the hard classes need more samples to replay in order to alleviate the more severe catastrophic forgetting issue."], "citing_paper_content": {"title": "Continual Semantic Segmentation With Automatic Memory Sample Selection", "abstract": "Continual Semantic Segmentation (CSS) extends static semantic segmentation by incrementally introducing new classes for training. To alleviate the catastrophic forgetting issue in CSS, a memory buffer that stores a small number of samples from the previous classes is constructed for replay. However, existing methods select the memory samples either randomly or based on a single-factor-driven handcrafted strategy, which has no guarantee to be optimal. In this work, we propose a novel memory sample selection mechanism that selects informative samples for effective replay in a fully automatic way by considering comprehensive factors including sample diversity and class performance. Our mechanism regards the selection operation as a decision-making process and learns an optimal selection policy that directly maximizes the validation performance on a reward set.
To facilitate the selection decision, we design a novel state representation and a dual-stage action space. Our extensive experiments on Pascal-VOC 2012 and ADE 20K datasets demonstrate the effectiveness of our approach with state-of-the-art (SOTA) performance achieved, outperforming the second-place one by 12.54% for the 6-stage setting on Pascal-VOC 2012."}, "cited_paper_content": {"title": "Incremental Learning Techniques For Semantic Segmentation", "abstract": "Deep learning architectures exhibit a critical drop of performance due to catastrophic forgetting when they are required to incrementally learn new tasks. Contemporary incremental learning frameworks focus on image classification and object detection while in this work we formally introduce the incremental learning problem for semantic segmentation in which a pixel-wise labeling is considered. To tackle this task we propose to distill the knowledge of the previous model to retain the information about previously learned classes, whilst updating the current model to learn the new ones. We propose various approaches working both on the output logits and on intermediate features. In opposition to some recent frameworks, we do not store any image from previously learned classes and only the last model is needed to preserve high accuracy on these classes.
The experimental evaluation on the Pascal VOC2012 dataset shows the effectiveness of the proposed approaches."}, "keywords": ["replay"], "citation_intent": "background"} {"citing_id": "2303.10344v1", "cited_id": "1606.00373", "section_title": "Loss Function And Training Details", "citation": "We optimize PanoTransformer by minimizing a pixelwise reverse Huber loss #REFR between the predicted panoramas and corresponding ground truth.", "text_before_citation": [], "text_after_citation": ["Since using a standard L1 loss function to learn a binary light mask heavily penalizes even small shifts of a light source position, the reverse Huber loss takes advantage of L1 loss and L2 loss as below:", "L_B = \\begin{cases} |y - \\hat{y}|, & |y - \\hat{y}| \\le T \\\\ \\frac{(y - \\hat{y})^2 + T^2}{2T}, & |y - \\hat{y}| \\ge T \\end{cases} \\quad (4)", "where y is the ground truth value and \u0177 is the prediction.", "The threshold T is set to 0.2 in our experiments.", "To generate more realistic details, an extra adversarial loss is also involved in the training process."], "citing_paper_content": {"title": "Local-To-Global Panorama Inpainting For Locale-Aware Indoor Lighting Prediction", "abstract": "Figure 1. We propose a locale-aware indoor illumination prediction method that can generate a full and texture-rich HDR panorama (the third column in each group) at any locale in the scene, enabling spatially-varying and consistent shading after virtual object insertion (the first and second columns)."}, "cited_paper_content": {"title": "Deeper Depth Prediction With Fully Convolutional Residual Networks", "abstract": "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. To model the ambiguous mapping between monocular images and depth maps, we leverage on deep learning capabilities and present a fully convolutional architecture encompassing residual learning.
The proposed model is deeper than the current state of the art, but contains fewer parameters and requires less training data, while still outperforming all current CNN approaches aimed at the same task. We further present a novel way to efficiently learn feature map up-sampling within the network. For optimization we introduce the reverse Huber loss, particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. The predictions are given by a single architecture, trained end-to-end, that does not rely on post-processing techniques, such as CRFs or other additional refinement steps."}, "keywords": ["predicted panoramas"], "citation_intent": "method"} {"citing_id": "2303.02244v1", "cited_id": "2002.07764", "section_title": "B. Survey Measures", "citation": "In the email, we pledged to donate US$ 2 per completed response to the United Nations World Food Programme #REFR .", "text_before_citation": ["We also merged records where individuals used different names but the same email address, or different email addresses but the same or an extremely similar name for different commits.", "When judgment was needed, we made decisions as conservatively as possible. 
We ultimately obtained a list of 1,555 unique individuals.", "In case of multiple email addresses per person, we selected only one, preferring personal email addresses over professional ones because the person's commits to the projects were partially already quite old and the person might have moved organizations since.", "We sent an email to all 1,555 developers to invite them to our survey #OTHEREFR (which was part of a larger data collection effort for multiple studies).", "We sent the invitation with a personalized link to the survey."], "text_after_citation": ["We also sent two reminders #OTHEREFR , including one that linked to an official university web page confirming the authenticity of the survey as some developers were concerned that the invitation might be a scam.", "Because 165 emails bounced, we reached 1,390 developers (89.4% deliverable emails), of which 194 developers started the survey, and 124 completed it.", "We dropped all respondents that provided implausible values for their age (\u226410 or \u2265100), leaving us with a preliminary sample of 121 developers.", "Although this number may seem low, it is of a similar magnitude as the total number even from studies that assessed personality not based on self-reports but on mining vast corpora of emails.", "Calefato et al., for instance, used about 1.35 million emails from 46,304 developers but only obtained full personality profiles for 211 of them (<0.5%), of which only 118 had made source code commits #OTHEREFR ."], "citing_paper_content": {"title": "The Type To Take Out A Loan? A Study Of Developer Personality And Technical Debt", "abstract": "Background: Technical debt (TD) has been widely discussed in software engineering research, and there is an emerging literature linking it to developer characteristics. However, developer personality has not yet been studied in this context. 
Aims and Method: We explore the relationship between various personality traits (Five Factor Model, regulatory focus, and narcissism) of developers and the introduction and removal of TD. To this end, we complement an existing TD dataset with novel self-report personality data gathered by surveying developers, and analyze 2,145 commits from 19 developers. Results: We find that conscientiousness, emotional stability, openness to experience, and prevention focus are negatively associated with TD. There were no significant results for extraversion, agreeableness, promotion focus, or narcissism. Conclusions: We take our results as first evidence that developer personality has a systematic influence on the introduction and removal of TD. This has implications not only for future research, which could, for example, study the effects of personality on downstream consequences of TD like defects, but also for software engineering practitioners who may, for example, consider developer personality in staffing decisions."}, "cited_paper_content": {"title": "Sampling In Software Engineering Research: A Critical Review And Guidelines", "abstract": "Representative sampling appears rare in software engineering research. Not all studies need representative samples, but a general lack of representative sampling undermines a scientific field. This study therefore investigates the state of sampling in recent, high-quality software engineering research. The key findings are: (1) random sampling is rare; (2) sophisticated sampling strategies are very rare; (3) sampling, representativeness and randomness do not appear well-understood. To address these problems, the paper synthesizes existing knowledge of sampling into a succinct primer and proposes extensive guidelines for improving the conduct, presentation and evaluation of sampling in software engineering research.
It is further recommended that while researchers should strive for more representative samples, disparaging non-probability sampling is generally capricious and particularly misguided for predominately qualitative research."}, "keywords": ["email"], "citation_intent": "background"} {"citing_id": "2303.00894v1", "cited_id": "1811.07871", "section_title": "Introduction", "citation": "By incentivizing incorrect behavior, misspecified objectives can lead to useless or even dangerous outcomes #REFR .", "text_before_citation": ["Standard AI and machine learning algorithms require the designer to specify a cost or reward function.", "This objective incentivizes desired behavior and penalizes mistakes, teaching the system how to perform the task.", "While such objectives are easy to manually specify for problems with clear win conditions, such as games #OTHEREFR and tasks with clear goals, such as image classification #OTHEREFR , they can be challenging to formalize for more nuanced tasks #OTHEREFR . 
For example, Lee et al.", "#OTHEREFR find that humans struggle to define an objective that incentivizes bipedal locomotion, despite being experts in both machine learning and walking."], "text_after_citation": ["Ensuring that AI systems optimize objectives that align with our own is a crucial part of building safe and beneficial AI.", "Reward learning techniques enable AI systems to learn their objectives by observing and interacting with humans instead of requiring their designers to specify these objectives manually #OTHEREFR .", "Humans can train reward learning systems using a variety of feedback modalities, including demonstrations #OTHEREFR , pairwise comparisons #OTHEREFR , natural language #OTHEREFR , numeric values #OTHEREFR , corrections #OTHEREFR , and proxy rewards #OTHEREFR .", "Reward learning from pairwise comparisons in particular has proven remarkably effective across a variety of tasks, including complex physical maneuvers for continuous control systems #OTHEREFR and text summarization for language models #OTHEREFR .", "In the future, it may even be possible to use reward learning to train AI systems to assist humans in researching safe AI #OTHEREFR ."], "citing_paper_content": {"title": "Active Reward Learning From Multiple Teachers", "abstract": "Reward learning algorithms utilize human feedback to infer a reward function, which is then used to train an AI system. This human feedback is often a preference comparison, in which the human teacher compares several samples of AI behavior and chooses which they believe best accomplishes the objective. While reward learning typically assumes that all feedback comes from a single teacher, in practice these systems often query multiple teachers to gather sufficient training data. In this paper, we investigate this disparity, and find that algorithmic evaluation of these different sources of feedback facilitates more accurate and efficient reward learning.
We formally analyze the value of information (VOI) when reward learning from teachers with varying levels of rationality, and define and evaluate an algorithm that utilizes this VOI to actively select teachers to query for feedback. Surprisingly, we find that it is often more informative to query comparatively irrational teachers. By formalizing this problem and deriving an analytical solution, we hope to facilitate improvement in reward learning approaches to aligning AI behavior with human values."}, "cited_paper_content": {"title": "Scalable Agent Alignment Via Reward Modeling: A Research Direction", "abstract": "One obstacle to applying reinforcement learning algorithms to real-world problems is the lack of suitable reward functions. Designing such reward functions is difficult in part because the user only has an implicit understanding of the task objective. This gives rise to the agent alignment problem: how do we create agents that behave in accordance with the user's intentions? We outline a high-level research direction to solve the agent alignment problem centered around reward modeling: learning a reward function from interaction with the user and optimizing the learned reward function with reinforcement learning. 
We discuss the key challenges we expect to face when scaling reward modeling to complex and general domains, concrete approaches to mitigate these challenges, and ways to establish trust in the resulting agents."}, "keywords": ["misspecified objectives"], "citation_intent": "background"} {"citing_id": "2304.00858v1", "cited_id": "1911.12409", "section_title": "The Multi-View Auto-Encoder Backbone", "citation": "Instead of a two-stage encoder for feature extraction #REFR , our FoCoViL makes full use of a single auto-encoder in an end-to-end fashion that already achieves promising results.", "text_before_citation": ["To maintain the action representation, we employ an effective sequential autoencoder as the backbone network sharing among the multi-view actions (see Fig. 2(c) ).", "From #OTHEREFR , the encoder usually plays a more important role than the decoder to integrate representative features.", "We thus consider a portable decoder by feeding the empty frame (zero vector) to every step of the decoder, such that the model only focuses on the hidden representation delivered from the encoded output."], "text_after_citation": ["The structure consists of a three-layer bi-directional encoder f e to derive the latent representation, a linear projection net g that is specifically designed for contrastive learning, and a single-layer uni-directional decoder f d for reconstruction purposes.", "Both f e and f d are under the Gated Recurrent Unit (GRU) architecture to process frame-wise information.", "For each action X u i , the reconstruction loss L r is defined as:", "EQUATION", "where T is the total number of frames in the action video."], "citing_paper_content": {"title": "Focalized Contrastive View-Invariant Learning For Self-Supervised Skeleton-Based Action Recognition", "abstract": "Learning view-invariant representation is a key to improving feature discrimination power for skeleton-based action recognition. 
Existing approaches cannot effectively remove the impact of viewpoint due to the implicit view-dependent representations. In this work, we propose a self-supervised framework called Focalized Contrastive View-invariant Learning (FoCoViL), which significantly suppresses the view-specific information on the representation space where the viewpoints are coarsely aligned. By maximizing mutual information with an effective contrastive loss between multi-view sample pairs, FoCoViL associates actions with common view-invariant properties and simultaneously separates the dissimilar ones. We further propose an adaptive focalization method based on pairwise similarity to enhance contrastive learning for a clearer cluster boundary in the learned space. Different from many existing self-supervised representation learning work that rely heavily on supervised classifiers, FoCoViL performs well on both unsupervised and supervised classifiers with superior recognition performance. Extensive experiments also show that the proposed contrastive-based focalization generates a more discriminative latent representation."}, "cited_paper_content": {"title": "Predict&Cluster: Unsupervised Skeleton Based Action Recognition", "abstract": "We propose a novel system for unsupervised skeleton-based action recognition. Given inputs of body keypoints sequences obtained during various movements, our system associates the sequences with actions. Our system is based on an encoder-decoder recurrent neural network, where the encoder learns a separable feature representation within its hidden states formed by training the model to perform prediction task. We show that according to such unsupervised training the decoder and the encoder self-organize their hidden states into a feature space which clusters similar movements into the same cluster and distinct movements into distant clusters. 
Current state-of-the-art methods for action recognition are strongly supervised, i.e., rely on providing labels for training. Unsupervised methods have been proposed, however, they require camera and depth inputs (RGB+D) at each time step. In contrast, our system is fully unsupervised, does not require labels of actions at any stage, and can operate with body keypoints input only. Furthermore, the method can perform on various dimensions of body keypoints (2D or 3D) and include additional cues describing movements. We evaluate our system on three extensive action recognition benchmarks with different number of actions and examples. Our results outperform prior unsupervised skeleton-based methods, unsupervised RGB+D based methods on cross-view tests and while being unsupervised have similar performance to supervised skeleton-based action recognition."}, "keywords": ["single auto-encoder"], "citation_intent": "method"} {"citing_id": "2303.00171v1", "cited_id": "1603.02754", "section_title": "Gbdt", "citation": "For the given user and TTS pronunciations, the phoneme embedding sequences are concatenated and used as input to train a GBDT model using XGBoost #REFR with logistic loss.", "text_before_citation": ["We train a Gradient Boosted Decision Tree (GBDT) #OTHEREFR classifier using phoneme embeddings as input."], "text_after_citation": ["The annotations are binary labels, where 0 represents both pronunciations are the same, and 1 otherwise."], "citing_paper_content": {"title": "Dtw-Siamesenet: Dynamic Time Warped Siamese Network For Mispronunciation Detection And Correction", "abstract": "Personal Digital Assistants (PDAs)-such as Siri, Alexa and Google Assistant, to name a few-play an increasingly important role to access information and complete tasks spanning multiple domains, and by diverse groups of users. 
A text-to-speech (TTS) module allows PDAs to interact in a natural, human-like manner, and play a vital role when the interaction involves people with visual impairments or other disabilities. To cater to the needs of a diverse set of users, inclusive TTS is important to recognize and pronounce correctly text in different languages and dialects. Despite great progress in speech synthesis, the pronunciation accuracy of named entities in a multilingual setting still has a large room for improvement. Existing approaches to correct named entity (NE) mispronunciations, like retraining Grapheme-to-Phoneme (G2P) models, or maintaining a TTS pronunciation dictionary, require expensive annotation of the ground truth pronunciation, which is also time consuming. In this work, we present a highly-precise, PDA-compatible pronunciation learning framework for the task of TTS mispronunciation detection and correction. In addition, we also propose a novel mispronunciation detection model called DTW-SiameseNet, which employs metric learning with a Siamese architecture for Dynamic Time Warping (DTW) with triplet loss. We demonstrate that a locale-agnostic, privacy-preserving solution to the problem of TTS mispronunciation detection is feasible. We evaluate our approach on a real-world dataset, and a corpus of NE pronunciations of an anonymized audio dataset of person names recorded by participants from 10 different locales. Human evaluation shows our proposed approach improves pronunciation accuracy on average by \u2248 6% compared to strong phoneme-based and audio-based baselines."}, "cited_paper_content": {"title": "Xgboost: A Scalable Tree Boosting System", "abstract": "Tree boosting is a highly effective and widely used machine learning method. In this paper, we describe a scalable end-to-end tree boosting system called XGBoost, which is used widely by data scientists to achieve state-of-the-art results on many machine learning challenges.
We propose a novel sparsity-aware algorithm for sparse data and weighted quantile sketch for approximate tree learning. More importantly, we provide insights on cache access patterns, data compression and sharding to build a scalable tree boosting system. By combining these insights, XGBoost scales beyond billions of examples using far fewer resources than existing systems."}, "keywords": ["TTS pronunciations", "XGBoost"], "citation_intent": "method"} {"citing_id": "2304.08747v1", "cited_id": "cs/0702015", "section_title": "I. Introduction", "citation": "Furthermore, the results of #REFR showed that there is a trade-off between the minimum repair bandwidth and the storage capacity of the network.", "text_before_citation": ["The rapid development of distributed storage systems raised the question of how failed nodes in these systems can be efficiently repaired.", "A large body of literature on erasure codes for distributed storage addressed this question over the past decade.", "One approach to assess repair efficiency is to measure the so-called repair bandwidth, which is the amount of information downloaded from other nodes for the repair.", "This approach, first introduced in #OTHEREFR , assumes the nodes form a homogeneous network and determines the repair bandwidth by considering the information flow in course of the repair.", "In particular, #OTHEREFR derived a bound on the smallest number of symbols required for the repair of a single failed node, known as the cut-set bound on the repair bandwidth."], "text_after_citation": ["In this paper we shall focus on codes with minimum storage overhead and optimal repair bandwidth, namely, maximum distance separable (MDS) codes with optimal repair bandwidth.", "Such codes are termed minimum-storage regenerating (MSR) codes in the literature.", "The problem of MSR codes has received much attention and affords several variants.", "In its basic form, the problem concerns repair of a single failed node, but it has been 
generalized to repair of multiple failed nodes.", "There are generally two different models for repairing multiple failed nodes: the centralized repair model #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR and the cooperative repair model #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR ."], "citing_paper_content": {"title": "Rack-Aware Minimum-Storage Regenerating Codes With Optimal Access", "abstract": "We derive a lower bound on the amount of information accessed to repair failed nodes within a single rack from any number of helper racks in the rack-aware storage model that allows collective information processing in the nodes that share the same rack. Furthermore, we construct a family of rack-aware minimum-storage regenerating (MSR) codes with the property that the number of symbols accessed for repairing a single failed node attains the bound with equality for all admissible parameters. Constructions of rack-aware optimal-access MSR codes were only known for limited parameters. We also present a family of Reed-Solomon (RS) codes that only require accessing a relatively small number of symbols to repair multiple failed nodes in a single rack. In particular, for certain code parameters, the RS construction attains the bound on the access complexity with equality and thus has optimal access."}, "cited_paper_content": {"title": "Network Coding For Distributed Storage Systems", "abstract": "Peer-to-peer distributed storage systems provide reliable access to data through redundancy spread over nodes across the Internet. A key goal is to minimize the amount of bandwidth used to maintain that redundancy. Storing a file using an erasure code, in fragments spread across nodes, promises to require less redundancy and hence less maintenance bandwidth than simple replication to provide the same level of reliability. 
However, since fragments must be periodically replaced as nodes fail, a key question is how to generate a new fragment in a distributed way while transferring as little data as possible across the network. In this paper, we introduce a general technique to analyze storage architectures that combine any form of coding and replication, as well as presenting two new schemes for maintaining redundancy using erasure codes. First, we show how to optimally generate MDS fragments directly from existing fragments in the system. Second, we introduce a new scheme called regenerating codes which use slightly larger fragments than MDS but have lower overall bandwidth use. We also show through simulation that in realistic environments, regenerating codes can reduce maintenance bandwidth use by 25% or more compared with the best previous design - a hybrid of replication and erasure codes - while simplifying system architecture."}, "keywords": ["minimum repair", "storage capacity"], "citation_intent": "result"} {"citing_id": "2305.01518v1", "cited_id": "1310.0606", "section_title": "Replicability", "citation": "This definition is consistent with that given by the American Statistical Association [6] and with other work in statistics #REFR .", "text_before_citation": ["From this follows the desideratum that scientific predictions also agree well with experimental observations made across sufficiently similar circumstances. 
Defining this rigorously is not straightforward.", "In 2019 the USA's National Academies established a Committee on Reproducibility and Replicability in Science.", "Their report #OTHEREFR is essential reading for those interested in this topic.", "Though their focus is primarily on scientific hypotheses, their definition is a good starting point for this discussion.", "Conclusion 3-1 on page 36 states: \"Replicability is obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data\"."], "text_after_citation": ["A distinct concept is that of repeatability: a repeatable prediction approach produces predictions without variation across independent tests carried out by repeating the entire process, including data collection, on the same individual or sampling unit #OTHEREFR . This is important but is not examined here.", "Replicability is also used in contrast to reproducibility, defined as \"obtaining consistent results using the same input data, computational steps, methods, and code, and conditions of analysis\" #OTHEREFR .", "Usage of these terms is often inconsistent and is plainly reversed in computer science -a sarcastic twist in the parallel evolution of the same concepts in siloed fields.", "A thread of literature debates and documents related terminologies and their usage #OTHEREFR .", "For predictions, I propose to modify the NAS definition to say:"], "citing_paper_content": {"title": "Defining Replicability Of Prediction Rules", "abstract": "In this article I propose an approach for defining replicability for prediction rules. Motivated by a recent NAS report, I start from the perspective that replicability is obtaining consistent results across studies suitable to address the same prediction question, each of which has obtained its own data. I then discuss concept and issues in defining key elements of this statement. 
I focus specifically on the meaning of \"consistent results\" in typical utilization contexts, and propose a multi-agent framework for defining replicability, in which agents are neither partners nor adversaries. I recover some of the prevalent practical approaches as special cases. I hope to provide guidance for a more systematic assessment of replicability in machine learning."}, "cited_paper_content": {"title": "Deciding Whether Follow-Up Studies Have Replicated Findings In A Preliminary Large-Scale \"Omics\" Study", "abstract": "We propose a formal method to declare that findings from a primary study have been replicated in a follow-up study. Our proposal is appropriate for primary studies that involve large-scale searches for rare true positives (i.e., needles in a haystack). Our proposal assigns an r value to each finding; this is the lowest false discovery rate at which the finding can be called replicated. Examples are given and software is available."}, "keywords": ["American Statistical Association"], "citation_intent": "result"} {"citing_id": "2304.04364v3", "cited_id": "1812.04948", "section_title": "Preliminaries", "citation": "Specifically, EG3D starts with randomly sampled GAN latent codes, and then a feature generator based on Style-GAN #REFR converts latent codes into 2D features and maps them into 3D tri-planes.", "text_before_citation": ["EG3D.", "We first briefly review the network architecture of a SOTA 3D network, EG3D #OTHEREFR .", "Our generator G 3 , G 3 and G 3 are finetuned on EG3D."], "text_after_citation": ["An MLP decoder predicts features for 3D point projections on tri-planes to generate color and density.", "Finally, volume rendering is used to generate an image on the orientation of the camera pose. This process can be formulated as follows:", "EQUATION", "where W 3 \u2208 R 1\u00d714\u00d7512 is the latent code. P \u2208 R 1\u00d725 is the camera pose.", "is the weight parameters of the EG3D generator G 3 .
I is the generated images. CLIP-guided Loss."], "citing_paper_content": {"title": "Itportrait: Image-Text Coupled 3D Portrait Domain Adaptation", "abstract": "Artistic portraits have many applications [10, 12, 22, 24, 34] in our daily lives, especially in industries related to animation, art, and the metaverse. As shown in Fig. 1, artistic portraits can be regarded as a portrait domain adaptation task, which refers to transforming the artistic style, cross-species identity, and expression shape change. The current domain adaptation methods are mainly divided into two categories: guided by the vision-based method (artistic-image [6, 21, 34]), or guided by the language-based method (text-description [23, 36]). Combining image-guided and text-driven guidance can not only transfer the precise and detailed style of the reference image but also have the text-driven flexible editing ability. Therefore, Image-Text coupled guidance has better style controllability and artistic merit [8, 27]. However, the potential of Vision-Language (Image-Text) multi-modal guidance is under-explored."}, "cited_paper_content": {"title": "A Style-Based Generator Architecture For Generative Adversarial Networks", "abstract": "We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces."}, "keywords": ["Style-GAN converts latent"], "citation_intent": "method"} {"citing_id": "2304.11823v1", "cited_id": "1712.05526", "section_title": "Conclusion", "citation": "For Blended #REFR , we blend the poisoned samples with a Hello-Ketty image and the blended ratio is 0.1.", "text_before_citation": ["We evaluate our method on PreAct-ResNet18 #OTHEREFR and VGG19-BN #OTHEREFR networks.", "We compare our method with SOTA defense methods on three datasets and the two networks with a 10% poisining ratio and 5% clean samples for defense.", "To study the effectiveness of our method under different poisoning ratios, we compare with SOTA defense methods on CIFAR-10 dataset and PreAct-ResNet18 network on 5% and 1% poisoning ratios.", "Attack Details. We introduce some details about the backdoor attacks here.", "For BadNets-A2O and BadNets-A2A [12], we patch a 3 \u00d7 3 white square in the lower right corner of the images for CIFAR-10 and GTSRB datasets, and 6 \u00d7 6 white square for Tiny ImageNet."], "text_after_citation": ["Defense Details.", "The seven SOTA defense methods can be divided into two types based on what the defender is given.", "AC [4] and ABL #OTHEREFR assumes that the defender is given a poisoned dataset, while the remaining six defense methods assumes that the defender can acquire a subset of clean samples and a backdoored model.", "The learning rate for all methods is set to 0.01, and the batch size is set to 256.", "The threshold for ANP #OTHEREFR is set to 0.4 since we find that the recommended threshold 0.2 fails to remove backdoors."], "citing_paper_content": {"title": "Enhancing Fine-Tuning Based Backdoor Defense With Sharpness-Aware Minimization", "abstract": "Backdoor defense, which aims to detect or mitigate the 
effect of malicious triggers introduced by attackers, is becoming increasingly critical for machine learning security and integrity. Fine-tuning based on benign data is a natural defense to erase the backdoor effect in a backdoored model. However, recent studies show that, given limited benign data, vanilla fine-tuning has poor defense performance. In this work, we provide a deep study of fine-tuning the backdoored model from the neuron perspective and find that backdoor-related neurons fail to escape the local minimum in the fine-tuning process. Inspired by observing that the backdoor-related neurons often have larger norms, we propose FT-SAM, a novel backdoor defense paradigm that aims to shrink the norms of backdoor-related neurons by incorporating sharpness-aware minimization with fine-tuning. We demonstrate the effectiveness of our method on several benchmark datasets and network architectures, where it achieves state-of-the-art defense performance. Overall, our work provides a promising avenue for improving the robustness of machine learning models against backdoor attacks."}, "cited_paper_content": {"title": "Targeted Backdoor Attacks On Deep Learning Systems Using Data Poisoning", "abstract": "Deep learning models have achieved high performance on many tasks, and thus have been applied to many security-critical scenarios. For example, deep learning-based face recognition systems have been used to authenticate users to access many security-sensitive applications like payment apps. Such usages of deep learning systems provide the adversaries with sufficient incentives to perform attacks against these systems for their adversarial purposes. In this work, we consider a new type of attacks, called backdoor attacks, where the attacker's goal is to create a backdoor into a learning-based authentication system, so that he can easily circumvent the system by leveraging the backdoor.
Specifically, the adversary aims at creating backdoor instances, so that the victim learning system will be misled to classify the backdoor instances as a target label specified by the adversary. In particular, we study backdoor poisoning attacks, which achieve backdoor attacks using poisoning strategies. Different from all existing work, our studied poisoning strategies can apply under a very weak threat model: (1) the adversary has no knowledge of the model and the training set used by the victim system; (2) the attacker is allowed to inject only a small amount of poisoning samples; (3) the backdoor key is hard to notice even by human beings to achieve stealthiness. We conduct evaluation to demonstrate that a backdoor adversary can inject only around 50 poisoning samples, while achieving an attack success rate of above 90%. We are also the first work to show that a data poisoning attack can create physically implementable backdoors without touching the training process. Our work demonstrates that backdoor poisoning attacks pose real threats to a learning system, and thus highlights the importance of further investigation and proposing defense strategies against them."}, "keywords": ["poisoned samples"], "citation_intent": "method"} {"citing_id": "2303.08984v1", "cited_id": "1511.08458", "section_title": "C. The Previous Rpcnn Model", "citation": "Afterwards, we train the plots with Convolutional Neural Network, which is proved to be powerful in image-driven pattern recognition #REFR .", "text_before_citation": ["However, the problems of a high False Positive rate and unstable performance need to be addressed.", "The workflow of the RPCNN model is sketched in Fig. 
4 .", "First, we take two classes of time windows either close to or far from the interlocks, where the window length is a tunable model parameter.", "Explicitly, the interlock samples are taken as sliding windows from 1 s to 15 s before interlocks.", "Second, we transform each 1-dimensional time series of the 376 channels to a 2-dimensional Recurrence Plot, which could be interpreted as a pairwise distance measure of the time series and is capable of extracting finer dynamical patterns."], "text_after_citation": ["The output is a probability score inside the range [0, 1] indicating the likelihood of a sample belonging to the positive (i.e. close to interlock) class.", "More details such as model architecture are published in #OTHEREFR .", "Typical binary classification metrics in a confusion matrix are defined and applied in our setting.", "A True Positive (TP) means an interlock sample -a sample less than 15 s before an interlock -being classified as an interlock.", "A False Positive (FP) means a stable sample -a sample at least 10 minutes away from an interlock -being mistaken as an interlock."], "citing_paper_content": {"title": "Forecasting Particle Accelerator Interruptions Using Logistic Lasso Regression", "abstract": "Unforeseen particle accelerator interruptions, also known as interlocks, lead to abrupt operational changes despite being necessary safety measures. These may result in substantial loss of beam time and perhaps even equipment damage. We propose a simple yet powerful binary classification model aiming to forecast such interruptions, in the case of the High Intensity Proton Accelerator complex at the Paul Scherrer Institut. The model is formulated as logistic regression penalized by least absolute shrinkage and selection operator, based on a statistical two sample test to distinguish between unstable and stable states of the accelerator. 
The primary objective for receiving alarms prior to interlocks is to allow for countermeasures and reduce beam time loss. Hence, a continuous evaluation metric is developed to measure the saved beam time in any period, given the assumption that interlocks could be circumvented by reducing the beam current. The best-performing interlock-to-stable classifier can potentially increase the beam time by around 5 min in a day. Possible instrumentation for fast adjustment of the beam current is also listed and discussed."}, "cited_paper_content": {"title": "An Introduction To Convolutional Neural Networks", "abstract": "The field of machine learning has taken a dramatic twist in recent times, with the rise of the Artificial Neural Network (ANN). These biologically inspired computational models are able to far exceed the performance of previous forms of artificial intelligence in common machine learning tasks. One of the most impressive forms of ANN architecture is that of the Convolutional Neural Network (CNN). CNNs are primarily used to solve difficult image-driven pattern recognition tasks and with their precise yet simple architecture, offers a simplified method of getting started with ANNs. This document provides a brief introduction to CNNs, discussing recently published papers and newly formed techniques in developing these brilliantly fantastic image recognition models.
This introduction assumes you are familiar with the fundamentals of ANNs and machine learning."}, "keywords": ["Convolutional Neural Network"], "citation_intent": "method"} {"citing_id": "2304.01910v1", "cited_id": "1810.04805", "section_title": "Bert Finetuning", "citation": "In this section we study BERT #REFR finetuning, and use the tools developed in Section 4 to clearly differentiate the behavior of BERT-Large from BERT-Base.", "text_before_citation": [], "text_after_citation": ["For our experiment, we finetune pretrained checkpoints of both models 1,000 times each on the MRPC #OTHEREFR task.", "MRPC contains 5,801 sentence pairs labeled yes/no for whether the second sentence paraphrases the first, and has a training set of 3,668 examples, validation set of 408 examples, and test set of 1,725 examples.", "Previous works #OTHEREFR report and investigate training instability for BERT finetuning, noting particular instability for BERT-Large.", "In our experiment, we find that both BERT-Large and BERT-Base have substantial variance in validation-set performance between runs.", "BERT-Base has a standard deviation of 0.80% and BERT-Large has 2.24%, which seems to imply that both models are unstable, with BERT-Large being only somewhat more so."], "citing_paper_content": {"title": "Calibrated Chaos: Variance Between Runs Of Neural Network Training Is Harmless And Inevitable", "abstract": "Typical neural network trainings have substantial variance in test-set performance between repeated runs, impeding hyperparameter comparison and training reproducibility. We present the following results towards understanding this variation. (1) Despite having significant variance on their test-sets, we demonstrate that standard CIFAR-10 and ImageNet trainings have very little variance in their performance on the test-distributions from which their test-sets are sampled, suggesting that variance is less of a practical issue than previously thought.
(2) We present a simplifying statistical assumption which closely approximates the structure of the test-set accuracy distribution. (3) We argue that test-set variance is inevitable in the following two senses. First, we show that variance is largely caused by high sensitivity of the training process to initial conditions, rather than by specific sources of randomness like the data order and augmentations. Second, we prove that variance is unavoidable given the observation that ensembles of trained networks are well-calibrated. (4) We conduct preliminary studies of distribution-shift, fine-tuning, data augmentation and learning rate through the lens of variance between runs."}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful.
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["BERT finetuning", "BERT-Base"], "citation_intent": "method"} {"citing_id": "2303.15109v1", "cited_id": "1409.0575", "section_title": "Compatibility Of Our Methods To Other Types Of Attacks", "citation": "The results show that direction tuning attacks are perfectly Table 5 : The ASR (%) comparison on six models on ImageNet #REFR .", "text_before_citation": ["In this subsection, we combine our methods with a feature importanceaware attack (i.e., FIA #OTHEREFR ), three input transformation methods (i.e., DIM #OTHEREFR , TIM #OTHEREFR and SIM #OTHEREFR ), SGM #OTHEREFR and More Bayesian (MB) #OTHEREFR to verify the compatibility of our methods (including both direction tuning attack and network pruning method).", "Table 4 shows the ASR on six victim models when the gradient-based attacks are combined with the feature importance-aware attack (i.e., FIA #OTHEREFR ).", "The results show that our DTA and VDTA can assist FIA to achieve the greatest ASR in comparison with other gradient-based attacks (i.e., MI/NIFGSM and their variance tuning-based version).", "Besides, our DTA also spends less time consumption in comparison with VMI/VNIFGSM, in which the gradient of input in our method is computed 100 times (i.e., K\u2022T = 10\u00d710), while that in variance tuning-based gradient attacks is 200 times (i.e., N \u2022 T = 20 \u00d7 10).", "Table 5 shows the ASR on six victim models when our methods, including DTA, VDTA, and NP, are combined with three input transformation methods, respectively."], "text_after_citation": ["The gradient-based attacks are enhanced by DIM #OTHEREFR , SIM #OTHEREFR and TIM 
#OTHEREFR , respectively.", "The adversarial examples are generated by the surrogate model, ResNet50 #OTHEREFR . * denotes the ASR under the white-box setting. Avg. means the average value except *.", "compatible with each input transformation method, and the network pruning method can further improve the transferability of adversarial examples.", "In particular, the combination of both DTA+NP and DIM #OTHEREFR not only has the highest transferability but also has less time consumption in comparison with variance tuning-based gradient attacks.", "Note that, in this evaluation, VDTA computes the gradient of input 2000 times (i.e., K \u2022 N \u2022 T = 10 \u00d7 20 \u00d7 10)."], "citing_paper_content": {"title": "Improving The Transferability Of Adversarial Examples Via Direction Tuning", "abstract": "ing the oscillating component. By doing so, our direction tuning attack can achieve better convergence and enhance the transferability of the generated adversarial examples. In addition, a network pruning method is proposed to smooth the decision boundary, thereby further decreasing the update oscillation and enhancing the transferability of the generated adversarial examples. The experiment results on ImageNet demonstrate that the average attack success rate (ASR) of the adversarial examples generated by our method can be improved from 87.9% to 94.5% on five victim models without defenses, and from 69.1% to 76.2% on eight advanced defense methods, in comparison with that of latest gradient-based attacks."}, "cited_paper_content": {"title": "Imagenet Large Scale Visual Recognition Challenge", "abstract": "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. 
This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements."}, "keywords": ["direction tuning attacks", "ImageNet"], "citation_intent": "result"} {"citing_id": "2304.12294v1", "cited_id": "2003.08934", "section_title": "Nerf Decoder", "citation": "With the emitted color c and volume density \u03c3 predicted by the rendering network, novel views can be synthesized via volume rendering, which is implemented with differential ray marching as in NeRF #REFR .", "text_before_citation": ["4constructed from the encoder f \u03b8 is fed into the NeRF decoder g \u03c6 for predicting NeRF color and density, as formulated in Eq. (1). Rendering network.", "We follow prior works #OTHEREFR to construct a MLP-based rendering network.", "Similarly, we also include the texture priors by concatenating the color information sampled on all input views with the given position.", "But unlike the typical MLP decoder that processes all points on a ray independently, we further explore introducing cross-point interactions by fusing the rendered information along a ray via a Transformer.", "We adopt IBRNet's #OTHEREFR ray Transformer in our implementation for convenience. 
Volume rendering."], "text_after_citation": ["Specifically, to estimate the color C of a pixel, radiance needs to be accumulated across all sampled shading points on the corresponding ray that passes through the pixel,", "EQUATION", "where c i , \u03c3 i refer to the color and density of the i-th sampled 3D point on the ray.", "T i is the volume transmittance and \u03b4 i denotes the distances between adjacent points.", "K is the total number of sampled 3D points on a ray."], "citing_paper_content": {"title": "Explicit Correspondence Matching For Generalizable Neural Radiance Fields", "abstract": "We present a new generalizable NeRF method that is able to directly generalize to new unseen scenarios and perform novel view synthesis with as few as two source views. The key to our approach lies in the explicitly modeled correspondence matching information, so as to provide the geometry prior to the prediction of NeRF color and density for volume rendering. The explicit correspondence matching is quantified with the cosine similarity between image features sampled at the 2D projections of a 3D point on different views, which is able to provide reliable cues about the surface geometry. Unlike previous methods where image features are extracted independently for each view, we consider modeling the cross-view interactions via Transformer cross-attention, which greatly improves the feature matching quality. Our method achieves state-of-the-art results on different evaluation settings, with the experiments showing a strong correlation between our learned cosine feature similarity and volume density, demonstrating the effectiveness and superiority of our proposed method. 
Code is at https://github.com/donydchen/matchnerf."}, "cited_paper_content": {"title": "Nerf: Representing Scenes As Neural Radiance Fields For View Synthesis", "abstract": "We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\\theta, \\phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons."}, "keywords": ["volume rendering"], "citation_intent": "method"} {"citing_id": "2303.12869v1", "cited_id": "1910.10683", "section_title": "A. 
Fine-Tuning", "citation": "According to #REFR , a way to improve the model's performance is by increasing the number of steps in the training.", "text_before_citation": ["We fine-tune our models based on two criteria: a) Sequence Length: After analyzing in detail the outputs generated by previous works, we observed that some of the code sequences produced by the models were incomplete compared to the target ones.", "Consequently, we tokenized the training and validation sets with SentencePiece model #OTHEREFR .", "We then computed the largest sequence data, and used its length for both the inputs and the targets.", "b) Number of steps: Since we increased the length of sequences in our model, we increased the number of fine-tuning steps."], "text_after_citation": ["We apply both criteria initializing the fine-tuning from CoTexT checkpoints 2CC and 1CC, respectively.", "CoTexT-1CC is pretrained on unimodal data (only code), and CoTexT-2CC is pretrained on bimodal data (both code and natural language).", "Results of these experiments are shown in Table III ."], "citing_paper_content": {"title": "Jacotext: A Pretrained Model For Java Code-Text Generation", "abstract": "Pretrained transformer-based models have shown high performance in natural language generation task. However, a new wave of interest has surged: automatic programming language generation. This task consists of translating natural language instructions to a programming code. Despite the fact that well-known pretrained models on language generation have achieved good performance in learning programming languages, effort is still needed in automatic code generation. In this paper, we introduce JaCoText, a model based on Transformers neural network. It aims to generate java source code from natural language text. JaCoText leverages advantages of both natural language and code generation models. 
More specifically, we study some findings from the state of the art and use them to (1) initialize our model from powerful pretrained models, (2) explore additional pretraining on our java dataset, (3) carry out experiments combining the unimodal and bimodal data in the training, and (4) scale the input and output length during the fine-tuning of the model. Conducted experiments on CONCODE dataset show that JaCoText achieves new state-of-the-art results."}, "cited_paper_content": {"title": "Exploring The Limits Of Transfer Learning With A Unified Text-To-Text Transformer", "abstract": "Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts every language problem into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new \"Colossal Clean Crawled Corpus\", we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. 
To facilitate future work on transfer learning for NLP, we release our dataset, pre-trained models, and code."}, "keywords": ["training"], "citation_intent": "background"} {"citing_id": "2303.12582v1", "cited_id": "1907.05019", "section_title": "Training Settings", "citation": "This is similar to studies #REFR Team et al., ai) that have leveraged upsampling for under-represented languages during pre-training large language models in machine translation.", "text_before_citation": ["Relatively Equal Model Parameters: The XLS-R model consists of 315, 703, 690 model parameters while the Wav2Vec2.0-Large model has 315, 693, 962 -a requirement enforced in order to ensure that neither model has an edge over the other based on their size.", "Handling Class Imbalance: The AfroDigits dataset currently has very small audio samples for our focus languages, with an unequal balance of digits for each language (see Figures 5 -16) .", "In order to prevent the model from overfitting on the classes with many samples, we implemented weighted sampling #OTHEREFR .", "With weighted sampling in the data loading process, different from the normal sampling which favors the majority classes #OTHEREFR , the labels are chosen with a probability inversely proportional to their size in the training set.", "This means that at each training step and for each language, the labels with few samples are more likely to be chosen for training the model."], "text_after_citation": ["Training Setup: All audio samples were resampled to 16kHz for the finetuning experiments.", "We froze the encoders of each model and finetuned for 100 epochs.", "We used the Adam optimizer Kingma & Ba (2015), with a learning rate of 3e \u2212 5 for both models.", "We did not do any search for optimal hyperparameters but instead used the recommended settings from the authors.", "We ran our finetuning experiments with five different seeds, then we took the average over the different runs as well as the standard deviation."], 
"citing_paper_content": {"title": "Afrodigits: A Community-Driven Spoken Digit Dataset For African Languages", "abstract": "The advancement of speech technologies has been remarkable, yet its integration with African languages remains limited due to the scarcity of African speech corpora. To address this issue, we present AfroDigits, a minimalist, community-driven dataset of spoken digits for African languages, currently covering 38 African languages. As a demonstration of the practical applications of AfroDigits, we conduct audio digit classification experiments on six African languages [Igbo (ibo), Yoruba (yor), Rundi (run), Oshiwambo (kua), Shona (sna), and Oromo (gax)] using the Wav2Vec2.0-Large and XLS-R models. Our experiments reveal a useful insight on the effect of mixing African speech corpora during finetuning. AfroDigits is the first published audio digit dataset for African languages and we believe it will, among other things, pave the way for Afro-centric speech applications such as the recognition of telephone numbers, and street numbers. We release the dataset and platform publicly at https://huggingface.co/datasets/chrisjay/crowd-speech-africa and https://huggingface.co/spaces/chrisjay/afro-speech respectively."}, "cited_paper_content": {"title": "Massively Multilingual Neural Machine Translation In The Wild: Findings And Challenges", "abstract": "We introduce our efforts towards building a universal neural machine translation (NMT) system capable of translating between any language pair. We set a milestone towards this goal by building a single massively multilingual NMT model handling 103 languages trained on over 25 billion examples. Our system demonstrates effective transfer learning ability, significantly improving translation quality of low-resource languages, while keeping high-resource language translation quality on-par with competitive bilingual baselines.
We provide in-depth analysis of various aspects of model building that are crucial to achieving quality and practicality in universal NMT. While we prototype a high-quality universal translation system, our extensive empirical analysis exposes issues that need to be further addressed, and we suggest directions for future research."}, "keywords": ["under-represented languages", "machine translation"], "citation_intent": "result"} {"citing_id": "2303.10902v2", "cited_id": "1503.02531", "section_title": "Backbone Classifier", "citation": "The reason for using a soft label is that a soft label usually provides more information #REFR .", "text_before_citation": ["K", "EQUATION", "Note that p i is a soft pseudo label rather than a hard pseudo label."], "text_after_citation": ["By using the proposed test time self-distillation, the network could map the uniformity for the current samples to improve the quality of representations.", "Although we use an entropy filter to drop noisy labels when computing prototypes, some mistaken predictions are still inevitable.", "We propose that, for a reliable sample, the outputs of the linear fully connected layer and prototype-based classifier should be similar.", "Hence, we adopt the consistency filter to identify the mistaken predictions.", "Specifically, if the linear classifier and prototype-based classifier produce the same predictions, i.e."], "citing_paper_content": {"title": "Feature Alignment And Uniformity For Test Time Adaptation", "abstract": "Test time adaptation (TTA) aims to adapt deep neural networks when receiving out of distribution test domain samples. In this setting, the model can only access online unlabeled test samples and pre-trained models on the training domains. We first address TTA as a feature revision problem due to the domain gap between source domains and target domains. After that, we follow the two measurements alignment and uniformity to discuss the test time feature revision.
For test time feature uniformity, we propose a test time self-distillation strategy to guarantee the consistency of uniformity between representations of the current batch and all the previous batches. For test time feature alignment, we propose a memorized spatial local clustering strategy to align the representations among the neighborhood samples for the upcoming batch. To deal with the common noisy label problem, we propound the entropy and consistency filters to select and drop the possible noisy labels. To prove the scalability and efficacy of our method, we conduct experiments on four domain generalization benchmarks and four medical image segmentation tasks with various backbones. Experiment results show that our method not only improves baseline stably but also outperforms existing state-of-the-art test time adaptation methods."}, "cited_paper_content": {"title": "Distilling The Knowledge In A Neural Network", "abstract": "A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. 
Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel."}, "keywords": ["soft label"], "citation_intent": "background"} {"citing_id": "2303.13528v1", "cited_id": "1605.05419", "section_title": "Introduction", "citation": "Scripting languages are well suited to such tasks because researchers can focus on high-level needs and worry less about memory management, code efficiency, and other technical details #REFR .", "text_before_citation": ["These opportunities have motivated the creation of interdisciplinary training programs, courses, workshops, and tutorials to teach computing skills in a life-sciences context 6,2,@ 7,8-12 .", "In some circumstances, it is sufficient for researchers to understand computing concepts and learn to use existing tools; in other settings, learning to write computer code is invaluable #OTHEREFR .", "A 2011 survey of scientists from many disciplines (other than computer science) found that researchers spent 35% of their time, on average, writing code #OTHEREFR .", "Computer programming makes it possible to complete tasks not supported by existing tools, interface with software libraries, adapt algorithms based on custom needs, tidy data, and more #OTHEREFR [16] #OTHEREFR .", "In these applied scenarios, computer programs are often small #OTHEREFR , and the code might only be used for one particular project."], "text_after_citation": ["Python, a scripting language, has gained much acceptance among scientists #OTHEREFR and programming educators #OTHEREFR , perhaps due to its relatively simple syntax 19 and the availability of libraries supporting common tasks #OTHEREFR .", "However, learning to program is a daunting challenge for many researchers.", "Decades of research have sought to characterize common errors and identify effective ways for novices to learn programming skills #OTHEREFR ; much remains to be discovered.", "Recent advances in artificial intelligence have shown promise for converting 
natural-language descriptions of programming tasks to functional code 31, #OTHEREFR .", "The first such large language models (LLMs) fine-tuned to generate code that captured widespread interest were OpenAI's Codex and DeepMind's AlphaCode #OTHEREFR ."], "citing_paper_content": {"title": "Many Bioinformatics Programming Tasks Can Be Automated With Chatgpt", "abstract": "Computer programming is a fundamental tool for life scientists, allowing them to carry out many essential research tasks. However, despite a variety of educational efforts, learning to write code can be a challenging endeavor for both researchers and students in life science disciplines. Recent advances in artificial intelligence have made it possible to translate human-language prompts to functional code, raising questions about whether these technologies can aid (or replace) life scientists' efforts to write code. Using 184 programming exercises from an introductory-bioinformatics course, we evaluated the extent to which one such model-OpenAI's ChatGPT-can successfully complete basic- to moderate-level programming tasks. On its first attempt, ChatGPT solved 139 (75.5%) of the exercises. For the remaining exercises, we provided natural-language feedback to the model, prompting it to try different approaches. Within 7 or fewer attempts, ChatGPT solved 179 (97.3%) of the exercises. These findings have important implications for life-sciences research and education. For many programming tasks, researchers no longer need to write code from scratch. Instead, machine-learning models may produce usable solutions.
Instructors may need to adapt their pedagogical approaches and assessment techniques to account for these new capabilities that are available to the general public."}, "cited_paper_content": {"title": "An Introduction To Programming For Bioscientists: A Python-Based Primer", "abstract": "Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in the biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. 
Starting with basic concepts, such as that of a 'variable', the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences."}, "keywords": ["Scripting languages"], "citation_intent": "background"} {"citing_id": "2304.00445v1", "cited_id": "1703.09197", "section_title": "Introduction", "citation": "West and O'Shea #REFR apply a convolutional long short-term memory deep neural network (CLDNN), which significantly improves classification accuracy.", "text_before_citation": ["Traditional AMC can be divided into two categories: likelihood-based (LB) methods #OTHEREFR and feature-based (FB) methods #OTHEREFR .", "However, LB methods rely on prior knowledge about the channel and signal.", "FB methods select hand-crafted features, then conduct the classification using machine learning algorithms, such as support vector machines #OTHEREFR and random forests #OTHEREFR . FB methods highly depend on expert knowledge.", "O'Shea, Corgan, and Clancy #OTHEREFR pioneer a CNN model for AMC, initiating the application of deep learning in AMC. It outperforms traditional methods that rely on manual features.", "A model based on LSTM is proposed by #OTHEREFR . Huang et al. #OTHEREFR apply GRU to classify the signals."], "text_after_citation": ["Moreover, the dual-stream structure of CNN-LSTM is proposed in #OTHEREFR to efficiently classify signals.", "It uses the information of the I/Q channel and amplitude/phase to achieve better performance.", "The I and Q channels are integrated in #OTHEREFR to learn the correlations of signals in parallel. Furthermore, Huynh-The et al.", "#OTHEREFR implement different asymmetric convolution kernels and skip connections to learn spatial correlations.
Liang et al.", "#OTHEREFR combine the attention mechanism and complex-valued neural network to better represent the signal."], "citing_paper_content": {"title": "Amc-Net: An Effective Network For Automatic Modulation Classification", "abstract": "Automatic modulation classification (AMC) is a crucial stage in the spectrum management, signal monitoring, and control of wireless communication systems. The accurate classification of the modulation format plays a vital role in the subsequent decoding of the transmitted data. End-to-end deep learning methods have been recently applied to AMC, outperforming traditional feature engineering techniques. However, AMC still has limitations in low signal-to-noise ratio (SNR) environments. To address the drawback, we propose a novel AMC-Net that improves recognition by denoising the input signal in the frequency domain while performing multi-scale and effective feature extraction. Experiments on two representative datasets demonstrate that our model performs better in efficiency and effectiveness than the most current methods."}, "cited_paper_content": {"title": "Deep Architectures For Modulation Recognition", "abstract": "We survey the latest advances in machine learning with deep neural networks by applying them to the task of radio modulation recognition. Results show that radio modulation recognition is not limited by network depth and further work should focus on improving learned synchronization and equalization. 
Advances in these areas will likely come from novel architectures designed for these tasks or through novel training methods."}, "keywords": ["deep neural network"], "citation_intent": "method"} {"citing_id": "2303.02347v1", "cited_id": "1606.06160", "section_title": "Quantization-Aware Training (Qat)", "citation": "DoReFa-Net #REFR proposed to optimize the clipping value and the scaling factor of the uniform quantizers for weights and activations separately.", "text_before_citation": [], "text_after_citation": ["It was validated on image classification tasks under multiple bit-widths, but only with the rather simple AlexNet architecture.", "Most QAT works quantize weights and activations simultaneously by optimizing the uniform quantization parameters #OTHEREFR , layer-wise or channel-wise mixed-precision quantization #OTHEREFR , or leveraging nonuniform quantization such as Logarithmic quantizer #OTHEREFR .", "Most recent QAT works #OTHEREFR used \"Straight-Through Estimator\" (STE) #OTHEREFR to estimate the gradient of the nondifferentiable quantization function, while another work #OTHEREFR softened the linear quantization operation in order to match the true gradient with STE.", "#OTHEREFR adopted a primitive quantizer design based on a uniform quantizer for gradients (without scaling and other optimization), and large performance drops were observed when training with low-bit gradients.", "SBM #OTHEREFR adopted fixed-point 8-bit gradient quantization but only focused on improving the quantization schemes in the forward pass."], "citing_paper_content": {"title": "Metagrad: Adaptive Gradient Quantization With Hypernetworks", "abstract": "A popular track of network compression approach is Quantization aware Training (QAT), which accelerates the forward pass during the neural network training and inference. 
However, little prior effort has been made to quantize and accelerate the backward pass during training, even though that contributes around half of the training time. This can be partly attributed to the fact that errors of low-precision gradients during the backward pass cannot be amortized by the training objective as in the QAT setting. In this work, we propose to solve this problem by incorporating the gradients into the computation graph of the next training iteration via a hypernetwork. Various experiments on the CIFAR-10 dataset with different CNN network architectures demonstrate that our hypernetwork-based approach can effectively reduce the negative effect of gradient quantization noise and successfully quantizes the gradients to INT4 with only 0.64 accuracy drop for VGG-16 on CIFAR-10."}, "cited_paper_content": {"title": "Dorefa-Net: Training Low Bitwidth Convolutional Neural Networks With Low Bitwidth Gradients", "abstract": "We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during the backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural networks on such hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve prediction accuracy comparable to 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set.
The DoReFa-Net AlexNet model is released publicly."}, "keywords": ["uniform quantizers"], "citation_intent": "background"} {"citing_id": "2303.15404v1", "cited_id": "1608.00163", "section_title": "Introduction", "citation": "Although there is no firm answer concerning these rules #REFR , it is commonly admitted that the definition of a community relates to a difference in connection density between its interior and its boundary.", "text_before_citation": ["Nodes of these networks are often arranged in tightly knit groups called communities.", "These communities delineate the organizational supports of function, property, purpose or categories.", "They thus highlight a structure of the network providing an organizational understanding behind the topology.", "Formally, the goal is to identify a partition of the nodes of the network.", "A community structure is a partition of the vertices of a graph defined according to rules structuring the vertex distribution."], "text_after_citation": ["The density of connection between nodes inside a community must be higher than the density of connection across communities.", "Such a community obtained by this method is called a topological community #OTHEREFR .", "Community detection algorithms capture this difference of connection density for detecting communities in a network #OTHEREFR .", "The quality of a community structure is evaluated by a measure assessing this partitioning rule.", "A recognized standard is the modularity introduced by Newman #OTHEREFR ."], "citing_paper_content": {"title": "Chromatic Community Structure Detection", "abstract": "The detection of community structure is probably one of the hottest trends in complex network research as it reveals the internal organization of people, molecules or processes behind social, biological or computer networks.
The issue is to provide a network partition representative of this organization so that each community presumably gathers nodes sharing a common mission, purpose or property. Usually the identification is based on the difference between the connectivity density of the interior and the boundary of a community. Indeed, nodes sharing a common purpose or property are expected to interact closely. Although this rule appears mostly relevant, some fundamental scientific problems like disease module detection highlight the inability to determine the communities significantly under this connectivity rule. The main reason is that the connectivity density is not correlated to a shared property or purpose. Therefore, another paradigm is required to properly formalize this issue and meaningfully detect these communities. In this article we study community formation from this new principle. With colors formally representing the shared properties, the issue is thus to maximize groups of nodes with the same color within communities. We study this novel community framework by introducing a new measure, called chromarity, assessing the quality of the community structure with regard to this constraint. Next we propose an algorithm for community structure detection based on this new community formation paradigm."}, "cited_paper_content": {"title": "Community Detection In Networks: A User Guide", "abstract": "Community detection in networks is one of the most popular topics of modern network science. Communities, or clusters, are usually groups of vertices having higher probability of being connected to each other than to members of other groups, though other patterns are possible. Identifying communities is an ill-defined problem. There are no universal protocols on the fundamental ingredients, like the definition of community itself, nor on other crucial issues, like the validation of algorithms and the comparison of their performances.
This has generated a number of confusions and misconceptions, which undermine the progress in the field. We offer a guided tour through the main aspects of the problem. We also point out strengths and weaknesses of popular methods, and give directions to their use."}, "keywords": ["community"], "citation_intent": "background"} {"citing_id": "2303.06588v1", "cited_id": "1808.09781", "section_title": "Experimental Setup 5.1 Experimental Settings", "citation": "SASRec #REFR is trained with the Adam optimizer and a learning rate of 0.001. The number of layers and attention heads is 2.", "text_before_citation": ["For sequential recommendation baselines, we keep a consistent batch size of 4096 and fix the maximum interaction length at 50.", "We employ a leave-one-out strategy for validation and testing, and the full item set is used for evaluation. We use an early-stopping patience of 10.", "We consider the sequential recommendation task as a multiclass classification task and use the cross-entropy loss for training the models."], "text_after_citation": ["The dropout rate is 0.5 and GELU is used as an activation function.", "HGN #OTHEREFR and SINE #OTHEREFR are trained with embedding size 64, learning rate 0.001, and the Adam optimizer.", "LightSANs #OTHEREFR uses a latent interest dimension of 5, 2 attention heads, and 2 transformer layers.", "Training is done with a learning rate of 0.001 and the Adam optimizer.", "GCSAN #OTHEREFR has 2 transformer encoder layers and 2 attention heads; the hidden-state feature size is 64, the feed-forward layers' hidden size is 256, the weight is set to 0.6, and the number of layers in the graph neural network is 1; Adam is used for optimization with a learning rate of 0.001."], "citing_paper_content": {"title": "Mobilerec: A Large-Scale Dataset For Mobile Apps Recommendation", "abstract": "Recommender systems have become ubiquitous in our digital lives, from recommending products on e-commerce websites to suggesting movies and music on streaming platforms.
Existing recommendation datasets, such as Amazon Product Reviews and MovieLens, greatly facilitated the research and development of recommender systems in their respective domains. While the number of mobile users and applications (aka apps) has increased exponentially over the past decade, research in mobile app recommender systems has been significantly constrained, primarily due to the lack of high-quality benchmark datasets, as opposed to recommendations for products, movies, and news. To facilitate research for app recommendation systems, we introduce a large-scale dataset, called MobileRec. We constructed MobileRec from users' activity on the Google play store. MobileRec contains 19.3 million user interactions (i.e., user reviews on apps) with over 10K unique apps across 48 categories. MobileRec records the sequential activity of a total of 0.7 million distinct users. Each of these users has interacted with no fewer than five distinct apps, which stands in contrast to previous datasets on mobile apps that recorded only a single interaction per user. Furthermore, MobileRec presents users' ratings as well as sentiments on installed apps, and each app contains rich metadata such as app name, category, description, and overall rating, among others. We demonstrate that MobileRec can serve as an excellent testbed for app recommendation through a comparative study of several state-of-the-art recommendation approaches. The quantitative results can act as a baseline for other researchers to compare their results against."}, "cited_paper_content": {"title": "Self-Attentive Sequential Recommendation", "abstract": "Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the 'context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). 
Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. The goal of our work is to balance these two goals, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN), but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are 'relevant' from a user's action history, and use them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations on attention weights also show how our model adaptively handles datasets with various density, and uncovers meaningful patterns in activity sequences."}, "keywords": ["layers"], "citation_intent": "method"} {"citing_id": "2304.01823v1", "cited_id": "1602.04505", "section_title": "Contracting A Single Crossedge.", "citation": "Its proof is the same as the proof of #REFR [Lemma 4.28], which directly translates to the locally finite case.", "text_before_citation": ["If there exists N such that s \u2032 \u2208 Y \u2032 N then for all n N we must have s \u2032 \u2208 Y n by definition of .", "Up to extracting another infinite subsequence, we can assume that either s \u2032 \u2208 Y \u2032 n for all n or s \u2032 \u2208 Z \u2032 n for all n.
As a result, ((Y \u2032 n ) \u2227 , S \u2032 n , (Z \u2032 n ) \u2227 ) n\u2208N is an infinite decreasing sequence of separations of order 3 in T , contradicting the fact that T is a well-quasi-ordered set.", "We conclude this subsection with the following result relating the degeneracy of minimal separations in G and G \u2032 ."], "text_after_citation": ["To be more precise, we also need the additional assumption that T \u2032 is a region tangle to make the proof work, which is given by Lemma 3.25.", "Lemma 3.26 (Lemma 4.28 and Corollary 4.29 in #OTHEREFR ).", "Either G \u2032 is 4-connected and T \u2032 min = {(\u2205, \u2205, V (G \u2032 ))}, or T \u2032 min = {(Y, S, Z) \u2228 : (Y, S, Z) \u2208 T min and S \u2228 is a separator of G \u2032 }.", "In the latter case, for all (Y, S, Z) \u2208 T min , (Y, S, Z) is non-degenerate if and only if (Y, S, Z) \u2228 is non-degenerate.", "3.6. Contracting all the crossedges."], "citing_paper_content": {"title": "The Structure Of Quasi-Transitive Graphs Avoiding A Minor With Applications To The Domino Problem", "abstract": "An infinite graph is quasi-transitive if its vertex set has finitely many orbits under the action of its automorphism group. In this paper we obtain a structure theorem for locally finite quasi-transitive graphs avoiding a minor, which is reminiscent of the Robertson-Seymour Graph Minor Structure Theorem. We prove that every locally finite quasi-transitive graph G avoiding a minor has a tree-decomposition whose torsos are finite or planar; moreover the tree-decomposition is canonical, i.e. invariant under the action of the automorphism group of G. As applications of this result, we prove the following. \u2022 Every locally finite quasi-transitive graph attains its Hadwiger number, that is, if such a graph contains arbitrarily large clique minors, then it contains an infinite clique minor.
This extends a result of Thomassen (1992) who proved it in the 4-connected case and suggested that this assumption could be omitted. In particular, this shows that a Cayley graph excludes a finite minor if and only if it avoids the countable clique as a minor. \u2022 Locally finite quasi-transitive graphs avoiding a minor are accessible (in the sense of Thomassen and Woess), which extends known results on planar graphs to any proper minor-closed family. \u2022 Minor-excluded finitely generated groups are accessible (in the group-theoretic sense) and finitely presented, which extends classical results on planar groups. \u2022 The domino problem is decidable in a minor-excluded finitely generated group if and only if the group is virtually free, which proves the minor-excluded case of a conjecture of Ballier and Stein (2018)."}, "cited_paper_content": {"title": "Quasi-4-Connected Components", "abstract": "We introduce a new decomposition of graphs into quasi-4-connected components, where we call a graph quasi-4-connected if it is 3-connected and it only has separations of order 3 that remove a single vertex. Moreover, we give a cubic-time algorithm computing the decomposition of a given graph. Our decomposition into quasi-4-connected components refines the well-known decompositions of graphs into biconnected and triconnected components. We relate our decomposition to Robertson and Seymour's theory of tangles by establishing a correspondence between the quasi-4-connected components of a graph and its tangles of order 4."}, "keywords": ["locally finite case", "Lemma"], "citation_intent": "background"} {"citing_id": "2303.06982v1", "cited_id": "2002.05709", "section_title": "Related Works", "citation": "This is in line with other studies in the image community #REFR .
Chung et al.", "text_before_citation": ["As SSL models are becoming mainstream, many researchers have tried to understand what is happening under the hood of these pre-trained models, either by learning probing tasks #OTHEREFR or by analyzing the representations directly #OTHEREFR , the two most popular approaches.", "The authors of #OTHEREFR study the WAV2VEC2.0 model #OTHEREFR representations directly rather than training additional classifiers as probes.", "They show that the pre-trained model follows an auto-encoder style behavior, i.e., intermediate layers provide richer information about higher-level classes (phone/word information) than the initial and last layers."], "text_after_citation": ["#OTHEREFR studies the similarity between the representations of three SSL pre-training techniques, i.e., contrastive predictive coding (CPC), auto-regressive predictive coding (APC), and masked predictive coding (MPC).", "Their findings suggest that it is the learning objective which controls the representation similarity, rather than architectural choices such as building blocks and directionality.", "Similarly, in a comprehensive review of self-supervised speech representation learning by #OTHEREFR", "(2022) #OTHEREFR , the authors posited that the choice of training criterion has a greater impact on the performance gains compared to the architecture or the directionality of input.", "Given that the pre-training criterion has the highest impact on the model's performance, in this paper we investigate the impact of the masked prediction loss on the encoded information type in various layers of the HuBERT model #OTHEREFR ."], "citing_paper_content": {"title": "Analysing The Masked Predictive Coding Training Criterion For Pre-Training A Speech Representation Model", "abstract": "Recent developments in pre-trained speech representation utilizing self-supervised learning (SSL) have yielded exceptional results on a variety of downstream tasks.
One such technique, known as masked predictive coding (MPC), has been employed by some of the most high-performing models. In this study, we investigate the impact of the MPC loss on the type of information learnt at various layers in the HuBERT model, using nine probing tasks. Our findings indicate that the amount of content information learned at various layers of the HuBERT model has a positive correlation to the MPC loss. Additionally, it is also observed that any speaker-related information learned at intermediate layers of the model is an indirect consequence of the learning process, and therefore cannot be controlled using the MPC loss. These findings may serve as inspiration for further research in the speech community, specifically in the development of new pre-training tasks or the exploration of new pre-training criteria that directly preserve both speaker and content information at various layers of a learnt model."}, "cited_paper_content": {"title": "A Simple Framework For Contrastive Learning Of Visual Representations", "abstract": "This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels."}, "keywords": ["Chung", "image community"], "citation_intent": "result"} {"citing_id": "2304.13292v1", "cited_id": "1810.04805", "section_title": "Results And Discussions", "citation": "The larger improvement in scores for intent classification may be correlated with the fact that for our data augmentation experiment on paraphrasing and machine translation, we were only able to Table 6 : Accuracy results for intent classification on the validation set. Baseline: mBERT #REFR .", "text_before_citation": ["Our findings regarding the performance of larger models are also observed in the test set.", "Table 6 presents the evaluation results for both slot filling and intent classification tasks across all three target languages. Our mT0 models strongly outperform the baseline models.", "Specifically, our mT0 models outperformed the baseline models in all target languages for the intent classification task, highlighting the effectiveness of larger models for intent classification.", "Moreover, our mT0 models also outperform the baseline models in two of the target languages for slot filling task, further indicating the superiority of larger models for sentence-level classification tasks.", "The improvement in scores for intent classification is more evident than for slot filling."], "text_after_citation": ["LS: LASER #OTHEREFR +sBERT #OTHEREFR . LL: LASER+LaBSE #OTHEREFR . LX: LASER+XLM-R #OTHEREFR . 
Underline: Best-performing models for each setting.", "Bold: Best F1 score across all the experiments and settings.", "augment data for intent classification, resulting in a larger improvement in performance for this task compared to slot filling.", "It is worth noting that we use the validation set for model selection, which resulted in higher scores than those achieved on the test set.", "This is because the validation data is similar to the data used during training, while the test data is entirely new and unseen."], "citing_paper_content": {"title": "Zero-Shot Slot And Intent Detection In Low-Resource Languages", "abstract": "Intent detection and slot filling are critical tasks in spoken and natural language understanding for task-oriented dialog systems. In this work we describe our participation in the slot and intent detection for low-resource language varieties (SID4LR; Aepli et al. (2023)). We investigate the slot and intent detection (SID) tasks using a wide range of models and settings. Given the recent success of multitask-prompted finetuning of large language models, we also test the generalization capability of the recent encoder-decoder model mT0 (Muennighoff et al., 2022) on new tasks (i.e., SID) in languages they have never intentionally seen. We show that our best model outperforms the baseline by a large margin (up to +30 F1 points) in both SID tasks."}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.
As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["intent classification", "machine translation"], "citation_intent": "result"} {"citing_id": "2304.04544v1", "cited_id": "1306.0187", "section_title": "Image Motion Deblurring", "citation": "First we have found that MALA with subgradient clearly has the worst performance among all the methods, a finding agreeing with #REFR .", "text_before_citation": ["Next we test the algorithms with the additional Metropolis (accept-reject) step included.", "More precisely we implement the following algorithms: MALA with subgradient, the PMALA method in #OTHEREFR , a variant of PMALA with Chambolle2004 replaced by K-step Chambolle-Pock, and the proposed PDFP-based algorithm denoted as MALA-PDFP.", "The results of all the methods are compared in Table 6.3, and we reiterate that thanks to the Metropolis step, the samples are asymptotically unbiased.", "For the stability of PMALA and MALA-PDFP, the step size \u03b4 should be no larger than the parameter \u03c1.", "Following #OTHEREFR we fix \u03b4 = \u03c1, and their values (shown in Table 6.3) are chosen such that the acceptance rates of all the algorithms are around 50% [31, #OTHEREFR ] for fair comparison."], "text_after_citation": ["Moreover, in both MALA-PDFP and PMALA-CP, we can see that the results of K = 5 are rather close to
those of K = 100 and PMALA, where in both cases the subproblem is solved accurately.", "Notably, in Table 6.3 the run time of MALA-PDFP for K = 100 is similar to or less than that for K = 5; this is because in these experiments \u03c1 is much smaller than in Table 6.2 and the stopping criterion x n,k+1 \u2212 x n,k < 10 \u22125 is met even for k < 5.", "More interestingly, however, PMALA-CP with K = 1 yields substantially worse results (in terms of ESS and ESJD) than the algorithms that solve the subproblem accurately, while MALA-PDFP with K = 1 produces results that are comparable to those.", "While this is an interesting indicator that the 1-step MALA-PDFP may be an effective and efficient sampling algorithm, further investigation and more comprehensive tests of the method are needed. Table 6.3: Comparison of the Metropolis-adjusted Langevin algorithms."], "citing_paper_content": {"title": "Approximate Primal-Dual Fixed-Point Based Langevin Algorithms For Non-Smooth Convex Potentials *", "abstract": "The Langevin algorithms are frequently used to sample the posterior distributions in Bayesian inference. In many practical problems, however, the posterior distributions often consist of non-differentiable components, posing challenges for the standard Langevin algorithms, as they require evaluating the gradient of the energy function in each iteration. To this end, a popular remedy is to utilize the proximity operator, and as a result one needs to solve a proximity subproblem in each iteration. The conventional practice is to solve the subproblems accurately, which can be exceedingly expensive, as the subproblem needs to be solved in each iteration. We propose an approximate primal-dual fixed-point algorithm for solving the subproblem, which only seeks an approximate solution of the subproblem and therefore reduces the computational cost considerably.
We provide theoretical analysis of the proposed method and also demonstrate its performance with numerical examples."}, "cited_paper_content": {"title": "Proximal Markov Chain Monte Carlo Algorithms", "abstract": "This paper presents a new Metropolis-adjusted Langevin algorithm (MALA) that uses convex analysis to simulate efficiently from high-dimensional densities that are log-concave, a class of probability distributions that is widely used in modern high-dimensional statistics and data analysis. The method is based on a new first-order approximation for Langevin diffusions that exploits log-concavity to construct Markov chains with favourable convergence properties. This approximation is closely related to Moreau-Yoshida regularisations for convex functions and uses proximity mappings instead of gradient mappings to approximate the continuous-time process. The proposed method complements existing MALA methods in two ways. First, the method is shown to have very robust stability properties and to converge geometrically for many target densities for which other MALA are not geometric, or only if the step size is sufficiently small. Second, the method can be applied to high-dimensional target densities that are not continuously differentiable, a class of distributions that is increasingly used in image processing and machine learning and that is beyond the scope of existing MALA and HMC algorithms. To use this method it is necessary to compute or to approximate efficiently the proximity mappings of the logarithm of the target density. For several popular models, including many Bayesian models used in modern signal and image processing and machine learning, this can be achieved with convex optimisation algorithms and with approximations based on proximal splitting techniques, which can be implemented in parallel. 
The proposed method is demonstrated on two challenging high-dimensional and non-differentiable models related to image resolution enhancement and low-rank matrix estimation that are not well addressed by existing MCMC methodology."}, "keywords": ["subgradient"], "citation_intent": "result"} {"citing_id": "2303.15782v1", "cited_id": "2003.08934", "section_title": "Related Work", "citation": "Recent advances in differential rendering have enabled learning of shapes, as well as other scene properties such as appearance, only from images and without the need for 3D supervision #REFR .", "text_before_citation": ["coordinate-based multi-layer perceptrons #OTHEREFR , have become a popular method for reconstruction in recent years.", "These methods encode continuous functions that model various scene properties, such as Signed Distance #OTHEREFR , radiance #OTHEREFR , and occupancy #OTHEREFR .", "Variations of these include hybrid discrete-continuous representations that employ an external data structure, i.e.", "a grid or an octree, to partition the implicit function #OTHEREFR .", "The encoded shape can then be extracted via sphere tracing #OTHEREFR after querying the implicit function repeatedly."], "text_after_citation": ["Our approach falls into the paradigm of using neural fields for articulated object reconstruction and further learns a complete system for detection, pose estimation, and articulated shape reconstruction from a single observation.", "Implicit Reconstruction of Non-Rigid Objects: Going beyond static scenes with rigid objects, #OTHEREFR handle dynamic scenes while #OTHEREFR focus on reconstructing humans by leveraging their strong shape and kinematic prior as well as the amount of readily available datasets.", "[42] propose a general reconstruction framework to reconstruct any nonrigid entity (i.e.", "humans, animals, or objects) given only an RGB-video without requiring a category-specific shape template, while #OTHEREFR focus on point cloud input data 
and split the prediction into a canonical shape and a deformation field.", "One downside of general reconstruction methods is that they do not leverage the rigidity and kinematic constraints of articulated objects."], "citing_paper_content": {"title": "Carto: Category And Joint Agnostic Reconstruction Of Articulated Objects", "abstract": "Figure 1. Visualization of CARTO on unseen object instances. We first use CARTO to jointly detect all objects in the scene and then articulate them while keeping the predicted shape code constant."}, "cited_paper_content": {"title": "Nerf: Representing Scenes As Neural Radiance Fields For View Synthesis", "abstract": "We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\\theta, \\phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. 
View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons."}, "keywords": ["3D supervision", "differential rendering"], "citation_intent": "background"} {"citing_id": "2303.03975v1", "cited_id": "2003.07082", "section_title": "Corpus Development Process", "citation": "For example, we used Stanza #REFR to filter some web-scraped data for those containing an animate noun marked as an indirect object and provided this to the linguists.", "text_before_citation": ["\u2022 1,000 single animate noun AGME \u2022 500 single pronoun AGME \u2022 500 with two or more AGMEs Linguists were given details of the various categories and attributes listed in section A and asked to find sentences such that each such category is well represented (depending on the relative ease of finding such sentences).", "Linguists were also asked to prioritize diversity of animate nouns where possible.", "They were allowed to pull example sentences from natural text or construct them from scratch as they saw fit.", "However, except for a small number of toy examples, we asked that they include only sentences that were natural in both English and their target language, and could reasonably appear in some imaginable context.", "We provided samples of web-scraped data that had been filtered with various heuristics to help identify sentences fitting some of the harder-to-satisfy criteria."], "text_after_citation": ["In some cases these sentences were used directly, and in others they were modified slightly to fit the requirements.", "Throughout the process, we prioritized diversity of sentence structure, domain and vocabulary.", "Rather than produce a representative sample, our intention was to produce a corpus that would challenge any tested systems on a wide range of phenomena.", "3 Evaluation with GATE"], "citing_paper_content": {"title": "Gate: A Challenge Set For Gender-Ambiguous Translation Examples", "abstract": "Although
recent years have brought significant progress in improving translation of unambiguously gendered sentences, translation of ambiguously gendered input remains relatively unexplored. When source gender is ambiguous, machine translation models typically default to stereotypical gender roles, perpetuating harmful bias. Recent work has led to the development of \"gender rewriters\" that generate alternative gender translations on such ambiguous inputs, but such systems are plagued by poor linguistic coverage. To encourage better performance on this task we present and release GATE, a linguistically diverse corpus of gender-ambiguous source sentences along with multiple alternative target language translations. We also provide tools for evaluation and system analysis when using GATE and use them to evaluate our translation rewriter system."}, "cited_paper_content": {"title": "Stanza: A Python Natural Language Processing Toolkit For Many Human Languages", "abstract": "We introduce Stanza, an open-source Python natural language processing toolkit supporting 66 human languages. Compared to existing widely used toolkits, Stanza features a language-agnostic fully neural pipeline for text analysis, including tokenization, multi-word token expansion, lemmatization, part-of-speech and morphological feature tagging, dependency parsing, and named entity recognition. We have trained Stanza on a total of 112 datasets, including the Universal Dependencies treebanks and other multilingual corpora, and show that the same neural architecture generalizes well and achieves competitive performance on all languages tested. Additionally, Stanza includes a native Python interface to the widely used Java Stanford CoreNLP software, which further extends its functionalities to cover other tasks such as coreference resolution and relation extraction. 
Source code, documentation, and pretrained models for 66 languages are available at https://stanfordnlp.github.io/stanza."}, "keywords": ["linguists"], "citation_intent": "method"} {"citing_id": "2303.09105v1", "cited_id": "1710.06081", "section_title": "Cosine Similarity Encourager", "citation": "Although directly applying this algorithm can achieve good results, it is incompatible with SAM due to the varying scales of gradient norm #REFR .", "text_before_citation": ["where", "x_t^0 = x_t .", "Once the update for every model is complete, we calculate the final update using a larger step size \u03b1 as", "x_{t+1} = clip_{x_nat, \u03f5}(x_t + \u03b1 \u2022 (x_t^n \u2212 x_t))", "."], "text_after_citation": ["To solve this problem, we normalize the gradient at each update by their \u2113_2 norm.", "We discover that the modified version actually maximizes the cosine similarity between gradients (proof in Appendix E).", "Thus, we call it Cosine Similarity Encourager (CSE), which can be further combined with MI as MI-CSE.", "MI-CSE involves an inner momentum term to accumulate the gradients of each model. We provide the pseudocode in Appendix E."], "citing_paper_content": {"title": "Rethinking Model Ensemble In Transfer-Based Adversarial Attacks", "abstract": "Deep learning models are vulnerable to adversarial examples. Transfer-based adversarial attacks attract tremendous attention as they can identify the weaknesses of deep learning models in a black-box manner. An effective strategy to improve the transferability of adversarial examples is attacking an ensemble of models. However, previous works simply average the outputs of different models, lacking an in-depth analysis on how and why model ensemble can strongly improve the transferability. In this work, we rethink the ensemble in adversarial attacks and define the common weakness of model ensemble with the properties of the flatness of loss landscape and the closeness to the local optimum of each model.
We empirically and theoretically show that these two properties are strongly correlated with the transferability and propose a Common Weakness Attack (CWA) to generate more transferable adversarial examples by promoting these two properties. Experimental results on both image classification and object detection tasks validate the effectiveness of our approach to improve the adversarial transferability, especially when attacking adversarially trained models."}, "cited_paper_content": {"title": "Boosting Adversarial Attacks With Momentum", "abstract": "Deep neural networks are vulnerable to adversarial examples, which poses security concerns on these algorithms due to the potentially severe consequences. Adversarial attacks serve as an important surrogate to evaluate the robustness of deep learning models before they are deployed. However, most of existing adversarial attacks can only fool a black-box model with a low success rate. To address this issue, we propose a broad class of momentum-based iterative algorithms to boost adversarial attacks. By integrating the momentum term into the iterative process for attacks, our methods can stabilize update directions and escape from poor local maxima during the iterations, resulting in more transferable adversarial examples. To further improve the success rates for black-box attacks, we apply momentum iterative algorithms to an ensemble of models, and show that the adversarially trained models with a strong defense ability are also vulnerable to our black-box attacks. We hope that the proposed methods will serve as a benchmark for evaluating the robustness of various deep models and defense methods. 
With this method, we won the first places in NIPS 2017 Non-targeted Adversarial Attack and Targeted Adversarial Attack competitions."}, "keywords": ["good results", "gradient norm"], "citation_intent": "method"} {"citing_id": "2303.04222v1", "cited_id": "2004.01670", "section_title": "Limitations", "citation": "Class Imbalance Our dataset is more balanced on the binary label (24% sexism) than many previous datasets for hate and abuse detection #REFR .", "text_before_citation": [], "text_after_citation": ["However, there is substantial imbalance in categories and vectors of sexism.", "Thus, it is hard to confirm whether confusion between categories and vectors is due to inherent features of the data or due to the class imbalance.", "We encourage future work to examine the effect on performance and cross-vector confusion when balancing the dataset.", "That said, we explicitly made the decision to not re-balance our dataset because different types of sexism (especially at the finegrained level) do have very different base rates 'in the wild'.", "Dealing with class imbalance is then part-and-parcel of the problem we seek to address."], "citing_paper_content": {"title": "Semeval-2023 Task 10: Explainable Detection Of Online Sexism", "abstract": "Online sexism is a widespread and harmful phenomenon. Automated tools can assist the detection of sexism at scale. Binary detection, however, disregards the diversity of sexist content, and fails to provide clear explanations for why something is sexist. To address this issue, we introduce SemEval Task 10 on the Explainable Detection of Online Sexism (EDOS). 
We make three main contributions: i) a novel hierarchical taxonomy of sexist content, which includes granular vectors of sexism to aid explainability; ii) a new dataset of 20,000 social media comments with fine-grained labels, along with larger unlabelled datasets for model adaptation; and iii) baseline models as well as an analysis of the methods, results and errors for participant submissions to our task. Content warning: We show illustrative examples of sexist language to describe the taxonomy and analyse error types."}, "cited_paper_content": {"title": "Directions In Abusive Language Training Data: Garbage In, Garbage Out", "abstract": "Data-driven analysis and detection of abusive online content covers many different tasks, phenomena, contexts, and methodologies. This paper systematically reviews abusive language dataset creation and content in conjunction with an open website for cataloguing abusive language data. This collection of knowledge leads to a synthesis providing evidence-based recommendations for practitioners working with this complex and highly diverse data."}, "keywords": ["24% sexism", "abuse detection"], "citation_intent": "result"} {"citing_id": "2304.11104v1", "cited_id": "1811.04551", "section_title": "World Models", "citation": "DreamerV2 is composed of the following components: an image encoder z_t \u223c q(z_t | x_t, h_t) that learns a posterior latent representation conditional on the current observation x_t and recurrent state h_t, the recurrent state space model (RSSM) #REFR which is a mixture of deterministic and stochastic categorical latents, and the image, reward and discount predictors.", "text_before_citation": ["To learn a world model for behaviour learning and look-ahead shielding we leverage DreamerV2 #OTHEREFR , which was used to master Atari games in the ALE #OTHEREFR ."], "text_after_citation": ["The RSSM consists of two main components: the recurrent model", "h_t = f(h_{t\u22121}, z_{t\u22121}, a_{t\u22121})", ", which computes the next deterministic latents given the past state s_{t\u22121} = (h_{t\u22121}, z_{t\u22121}) and action a_{t\u22121}, and the transition predictor z\u0302_t \u223c p(z\u0302_t | h_t), which is used as the prior distribution over the stochastic latents conditional on the deterministic latents.", "The image predictor or decoder x\u0302_t \u223c p(x\u0302_t | h_t, z_t) is trained to predict the current observation with a reconstruction loss.", "The image predictor provides useful self-supervised gradients that help the world model learn a structured latent space for effective policy optimisation #OTHEREFR ."], "citing_paper_content": {"title": "Approximate Shielding Of Atari Agents For Safe Exploration", "abstract": "Balancing exploration and conservatism in the constrained setting is an important problem if we are to use reinforcement learning for meaningful tasks in the real world. In this paper, we propose a principled algorithm for safe exploration based on the concept of shielding. Previous approaches to shielding assume access to a safety-relevant abstraction of the environment or a high-fidelity simulator. Instead, our work is based on latent shielding - another approach that leverages world models to verify policy roll-outs in the latent space of a learned dynamics model. Our novel algorithm builds on this previous work, using safety critics and other additional features to improve the stability and farsightedness of the algorithm. We demonstrate the effectiveness of our approach by running experiments on a small set of Atari games with state dependent safety labels. We present preliminary results that show our approximate shielding algorithm effectively reduces the rate of safety violations, and in some cases improves the speed of convergence and quality of the final agent."}, "cited_paper_content": {"title": "Learning Latent Dynamics For Planning From Pixels", "abstract": "Planning has been very successful for control tasks with known environment dynamics.
To leverage planning in unknown environments, the agent needs to learn the dynamics from interactions with the world. However, learning dynamics models that are accurate enough for planning has been a long-standing challenge, especially in image-based domains. We propose the Deep Planning Network (PlaNet), a purely model-based agent that learns the environment dynamics from images and chooses actions through fast online planning in latent space. To achieve high performance, the dynamics model must accurately predict the rewards ahead for multiple time steps. We approach this using a latent dynamics model with both deterministic and stochastic transition components. Moreover, we propose a multi-step variational inference objective that we name latent overshooting. Using only pixel observations, our agent solves continuous control tasks with contact dynamics, partial observability, and sparse rewards, which exceed the difficulty of tasks that were previously solved by planning with learned models. 
PlaNet uses substantially fewer episodes and reaches final performance close to and sometimes higher than strong model-free algorithms."}, "keywords": ["stochastic categorical latents", "posterior latent representation"], "citation_intent": "method"} {"citing_id": "2303.14365v1", "cited_id": "1308.6027", "section_title": "Introduction", "citation": "The total variation of a function is defined as |Du|(\u2126) = sup{ \u222b_\u2126 u \u2207 \u2022 g dx : g \u2208 (C_0^1(\u2126))^d #REFR and |g(x)| \u2264 1 in \u2126}, which, unlike the L^2- and H^1-norms, is not differentiable.", "text_before_citation": ["Usually, the unknown conductivity \u03c3 is not smooth and does not lie in H_0^1(\u2126_C).", "In this situation, the H^1-regularization is not suitable.", "For reconstructing nonsmooth parameters, it is known that total variation regularization is a good choice.", "Compared with H^1 regularization, total variation regularization can deal with non-smooth parameters.", "Here we give some references on total-variation regularization, such as #OTHEREFR ."], "text_after_citation": ["This brings us difficulty in solving the optimization problem by gradient-based iterative methods.", "One way to deal with this difficulty is replacing the TV term |Du| by its smoothed version (|\u2207u|^2 + \u03bd^2)^{1/2} for small \u03bd > 0 #OTHEREFR .", "Another way is employing the splitting method #OTHEREFR .", "To deal with the discontinuity of the parameter \u03c3, we introduce the total variation regularization to the inverse eddy current problem.", "In the present work, we treat the TV regularization with a splitting method."], "citing_paper_content": {"title": "Iterative Methods For An Inverse Eddy Current Problem With Total Variation Regularization", "abstract": "Conductivity reconstruction in an inverse eddy current problem is considered in the present paper.
With the electric field measurement on part of the domain boundary, we formulate the reconstruction problem as a constrained optimization problem with total variation regularization. Existence and stability are proved for the solution to the optimization problem. The finite element method is employed to discretize the optimization problem. The gradient Lipschitz properties of the objective functional are established for the discrete optimization problems. We propose the alternating direction method of multipliers to solve the discrete problem. Based on the gradient Lipschitz property, we prove the convergence by extending the admissible set to the whole finite element space. Finally, we show some numerical experiments to illustrate the efficiency of the proposed methods."}, "cited_paper_content": {"title": "Detection And Classification From Electromagnetic Induction Data", "abstract": "In this paper we introduce an efficient algorithm for identifying conductive objects using induction data derived from eddy currents. Our method consists of first extracting geometric features from the induction data and then matching them to precomputed data for known objects from a given dictionary. The matching step relies on fundamental properties of conductive polarization tensors and new invariants introduced in this paper. A new shape identification scheme is introduced and studied. We test it numerically in the presence of measurement noise. Stability and resolution capabilities of the proposed identification algorithm are quantified in numerical simulations."}, "keywords": ["H^1-norm"], "citation_intent": "background"} {"citing_id": "2303.16133v1", "cited_id": "1405.0312", "section_title": "A.
Cococon Categories & Examples", "citation": "Since the COCOCON is created from COCO Captions #REFR , the original captions predominantly contain animals defined in COCO objects.", "text_before_citation": ["Adjectives used as modifiers to a noun in the caption, such as color (red chair), height (tall building), size (small), and material (tiled wall).", "\u2022 Food. Food-related concepts including fruits, vegetables, and cooked items.", "\u2022 Sport.", "Sport-related objects and references to the sport itself are included in this category.", "\u2022 Animal. Includes all mentions of animals."], "text_after_citation": ["\u2022 Location.", "Includes broadly defined areas (e.g., bathroom, hotel, library), finer visual elements (e.g., floor, sidewalk), and spatial references (e.g., inside, outside, on table)", "\u2022 Action. Comprises transitive (e.g. flying kite) as well as intransitive actions (e.g. sitting, standing) performed by persons and animals.", "\u2022 Person.", "Concepts from one of the categories: man/male/guy, woman/female/lady, boy, girl."], "citing_paper_content": {"title": "Exposing And Addressing Cross-Task Inconsistency In Unified Vision-Language Models", "abstract": "As general purpose vision models get increasingly effective at a wide set of tasks, it is imperative that they be consistent across the tasks they support. Inconsistent AI models are considered brittle and untrustworthy by human users and are more challenging to incorporate into larger systems that take dependencies on their outputs. Measuring consistency between very heterogeneous tasks that might include outputs in different modalities is challenging since it is difficult to determine if the predictions are consistent with one another. 
As a solution, we introduce a benchmark dataset, COCOCON, where we use contrast sets created by modifying test instances for multiple tasks in small but semantically meaningful ways to change the gold label, and outline metrics for measuring if a model is consistent by ranking the original and perturbed instances across tasks. We find that state-of-the-art systems suffer from a surprisingly high degree of inconsistent behavior across tasks, especially for more heterogeneous tasks. Finally, we propose using a rank correlation-based auxiliary objective computed over large automatically created cross-task contrast sets to improve the multi-task consistency of large unified models, while retaining their original accuracy on downstream tasks."}, "cited_paper_content": {"title": "Microsoft Coco: Common Objects In Context", "abstract": "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. 
Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model."}, "keywords": ["COCO Captions", "COCO objects"], "citation_intent": "background"} {"citing_id": "2303.15833v1", "cited_id": "2002.08546", "section_title": "Codag: Complementary Domain Adaptation And Generalization", "citation": "To apply this approach to the SHOT algorithm #REFR with the current target domain D t , we first initialize the DA model f DA with the parameters of the DG model trained with previously experienced domains, or \u03b8 * DG,t\u22121 , treating it as a source model.", "text_before_citation": ["where \u03b8 DG,0 is the DG model parameters, L ce is the crossentropy loss for a classification setting, and R represents the RandMix augmentation #OTHEREFR .", "Generalized initialization with DG for DA In general, unsupervised source-free domain adaptation approaches involve initially training a source model with the source domain data and further updating the source model with unlabeled data from a new target domain.", "However, in our proposed framework, the DA model utilizes the parameters of the previous DG model for its initialization.", "This allows the DA model to leverage the DG model's generalization ability to learn domain-invariant features and reduce domainspecific factors.", "As a result, we achieve efficient adaptation to a new target domain, even when there is a large gap between previously experienced domains and the new target domain (see Section 4.2 for experimental results)."], "text_after_citation": ["Then, we freeze the classifier head in f DA and only update the feature extractor part of it using information maximization and self-supervised pseudolabeling with data from the current target domain D t .", "Accordingly, the loss to adapt to D t is written as,", "EQUATION", "where \u03b8 DA,t is the parameters of the DA model initialized with the optimal parameters of the DG model trained on the 
previous domain D t\u22121 , or \u03b8 * DG,t\u22121 , and L shot is the loss of the SHOT algorithm.", "Pseudo-label generation with DA for DG In our proposed framework, we simply use an empirical risk minimization (ERM) method along with an enhanced data augmentation method for training the DG model."], "citing_paper_content": {"title": "Complementary Domain Adaptation And Generalization For Unsupervised Continual Domain Shift Learning", "abstract": "Continual domain shift poses a significant challenge in real-world applications, particularly in situations where labeled data is not available for new domains. The challenge of acquiring knowledge in this problem setting is referred to as unsupervised continual domain shift learning. Existing methods for domain adaptation and generalization have limitations in addressing this issue, as they focus either on adapting to a specific domain or generalizing to unseen domains, but not both. In this paper, we propose Complementary Domain Adaptation and Generalization (CoDAG), a simple yet effective learning framework that combines domain adaptation and generalization in a complementary manner to achieve three major goals of unsupervised continual domain shift learning: adapting to a current domain, generalizing to unseen domains, and preventing forgetting of previously seen domains. Our approach is modelagnostic, meaning that it is compatible with any existing domain adaptation and generalization algorithms. We evaluate CoDAG on several benchmark datasets and demonstrate that our model outperforms state-of-the-art models in all datasets and evaluation metrics, highlighting its effectiveness and robustness in handling unsupervised continual domain shift learning."}, "cited_paper_content": {"title": "Do We Really Need To Access The Source Data? 
Source Hypothesis Transfer For Unsupervised Domain Adaptation", "abstract": "Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain. Prior UDA methods typically require to access the source data when learning to adapt the model, making them risky and inefficient for decentralized private data. In this work we tackle a novel setting where only a trained source model is available and investigate how we can effectively utilize such a model without source data to solve UDA problems. To this end, we propose a simple yet generic representation learning framework, named \\emph{Source HypOthesis Transfer} (SHOT). Specifically, SHOT freezes the classifier module (hypothesis) of the source model and learns the target-specific feature extraction module by exploiting both information maximization and self-supervised pseudo-labeling to implicitly align representations from the target domains to the source hypothesis. In this way, the learned target model can directly predict the labels of target data. We further investigate several techniques to refine the network architecture to parameterize the source model for better transfer performance. To verify its versatility, we evaluate SHOT in a variety of adaptation cases including closed-set, partial-set, and open-set domain adaptation. Experiments indicate that SHOT yields state-of-the-art results among multiple domain adaptation benchmarks."}, "keywords": ["previously experienced domains", "current target domain"], "citation_intent": "method"} {"citing_id": "2304.04527v1", "cited_id": "1602.01783", "section_title": "A. 
Convergence Speed", "citation": "Further, we also visualize the different components of the QoE metric from equation #REFR to understand how ALISA performs better than other ABR algorithms.", "text_before_citation": ["We have also compared ALISA with Pensieve, an RL-based ABR algorithm. ALISA achieves a higher QoE on all metrics over all different configurations.", "ALISA obtains up to 25% higher QoE than RB, 230% higher QoE than BB, 30% higher QoE than BOLA, 25% higher QoE than RobustMPC and 20% higher QoE compared to Pensieve when tested under lossless conditions. This performance translates to lossy conditions as well.", "We note that ALISA is able to obtain up to 25%, 28%, 48%, and 48% higher QoE compared to Pensieve under losses of 0.1%, 0.5%, 1%, and 2%, respectively.", "We summarize the remainder of our testing QoE metrics for a random packet loss percentage of 0.1%, 0.5%, 1%, and 2% in Table III, Table IV, Table V and Table VI, respectively.", "These results indicate that ALISA achieves a significantly better performance than many other fixed-rule-based ABR algorithms and also Pensieve."], "text_after_citation": ["Figure 5 presents the total reward achieved by various ABR algorithms with the QoE_lin metric for each trace when the network is emulated with 0.1% packet loss.", "Our results show that the ALISA algorithm achieves a higher average QoE of 44.58 as compared to other ABR algorithms.", "Figure 6 presents the average total reward achieved by various ABR algorithms with the QoE_lin metric for each trace when the network is emulated with 1% packet loss.", "Our results show that the ALISA algorithm achieves a higher average QoE of 34.87 as compared to other ABR algorithms.", "Figure 7 (a) and Figure 7 (b) show how ALISA can consistently achieve higher bitrates than other methods for random sample traces. 
This increases the first component of QoE."], "citing_paper_content": {"title": "Deep Reinforcement Learning With Importance Weighted A3C For Qoe Enhancement In Video Delivery Services", "abstract": "Adaptive bitrate (ABR) algorithms are used to adapt the video bitrate based on the network conditions to improve the overall video quality of experience (QoE). Recently, reinforcement learning (RL) and asynchronous advantage actor-critic (A3C) methods have been used to generate adaptive bit rate algorithms and they have been shown to improve the overall QoE as compared to fixed rule ABR algorithms. However, a common issue in the A3C methods is the lag between behaviour policy and target policy. As a result, the behaviour and the target policies are no longer synchronized, which results in suboptimal updates. In this work, we present ALISA: An Actor-Learner Architecture with Importance Sampling for efficient learning in ABR algorithms. ALISA incorporates importance sampling weights to give more weightage to relevant experience to address the lag issues with the existing A3C methods. We present the design and implementation of ALISA, and compare its performance to state-of-the-art video rate adaptation algorithms including vanilla A3C implemented in the Pensieve framework and other fixed-rule schedulers like BB, BOLA, and RB. Our results show that ALISA achieves up to 25%-48% higher average QoE than Pensieve, and even more when compared to fixed-rule schedulers."}, "cited_paper_content": {"title": "Asynchronous Methods For Deep Reinforcement Learning", "abstract": "We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. 
We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training, allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input."}, "keywords": ["QoE metric", "different components"], "citation_intent": "method"} {"citing_id": "2303.15946v1", "cited_id": "1205.2618", "section_title": "Baselines", "citation": "To demonstrate the benefit of our approach we compare it against the following baselines: BPRMF #REFR Matrix factorisation optimised by the BPR loss function.", "text_before_citation": [], "text_after_citation": ["iALS #OTHEREFR Matrix factorization learned by implicit alternating least squares.", "PureSVD #OTHEREFR Compute item embeddings through a singular value decomposition of the user-item interaction matrix, which are then used to infer user representations.", "FISM #OTHEREFR Learn item embeddings through an optimisation process, creating user representations as a weighted combination of items in their profile.", "Additional user and item biases as well as an agreement term are considered in the score estimation.", "NGCF #OTHEREFR Work that introduces graph convolution to the collaborative filtering scenario; it uses dense layers and inner products to enrich the knowledge injected in the user-item embeddings during the convolution process."], "citing_paper_content": {"title": "Item Graph Convolution Collaborative Filtering For Inductive Recommendations", "abstract": "Graph Convolutional Networks (GCN) have been recently employed as a core component in the 
construction of recommender system algorithms, interpreting user-item interactions as the edges of a bipartite graph. However, in the absence of side information, the majority of existing models adopt an approach of randomly initialising the user embeddings and optimising them throughout the training process. This strategy makes these algorithms inherently transductive, curtailing their ability to generate predictions for users that were unseen at training time. To address this issue, we propose a convolution-based algorithm, which is inductive from the user perspective, while at the same time, depending only on implicit user-item interaction data. We propose the construction of an item-item graph through a weighted projection of the bipartite interaction network and to employ convolution to inject higher order associations into item embeddings, while constructing user representations as weighted sums of the items with which they have interacted. Despite not training individual embeddings for each user our approach achieves state-of-the-art recommendation performance with respect to transductive baselines on four real-world datasets, showing at the same time robust inductive performance."}, "cited_paper_content": {"title": "Bpr: Bayesian Personalized Ranking From Implicit Feedback", "abstract": "Item recommendation is the task of predicting a personalized ranking on a set of items (e.g. websites, movies, products). In this paper, we investigate the most common scenario with implicit feedback (e.g. clicks, purchases). There are many methods for item recommendation from implicit feedback like matrix factorization (MF) or adaptive k-nearest-neighbor (kNN). Even though these methods are designed for the item prediction task of personalized ranking, none of them is directly optimized for ranking. 
In this paper we present a generic optimization criterion BPR-Opt for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem. We also provide a generic learning algorithm for optimizing models with respect to BPR-Opt. The learning method is based on stochastic gradient descent with bootstrap sampling. We show how to apply our method to two state-of-the-art recommender models: matrix factorization and adaptive kNN. Our experiments indicate that for the task of personalized ranking our optimization method outperforms the standard learning techniques for MF and kNN. The results show the importance of optimizing models for the right criterion."}, "keywords": ["BPRMF Matrix factorisation"], "citation_intent": "method"} {"citing_id": "2304.01203v2", "cited_id": "1812.05905", "section_title": "C.3. Online Gcrl", "citation": "For the adaptive entropy regularizer #REFR , we regularize the policy to have target entropy \u2212|A|, where the entropy regularizer weight is initialized to be 1 and optimized in log-space with a learning rate of 3 \u00d7 10^\u22124.", "text_before_citation": ["For d_\u03b8, we use a 128-512-2048 projector followed by an IQE-maxmean head with 64 components, each of size 32.", "We use an x-512-512-8 network for the policy, where x is the input size and 8 parametrizes a tanh-transformed diagonal Normal distribution. L_transition is optimized with a weight of 0.1.", "Our learning rates are 0.01 for \u03bb, 1 \u00d7 10^\u22124 for the model parameters, and 3 \u00d7 10^\u22125 for the policy parameters. 
We use a batch size of 256 in training.", "We prefill the replay buffer with 200 episodes from a random actor, and then iteratively perform (1) generating 10 rollouts and (2) optimizing the QRL objective for 500 gradient steps.", "We use N(0, 0.3^2)-perturbed action noise in exploration."], "text_after_citation": ["Since the environment has a much shorter horizon (each episode ends at 50 timesteps), we instead use a different affine-transformed softplus for maximizing d_\u03b8, where \u03c6(x) = \u2212softplus(15 \u2212 x, \u03b2 = 0.1).", "QRL (Image-based Observations).", "All settings are the same as QRL for state-based observations, except a few changes:", "\u2022 We use the convolutional backbone followed by an x-512-128 network for the encoder f.", "\u2022 We optimize L_transition with an increased weight of 10 (since the dynamics aren't fully deterministic)."], "citing_paper_content": {"title": "Optimal Goal-Reaching Reinforcement Learning Via Quasimetric Learning", "abstract": "In goal-reaching reinforcement learning (RL), the optimal value function has a particular geometry, called quasimetric structure. This paper introduces Quasimetric Reinforcement Learning (QRL), a new RL method that utilizes quasimetric models to learn optimal value functions. Distinct from prior approaches, the QRL objective is specifically designed for quasimetrics, and provides strong theoretical recovery guarantees. Empirically, we conduct thorough analyses on a discretized MountainCar environment, identifying properties of QRL and its advantages over alternatives. On offline and online goal-reaching benchmarks, QRL also demonstrates improved sample efficiency and performance, across both state-based and image-based observations."}, "cited_paper_content": {"title": "Soft Actor-Critic Algorithms And Applications", "abstract": "Model-free deep reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision making and control tasks. 
However, these methods typically suffer from two major challenges: high sample complexity and brittleness to hyperparameters. Both of these challenges limit the applicability of such methods to real-world domains. In this paper, we describe Soft Actor-Critic (SAC), our recently introduced off-policy actor-critic algorithm based on the maximum entropy RL framework. In this framework, the actor aims to simultaneously maximize expected return and entropy. That is, to succeed at the task while acting as randomly as possible. We extend SAC to incorporate a number of modifications that accelerate training and improve stability with respect to the hyperparameters, including a constrained formulation that automatically tunes the temperature hyperparameter. We systematically evaluate SAC on a range of benchmark tasks, as well as real-world challenging tasks such as locomotion for a quadrupedal robot and robotic manipulation with a dexterous hand. With these improvements, SAC achieves state-of-the-art performance, outperforming prior on-policy and off-policy methods in sample-efficiency and asymptotic performance. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving similar performance across different random seeds. These results suggest that SAC is a promising candidate for learning in real-world robotics tasks."}, "keywords": ["learning rate", "adaptive entropy regularizer"], "citation_intent": "method"} {"citing_id": "2304.13812v1", "cited_id": "1804.07802", "section_title": "I. 
Introduction", "citation": "Significant research has also been done on quantization-aware training methods #REFR , where the loss in accuracy due to bit precision reduction is minimized.", "text_before_citation": ["Quantization aims to shrink the memory footprint of deep neural networks by reducing the number of bits used to store the values for the learnable parameters and activations.", "This is not only ideal for application scenarios where memory resources may be restricted, such as embedded systems or microcontroller environments, but the selected weight representation can also potentially facilitate faster inference using cheaper arithmetic operations #OTHEREFR .", "With the reduction in parameter bit precision, however, it is typical that a quantized neural network will perform worse in terms of accuracy than its non-quantized counterpart using gradient-based learning methods.", "However, these drops in accuracy are usually considered minimal and worth the given benefit in memory reduction and inference speed-up.", "There exists much literature describing various techniques of quantization and successful results thereof, including works utilizing stochastic rounding to select weight values beneficial to gradient training #OTHEREFR and applications on modern deep architectures #OTHEREFR , as well as quantization methods that reduce the number of multiplication operations required during training time #OTHEREFR ."], "text_after_citation": ["Some of these quantization-aware training methods utilize a straight-through gradient estimator (STE) #OTHEREFR to more appropriately select weights during network training, which minimizes accuracy loss and further reduces computational burden #OTHEREFR .", "As quantization methods are used for neural network reduction, there inevitably exist discrepancies between the performances of original and compressed neural networks.", "In this work, we propose a computationally tractable approach to compute the guaranteed output error 
caused by quantization.", "A merged neural network is constructed to generate the output differences between two neural networks, and then reachability analysis on the merged neural network can be performed to obtain the guaranteed error.", "The remainder of the paper is organized as follows: Preliminaries are given in Section II."], "citing_paper_content": {"title": "Guaranteed Quantization Error Computation For Neural Network Model Compression", "abstract": "Neural network model compression techniques can address the computation issue of deep neural networks on embedded devices in industrial systems. The guaranteed output error computation problem for neural network compression with quantization is addressed in this paper. A merged neural network is built from a feedforward neural network and its quantized version to produce the exact output difference between two neural networks. Then, optimization-based methods and reachability analysis methods are applied to the merged neural network to compute the guaranteed quantization error. Finally, a numerical example is proposed to validate the applicability and effectiveness of the proposed approach."}, "cited_paper_content": {"title": "Value-Aware Quantization For Training And Inference Of Neural Networks", "abstract": "We propose a novel value-aware quantization which applies aggressively reduced precision to the majority of data while separately handling a small amount of large data in high precision, which reduces total quantization errors under very low precision. We present new techniques to apply the proposed quantization to training and inference. The experiments show that our method with 3-bit activations (with 2% of large ones) can give the same training accuracy as full-precision one while offering significant (41.6% and 53.7%) reductions in the memory cost of activations in ResNet-152 and Inception-v3 compared with the state-of-the-art method. 
Our experiments also show that deep networks such as Inception-v3, ResNet-101 and DenseNet-121 can be quantized for inference with 4-bit weights and activations (with 1% 16-bit data) within 1% top-1 accuracy drop."}, "keywords": ["quantization-aware training methods"], "citation_intent": "background"} {"citing_id": "2303.15367v1", "cited_id": "1701.09133", "section_title": "Bernoulli-Domination.", "citation": "It also requires Lemma 3.7, the proof of which is adapted from that of [#REFR, Lemma 7] and is deferred to the Appendix.", "text_before_citation": [", \u2113_\u03c3(v_s) \u2264 p\u2113}| / |C(G)| \u2264 (|C(G \\ J)| \u2022 (p\u2113)^s) / (|C(G \\ J)| \u2022 \u2113^s) = p^s.", "The upper bound for the numerator in the above inequality comes from two facts that rely on J being an independent set.", "First, given a colouring \u03c3_0 \u2208 C(G \\ J), the number of extensions of \u03c3_0 to a colouring \u03c3 \u2208 C(G) is \u220f_{v\u2208J} \u2113_{\u03c3_0}(v).", "Second, given such an extension \u03c3, we have \u2113_\u03c3(v) = \u2113_{\u03c3_0}(v) for every v \u2208 J. We conclude that (X_v)_{v\u2208I} are Ber(p)-dominated.", "Leveraging Theorem 3.6, and the fact that neighbourhoods induce independent sets, we can prove the following exponential upper bound on the likelihood of short lists."], "text_after_citation": ["Lemma 3.7.", "Let G be a triangle-free graph, let v \u2208 V(G) and let \u03c3_0 be a proper k-colouring of G \\ N[v] (for some k), with at least one extension to G.", "Then if \u03c3 is the uniformly random extension of \u03c3_0 to G, writing", "\u2113 := E[\u2113_\u03c3(v)], we have P[\u2113_\u03c3(v) \u2264 (1 \u2212 \u03b4)\u2113] \u2264 e^{\u2212\u03b4^2\u2113/2},", "for all \u03b4 \u2208 (0, 1)."], "citing_paper_content": {"title": "Uniformly Random Colourings Of Sparse Graphs", "abstract": "We analyse uniformly random proper k-colourings of sparse graphs with maximum degree \u2206 in the regime \u2206 < k ln k. 
This regime corresponds to the lower side of the shattering threshold for random graph colouring, a paradigmatic example of the shattering threshold for random Constraint Satisfaction Problems. We prove a variety of results about the solution space geometry of colourings of fixed graphs, generalising work of Achlioptas, Coja-Oghlan [ACO08], and Molloy [Mol12] on random graphs, and justifying the performance of stochastic local search algorithms in this regime. Our central proof relies only on elementary techniques, namely the first-moment method and a quantitative induction, yet it strengthens list-colouring results due to Vu [Vu02], and more recently Davies, Kang, P., and Sereni [DKPS20], and generalises state-of-the-art bounds from Ramsey theory in the context of sparse graphs. It further yields an approximately tight lower bound on the number of colourings, also known as the partition function of the Potts model, with implications for efficient approximate counting. The research leading to these results was partially supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation)-428212407 (E. Hurley)."}, "cited_paper_content": {"title": "The List Chromatic Number Of Graphs With Small Clique Number", "abstract": "We prove that every triangle-free graph with maximum degree \u0394 has list chromatic number at most (1 + o(1))\u0394/ln \u0394. This matches the best-known upper bound for graphs of girth at least 5. We also provide a new proof that for any r \u2265 4 every K_r-free graph has list-chromatic number at most 200r\u0394 ln ln \u0394 / ln \u0394."}, "keywords": ["Lemma"], "citation_intent": "background"} {"citing_id": "2303.06241v1", "cited_id": "1706.06083", "section_title": "B. 
Results On Cifar-10", "citation": "During training the adversarial examples are constructed with \u03b5 = 0.0157 (4/255) as suggested by #REFR .", "text_before_citation": ["Next, we evaluated our approach on the CIFAR-10 dataset using FGSM adversarial training.", "We trained the CIFAR-10 model using the Wide-ResNet 32-10 model and the standard hyperparameters used by Madry et al. #OTHEREFR ."], "text_after_citation": ["The same value is used to measure robust accuracy during test time. The results are shown in Table I.", "Conventional adversarial training on the CIFAR-10 dataset took around 50 hours to converge.", "By using our approach, we were able to decrease the training time by a factor of 1.98 and bring it down to 25 hours.", "The robust accuracy of the model decreased marginally by 0.64%.", "This shows that we were able to achieve a similar level of robustness by using a subset of training data."], "citing_paper_content": {"title": "Do We Need Entire Training Data For Adversarial Training?", "abstract": "Deep Neural Networks (DNNs) are being used to solve a wide range of problems in many domains including safety-critical domains like self-driving cars and medical imagery. DNNs suffer from vulnerability against adversarial attacks. In the past few years, numerous approaches have been proposed to tackle this problem by training networks using adversarial training. Almost all the approaches generate adversarial examples for the entire training dataset, thus increasing the training time drastically. We show that we can decrease the training time for any adversarial training algorithm by using only a subset of training data for adversarial training. To select the subset, we filter the adversarially-prone samples from the training data. We perform a simple adversarial attack on all training examples to filter this subset. In this attack, we add a small perturbation to each pixel and a few grid lines to the input image. 
We perform adversarial training on the adversarially-prone subset and mix it with vanilla training performed on the entire dataset. Our results show that when our method-agnostic approach is plugged into FGSM [9], we achieve a speedup of 3.52x on MNIST and 1.98x on the CIFAR-10 dataset with comparable robust accuracy. We also test our approach on state-of-the-art Free adversarial training [24] and achieve a speedup of 1.2x in training time with a marginal drop in robust accuracy on the ImageNet dataset."}, "cited_paper_content": {"title": "Towards Deep Learning Models Resistant To Adversarial Attacks", "abstract": "Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models."}, "keywords": ["adversarial examples"], "citation_intent": "method"} {"citing_id": "2304.08660v1", "cited_id": "1902.07381", "section_title": "A. 
Image-Based Place Recognition", "citation": "In #REFR , a method to improve feature matching with depth estimation has been introduced, opening up the potential to enhance place recognition with estimated depth.", "text_before_citation": ["This approach has exhibited superior robustness to changes in appearance.", "To efficiently translate place-distinctive image encoding into localization descriptors, the negative and positive samples were trained in pairs using weak supervision in NetVLAD #OTHEREFR with a triplet loss #OTHEREFR . Radenovic et al.", "discovered more consistent and distinctive feature representations from images, such as contrastive loss with generalized mean (GeM) pooling #OTHEREFR .", "The study considers every image with a sufficiently large number of co-observed 3D points or similar features for a training pair.", "This differs from weak supervision with triplet loss, which selects pairs by the image location and their descriptor distance."], "text_after_citation": ["In recent studies, methods unifying local and global features have been proposed for expansion over large-scale visual place recognition #OTHEREFR to geometrically verify the local feature matches after global image searching."], "citing_paper_content": {"title": "(Lc)^2: Lidar-Camera Loop Constraints For Cross-Modal Place Recognition", "abstract": "Localization has been a challenging task for autonomous navigation. A loop detection algorithm must overcome environmental changes for the place recognition and relocalization of robots. Therefore, deep learning has been extensively studied for the consistent transformation of measurements into localization descriptors. Street view images are easily accessible; however, images are vulnerable to appearance changes. LiDAR can robustly provide precise structural information. However, constructing a point cloud database is expensive, and point clouds exist only in limited places. 
Different from previous works that train networks to produce shared embedding directly between the 2D image and 3D point cloud, we transform both data into 2.5D depth images for matching. In this work, we propose a novel cross-matching method, called (LC)^2, for achieving LiDAR localization without a prior point cloud map. To this end, LiDAR measurements are expressed in the form of range images before matching them to reduce the modality discrepancy. Subsequently, the network is trained to extract localization descriptors from disparity and range images. Next, the best matches are employed as a loop factor in a pose graph. Using public datasets that include multiple sessions in significantly different lighting conditions, we demonstrated that LiDAR-based navigation systems could be optimized from image databases and vice versa."}, "cited_paper_content": {"title": "Look No Deeper: Recognizing Places From Opposing Viewpoints Under Varying Scene Appearance Using Single-View Depth Estimation", "abstract": "Visual place recognition (VPR) - the act of recognizing a familiar visual place - becomes difficult when there is extreme environmental appearance change or viewpoint change. Particularly challenging is the scenario where both phenomena occur simultaneously, such as when returning for the first time along a road at night that was previously traversed during the day in the opposite direction. While such problems can be solved with panoramic sensors, humans solve this problem regularly with limited field-of-view vision and without needing to constantly turn around. In this paper, we present a new depth- and temporal-aware visual place recognition system that solves the opposing viewpoint, extreme appearance-change visual place recognition problem. 
Our system performs sequence-to-single frame matching by extracting depth-filtered keypoints using a state-of-the-art depth estimation pipeline, constructing a keypoint sequence over multiple frames from the reference dataset, and comparing these keypoints to the keypoints extracted from a single query image. We evaluate the system on a challenging benchmark dataset and show that it consistently outperforms state-of-the-art techniques. We also develop a range of diagnostic simulation experiments that characterize the contribution of depth-filtered keypoint sequences with respect to key domain parameters including the degree of appearance change and camera motion."}, "keywords": ["place recognition"], "citation_intent": "method"} {"citing_id": "2303.14029v1", "cited_id": "2002.11049", "section_title": "Table I: Code Context Extraction Comparison", "citation": "The well-established 'Easy to Find' #REFR features such as 'TODO', 'FIXME', and 'XXX' were excluded in the manual validation.", "text_before_citation": ["Context in prior work #OTHEREFR Code Context in SoCCMiner #OTHEREFR public #OTHEREFR with 'Sense2Vec' #OTHEREFR .", "Sense2Vec captures contextually similar words with its deep word embedding.", "For example, for feature 'problematic', Sense2Vec returns multiple semantically similar words such as 'troublesome', 'undesirable', 'contentious', 'separate issue', etc. We use these extrapolated features (words) to evaluate our code comments for SATD.", "Sense2Vec extrapolated 'Easy to Find' features such as ('sabotaging', 'damaging', 'counter-productive', etc.) are passed to the first round of automated SATD annotation.", "The first author manually evaluates the results by reading the comments."], "text_after_citation": ["During the validation process, the first author creates the heuristics for 'Hard to Find' #OTHEREFR . This iterative process involved multiple iterations of annotation, manual evaluation, creating heuristics (pattern matching), and creating 
features for the 'Hard to Find' SATD.", "SATD Annotation: Features that attract too many false positives are removed from the SATD feature list.", "For example, 'cause problem' will capture the comment '// Empty bboxes can cause problems' which sound right at first.", "Later, after manual scanning, several header comments had the following comment content '//This won't cause problems'.", "Such features are removed from the SATD feature list to avoid overfitting features."], "citing_paper_content": {"title": "Pentacet Data -23 Million Contextual Code Comments And 500,000 Satd Comments", "abstract": "Most Self-Admitted Technical Debt (SATD) research utilizes explicit SATD features such as 'TODO' and 'FIXME' for SATD detection. A closer look reveals several SATD research uses simple SATD ('Easy to Find') code comments without the contextual data (preceding and succeeding source code context). This work addresses this gap through PENTACET (or 5C dataset) data. PENTACET is a large Curated Contextual Code Comments per Contributor and the most extensive SATD data. We mine 9,096 Open Source Software Java projects with a total of 435 million LOC. The outcome is a dataset with 23 million code comments, preceding and succeeding source code context for each comment, and more than 500,000 comments labeled as SATD, including both 'Easy to Find' and 'Hard to Find' SATD. We believe PENTACET data will further SATD research using Artificial Intelligence techniques."}, "cited_paper_content": {"title": "Identifying Self-Admitted Technical Debts With Jitterbug: A Two-Step Approach", "abstract": "Keeping track of and managing the self-admitted technical debts (SATDs) is important to maintaining a healthy software project. This requires much time and effort from human experts to identify these SATDs manually. Currently, automated solutions do not have high enough precision and recall in identifying SATDs to fully automate the process. 
To solve the above problems, we propose a two-step framework called Jitterbug for identifying SATDs by first finding the \"easy to find\" SATDs automatically with close to 100% precision via a novel pattern recognition technique, then applying machine learning techniques to assist human experts in manually identifying the rest \"hard to find\" SATDs with reduced human effort. Our simulation studies on ten software projects show that Jitterbug can identify SATDs more efficiently (with less human effort) than prior state-of-the-art methods."}, "keywords": ["manual validation"], "citation_intent": "method"} {"citing_id": "2305.00656v1", "cited_id": "1908.06148", "section_title": "B. Convolution Neural Network", "citation": "The output of the embedding layer is fed into a convolution layer with kernels of any size. Several convolution layers can be stacked #REFR .", "text_before_citation": ["Various kernels can be used to extract various features from the input data points.", "In general, kernel filters are designed to be smaller than the input to allow for the sharing of kernel weights across input dimensions.", "In file fragment classification, the input to the CNN is either a vector of 4096 or 512 dimensions, equivalent to fragments of 4KB or 512 bytes, respectively.", "The input is then transmitted to an embedding layer that converts the input vector into a continuous (4096,32) or (512,32) dimensional vector, dependent on the fragment size.", "The embedding layer is a preliminary step to prepare the input vectors for subsequent CNN layers."], "text_after_citation": ["Stacking standard convolution layers when developing a Deep Neural Network (DNN) can result in a model with a large number of parameters."], "citing_paper_content": {"title": "File Fragment Classification Using Light-Weight Convolutional Neural Networks", "abstract": "In digital forensics, file fragment classification is an important step toward completing the file carving process. 
There exist several techniques to identify the type of file fragments without relying on meta-data, such as using features like header/footer and N-gram to identify the fragment type. Recently, convolutional neural network (CNN) models have been used to build classification models to achieve this task. However, the number of parameters in CNNs tends to grow exponentially as the number of layers increases. This results in a dramatic increase in training and inference time. In this paper, we propose lightweight file fragment classification models based on depthwise separable CNNs. The evaluation results show that our proposed models provide faster inference time with comparable accuracy as compared to the state-of-art CNN based models. In particular, our models were able to achieve an accuracy of 79% on the FFT-75 dataset with nearly 100K parameters and 164M FLOPs, which is 4x smaller and 6x faster than the state-of-the-art classifier in the literature."}, "cited_paper_content": {"title": "Fifty: Large-Scale File Fragment Type Identification Using Neural Networks", "abstract": "We present FiFTy, a modern file type identification tool for memory forensics and data carving. In contrast to previous approaches based on hand-crafted features, we design a compact neural network architecture, which uses a trainable embedding space, akin to successful natural language processing models. Our approach dispenses with explicit feature extraction which is a bottleneck in legacy systems. We evaluate the proposed method on a novel dataset with 75 file types - the most diverse and balanced dataset reported to date. FiFTy consistently outperforms all baselines in terms of speed, accuracy and individual misclassification rates. We achieved an average accuracy of 77.5% with processing speed of approx 38 sec/GB, which is better and more than an order of magnitude faster than the previous state-of-the-art tool - Sceadan (69% at 9 min/GB). 
Our tool and the corresponding dataset are available publicly online."}, "keywords": ["Several convolution layers", "embedding layer"], "citation_intent": "method"} {"citing_id": "2303.10030v1", "cited_id": "1902.11156", "section_title": "Main Result", "citation": "Compared to the stability result in #REFR (see Theorem 2.5), in our result we observe a square-root dependence of the reconstruction error bound on the noise level \u03c4 for small noise levels.", "text_before_citation": ["To see how this compares to the existing dimension-dependent recovery guarantee (2.6) in [ARR14; LS17; JKS17] we first reformulate (3.2) as", "X * \u2212 X 0 F (log(\u03c9L)) 1/2 X 0 F \u03c4 \u2022 \u03c4.", "Ignoring logarithmic factors, for noise levels", "L KN \u00b5 2 max X 0 F \u226a \u03c4 \u2264 X 0 F ,", "this significantly improves over the dimension-dependent recovery guarantee (2.6)."], "text_after_citation": ["In contrast, the reconstruction error bound in Theorem 2.5 becomes constant whenever the noise level \u03c4 is smaller than a certain threshold.", "The bound (3.1) in Theorem 3.1 becomes worse when the noise level \u03c4 becomes smaller.", "This reflects the instability result in #OTHEREFR , see Theorem 2.4, which shows the existence of an alternative solution which amplifies the output error by a dimension factor."], "citing_paper_content": {"title": "How Robust Is Randomized Blind Deconvolution Via Nuclear Norm Minimization Against Adversarial Noise?", "abstract": "In this paper, we study the problem of recovering two unknown signals from their convolution, which is commonly referred to as blind deconvolution. Reformulation of blind deconvolution as a low-rank recovery problem has led to multiple theoretical recovery guarantees in the past decade due to the success of the nuclear norm minimization heuristic. In particular, in the absence of noise, exact recovery has been established for sufficiently incoherent signals contained in lower-dimensional subspaces. 
However, if the convolution is corrupted by additive bounded noise, the stability of the recovery problem remains much less understood. In particular, existing reconstruction bounds involve large dimension factors and therefore fail to explain the empirical evidence for dimension-independent robustness of nuclear norm minimization. Recently, theoretical evidence has emerged for ill-posed behavior of low-rank matrix recovery for sufficiently small noise levels. In this work, we develop improved recovery guarantees for blind deconvolution with adversarial noise which exhibit square-root scaling in the noise level. Hence, our results are consistent with existing counterexamples which speak against linear scaling in the noise level as demonstrated for related low-rank matrix recovery problems."}, "cited_paper_content": {"title": "On The Convex Geometry Of Blind Deconvolution And Matrix Completion", "abstract": "Low-rank matrix recovery from structured measurements has been a topic of intense study in the last decade and many important problems like matrix completion and blind deconvolution have been formulated in this framework. An important benchmark method to solve these problems is to minimize the nuclear norm, a convex proxy for the rank. A common approach to establish recovery guarantees for this convex program relies on the construction of a so-called approximate dual certificate. However, this approach provides only limited insight in various respects. Most prominently, the noise bounds exhibit seemingly suboptimal dimension factors. In this paper we take a novel, more geometric viewpoint to analyze both the matrix completion and the blind deconvolution scenario. We find that for both these applications the dimension factors in the noise bounds are not an artifact of the proof, but the problems are intrinsically badly conditioned. 
We show, however, that bad conditioning only arises for very small noise levels: Under mild assumptions that include many realistic noise levels we derive near-optimal error estimates for blind deconvolution under adversarial noise."}, "keywords": ["reconstruction error"], "citation_intent": "result"} {"citing_id": "2303.10151v1", "cited_id": "1609.04802", "section_title": "Super-Resolution", "citation": "The first application of GANs to super-resolution was in SRGAN, which outperformed prior methods and achieved state-of-the-art results #REFR .", "text_before_citation": [], "text_after_citation": ["The authors attributed this, in part, to their use of a perceptual loss function that accounted for perceptual similarity instead of just similarity in pixel space.", "At that time, a common problem with superresolution was the presence of artifacts when upsampling.", "ESRGAN addressed this issue by identifying that batch normalization layers tended to create unwanted artifacts #OTHEREFR .", "They also improved the perceptual loss function and used Residual-in-Residual Dense Blocks to generate more realistic images consistently.", "Additionally, they later proposed REAL-ESRGAN, which incorporated a u-net discriminator and spectral normalization #OTHEREFR ."], "citing_paper_content": {"title": "Toward Super-Resolution For Appearance-Based Gaze Estimation", "abstract": "Gaze tracking is a valuable tool with a broad range of applications in various fields, including medicine, psychology, virtual reality, marketing, and safety. Therefore, it is essential to have gaze tracking software that is cost-efficient and high-performing. Accurately predicting gaze remains a difficult task, particularly in real-world situations where images are affected by motion blur, video compression, and noise. Super-resolution has been shown to improve image quality from a visual perspective. This work examines the usefulness of super-resolution for improving appearancebased gaze tracking. 
We show that not all SR models preserve the gaze direction. We propose a two-step framework based on SwinIR super-resolution model. The proposed method consistently outperforms the state-of-the-art, particularly in scenarios involving low-resolution or degraded images. Furthermore, we examine the use of superresolution through the lens of self-supervised learning for gaze prediction. Self-supervised learning aims to learn from unlabelled data to reduce the amount of required labeled data for downstream tasks. We propose a novel architecture called \"SuperVision\" by fusing an SR backbone network to a ResNet18 (with some skip connections). The proposed SuperVision method uses 5x less labeled data and yet outperforms, by 15%, the state-of-the-art method of GazeTR which uses 100% of training data. We will make our code publicly available upon publication."}, "cited_paper_content": {"title": "Photo-Realistic Single Image Super-Resolution Using A Generative Adversarial Network", "abstract": "Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. 
To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method."}, "keywords": ["resolution", "GANs"], "citation_intent": "method"} {"citing_id": "2305.01918v2", "cited_id": "1810.04805", "section_title": "Sample Pair Generation:", "citation": "Training details: We use the base version of the pre-trained language model BERT #REFR and RoBERTa as our backbone models.", "text_before_citation": ["For CLAIF, we use unpaired sentences from the training set of STS-B as original sentences to construct sentence pairs from scratch and randomly sample two other sentences for each original sentence to construct two sentence pairs with a similarity score of 0.", "For CLHAIF, following previous studies #OTHEREFR , we use the SNLI and MNLI datasets to construct sentence pairs and add a AI feedback similarity score for each sentence pair.", "We only use the AI feedback scores for positive pairs in our experiments of CLHAIF.", "Besides, to demonstrate the scalability of CLAIF, we use sentence pairs constructed from STS-B and from NLI datasets for the training of CLAIF, which we called CLAIF scaled .", "We list statistics of some datasets used for different methods in Table 2 ."], "text_after_citation": ["We use the development set of STS-B as our 
validation set.", "In CLAIF, we use the mean pooling strategy to get sentence embeddings for BERT and RoBERTa.", "For CLHAIF, we take the same pooling strategy as the corresponding baseline. Other implementation details are in Appendix A."], "citing_paper_content": {"title": "Improving Contrastive Learning Of Sentence Embeddings From Ai Feedback", "abstract": "Contrastive learning has become a popular approach in natural language processing, particularly for the learning of sentence embeddings. However, the discrete nature of natural language makes it difficult to ensure the quality of positive and negative sample pairs generated through data augmentation methods. Although supervised contrastive learning can produce more accurate sample pairs with human feedback labels, it still lacks fine-grained training signals. In this paper, we propose to improve Contrastive Learning of sentence embeddings from AI Feedback (CLAIF). Our method utilizes AI feedback from large pretrained language models (LLMs) to construct sample pairs with fine-grained sample similarity scores to improve contrastive learning. Besides, we combine human feedback and AI feedback to provide better supervision signals for supervised contrastive learning of sentence embeddings. Experimental results show that our method achieves state-of-the-art performance on several semantic textual similarity (STS) and transfer learning tasks compared to other unsupervised and supervised contrastive learning methods. 1"}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. 
As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. ::: BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["pre-trained language model"], "citation_intent": "method"} {"citing_id": "2304.04667v2", "cited_id": "1901.01412", "section_title": "Preliminaries", "citation": "Using notations in #REFR , a search problem P is a binary relation, and we say that S is a solution to instance x iff (x, S) \u2208 P .", "text_before_citation": ["Min-Cut data structure.", "Following notations from [AKT20a], a Min-Cut data structure for a graph family F is a data structure that given a graph G \u2208 F, after a preprocessing phase which makes t pq (m) Max-Flow calls on G and spends t po (m) time outside of these calls, can answer Min-Cut queries for any two nodes s, t \u2208 V in amortized query time (or output sensitive time) t mc (k st ), where k st denotes the output size (number of edges or nodes in the output cut or separator).", "In particular, it means that the preprocessing time is at most t pq (m) \u2022\u00d4(m) + t po (m) using [CKL + 22], and the query time is O(k st \u2022 t mc (k st )).", "We may also be interested in Min-Cut data structures that support queries of Max-Flow values instead of cuts, and we denote their query time by t mf (m).", "Non-reducibility."], "text_after_citation": ["Let SOL(x) = {S : (x, S) \u2208 P } denote the set of solutions for instance x.", "We say that 
P is a total function if every instance x has at least one solution, i.e. SOL(x) \u2260 \u2205.", "Let \u22a5 be the \"don't know\" symbol and assume \u22a5 \u2209 SOL(x) for all x.", "In particular, in our context of Min-Cut data structures, x is a graph, and SOL(x) is the set of all Min-Cut data structures with t mc (m) =\u00d4(1) for x where m is the number of edges in x.", "Next, we define the nondeterministic complexity of a total function."], "citing_paper_content": {"title": "(Almost) Ruling Out Seth Lower Bounds For All-Pairs Max-Flow", "abstract": "The All-Pairs Max-Flow problem has gained significant popularity in the last two decades, and many results are known regarding its fine-grained complexity. Despite this, wide gaps remain in our understanding of the time complexity for several basic variants of the problem. In this paper, we aim to bridge this gap by providing algorithms, conditional lower bounds, and non-reducibility results. Notably, we show that for most problem settings, deterministic reductions based on the Strong Exponential Time Hypothesis (SETH) cannot rule out n 4\u2212o(1) time algorithms under a hypothesis called NSETH. We present results for the following two cases, where some of the bounds assume that the matrix multiplication exponent \u03c9 = 2. Node-Capacities. For directed graphs with general node-capacities, we show that for any fixed \u03b5 > 0, a deterministic SETH-based reduction cannot rule out O(n 3+\u03b5) time algorithms under NSETH, which is tight with the known conditional lower bound of n 3\u2212o(1) devised for sparse graphs under SETH [Abboud et al., ToC 2021]. The same non-reducibility bound immediately applies to undirected graphs with unit node-capacities, which can be compared to an m 3/2\u2212o(1) + n 2 SETH-based lower bound following from a combination of previous works. Edge-Capacities. 
For directed graphs with unit edge-capacities, we similarly rule out an \u221a m \u2022 n 5/2+\u03b5\u2212o(1) SETH-based lower bound under NSETH, which can be contrasted with the known mn 1\u2212o(1) SETH-based lower bound [Krauthgamer and Trabelsi, TALG 2018]. While it remains open whether deterministic SETH-based reductions can rule out an n 4\u2212o(1) time algorithm for general edge-capacities, a consequence of our results is that a subquartic fine-grained reduction from the directed edge-capacitated setting to the directed node-capacitated setting would rule out such deterministic SETH-based reductions under NSETH. As a step towards ruling out even mn 1+\u03b5\u2212o(1) SETH lower bounds for undirected graphs with unit node-capacities, we design a new randomized O m 2+o(1) time combinatorial algorithm. This is an improvement over the recent O m 11/5+o(1) time algorithm [Huang et al., STOC 2023] and matching their m 2\u2212o(1) lower bound (up to subpolynomial factors), thus essentially settling the time complexity for this setting of the problem. More generally, our main technical contribution is the insight that st-cuts can be verified quickly, and that in most settings, st-flows can be shipped succinctly (i.e., with respect to the flow support). This is a key idea in our non-reducibility results, and it may be of independent interest."}, "cited_paper_content": {"title": "New Algorithms And Lower Bounds For All-Pairs Max-Flow In Undirected Graphs", "abstract": "We investigate the time-complexity of the All-Pairs Max-Flow problem: Given a graph with $n$ nodes and $m$ edges, compute for all pairs of nodes the maximum-flow value between them. If Max-Flow (the version with a given source-sink pair $s,t$) can be solved in time $T(m)$, then an $O(n^2) \\cdot T(m)$ is a trivial upper bound. But can we do better? ::: For directed graphs, recent results in fine-grained complexity suggest that this time bound is essentially optimal. 
In contrast, for undirected graphs with edge capacities, a seminal algorithm of Gomory and Hu (1961) runs in much faster time $O(n)\\cdot T(m)$. Under the plausible assumption that Max-Flow can be solved in near-linear time $m^{1+o(1)}$, this half-century old algorithm yields an $nm^{1+o(1)}$ bound. Several other algorithms have been designed through the years, including $\\tilde{O}(mn)$ time for unit-capacity edges (unconditionally), but none of them break the $O(mn)$ barrier. Meanwhile, no super-linear lower bound was shown for undirected graphs. ::: We design the first hardness reductions for All-Pairs Max-Flow in undirected graphs, giving an essentially optimal lower bound for the node-capacities setting. For edge capacities, our efforts to prove similar lower bounds have failed, but we have discovered a surprising new algorithm that breaks the $O(mn)$ barrier for graphs with unit-capacity edges! Assuming $T(m)=m^{1+o(1)}$, our algorithm runs in time $m^{3/2 +o(1)}$ and outputs a cut-equivalent tree (similarly to the Gomory-Hu algorithm). 
Even with current Max-Flow algorithms we improve state-of-the-art in many density regimes."}, "keywords": ["binary relation"], "citation_intent": "background"} {"citing_id": "2303.01471v1", "cited_id": "1503.01243", "section_title": "B.3 Qhd For Quadratic Model Functions", "citation": "This rate is at par with the convergence rate of the continuous-time model of Nesterov's method #REFR .", "text_before_citation": ["The Schr\u00f6dinger equation describing QHD is often too complicated to solve analytically.", "Fortunately, we can find a closed-form solution of QHD when f is a quadratic function.", "In the following calculation, we consider the one-dimensional case f (x) = 1 2 x 2 for simplicity.", "It is worth noting that the same method also applies to general finite-dimensional quadratic forms f (x) = 1 2 x T Ax with a positive semidefinite matrix A #OTHEREFR .", "It turns out that, if we choose the same time-dependent parameters as in Nesterov's accelerated gradient descent, the convergence rate is E[f ] \u223c\u03a8t = O(t \u22123 )."], "text_after_citation": ["Remark 4.", "Our result does not mean QHD can not achieve faster convergence for quadratic objective functions.", "In fact, one can use a linear function for \u03b2 t , and the convergence rate can be exponentially fast.", "Here, our goal is to compare QHD to the classical ODE model of Nesterov's method.", "We choose the following time-dependent parameters in the QHD Hamiltonian (A.23):"], "citing_paper_content": {"title": "Quantum Hamiltonian Descent *", "abstract": "Gradient descent is a fundamental algorithm in both theory and practice for continuous optimization. Identifying its quantum counterpart would be appealing to both theoretical and practical quantum applications. A conventional approach to quantum speedups in optimization relies on the quantum acceleration of intermediate steps of classical algorithms, while keeping the overall algorithmic trajectory and solution quality unchanged. 
We propose Quantum Hamiltonian Descent (QHD), which is derived from the path integral of dynamical systems referring to the continuous-time limit of classical gradient descent algorithms, as a truly quantum counterpart of classical gradient methods where the contribution from classically-prohibited trajectories can significantly boost QHD's performance for non-convex optimization. Moreover, QHD is described as a Hamiltonian evolution efficiently simulatable on both digital and analog quantum computers. By embedding the dynamics of QHD into the evolution of the so-called Quantum Ising Machine (including D-Wave and others), we empirically observe that the D-Wave-implemented QHD outperforms a selection of state-of-the-art gradient-based classical solvers and the standard quantum adiabatic algorithm, based on the time-to-solution metric, on non-convex constrained quadratic programming instances up to 75 dimensions. Finally, we propose a \"three-phase picture\" to explain the behavior of QHD, especially its difference from the quantum adiabatic algorithm."}, "cited_paper_content": {"title": "A Differential Equation For Modeling Nesterov'S Accelerated Gradient Method: Theory And Insights", "abstract": "We derive a second-order ordinary differential equation (ODE) which is the limit of Nesterov's accelerated gradient method. This ODE exhibits approximate equivalence to Nesterov's scheme and thus can serve as a tool for analysis. We show that the continuous time ODE allows for a better understanding of Nesterov's scheme. As a byproduct, we obtain a family of schemes with similar convergence rates. 
The ODE interpretation also suggests restarting Nesterov's scheme leading to an algorithm, which can be rigorously proven to converge at a linear rate whenever the objective is strongly convex."}, "keywords": ["Nesterov's method"], "citation_intent": "result"} {"citing_id": "2303.06419v1", "cited_id": "1706.06083", "section_title": "Method", "citation": "One such approach to solving this maximization would be the straight-forward application of projected gradient descent (PGD) #REFR .", "text_before_citation": ["In order to achieve robustness to this class of perturbations we use a min-max optimization approach common in the adversarial robustness literature, but modified for our problem setting:", "EQUATION", "We use \u00d7 to denote element-wise product throughout.", "The above formulation uses a weighting \u03b1 to trade off between the standard task loss, the first term, and the adversarial loss incurred by the neural network being non-robust in the human specified shortcut directions, the second term.", "We can leverage the many advances in adversarial robustness in order to approximately solve the maximization problem posed in the second half of our loss term."], "text_after_citation": ["Given an input, x, a mask m, and a positive value \u03ba, PGD uses a first-order optimization approach to arrive at an input x * that approximately solves the maximization in the second term of our loss.", "We refer to the approach of using PGD in our loss formulation as PGD-Ex.", "Given the non-convexity of this maximization, however, no guarantees can be made about the quality of the approximate solution x * .", "Reliance on a local attack method in our optimization has the potential to bias our approaches to the specific attack chosen, as observed in the adversarial robustness literature #OTHEREFR .", "Both local explanations and local adversarial attacks share this weakness."], "citing_paper_content": {"title": "Robust Learning From Explanations", "abstract": "Machine 
learning from explanations (MLX) is an approach to learning that uses human-provided annotations of relevant features for each input to ensure that model predictions are right for the right reasons. Existing MLX approaches rely heavily on a specific model interpretation approach and require strong parameter regularization to align model and human explanations, leading to suboptimal performance. We recast MLX as an adversarial robustness problem, where human explanations specify a lower dimensional manifold from which perturbations can be drawn, and show both theoretically and empirically how this approach alleviates the need for strong parameter regularization. We consider various approaches to achieving robustness, leading to improved performance over prior MLX methods. Finally, we combine robustness with an earlier MLX method, yielding state-of-the-art results on both synthetic and real-world benchmarks."}, "cited_paper_content": {"title": "Towards Deep Learning Models Resistant To Adversarial Attacks", "abstract": "Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. 
They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models."}, "keywords": ["projected gradient descent"], "citation_intent": "method"} {"citing_id": "2304.03495v1", "cited_id": "1707.09700", "section_title": "Predcls", "citation": "SGCls SGDet R@50 / 100 mR@50/100 F@50 / 100 R@50 / 100 mR@50/100 F@50 / 100 R@50 / 100 mR@50/100 F@50 / 100 IMP+ \u2021 #REFR 61. Table S4 .", "text_before_citation": [], "text_after_citation": ["mR@100 on the SGDet setting for head, body, and tail classes.", "\u2020 denotes that the bi-level sampling is applied on the model to achieve these results. Bold numbers indicate the best performances."], "citing_paper_content": {"title": "Devil'S On The Edges: Selective Quad Attention For Scene Graph Generation", "abstract": "Scene graph generation aims to construct a semantic graph structure from an image such that its nodes and edges respectively represent objects and their relationships. One of the major challenges for the task lies in the presence of distracting objects and relationships in images; contextual reasoning is strongly distracted by irrelevant objects or backgrounds and, more importantly, a vast number of irrelevant candidate relations. To tackle the issue, we propose the Selective Quad Attention Network (SQUAT) that learns to select relevant object pairs and disambiguate them via diverse contextual interactions. SQUAT consists of two main components: edge selection and quad attention. The edge selection module selects relevant object pairs, i.e., edges in the scene graph, which helps contextual reasoning, and the quad attention module then updates the edge features using both edge-to-node and edge-to-edge cross-attentions to capture contextual information between objects and object pairs. 
Experiments demonstrate the strong performance and robustness of SQUAT, achieving the state of the art on the Visual Genome and Open Images v6 benchmarks."}, "cited_paper_content": {"title": "Scene Graph Generation From Objects, Phrases And Region Captions", "abstract": "Object detection, scene graph generation and region captioning, which are three scene understanding tasks at different semantic levels, are tied together: scene graphs are generated on top of objects detected in an image with their pairwise relationship predicted, while region captioning gives a language description of the objects, their attributes, relations, and other context information. In this work, to leverage the mutual connections across semantic levels, we propose a novel neural network model, termed as Multi-level Scene Description Network (denoted as MSDN), to solve the three vision tasks jointly in an end-to-end manner. Objects, phrases, and caption regions are first aligned with a dynamic graph based on their spatial and semantic connections. Then a feature refining structure is used to pass messages across the three levels of semantic tasks through the graph. We benchmark the learned model on three tasks, and show the joint learning across three tasks with our proposed method can bring mutual improvements over previous models. 
Particularly, on the scene graph generation task, our proposed method outperforms the state-of-art method with more than 3% margin."}, "keywords": ["Table S4"], "citation_intent": "background"} {"citing_id": "2303.15749v1", "cited_id": "1412.6980", "section_title": "Implementation Details", "citation": "For the training of patch embedder g(x), we use an initial learning rate of 1e-5 with Adam #REFR optimizer for 10000 iterations with batch size being 100.", "text_before_citation": ["For Camelyon16, we tile the WSIs into 256\u00d7256 patches on 20\u00d7 magnification using the official code of #OTHEREFR , while for the HCC dataset the patches are 384\u00d7384 on 40\u00d7 magnification following the pathologists' advice.", "For both datasets, we use an ImageNet pre-trained ResNet50 7 to initialize g(x) (except DS-MIL #OTHEREFR ).", "The instance embedding process is the same of #OTHEREFR , which means for each patch, it will be firstly embedded into a 1024-dimension vector, and then be projected to a 512-dimension hidden space for further bag-level training.", "For the training of bag classifier f (x), we use an initial learning rate of 2e-4 with Adam #OTHEREFR optimizer for 200 epochs with batch size being 1.", "Camelyon16 results are reported on the official test split, while the HCC dataset uses a 7:1:2 split for training, validation and test."], "text_after_citation": ["Three metrics are used for evaluation, namely area under curve (AUC), F1 score, and slide-level accuracy (Acc).", "Experiments are all conducted on a Nvidia Tesla M40 (12GB)."], "citing_paper_content": {"title": "Iteratively Coupled Multiple Instance Learning From Instance To Bag Classifier For Whole Slide Image Classification", "abstract": "Whole Slide Image (WSI) classification remains a challenge due to their extremely high resolution and the absence of fine-grained labels. 
Presently, WSIs are usually classified as a Multiple Instance Learning (MIL) problem when only slide-level labels are available. MIL methods involve a patch embedding process and a bag-level classification process, but they are prohibitively expensive to be trained end-to-end. Therefore, existing methods usually train them separately, or directly skip the training of the embedder. Such schemes hinder the patch embedder's access to slide-level labels, resulting in inconsistencies within the entire MIL pipeline. To overcome this issue, we propose a novel framework called Iteratively Coupled MIL (ICMIL), which bridges the loss back-propagation process from the bag-level classifier to the patch embedder. In ICMIL, we use category information in the bag-level classifier to guide the patch-level fine-tuning of the patch feature extractor. The refined embedder then generates better instance representations for achieving a more accurate bag-level classifier. By coupling the patch embedder and bag classifier at a low cost, our proposed framework enables information exchange between the two processes, benefiting the entire MIL classification model. We tested our framework on two datasets using three different backbones, and our experimental results demonstrate consistent performance improvements over state-of-the-art MIL methods. Code will be made available upon acceptance."}, "cited_paper_content": {"title": "Adam: A Method For Stochastic Optimization", "abstract": "We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. 
The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm."}, "keywords": ["patch embedder g(x", "Adam optimizer"], "citation_intent": "method"} {"citing_id": "2303.11715v1", "cited_id": "1810.04805", "section_title": "I. Introduction", "citation": "Second, applying NLP methods to learn a question answering model usually requires a large-scale training set #REFR .", "text_before_citation": ["Question answering is an important topic in the field of natural language processing, and many works have been proposed recently #OTHEREFR .", "However, there are some challenges in using NLP algorithms for log question answering.", "First, there exists a domain shift between general natural language and log data.", "Log data includes domain-specific symbols, such as IP addresses and modular identifiers #OTHEREFR .", "General NLP techniques consider these symbols to be out-of-domain terms and replace them with a special token, however they are crucial for the log domain and cannot be ignored."], "text_after_citation": ["However, to the best of our knowledge, there is no public question answering dataset.", "It is difficult to implement an effective question answering system with limited data.", "In this work, we propose a question answering system for unstructured logs, namely LogQA, which aims to answer questions in the form of natural 
language based on a large-scale unstructured log corpus.", "LogQA has two key components: Log Retriever and Log Reader. Log Retriever retrieves some relevant and helpful raw logs.", "The goal of Log Reader is to predict exact answers based on retrieved logs."], "citing_paper_content": {"title": "Logqa: Question Answering In Unstructured Logs", "abstract": "Modern systems produce a large volume of logs to record run-time status and events. System operators use these raw logs to track a system in order to obtain some useful information to diagnose system anomalies. One of the most important problems in this area is to help operators find the answers to log-based questions efficiently and in a user-friendly way. In this work, we propose LogQA, which aims at answering log-based questions in the form of natural language based on large-scale unstructured log corpora. Our system presents the answer to a question directly instead of returning a list of relevant snippets, thus offering better user-friendliness and efficiency. LogQA represents the first approach to solve question answering in the log domain. LogQA has two key components: Log Retriever and Log Reader. Log Retriever aims at retrieving relevant logs w.r.t. a given question, while Log Reader is responsible for inferring the final answer. Given the lack of a public dataset for log question answering, we manually labelled a QA dataset from three open-source log corpora and will make them publicly available. We evaluated our proposed model on these datasets by comparing its performance with 6 other baseline methods. Our experimental results demonstrate that LogQA outperforms the other baseline methods."}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. 
Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. ::: BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["question answering model"], "citation_intent": "method"} {"citing_id": "2303.01092v1", "cited_id": "1911.12580", "section_title": "Related Work", "citation": "Stable learning #REFR learns a set of global sample weights that could remove the confounding bias for all the potential treatments from data distribution.", "text_before_citation": ["Distribution shift in supervised learning.", "Distribution shift problem has been studied in many literature #OTHEREFR .", "Most works aim to learn a representation that performs well on different source domains simultaneously #OTHEREFR Mahajan et al., 2021; #OTHEREFR , following the idea of causal invariance #OTHEREFR Arjovsky et al., 2019) .", "Structural equation models are often assumed for theoretical analysis #OTHEREFR Mahajan et al., 2021) .", "Distributionally robust optimization optimizes a model's worst-case performance over some uncertainty set directly (Krueger et al., 2021; #OTHEREFR ."], "text_after_citation": ["Disentangled representation learning #OTHEREFR Tr\u00e4uble et al., 2021; #OTHEREFR aims to learn representations 
where distinct and informative factors of variations in data are separated.", "Theoretical understanding of contrastive learning.", "A number of recent works also aim to theoretically explain the success of contrastive learning in IID settings.", "One way to explain it is through the mutual information between positive samples #OTHEREFR . #OTHEREFR", "(2019) directly analyze the generalization of InfoNCE loss based on the assumption that positive samples are drawn from the same latent classes. In the same setting, #OTHEREFR"], "citing_paper_content": {"title": "", "abstract": "Self-Supervised Learning (SSL) is a paradigm that leverages unlabeled data for model training. Empirical studies show that SSL can achieve promising performance in distribution shift scenarios, where the downstream and training distributions differ. However, the theoretical understanding of its transferability remains limited. In this paper, we develop a theoretical framework to analyze the transferability of self-supervised contrastive learning, by investigating the impact of data augmentation on it. Our results reveal that the downstream performance of contrastive learning depends largely on the choice of data augmentation. Moreover, we show that contrastive learning fails to learn domain-invariant features, which limits its transferability. Based on these theoretical insights, we propose a novel method called Augmentation-robust Contrastive Learning (ArCL), which guarantees to learn domain-invariant features and can be easily integrated with existing contrastive learning algorithms. We conduct experiments on several datasets and show that ArCL significantly improves the transferability of contrastive learning. * Equal Contribution. 
This work was partially done when Xuyang was visiting Qing Yuan Research Institute."}, "cited_paper_content": {"title": "Stable Learning Via Sample Reweighting", "abstract": "We consider the problem of learning linear prediction models with model misspecification bias. In such case, the collinearity among input variables may inflate the error of parameter estimation, resulting in instability of prediction results when training and test distributions do not match. In this paper we theoretically analyze this fundamental problem and propose a sample reweighting method that reduces collinearity among input variables. Our method can be seen as a pretreatment of data to improve the condition of design matrix, and it can then be combined with any standard learning method for parameter estimation and variable selection. Empirical studies on both simulation and real datasets demonstrate the effectiveness of our method in terms of more stable performance across different distributed data."}, "keywords": ["Stable learning"], "citation_intent": "background"} {"citing_id": "2304.05314v1", "cited_id": "1602.03786", "section_title": "A. 
Optimal Control Problems", "citation": "In case that the solution of Problem 2 does not exist, we can derive an alternative trajectory for the CAVs by solving a two-level optimization problem numerically that includes piecing together the constrained and unconstrained arcs until the solution does not violate any constraints #REFR .", "text_before_citation": ["By solving Problem 2, the optimal exit time t_i^f along with the optimal trajectory #OTHEREFR and control law #OTHEREFR are obtained for CAV-i for t \u2208 [t_i^0 , t_i^f ] using [a_i , b_i , c_i , d_i ].", "If a feasible solution to Problem 2 exists, then the solution is a cubic polynomial that guarantees none of the constraints are activated."], "citing_paper_content": {"title": "Coordination For Connected Automated Vehicles At Merging Roadways In Mixed Traffic Environment", "abstract": "In this paper, we present a two-level optimal control framework to address motion coordination of connected automated vehicles (CAVs) in the presence of human-driven vehicles (HDVs) in merging scenarios. Our framework combines an unconstrained trajectory solution of a low-level energy-optimal control problem with an upper-level optimization problem that yields the minimum travel time for CAVs. We predict the future trajectories of the HDVs using Newell's car-following model. To handle potential deviations of HDVs' actual behavior from the one predicted, we provide a risk-triggered re-planning mechanism for the CAVs based on time-to-conflict. 
The effectiveness of the proposed control framework is demonstrated via simulations with heterogeneous human driving behaviors and via experiments in a scaled environment."}, "cited_paper_content": {"title": "A Decentralized Energy-Optimal Control Framework For Connected Automated Vehicles At Signal-Free Intersections", "abstract": "We address the problem of optimally controlling connected and automated vehicles (CAVs) crossing an urban intersection without any explicit traffic signaling, so as to minimize energy consumption subject to a throughput maximization requirement. We show that the solution of the throughput maximization problem depends only on the hard safety constraints imposed on CAVs and its structure enables a decentralized optimal control problem formulation for energy minimization. We present a complete analytical solution of these decentralized problems and derive conditions under which feasible solutions satisfying all safety constraints always exist. The effectiveness of the proposed solution is illustrated through simulation which shows substantial dual benefits of the proposed decentralized framework by allowing CAVs to conserve momentum and fuel while also improving travel time."}, "keywords": ["alternative trajectory", "two-level optimization problem"], "citation_intent": "background"} {"citing_id": "2304.00152v1", "cited_id": "1703.04977", "section_title": "Method", "citation": "Kendall and Gal #REFR use the negative log-likelihood of the prediction model as the loss function to be minimized in pixel-wise tasks.", "text_before_citation": ["The objective of our work is to jointly estimate the disparity and its uncertainty.", "An important benefit of this joint formulation is that the multi-task network learns to predict more accurate disparities than the standalone disparity estimator when the uncertainty subnetwork is added.", "Given a stereo image pair X = {x l , x r }, with image dimensions H \u00d7 W , and the corresponding ground truth 
disparity d, the prediction d\u0302 of a stereo-matching network f \u03b8 can be represented as d\u0302 = f \u03b8 (x l , x r ).", "For each pixel i, the error (i) of the prediction is calculated using the L1 loss."], "text_after_citation": ["We take the formulation a step further by requiring that the network generate a distribution of uncertainties that matches the distribution of errors.", "To this end, we propose to minimize the divergence D between the distributions of predicted uncertainty and actual disparity error.", "In the following subsections, we present aleatoric uncertainty estimation (Section 3.1), the proposed KL divergence loss (Section 3.2), our network architecture (Section 3.3), and the combined loss function (Section 3.4)."], "citing_paper_content": {"title": "Learning The Distribution Of Errors In Stereo Matching For Joint Disparity And Uncertainty Estimation", "abstract": "We present a new loss function for joint disparity and uncertainty estimation in deep stereo matching. Our work is motivated by the need for precise uncertainty estimates and the observation that multi-task learning often leads to improved performance in all tasks. We show that this can be achieved by requiring the distribution of uncertainty to match the distribution of disparity errors via a KL divergence term in the network's loss function. A differentiable soft-histogramming technique is used to approximate the distributions so that they can be used in the loss. We experimentally assess the effectiveness of our approach and observe significant improvements in both disparity and uncertainty prediction on large datasets. Our code is available at https://github.com/lly00412/SEDNet.git."}, "cited_paper_content": {"title": "What Uncertainties Do We Need In Bayesian Deep Learning For Computer Vision?", "abstract": "There are two major types of uncertainty one can model. Aleatoric uncertainty captures noise inherent in the observations. 
On the other hand, epistemic uncertainty accounts for uncertainty in the model - uncertainty which can be explained away given enough data. Traditionally it has been difficult to model epistemic uncertainty in computer vision, but with new Bayesian deep learning tools this is now possible. We study the benefits of modeling epistemic vs. aleatoric uncertainty in Bayesian deep learning models for vision tasks. For this we present a Bayesian deep learning framework combining input-dependent aleatoric uncertainty together with epistemic uncertainty. We study models under the framework with per-pixel semantic segmentation and depth regression tasks. Further, our explicit uncertainty formulation leads to new loss functions for these tasks, which can be interpreted as learned attenuation. This makes the loss more robust to noisy data, also giving new state-of-the-art results on segmentation and depth regression benchmarks."}, "keywords": ["pixel-wise tasks", "loss function"], "citation_intent": "method"} {"citing_id": "2304.08211v1", "cited_id": "1908.00080", "section_title": "Flexible-Model Neural Accelerators", "citation": "For example, Google offers a low-cost and lowpower version of the Google TPU called EdgeTPU #REFR , which can run dedicated neural networks with 8-bit precision.", "text_before_citation": ["In this section we discus neural accelerators that can support multiple models without hardware changes.", "Efforts in accelerating the execution of TensorFlow Lite models using flexible hardware have focused on deploying systolic architectures for tensor operations to obtain very high throughput."], "text_after_citation": ["The systolic array size in the EdgeTPU has 64 x 64 multiply-add cells obtaining 4 TOPS at 480MHz, and it is much smaller than the TPU cloud configurations.", "Layers with floatingpoint precision will run on a external CPU that would act as the host for the EdgeTPU device.", "Xilinx has also focused on inference including support for 
TensorFlow, with the Xilinx DPU #OTHEREFR unit.", "It is composed of a register configuration unit, the data controller and convolution computing modules optimized for the FPGA hardware resources.", "The original hardware is specialized for convolutional neural networks, although alternative architectures and hardware configurations are made available for other model types."], "citing_paper_content": {"title": "Dynamically Reconfigurable Variable-Precision Sparse-Dense Matrix Acceleration In Tensorflow Lite", "abstract": "In this paper, we present a dynamically reconfigurable hardware accelerator called FADES (Fused Architecture for DEnse and Sparse matrices). The FADES design offers multiple configuration options that trade off parallelism and complexity using a dataflow model to create four stages that read, compute, scale and write results. FADES is mapped to the programmable logic (PL) and integrated with the TensorFlow Lite inference engine running on the processing system (PS) of a heterogeneous SoC device. The accelerator is used to compute the tensor operations, while the dynamically reconfigurable approach can be used to switch precision between int8 and float modes. This dynamic reconfiguration enables better performance by allowing more cores to be mapped to the resource-constrained device and lower power consumption compared with supporting both arithmetic precisions simultaneously. We compare the proposed hardware with a high-performance systolic architecture for dense matrices obtaining 25% better performance in dense mode with half the DSP blocks in the same technology. 
In sparse mode, we show that the core can outperform dense mode even at low sparsity levels, and a single-core achieves up to 20x acceleration over the software-optimized NEON RUY library."}, "cited_paper_content": {"title": "Machine Learning At The Network Edge: A Survey", "abstract": "Devices comprising the Internet of Things, such as sensors and small cameras, usually have small memories and limited computational power. The proliferation of such resource-constrained devices in recent years has led to the generation of large quantities of data. These data-producing devices are appealing targets for machine learning applications but struggle to run machine learning algorithms due to their limited computing capability. They typically offload data to external computing systems (such as cloud servers) for further processing. The results of the machine learning computations are communicated back to the resource-scarce devices, but this worsens latency, leads to increased communication costs, and adds to privacy concerns. Therefore, efforts have been made to place additional computing devices at the edge of the network, i.e close to the IoT devices where the data is generated. Deploying machine learning systems on such edge devices alleviates the above issues by allowing computations to be performed close to the data sources. 
This survey describes major research efforts where machine learning has been deployed at the edge of computer networks."}, "keywords": ["dedicated neural networks"], "citation_intent": "method"} {"citing_id": "2304.01534v1", "cited_id": "1608.03983", "section_title": "Experiment Setup", "citation": "The image features are then encoded at different spatial resolutions, resulting in tensors of shape #REFR .", "text_before_citation": ["In practical applications, autonomous software suppliers often want to upgrade their models using existing customer data without accessing their customers' data directly.", "UC 3 represented a typical way of using public traffic data collected in vehicular networks for training, which often involves a large number of clients with small amounts of data and significant differences between datasets.", "Lastly, UC 4 focused on federated learning for clients with different numbers of cameras, which can happen when data is collected from different vehicle models by different manufacturers. BEVT architecture.", "We begin by feeding our input images X_{i,k} \u2208 R^{L_k\u00d7H\u00d7W\u00d73} through a 3-layer ResNet34 encoder.", "To ensure consistency across inputs, we use the AMCM to resize all inputs to have L_k = 4."], "text_after_citation": ["Each client is locally trained for one epoch with a batch size of 4 using AdamW [30] optimizer. 
Baselines.", "Given the limited research on federated learning on BEVT, we conducted the first trial of training FedAvg on this platform.", "However, to validate the effectiveness of FedCaP, we also incorporated recent research findings on federated transformer learning as a baseline on BEVT.", "In addition to showcasing the results of local training on each client, we compared FedCaP with the following baselines:", "R^{L_k\u00d764\u00d764\u00d7128} , R^{L_k\u00d732\u00d732\u00d7256} , R^{L_k\u00d716\u00d716\u00d7512} ."], "citing_paper_content": {"title": "Fedbevt: Federated Learning Bird'S Eye View Perception Transformer In Road Traffic Systems", "abstract": "Bird's eye view (BEV) perception is becoming increasingly important in the field of autonomous driving. It uses multi-view camera data to learn a transformer model that directly projects the perception of the road environment onto the BEV perspective. However, training a transformer model often requires a large amount of data, and as camera data for road traffic is often private, it is typically not shared. Federated learning offers a solution that enables clients to collaborate and train models without exchanging data. In this paper, we propose FedBEVT, a federated transformer learning approach for BEV perception. We address two common data heterogeneity issues in FedBEVT: (i) diverse sensor poses, and (ii) varying sensor numbers in perception systems. We present federated learning with camera-attentive personalization (FedCaP) and adaptive multi-camera masking (AMCM) to enhance the performance in real-world scenarios. To evaluate our method in real-world settings, we create a dataset consisting of four typical federated use cases. Our findings suggest that FedBEVT outperforms the baseline approaches in all four use cases, demonstrating the potential of our approach for improving BEV perception in autonomous driving. 
We will make all codes and data publicly available."}, "cited_paper_content": {"title": "Sgdr: Stochastic Gradient Descent With Warm Restarts", "abstract": "Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm restarts are also gaining popularity in gradient-based optimization to improve the rate of convergence in accelerated gradient schemes to deal with ill-conditioned functions. In this paper, we propose a simple warm restart technique for stochastic gradient descent to improve its anytime performance when training deep neural networks. We empirically study its performance on the CIFAR-10 and CIFAR-100 datasets, where we demonstrate new state-of-the-art results at 3.14% and 16.21%, respectively. We also demonstrate its advantages on a dataset of EEG recordings and on a downsampled version of the ImageNet dataset. Our source code is available at https://github.com/loshchil/SGDR"}, "keywords": ["image features"], "citation_intent": "background"} {"citing_id": "2303.06808v1", "cited_id": "1910.07517", "section_title": "Rq2: Can Existing Data Augmentation Methods Produce Robust Code Models?", "citation": "This phenomenon is consistent with the conclusion drawn by previous work #REFR that simply increasing the training data is not sufficient for improving the robustness of code models.", "text_before_citation": ["Adversarial robustness reflects how models handle the data with noise, which is an important characteristic that should be evaluated by the models.", "Table 5 presents the attack success rate of two state-of-the-art attack methods on our trained models.", "First, the results demonstrate that data augmentation can not always enhance the robustness of models.", "Compared to the No Aug models, only four out of eight models trained by using data augmentation have higher robustness."], "text_after_citation": ["Also importantly, in some cases, although data augmentation can help train a more robust model, the 
robustness improvement is insignificant, e.g., the greatest improvement is 5.98% (GraphCodeBERT-Refactor-BigCloneBench-ALERT). Then, we compare each data augmentation method.", "In CodeBERT, RS performs the best and has a relatively better robustness improvement in five (out of 12) cases, and reduces ASR by up to 5.10% under MHM attack and 3.69% under ALERT attack compared to No Aug.", "Besides, the second best one, RD, reduces ASR by up to 5.67% in MHM and 2.66% in ALERT compared to No Aug.", "In GraphCodeBERT, RS is still the best choice for robustness improvement in five (out of 12) cases, reducing ASR by up to 4.10% in MHM and 3.02% in ALERT compared to No Aug.", "Interestingly, based on pre-trained PL models, compared to Refactor, methods like RS that slightly break the syntax of programs can produce more accurate and robust code models for downstream tasks."], "citing_paper_content": {"title": "Boosting Source Code Learning With Data Augmentation: An Empirical Study", "abstract": "The next era of program understanding is being propelled by the use of machine learning to solve software problems. Recent studies have shown surprising results of source code learning, which applies deep neural networks (DNNs) to various critical software tasks, e.g., bug detection and clone detection. This success can be greatly attributed to the utilization of massive high-quality training data, and in practice, data augmentation, which is a technique used to produce additional training data, has been widely adopted in various domains, such as computer vision. However, in source code learning, data augmentation has not been extensively studied, and existing practice is limited to simple syntax-preserved methods, such as code refactoring. Essentially, source code is often represented in two ways, namely, sequentially as text data and structurally as graph data, when it is used as training data in source code learning. 
Inspired by these analogy relations, we take an early step to investigate whether data augmentation methods that are originally used for text and graphs are effective in improving the training quality of source code learning. To that end, we first collect and categorize data augmentation methods in the literature. Second, we conduct a comprehensive empirical study on four critical tasks and 11 DNN architectures to explore the effectiveness of 12 data augmentation methods (including code refactoring and 11 other methods for text and graph data). Our results identify the data augmentation methods that can produce more accurate and robust models for source code learning, including those based on mixup (e.g., SenMixup for texts and Manifold-Mixup for graphs), and those that slightly break the syntax of source code (e.g., random swap and random deletion for texts)."}, "cited_paper_content": {"title": "Adversarial Examples For Models Of Code", "abstract": "Neural models of code have shown impressive performance for tasks such as predicting method names and identifying certain kinds of bugs. In this paper, we show that these models are vulnerable to adversarial examples, and introduce a novel approach for attacking trained models of code with adversarial examples. The main idea is to force a given trained model to make an incorrect prediction as specified by the adversary by introducing small perturbations that do not change the program's semantics. To find such perturbations, we present a new technique for Discrete Adversarial Manipulation of Programs (DAMP). DAMP works by deriving the desired prediction with respect to the model's inputs while holding the model weights constant and following the gradients to slightly modify the code. ::: To defend a model against such attacks, we propose placing a defensive model (Anti-DAMP) in front of it. Anti-DAMP detects unlikely mutations and masks them before feeding the input to the downstream model. 
::: We show that our DAMP attack is effective across three neural architectures: code2vec, GGNN, and GNN-FiLM, in both Java and C#. We show that DAMP has up to 89% success rate in changing a prediction to the adversary's choice (\"targeted attack\"), and a success rate of up to 94% in changing a given prediction to any incorrect prediction (\"non-targeted attack\"). By using Anti-DAMP, the success rate of the attack drops drastically for both targeted and non-targeted attacks, with a minor penalty of 2% relative degradation in accuracy while not performing under attack."}, "keywords": ["code models"], "citation_intent": "result"} {"citing_id": "2304.13958v1", "cited_id": "1409.0473", "section_title": "Learning And Reasoning Poverty Estimation", "citation": "The data merging can be done in various ways, such as early fusion, late fusion, naive concatenation, attention mechanism #REFR , or gating-based fusion.", "text_before_citation": ["However, as we have limited training data, training visual or textual embeddings from scratch will, in all likelihood, not yield desirable results.", "Thus, we propose to use a transfer learning approach wherein we use pre-trained models trained on large corpora and fine-tune them to our task-specific dataset.", "For the visual inputs of satellite images, we can utilize pre-trained CNN-based models, such as VGG Net [Simonyan and Zisserman, 2014], or ResNet #OTHEREFR .", "For the textual inputs, we prefer the language models, such as BERT [Devlin et al., 2019] or RoBERTa #OTHEREFR .", "Each of the representations goes through a non-linear transformation before the data merging operation."], "text_after_citation": ["We treat our problem as a classification task wherein we have four ground-truth labels at the district level -'advanced', 'catching up', 'falling behind', and 'lagged'.", "This is calculated based on aggregating the individual MDPI scores at the district level.", "Correlation, Association, and Causality Analysing the 
relationship of the input variables with the target output can be a challenging step but it is important for strategic actions.", "Such insights are important to determine the driving factors (causative), factors that exhibit linear relationships (correlative), and factors that co-occur (associative). (Figure 2: Proposed multi-input deep learning model for aggregating and processing data from proxy and traditional data sources.)", "We propose to use the Bayesian Networks for identifying causative factors, Pearson's correlation to determine the correlative factors, and the Hypergeometric test to discover the associative factors."], "citing_paper_content": {"title": "Learning And Reasoning Multifaceted And Longitudinal Data For Poverty Estimates And Livelihood Capabilities Of Lagged Regions In Rural India", "abstract": "Poverty is a multifaceted phenomenon linked to the lack of capabilities of households to earn a sustainable livelihood, increasingly being assessed using multidimensional indicators. Its spatial pattern depends on social, economic, political, and regional variables. Artificial intelligence has shown immense scope in analyzing the complexities and nuances of poverty. The proposed project aims to examine the poverty situation of rural India for the period of 1990-2022 based on the quality of life and livelihood indicators. The districts will be classified into 'advanced', 'catching up', 'falling behind', and 'lagged' regions. The project proposes to integrate multiple data sources, including conventional national-level large sample household surveys, census surveys, and proxy variables like daytime, and nighttime data from satellite images, and communication networks, to name a few, to provide a comprehensive view of poverty at the district level. The project also intends to examine causation and longitudinal analysis to examine the reasons for poverty. 
Poverty and inequality could be widening in developing countries due to demographic and growth-agglomerating policies. Therefore, targeting the lagging regions and the vulnerable population is essential to eradicate poverty and improve the quality of life to achieve the goal of 'zero poverty'. Thus, the study also focuses on the districts with a higher share of the marginal section of the population compared to the national average to trace the performance of development indicators and their association with poverty in these regions."}, "cited_paper_content": {"title": "Neural Machine Translation By Jointly Learning To Align And Translate", "abstract": "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. 
Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition."}, "keywords": ["data merging", "naive concatenation"], "citation_intent": "method"} {"citing_id": "2304.02541v1", "cited_id": "1801.09536", "section_title": "Introduction", "citation": "In contrast to semantic word embeddings #REFR , we show that intrinsic and extrinsic metrics for phonetic word embeddings generally correlate with each other.", "text_before_citation": ["They range from intuitive baselines to more complex techniques using metric and contrastive learning.", "More importantly, however, we include an evaluation suite for testing the performance of phonetic embeddings. The motivations for this are two-fold. First, prior works are inconsistent in evaluating their models.", "This prevents the field from observing long-term improvements of such embeddings and from making fair comparisons across different approaches.", "Secondly, when a practitioner is deciding which phonetic word embedding method to use, the go-to approach is to first apply the embeddings (generally fast) and then train a downstream model on those embeddings (compute and time intensive).", "Instead, intrinsic embedding evaluation metrics (cheap)-if shown to correlate well with extrinsic metrics-could provide useful signals in embedding method selection prior to training of downstream models (expensive)."], "text_after_citation": ["While some work on evaluating acoustic word embeddings exists #OTHEREFR , this work specializes in phonetic word embeddings for text, not speech.", "Our contributions are threefold:", "\u2022 a survey of existing phonetic embeddings, \u2022 four novel methods for phonetic word embedding, ranging from simple baselines to complex models, and \u2022 an evaluation suite for such embeddings."], "citing_paper_content": {"title": "Pwesuite: Phonetic Word Embeddings And Tasks They Facilitate", "abstract": "Word embeddings that map words into a 
fixed-dimensional vector space are the backbone of modern NLP. Most word embedding methods encode semantic information. However, phonetic information, which is important for some tasks, is often overlooked. In this work, we develop several novel methods which leverage articulatory features to build phonetically informed word embeddings, and present a set of phonetic word embeddings to encourage their community development, evaluation and use. While several methods for learning phonetic word embeddings already exist, there is a lack of consistency in evaluating their effectiveness. Thus, we also propose several ways to evaluate both intrinsic aspects of phonetic word embeddings, such as word retrieval and correlation with sound similarity, and extrinsic performances, such as rhyme and cognate detection and sound analogies. We hope that our suite of tasks will promote reproducibility and provide direction for future research on phonetic word embeddings."}, "cited_paper_content": {"title": "A Survey Of Word Embeddings Evaluation Methods", "abstract": "Word embeddings are real-valued word representations able to capture lexical semantics and trained on natural language corpora. Models proposing these representations have gained popularity in the recent years, but the issue of the most adequate evaluation method still remains open. This paper presents an extensive overview of the field of word embeddings evaluation, highlighting main problems and proposing a typology of approaches to evaluation, summarizing 16 intrinsic methods and 12 extrinsic methods. 
I describe both widely-used and experimental methods, systematize information about evaluation datasets and discuss some key challenges."}, "keywords": ["phonetic word embeddings", "semantic word embeddings"], "citation_intent": "result"} {"citing_id": "2303.14733v1", "cited_id": "1408.1588", "section_title": "Introduction", "citation": "Also, the author in #REFR considers the synchronization of multi-dimensional mechanical and electrical systems where the subsystems interact via positive semidefinite matrix weights.", "text_before_citation": ["Corresponding to a matrix-weighted consensus system, one may define a matrix-weighted graph and a matrix-weighted Laplacian.", "It has been shown that algebraic graph properties of the matrix-weighted Laplacian determine the consensus and clustering behaviors of the whole system #OTHEREFR .", "Several applications of the matrix-weighted consensus algorithm can be found in the literature.", "For example, the authors in #OTHEREFR propose multi-dimensional opinion dynamics, where the in-teractions and inter-logical dependencies between several considered topics are captured by matrix weights, respectively.", "In bearing-based formation control and network localization #OTHEREFR , the interactions between neighboring agents are modeled by an orthogonal projection matrix obtained from the directional (or bearing) vector between neighboring agents."], "text_after_citation": ["It is noteworthy that most existing works on matrixweighted consensus assumed that the agents update their states synchronously or following a deterministic update sequence.", "Continuous-time matrix-weighted consensus with switching graph topologies was studied in #OTHEREFR .", "Discrete-time matrix-weighted consensus with fixed or switching topologies was studied in #OTHEREFR .", "Matrixweighted consensus with hybrid continuous-discrete time updates was examined in #OTHEREFR .", "The randomized consensus algorithms, which refer to a family of consensus 
algorithms in which the edges between agents are randomly selected according to some stochastic model for every discrete instant, have received a lot of attention in the literature #OTHEREFR ."], "citing_paper_content": {"title": "Randomized Matrix Weighted Consensus \u22c6", "abstract": "In this paper, a randomized gossip-type matrix-weighted consensus algorithm is proposed for both leaderless and leaderfollower topologies. Under some mild assumptions, the proposed pairwise asynchronous update algorithm achieves a consensus in expectation. Moreover, the probability distribution, the weighting matrices, and the updating step size jointly determine the upper bound of the \u01eb-convergence time of the algorithm. The theoretical result is verified by several simulation examples."}, "cited_paper_content": {"title": "Synchronization Under Matrix-Weighted Laplacian", "abstract": "Synchronization in a group of linear time-invariant systems is studied where the coupling between each pair of systems is characterized by a different output matrix. Simple methods are proposed to generate a (separate) linear coupling gain for each pair of systems, which ensures that all the solutions converge to a common trajectory."}, "keywords": ["positive semidefinite matrix", "synchronization"], "citation_intent": "background"} {"citing_id": "2303.01064v1", "cited_id": "1906.02192", "section_title": "Evaluation Results And Discussion", "citation": "For example, the record in validation dataset with 'celex_id' of '31999L0010' extracts 'foodstuff' but gets 'foodstuff' and 'processed foodstuff' with a reordered categories list. 
#REFR Ko et al.", "text_before_citation": ["And Legal-BERT-FP and BERT-base-uncased gain competitive f-scores in validation and testing dataset.", "At epoch 5, Legal-BERT-FP performs slightly better than BERT in validation and slightly worse in testing.", "The extracted answers are not independent of the positions of the category in categories list.", "The Table 6 : validation and testing results (%) in the concept multi-label classification for multi-answer questioning downstream task with proposed classification metric extracted answers might be different if the same title and same domain-tree randomly reorders the categories in the categories list.", "However, in classification, the order of the categories is not important."], "text_after_citation": ["(2020) also mentions the position bias in extractive question answering.", "The other issue in the proposed multi-label classification task is the long input problem since BERT has limited input length.", "Table 7 shows validation and testing in domain multi-label classification with the downstream task utilizing the multi-answer questioning task proposed in methodology.", "The domain multi-label classification has the constant domain label group which is different from concept multi-label classification having various lengths of label groups.", "The evaluation metrics include training loss, validation loss, and overall metrics -precision, recall, f1 score and accuracy in seqeval."], "citing_paper_content": {"title": "Adopting The Multi-Answer Questioning Task With An Auxiliary Metric For Extreme Multi-Label Text Classification Utilizing The Label Hierarchy", "abstract": "Extreme multi-label text classification utilizes the label hierarchy to partition extreme labels into multiple label groups, turning the task into simple multi-group multi-label classification tasks. Current research encodes labels as a vector with fixed length which needs establish multiple classifiers for different label groups. 
The problem is how to build only one classifier without sacrificing the label relationship in the hierarchy. This paper adopts the multi-answer questioning task for extreme multi-label classification. This paper also proposes an auxiliary classification evaluation metric. This study adopts the proposed method and the evaluation metric to the legal domain. The utilization of legal Berts and the study on task distribution are discussed. The experiment results show that the proposed hierarchy and multi-answer questioning task can do extreme multi-label classification for the EURLEX dataset. And in minor/fine-tuning the multi-label classification task, the domain adapted BERT models could not show apparent advantages in this experiment. The method is also theoretically applicable to zero-shot learning."}, "cited_paper_content": {"title": "Large-Scale Multi-Label Text Classification On Eu Legislation", "abstract": "We consider Large-Scale Multi-Label Text Classification (LMTC) in the legal domain. We release a new dataset of 57k legislative documents from EURLEX, annotated with ~4.3k EUROVOC labels, which is suitable for LMTC, few- and zero-shot learning. Experimenting with several neural classifiers, we show that BIGRUs with label-wise attention perform better than other current state of the art methods. Domain-specific WORD2VEC and context-sensitive ELMO embeddings further improve performance. We also find that considering only particular zones of the documents is sufficient. 
This allows us to bypass BERT's maximum text length limit and fine-tune BERT, obtaining the best results in all but zero-shot learning cases."}, "keywords": ["reordered categories list", "example"], "citation_intent": "background"} {"citing_id": "2303.11954v1", "cited_id": "1906.01537", "section_title": "Bo For Function Composition", "citation": "Astudillo and Frazier #REFR model the constituent functions of the composition by a single multioutput function f(x) = (f 1 (x), . . .", "text_before_citation": ["(\u2022), K (n) (\u2022, \u2022)) where \u00b5 (n) (\u2022)", "is the posterior mean function and K (n) (\u2022, \u2022) is the posterior covariance function, see #OTHEREFR for more details.", "The acquisition function then uses this posterior model to identify the next query location x n+1 .", "In doing so, vanilla BO ignores the values of the member functions in the composition h.", "BO for composite function, on the other hand, takes advantage of the available information about h, and its easy-to-compute nature."], "text_after_citation": [", f M (x)) and then model the uncertainty in f(x) using a multi-output Gaussian process to optimize h(f(x)).", "Since the prior over f is modelled as a MOGP, the proposed method tries to capture the correlations between different components of the multi-output function f(x).", "Note that the proposed EI and PI-based acquisition functions are required to be computed using Monte Carlo sampling.", "Furthermore, a sample from the posterior distribution is obtained by first sampling an n variate normal distribution, then scaling it by the lower Cholesky factor and then centering it with the mean of the posterior GP. 
Two problems arise due to this: 1.", "Such simulation based averaging approach increases the time complexity of the procedure linearly with the number of samples taken for averaging and 2."], "citing_paper_content": {"title": "Bayesian Optimization For Function Compositions With Applications To Dynamic Pricing", "abstract": "Bayesian Optimization (BO) is used to find the global optima of black box functions. In this work, we propose a practical BO method of function compositions where the form of the composition is known but the constituent functions are expensive to evaluate. By assuming an independent Gaussian process (GP) model for each of the constituent black-box function, we propose EI and UCB based BO algorithms and demonstrate their ability to outperform vanilla BO and the current state-of-art algorithms. We demonstrate a novel application of the proposed methods to dynamic pricing in revenue management when the underlying demand function is expensive to evaluate."}, "cited_paper_content": {"title": "Bayesian Optimization Of Composite Functions", "abstract": "We consider optimization of composite objective functions, i.e., of the form $f(x)=g(h(x))$, where $h$ is a black-box derivative-free expensive-to-evaluate function with vector-valued outputs, and $g$ is a cheap-to-evaluate real-valued function. While these problems can be solved with standard Bayesian optimization, we propose a novel approach that exploits the composite structure of the objective function to substantially improve sampling efficiency. Our approach models $h$ using a multi-output Gaussian process and chooses where to sample using the expected improvement evaluated on the implied non-Gaussian posterior on $f$, which we call expected improvement for composite functions (\\ei). Although \\ei\\ cannot be computed in closed form, we provide a novel stochastic gradient estimator that allows its efficient maximization. 
We also show that our approach is asymptotically consistent, i.e., that it recovers a globally optimal solution as sampling effort grows to infinity, generalizing previous convergence results for classical expected improvement. Numerical experiments show that our approach dramatically outperforms standard Bayesian optimization benchmarks, reducing simple regret by several orders of magnitude."}, "keywords": ["constituent functions", "single multioutput function"], "citation_intent": "background"} {"citing_id": "2303.16809v1", "cited_id": "1311.2037", "section_title": "Vi. Conclusion", "citation": "Though the main benefit may be further reduction in overall communication cost, it is not clear whether an advantage over pairwise approaches can be achieved when an average pairwise intersection is large compared to the total intersection (\u2229 i S i ) #REFR .", "text_before_citation": ["As a result, it is easier to formally analyze its performance, and, indeed, we have shown that it completes in time bounded by the network diameter (or logarithmic in network size for the \"small-world\" networks that reasonably model blockchain networks).", "We have also validated our analytical findings against a novel event-based simulator that we have developed.", "We run the simulator on real-world transaction pool statistics drawn from our own measurement campaign.", "In our simulations, SREP incurs only tens of gigabytes of overall bandwidth overhead to synchronize networks with ten thousand nodes, which is several times better than the current approach in the literature.", "For future work, we propose to consider multi-party set reconciliation #OTHEREFR , #OTHEREFR in the context of transaction pool sync."], "text_after_citation": [], "citing_paper_content": {"title": "Srep: Out-Of-Band Sync Of Transaction Pools For Large-Scale Blockchains", "abstract": "Synchronization of transaction pools (mempools) has shown potential for improving the performance and block propagation delay 
of state-of-the-art blockchains. Indeed, various heuristics have been proposed in the literature to this end, all of which incorporate exchanges of unconfirmed transactions into their block propagation protocol. In this work, we take a different approach, maintaining transaction synchronization outside (and independently) of the block propagation channel. In the process, we formalize the synchronization problem within a graph theoretic framework and introduce a novel algorithm (SREP-Set Reconciliation-Enhanced Propagation) with quantifiable guarantees. We analyze the algorithm's performance for various realistic network topologies, and show that it converges on any connected graph in a number of steps that is bounded by the diameter of the graph. We confirm our analytical findings through extensive simulations that include comparison with MempoolSync, a recent approach from the literature. Our simulations show that SREP incurs reasonable overall bandwidth overhead and, unlike MempoolSync, scales gracefully with the size of the network."}, "cited_paper_content": {"title": "Simple Multi-Party Set Reconciliation", "abstract": "As users migrate information to cloud storage, many distributed cloud-based services use multiple loosely consistent replicas of user information to avoid the high overhead of more tightly coupled synchronization. Periodically, the information must be synchronized, or reconciled. One can place this problem in the theoretical framework of {\\em set reconciliation}: two parties $A_1$ and $A_2$ each hold a set of keys, named $S_1$ and $S_2$ respectively, and the goal is for both parties to obtain $S_1 \\cup S_2$. Typically, set reconciliation is interesting algorithmically when sets are large but the set difference $|S_1-S_2|+|S_2-S_1|$ is small. 
In this setting the focus is on accomplishing reconciliation efficiently in terms of communication; ideally, the communication should depend on the size of the set difference, and not on the size of the sets. ::: In this paper, we extend recent approaches using Invertible Bloom Lookup Tables (IBLTs) for set reconciliation to the multi-party setting. In this setting there are three or more parties $A_1,A_2,\\ldots,A_n$ holding sets of keys $S_1,S_2,\\ldots,S_n$ respectively, and the goal is for all parties to obtain $\\cup_i S_i$. This could of course be done by pairwise reconciliations, but we seek more effective methods. ::: Our methodology uses network coding techniques in conjunction with IBLTs, allowing efficiency in network utilization along with efficiency obtained by passing messages of size $O(|\\cup_i S_i - \\cap_i S_i|)$. Further, our approach can function even if the number of parties is not exactly known in advance, and in many cases can be used to determine which parties contain keys not in the joint union. By connecting reconciliation with network coding, we can allow for substantially more efficient reconciliation methods that apply to a number of natural distributed computing problems."}, "keywords": ["overall communication cost", "average pairwise intersection"], "citation_intent": "background"} {"citing_id": "2304.14271v1", "cited_id": "1712.01887", "section_title": "D. 
Federated Learning And Autonomous Driving", "citation": "To further overcome this performance gap, authors in #REFR proposed modifications to the existing approach through deep gradient compression.", "text_before_citation": ["However, the challenge is to choose the favorable lower-limit value, as similar to soft-filter pruning, the quantization and selection of the wrong lower-limit value can directly impact the overall model aggregation, which may provide an overall reduced model size but decreases the accuracy.", "To overcome the previous challenge, stochastic gradient descent with k-sparsification is proposed in #OTHEREFR , by reducing the data and model size and also improving convergence through error compensation for the transmission taking place between edge and server.", "A similar approach is used in #OTHEREFR , the method proposes to fix the sparsity rate.", "The communication or transmission of the gradient is only enabled for a fraction of the gradient with the highest magnitudes and keeping the unused gradient in the container.", "The sparsity rate used by the authors is p = 0.001, and this approach has relatively less impact the learned model overall accuracy and performance."], "text_after_citation": ["Deep gradient compression uses approaches such as: momentum correction, local gradient clipping, for the convolutional neural network and recurrent neural network.", "Results show that gradients are compressed by ratio of 270-660 following a hierarchical approach, without slowing down the model convergence.", "Sparsification methods were initially proposed with the function of improving and promoting distributed and parallel training among the cloud and data-centers.", "However, these methods lacked model convergence and aggregation as a scope which is currently a most essential metrics for the federated and distributed machine learning.", "Similarly, attention should be given to the number of edge devices participating in the transmission and the 
server participating in collaborative training."], "citing_paper_content": {"title": "A Survey On Approximate Edge Ai For Energy Efficient Autonomous Driving Services", "abstract": "Autonomous driving services rely heavily on sensors such as cameras, LiDAR, radar, and communication modules. A common practice of processing the sensed data is using a high-performance computing unit placed inside the vehicle, which deploys AI models and algorithms to act as the brain or administrator of the vehicle. The vehicular data generated from average hours of driving can be up to 20 Terabytes depending on the data rate and specification of the sensors. Given the scale and fast growth of services for autonomous driving, it is essential to improve the overall energy and environmental efficiency, especially in the trend towards vehicular electrification (e.g., battery-powered). Although the areas have seen significant advancements in sensor technologies, wireless communications, computing and AI/ML algorithms, the challenge still exists in how to apply and integrate those technology innovations to achieve energy efficiency. This survey reviews and compares the connected vehicular applications, vehicular communications, approximation and Edge AI techniques. The focus is on energy efficiency by covering newly proposed approximation and enabling frameworks. To the best of our knowledge, this survey is the first to review the latest approximate Edge AI frameworks and publicly available datasets in energy-efficient autonomous driving. 
The insights and vision from this survey can be beneficial for the collaborative driving service development on low-power and memory-constrained systems and also for the energy optimization of autonomous vehicles."}, "cited_paper_content": {"title": "Deep Gradient Compression: Reducing The Communication Bandwidth For Distributed Training", "abstract": "Large-scale distributed training requires significant communication bandwidth for gradient exchange that limits the scalability of multi-node training, and requires expensive high-bandwidth network infrastructure. The situation gets even worse with distributed training on mobile devices (federated learning), which suffers from higher latency, lower throughput, and intermittent poor connections. In this paper, we find 99.9% of the gradient exchange in distributed SGD is redundant, and propose Deep Gradient Compression (DGC) to greatly reduce the communication bandwidth. To preserve accuracy during compression, DGC employs four methods: momentum correction, local gradient clipping, momentum factor masking, and warm-up training. We have applied Deep Gradient Compression to image classification, speech recognition, and language modeling with multiple datasets including Cifar10, ImageNet, Penn Treebank, and Librispeech Corpus. On these scenarios, Deep Gradient Compression achieves a gradient compression ratio from 270x to 600x without losing accuracy, cutting the gradient size of ResNet-50 from 97MB to 0.35MB, and for DeepSpeech from 488MB to 0.74MB. Deep gradient compression enables large-scale distributed training on inexpensive commodity 1Gbps Ethernet and facilitates distributed training on mobile."}, "keywords": ["deep gradient compression"], "citation_intent": "method"} {"citing_id": "2305.02414v1", "cited_id": "1512.04385", "section_title": "Introduction", "citation": "Combining this with #REFR After rearranging terms, we conclude that Lemma 3.1 holds when G is connected. 
Suppose G is not connected.", "text_before_citation": ["Observe that no edge on the boundary of a face of length three is also in a face of length four, or else a triangle and 4-cycle are adjacent.", "Because G is not a triangle and no two triangles are adjacent, no two length-3 faces share an edge.", "Therefore, we conclude that at least 3|X| edges of G are on the boundary of a face in X.", "Moreover, every edge on the boundary of a length-4 face is on the boundary of at most two length-4 faces, so at least 2|Y | edges of G are on the boundary of a face in Y .", "Thus, we have |E(G)| \u2265 3|X| + 2|Y |, which implies #OTHEREFR ."], "text_after_citation": ["Let J be the set of components of G with less than four vertices, and let K be the set of components of G with at least four vertices.", "Note that for all j \u2208 J, |E(j)| \u2264 |V (j)|.", "Therefore, if |K| \u2265 1, Lemma 3.1 holds by induction on |J|. Otherwise, |K| = 0.", "Note that n \u2264 15(n \u2212 2)/7 if n \u2265 4, and in this case |E(G)| \u2264 |V (G)|.", "Therefore, if the total number of vertices in the components in J is at least four, Lemma 3.1 holds."], "citing_paper_content": {"title": "The Independence Ratio Of 4-Cycle-Free Planar Graphs", "abstract": "We prove that every n-vertex planar graph G with no triangle sharing an edge with a 4-cycle has independence ratio n/\u03b1(G) \u2264 4 \u2212 \u03b5 for \u03b5 = 1/30. This result implies that the same bound holds for 4-cycle-free planar graphs and planar graphs with no adjacent triangles and no triangle sharing an edge with a 5-cycle. For the latter case we strengthen the bound to \u03b5 = 2/9."}, "cited_paper_content": {"title": "Extremal C4-Free/C5-Free Planar Graphs", "abstract": "We study the topic of \"extremal\" planar graphs, defining $\\mathrm{ex_{_{\\mathcal{P}}}}(n,H)$ to be the maximum number of edges possible in a planar graph on $n$ vertices that does not contain a given graph $H$ as a subgraph. 
In particular,we examine the case when $H$ is a small cycle,obtaining $\\mathrm{ex_{_{\\mathcal{P}}}}(n,C_{4}) \\leq \\frac{15}{7}(n-2)$ for all $n \\geq 4$ and $\\mathrm{ex_{_{\\mathcal{P}}}}(n,C_{5}) \\leq \\frac{12n-33}{5}$ for all $n \\geq 11$, and showing that both of these bounds are tight."}, "keywords": ["Lemma", "G"], "citation_intent": "background"} {"citing_id": "2303.10945v1", "cited_id": "2003.12267", "section_title": "Related Work", "citation": "ADGAN #REFR ameliorates this issue by extracting expressive textures vectors from different semantic entities to synthesize the target image.", "text_before_citation": ["(a) To investigate the domain gap between source and OOD datasets, we obtain the high-level feature by a person ReID model #OTHEREFR and use t-SNE for visualization. (b) The domain generalization of NTED #OTHEREFR .", "Typical pose transfer model performs reasonably well on the source domain, however, the generated results could be easily violated with OOD input.", "We first propose an Open-World Pose Transfer (OWPT) framework to investigate the domain generalization of a pre-defined model toward OOD appearance and skeleton.", "The former presents a coarse-to-fine structure which utilizes coarse shape or foreground masks to ensure generalization on arbitrary poses.", "However, these methods are not efficient at inference and require additional computing power."], "text_after_citation": ["CASD #OTHEREFR introduces attention-based methods to distribute the semantic vectors to the target poses.", "Compared to these parser-based methods, NTED #OTHEREFR applies sparse attention-based operation to extract the semantic textures without the assistance of the external parser.", "Although these methods have been validated well on the DeepFashion dataset #OTHEREFR , there is no relevant research to extend these pre-trained models to OOD dataset.", "Our work first explores the performance of these models on OOD data.", "However, the performance still 
lags far behind photo-realistic quality due to the pre-trained models overfitting on the DeepFashion dataset #OTHEREFR ."], "citing_paper_content": {"title": "Open-World Pose Transfer Via Sequential Test-Time Adaption", "abstract": "Figure 1. Visualization of open-world pose transfer (OWPT). With open-world references, it can be observed that a typical pose transfer method (e.g., NTED [32]) exhibits a twisty pattern. In this sense, we call for a solid model that can handle open-world instances beyond a specific dataset."}, "cited_paper_content": {"title": "Controllable Person Image Synthesis With Attribute-Decomposed Gan", "abstract": "This paper introduces the Attribute-Decomposed GAN, a novel generative model for controllable person image synthesis, which can produce realistic person images with desired human attributes (e.g., pose, head, upper clothes and pants) provided in various source inputs. The core idea of the proposed model is to embed human attributes into the latent space as independent codes and thus achieve flexible and continuous control of attributes via mixing and interpolation operations in explicit style representations. Specifically, a new architecture consisting of two encoding pathways with style block connections is proposed to decompose the original hard mapping into multiple more accessible subtasks. In the source pathway, we further extract component layouts with an off-the-shelf human parser and feed them into a shared global texture encoder for decomposed latent codes. This strategy allows for the synthesis of more realistic output images and automatic separation of un-annotated attributes.
Experimental results demonstrate the proposed method's superiority over the state of the art in pose transfer and its effectiveness in the brand-new task of component attribute transfer."}, "keywords": ["target image", "expressive textures vectors"], "citation_intent": "background"} {"citing_id": "2304.01585v1", "cited_id": "1802.00761", "section_title": "Networks", "citation": "Attribute representation is a method of describing the data semantically #REFR . An attribute vector a represents a set of soft-biometrics.", "text_before_citation": ["Prior to concatenation, the outputs of each convolutional block are processed by a fully connected layer or LSTM layer depending on the network type.", "Concatenation is followed by a two-layered fully connected MLP and a classifier layer for tCNN-IMU MLP .", "In the case of tCNN-IMU LSTM , the concatenation is followed by two LSTM layers and a classifier layer.", "The networks use a softmax classifier for person identification, whereas a sigmoid layer is used for soft-biometrics identification #OTHEREFR .", "Soft-biometrics of individuals describe or categorise an individual or a group of individuals #OTHEREFR , e.g., Gender Identity, Age, Weight and Height."], "text_after_citation": ["A similar combination of soft-biometrics could represent different persons with similar features.", "The Nearest Neighbour Approach (NNA) is used for soft-biometrics-based identification.", "The NNA calculates the distance between a prediction attribute vector a and an attribute representation A, with all the different combinations of soft-biometrics.", "The person identity is assigned to the one with the least distance from A.", "NNA is performed by computing a certain similarity between an attribute representation A and the vector a from the network; typically, the cosine similarity #OTHEREFR and the Probabilistic Retrieval Model (PRM) similarity #OTHEREFR ."], "citing_paper_content": {"title": "Multi-Channel Time-Series Person And Soft-Biometric Identification",
"abstract": "Multi-channel time-series datasets are popular in the context of human activity recognition (HAR). On-body device (OBD) recordings of human movements are often preferred for HAR applications not only for their reliability but as an approach for identity protection, e.g., in industrial settings. Contradictory, the gait activity is a biometric, as the cyclic movement is distinctive and collectable. In addition, the gait cycle has proven to contain soft-biometric information of human groups, such as age and height. Though general human movements have not been considered a biometric, they might contain identity information. This work investigates person and soft-biometrics identification from OBD recordings of humans performing different activities using deep architectures. Furthermore, we propose the use of attribute representation for soft-biometric identification. We evaluate the method on four datasets of multi-channel time-series HAR, measuring the performance of a person and soft-biometrics identification and its relation concerning performed activities. We find that person identification is not limited to gait activity. The impact of activities on the identification performance was found to be training and dataset specific. Soft-biometric based attribute representation shows promising results and emphasis the necessity of larger datasets."}, "cited_paper_content": {"title": "Learning Attribute Representation For Human Activity Recognition", "abstract": "Attribute representations became relevant in image recognition and word spotting, providing support under the presence of unbalance and disjoint datasets. However, for human activity recognition using sequential data from on-body sensors, human-labeled attributes are lacking. This paper introduces a search for attributes that represent favorably signal segments for recognizing human activities. 
It presents three deep architectures, including temporal convolutions and an IMU-centered design, for predicting attributes. An empirical evaluation of random and learned attribute representations, as well as of the networks, is carried out on two datasets, outperforming the state of the art."}, "keywords": ["soft-biometrics", "Attribute representations"], "citation_intent": "background"} {"citing_id": "2303.08989v1", "cited_id": "1910.11333", "section_title": "Introduction", "citation": "For instance, to simulate Google's Sycamore #REFR , which has 53 qubits, we would require 128 PB of memory.", "text_before_citation": ["In a quantum computer, all operations follow quantum mechanics: preparing qubits (a quantum version of classical bits), applying unitary gates, and measuring qubits to get classical data.", "The goal of quantum circuit simulation is to reproduce the classical result obtained by these quantum operations only with a classical computer.", "There exist various types of quantum simulators #OTHEREFR .", "For general circuits dominated by non-Clifford gates, two types of simulators are widely used: 1) state vector and 2) tensor network methods.
We choose among these simulation methods according to the objectives.", "The state vector simulations require 2^n complex values in memory, where n is the number of qubits."], "text_after_citation": ["Since the total memory capacity of the current largest supercomputers is on the order of a few PB, state vector methods are limited by memory capacity.", "One advantage of state vector methods is that the computational complexity for a circuit of depth d is O(2^n \u00d7 d) and scales linearly with d.", "Furthermore, there are some studies to reduce the required memory size, such as by splitting the circuit #OTHEREFR .", "On the other hand, the tensor contraction method #OTHEREFR can simulate several thousand qubits with low-depth layers at a slightly higher computational cost.", "Therefore, tensor contraction is the method of choice for many recent studies that aim to validate quantum supremacy in both quantum computing and high performance computing #OTHEREFR ."], "citing_paper_content": {"title": "Quantum Circuit Simulation By Sgemm Emulation On Tensor Cores And Automatic Precision Selection", "abstract": "Quantum circuit simulation provides the foundation for the development of quantum algorithms and the verification of quantum supremacy. Among the various methods for quantum circuit simulation, tensor network contraction has been increasing in popularity due to its ability to simulate a larger number of qubits. During tensor contraction, the input tensors are reshaped to matrices and computed by a GEMM operation, where these GEMM operations could reach up to 90% of the total calculation time. GEMM throughput can be improved by utilizing mixed-precision hardware such as Tensor Cores, but straightforward implementation results in insufficient fidelity for deep and large quantum circuits. Prior work has demonstrated that compensated summation with special care of the rounding mode can fully recover the FP32 precision of SGEMM even when using TF32 or FP16 Tensor Cores.
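The state-vector memory requirement discussed in this record (2^n complex amplitudes, quoted in the record as 128 PB for Sycamore's 53 qubits) is easy to sanity-check; the 16 bytes per amplitude below assumes double-precision complex values, a common but not universal simulator choice:

```python
def state_vector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to hold a full state vector of n_qubits qubits.

    Assumes one complex128 amplitude (16 bytes) per basis state; a
    complex64 simulator would need half as much.
    """
    return (2 ** n_qubits) * bytes_per_amplitude

# Sycamore's 53 qubits: 2^53 amplitudes * 16 B = 2^57 B = 128 PiB,
# matching the "128 PB of memory" figure quoted in the citation.
print(state_vector_bytes(53) / 2**50)  # → 128.0
```

The same function shows why a few tens of qubits is the practical ceiling for pure state-vector simulation: every extra qubit doubles the requirement.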
The exponent range is a critical issue when applying such techniques to quantum circuit simulation. While TF32 supports almost the same exponent range as FP32, FP16 supports a much smaller exponent range. In this work, we use the exponent range statistics of input tensor elements to select which Tensor Cores we use for the GEMM. We evaluate our method on Random Circuit Sampling (RCS), including Sycamore's quantum circuit, and show that the throughput is 1.86 times higher at maximum while maintaining accuracy."}, "cited_paper_content": {"title": "Supplementary Information For \"Quantum Supremacy Using A Programmable Superconducting Processor\"", "abstract": "The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor [1]. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits [2\u20137] to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times\u2014our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy [8\u201314] for this specific computational task, heralding a much-anticipated computing paradigm.
Quantum supremacy is demonstrated using a programmable superconducting processor known as Sycamore, taking approximately 200 seconds to sample one instance of a quantum circuit a million times, which would take a state-of-the-art supercomputer around ten thousand years to compute."}, "keywords": ["53 qubits"], "citation_intent": "background"} {"citing_id": "2305.01828v1", "cited_id": "1707.00291", "section_title": "Los Probability Models", "citation": "For the UMi and UMa scenarios, 3GPP SCM had a higher probability of predicting LOS channel conditions at larger distances (several hundred meters) than NYUSIM #REFR .", "text_before_citation": ["The first step in generating a channel model for a UE is determining whether the channel condition is LOS/NLOS.", "We extend the ChannelCondition class developed in #OTHEREFR with five different classes, namely, NYURmaChannelConditionModel, NYUUmaChannelConditionModel, NYUUmiChannelConditionModel, NYUInHChannelConditionModel, and NYUInFChannelConditionModel, each handling a different scenario for NYUSIM.", "All the newly introduced NYUSIM classes mentioned above derive from the same base class, called NYUChannelCondition, which extends the ChannelConditionModel interface.", "The NYUSIM LOS probability models for outdoor scenarios were developed based on radio propagation measurements conducted at 28 and 73 GHz in New York City #OTHEREFR .", "The Tx-Rx locations were selected from measurement data, and ray tracing was employed to determine whether the path between the Tx and Rx is in LOS/NLOS #OTHEREFR ."], "text_after_citation": ["However, in a real-world UMi and UMa scenario, getting LOS propagation over several hundred meters is very challenging #OTHEREFR .", "Thus, the UMi and UMa LOS probability models, implemented in NYUUmiChannelConditionModel and NYUUmaChannelConditionModel, use the NYU (squared) model based on Tables 1 and 2 in #OTHEREFR respectively and are given in (1) and #OTHEREFR .", "EQUATION", "EQUATION",
"In (1) 1 = 22m, 2 = 100m, in (2) 1 = 20m, 2 = 160m, is defined in Table 7 .4.2 in #OTHEREFR and \u210e is the height of the UE in meters."], "citing_paper_content": {"title": "Ns-3 Implementation Of Sub-Terahertz And Millimeter Wave Drop-Based Nyu Channel Model (Nyusim)", "abstract": "The next generation of wireless networks will use sub-THz frequencies alongside mmWave frequencies to enable multi-Gbps and low latency applications. To enable different verticals and use cases, engineers must take a holistic approach to build, analyze, and study different parts of the network and the interplay among the lower and higher layers of the protocol stack. It is of paramount importance to accurately characterize the radio propagation in diverse scenarios such as urban microcell (UMi), urban macrocell (UMa), rural macrocell (RMa), indoor hotspot (InH), and indoor factory (InF) for a wide range of frequencies. The 3GPP statistical channel model (SCM) is oversimplified and restricted to the frequency range of 0.5-100 GHz. Thus, to overcome these limitations, this paper presents a detailed implementation of the drop-based NYU channel model (NYUSIM) for the frequency range of 0.5-150 GHz for the UMi, UMa, RMa, InH, and InF scenarios. NYUSIM allows researchers to design and evaluate new algorithms and protocols for future sub-THz wireless networks in ns-3. CCS CONCEPTS \u2022 Networks \u2192 Network simulation; Mobile networks;."}, "cited_paper_content": {"title": "Investigation And Comparison Of 3Gpp And Nyusim Channel Models For 5G Wireless Communications", "abstract": "Channel models describe how wireless channel parameters behave in a given scenario, and help evaluate link- and system-level performance. A proper channel model should be able to faithfully reproduce the channel parameters obtained in field measurements and accurately predict the spatial and temporal channel impulse response along with large-scale fading. 
This paper compares two popular channel models for next generation wireless communications: the 3rd Generation Partnership Project (3GPP) TR 38.900 Release 14 channel model and the statistical spatial channel model NYUSIM developed by New York University (NYU). The two channel models employ different modeling approaches in many aspects, such as the line-of-sight probability, path loss, and clustering methodology. Simulations are performed using the two channel models to analyze the channel eigenvalue distribution and spectral efficiency leveraging the analog/digital hybrid beamforming methods found in the literature. Simulation results show that the 3GPP model produces different eigenvalue and spectral efficiency distributions for mmWave bands, as compared to the outcome from NYUSIM that is based on massive amounts of real-world measured data in New York City. This work shows NYUSIM is more accurate for realistic simulations than 3GPP in urban environments."}, "keywords": ["LOS channel conditions"], "citation_intent": "background"} {"citing_id": "2304.08983v1", "cited_id": "1805.07944", "section_title": "Iv. Numerical Example", "citation": "If the attack identification is performed with all the measurements, then the number of cases is equal to #REFR 4 = 4845.", "text_before_citation": [", 10, 1 2 x 3 \u2212 1 2 sin x 2 , i = 11, .", ". 
.", 20, and the noise v(t) \u2208 R^20 be given such that ||v(t)|| \u2264 0.01.", "In this example, the partial information \u03a6_i(x) of the state x obtained from each individual output y_i is no more than h_i(x), i.e., \u03a6_i = h_i , \u2200i, as it can be computed from (5) that L_f h_i = \u2212h_i for i \u2264 10, and L_f h_i = 0 for i \u2265 11.", "Then, it can be verified that the state x(t) can be restored from any 12 elements of the measurements {y_i(t)} #OTHEREFR i=1 , so that the attack identification and the resilient state estimation are possible when up to 4 measurements are compromised."], "text_after_citation": ["But, the attack identification can also be performed with the local measurements {y_i(t)}_{i=1}^{10} and {y_i(t)}_{i=11}^{20} , separately, although the state x(t) is not observable from either of them.", "This is because both the collections {\u03a6_i} 10", "i=1 and {\u03a6_i} 20"], "citing_paper_content": {"title": "", "abstract": "A resilient state estimation scheme for uniformly observable nonlinear systems, based on a method for local identification of sensor attacks, is presented. The estimation problem is combinatorial in nature, and so many methods require substantial computational and storage resources as the number of sensors increases. To reduce the complexity, the proposed method performs the attack identification with local subsets of the measurements, not with the set of all measurements. A condition for nonlinear attack identification is introduced as a relaxed version of the existing redundant observability condition. It is shown that an attack identification can be performed even when the state cannot be recovered from the measurements. As a result, although a portion of measurements are compromised, they can be locally identified and excluded from the state estimation, and thus the true state can be recovered.
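The case count in this record's citation ("... = 4845" for 20 measurements with up to 4 compromised) matches the binomial coefficient C(20, 4); reading it that way is an inference from context, sketched below together with one plausible accounting of the saving from local identification:

```python
from math import comb

# Exhaustive attack identification over all measurements: with 20
# measurements and up to 4 compromised sensors, there are C(20, 4)
# subsets of suspect sensors to test.
cases = comb(20, 4)
print(cases)  # → 4845

# The record notes identification can instead run on the two local
# halves {y_1..y_10} and {y_11..y_20} separately; one plausible
# accounting of the reduction (hypothetical, not taken from the paper):
local_cases = 2 * comb(10, 4)
print(local_cases)  # → 420
```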
Simulation results demonstrate the effectiveness of the proposed scheme."}, "cited_paper_content": {"title": "Detection Of Sensor Attack And Resilient State Estimation For Uniformly Observable Nonlinear Systems Having Redundant Sensors", "abstract": "This paper presents a detection algorithm for sensor attacks and a resilient state estimation scheme for a class of uniformly observable nonlinear systems. An adversary is supposed to corrupt a subset of sensors with the possibly unbounded signals, while the system has sensor redundancy. We design an individual high-gain observer for each measurement output so that only the observable portion of the system state is obtained. Then, a nonlinear error correcting problem is solved by collecting all the information from those partial observers and exploiting redundancy. A computationally efficient, on-line monitoring scheme is presented for attack detection. Based on the attack detection scheme, an algorithm for resilient state estimation is provided. The simulation results demonstrate the effectiveness of the proposed algorithm."}, "keywords": ["attack identification"], "citation_intent": "background"} {"citing_id": "2304.03026v1", "cited_id": "1804.08489", "section_title": "A. 
Related Work", "citation": "In #REFR , authors studied the reliability of command and control channel between UAVs and a traditional cellular network with massive MIMO.", "text_before_citation": ["A high mobility UAV communication relay was analyzed in #OTHEREFR .", "The authors studied the throughput maximization problems by designing UAV trajectory and power allocations.", "Authors in #OTHEREFR , #OTHEREFR investigated a Max-Min SNR based UAV trajectory optimization problem.", "Authors in #OTHEREFR jointly designed multiple UAVs' trajectories and transmit powers to maximize the throughput of a group of ground users.", "In #OTHEREFR , the authors considered a UAV that collects data from multiple wireless sensor networks and jointly optimized UAV's trajectory and communication scheduling to maximize the minimum average data collection rate."], "text_after_citation": ["Besides, a deep reinforcement learning-based approach of optimizing multiple cellular-connected UAVs to minimize their interference with ground users was analyzed in #OTHEREFR .", "Authors in #OTHEREFR studied the uplink inter-cell interference coordination design for a cellular network simultaneously serving UAVs and ground users.", "Typically, the maximum weighted sum-rate of UAVs and users is jointly optimized based on power allocations and uplink cell associations.", "Authors in #OTHEREFR maximized the sum rate of UAV uplink transmission by optimizing the precoding vectors at the multi-antenna UAVs.", "A deep reinforcement learning-based intelligent navigation task of a cellular-connected UAV network was investigated in #OTHEREFR which aims to minimize the weighted sum of time cost and expected outage duration alongside UAV trajectory."], "citing_paper_content": {"title": "Coverage Analysis And Trajectory Optimization For Aerial Users With Dedicated Cellular Infrastructure", "abstract": "In this paper, we consider a novel cellular network for aerial users, which is composed of dedicated base 
stations (BSs), whose antennas are directed towards aerial users, and traditional terrestrial BSs (TBSs). Besides, the dedicated BSs are deployed on roadside furniture, such as lampposts and traffic lights, to achieve multiple features while occupying less space. Therefore, the locations of dedicated BSs and TBSs are modeled by a Poisson-line-Cox-process (PLCP) and Poisson point process (PPP), respectively. For the proposed network, we first compute the aerial coverage probability and show that the deployment of dedicated BSs improves the coverage probability in both high dense areas and rural areas. We then consider a cellular-connected UAV that has a flying mission and optimize its trajectory to maximize the minimal achievable signal-to-interference-plus-noise ratio (SINR) (Max-Min SINR). To obtain the Max-Min SINR and minimal time trajectory that satisfies the Max-Min SINR, we proposed two algorithms that are practical in large-scale networks. Finally, our results show that the optimal density of dedicated BSs which maximizes Max-Min SINR decreases with the increase of the road densities."}, "cited_paper_content": {"title": "Understanding Uav Cellular Communications: From Existing Networks To Massive Mimo", "abstract": "The purpose of this article is to bestow the reader with a timely study of UAV cellular communications, bridging the gap between the 3GPP standardization status quo and the more forward-looking research. Special emphasis is placed on the downlink command and control (C&C) channel to aerial users, whose reliability is deemed of paramount technological importance for the commercial success of UAV cellular communications. 
Through a realistic side-by-side comparison of two network deployments -- a present-day cellular infrastructure versus a next-generation massive MIMO system -- a plurality of key facts are cast light upon, with the three main ones summarized as follows: (i) UAV cell selection is essentially driven by the secondary lobes of a base station's radiation pattern, causing UAVs to associate to far-flung cells; (ii) over a 10 MHz bandwidth, and for UAV heights of up to 300 m, massive MIMO networks can support 100 kbps C&C channels in 74% of the cases when the uplink pilots for channel estimation are reused among base station sites, and in 96% of the cases without pilot reuse across the network; (iii) supporting UAV C&C channels can considerably affect the performance of ground users on account of severe pilot contamination, unless suitable power control policies are in place."}, "keywords": ["traditional cellular network", "massive MIMO"], "citation_intent": "background"} {"citing_id": "2303.12357v1", "cited_id": "1706.06083", "section_title": "Introduction", "citation": "The projected gradient descent attack #REFR is a widely used attack method that iteratively applies small steps maximizing the loss objective and clips the values of intermediate results after each step (projection onto the norm ball) to ensure that they remain in a constrained neighbourhood of the original inputs.", "text_before_citation": ["The time series that appear more distinguishable to our eyes are captured by larger Wasserstein distances.", "The example in the upper figure may not be a successful adversarial example under the \u2113_1 distance as it is distant in the \u2113_1 space.
However, it is almost imperceptible to human evaluation.", "In this case, the Wasserstein distance is a better measurement of adversarial examples.", "In this work, we study the adversarial attack on time series in the Wasserstein space for the first time.", "Our goal is to generate adversarial examples that have small Wasserstein perturbation so they are more indistinguishable and natural to humans, e.g., a physician who examines ECG data."], "text_after_citation": ["Similarly, we propose a Wasserstein PGD method to search for adversarial examples in the Wasserstein space for univariant time series.", "The Wasserstein distance cannot be calculated directly without solving an optimization subproblem and has no closed-form solution in most cases, which limits its applications.", "At present, there are only two cases in which the Wasserstein distance can be directly calculated: one is the case of the dimension of the inputs being 1, and the other is the inputs following a Gaussian distribution.", "For univariant time series, we can take advantage of this 1D characteristic and use the closed-form Wasserstein distance to apply the projection of the intermediate results of each step onto the Wasserstein ball with the gradient descent method.", "The direct projection of intermediate results onto the Wasserstein ball using gradient descent could be time-consuming and converge to a sub-optimal solution."], "citing_paper_content": {"title": "Wasserstein Adversarial Examples On Univariant Time Series Data", "abstract": "Adversarial examples are crafted by adding indistinguishable perturbations to normal examples in order to fool a well-trained deep learning model into misclassifying. In the context of computer vision, this notion of indistinguishability is typically bounded by \u2113_\u221e or other norms. However, these norms are not appropriate for measuring indistinguishability for time series data.
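The PGD loop described in this record's citation (small ascent steps on the loss, each followed by projection back onto a norm ball around the original input) can be sketched in a few lines of NumPy; the ℓ∞ ball, step size, and toy quadratic loss below are illustrative assumptions, not the Wasserstein projection the citing paper develops:

```python
import numpy as np

def pgd_linf(x0, grad_fn, eps=0.1, step=0.02, n_steps=40):
    """Toy ell_inf PGD: ascend the loss, then clip back into the eps-ball.

    x0      : original input (numpy array)
    grad_fn : returns the gradient of the loss w.r.t. the input
    """
    x = x0.copy()
    for _ in range(n_steps):
        x = x + step * np.sign(grad_fn(x))  # ascent step (FGSM-style)
        x = np.clip(x, x0 - eps, x0 + eps)  # project onto the eps-ball
    return x

# Illustrative loss L(x) = 0.5 * ||x - target||^2, so grad = x - target;
# maximizing it pushes x away from target while staying near x0.
target = np.array([1.0, -1.0, 0.5])
x0 = np.zeros(3)
x_adv = pgd_linf(x0, lambda x: x - target, eps=0.1)

# The adversarial example never leaves the eps-neighbourhood of x0.
print(np.max(np.abs(x_adv - x0)) <= 0.1 + 1e-12)  # → True
```

Swapping the `np.clip` projection for a projection onto a Wasserstein ball is precisely the modification the citing paper proposes.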
In this work, we propose adversarial examples in the Wasserstein space for time series data for the first time and utilize Wasserstein distance to bound the perturbation between normal examples and adversarial examples. We introduce Wasserstein projected gradient descent (WPGD), an adversarial attack method for perturbing univariant time series data. We leverage the closed-form solution of Wasserstein distance in the 1D space to calculate the projection step of WPGD efficiently with the gradient descent method. We further propose a two-step projection so that the search of adversarial examples in the Wasserstein space is guided and constrained by Euclidean norms to yield more effective and imperceptible perturbations. We empirically evaluate the proposed attack on several time series datasets in the healthcare domain. Extensive results demonstrate that the Wasserstein attack is powerful and can successfully attack most of the target classifiers with a high attack success rate. To better study the nature of Wasserstein adversarial example, we evaluate a strong defense mechanism named Wasserstein smoothing for potential certified robustness defense. Although the defense can achieve some accuracy gain, it still has limitations in many cases and leaves space for developing a stronger certified robustness method to Wasserstein adversarial examples on univariant time series data."}, "cited_paper_content": {"title": "Towards Deep Learning Models Resistant To Adversarial Attacks", "abstract": "Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. 
This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models."}, "keywords": ["Projected gradient descent", "loss objective"], "citation_intent": "method"} {"citing_id": "2303.00498v1", "cited_id": "1706.03762", "section_title": "C. Adaptive Hybrid Spatial-Temporal Learning Block", "citation": "We also apply multi-head attention #REFR to capture the relation semantics between nodes from different learning subspaces.", "text_before_citation": ["Dynamic Graph Learning.", "DGL implements Graph Attention Network (GAT) #OTHEREFR to perform dynamic spatial feature aggregation, owing to GAT's ability to capture the dynamic influences of neighbors.", "We believe that short-term dynamic influences mostly occur between nearby cell towers, so we deploy the exponential distance-decay matrix #OTHEREFR as the prior graph structure of GAT.", "We call it the distance adjacency matrix, denoted as A_dis \u2208 R^{N \u00d7 N} .
As shown in Fig.", "2 , GAT achieves weighted feature aggregation by calculating the attention scores of the central node and neighbor nodes."], "text_after_citation": ["Moreover, the multi-head attention can be computed in parallel for reducing time complexity.", "For simplicity, taking a graph node v_i as an example, the graph attention mechanism of DGL can be formulated as follows:", "EQUATION", "EQUATION", "where a_ij \u2208 R represents the computed attention score between v_i and v_j , where j \u2208 N_i ."], "citing_paper_content": {"title": "Adaptive Hybrid Spatial-Temporal Graph Neural Network For Cellular Traffic Prediction", "abstract": "Cellular traffic prediction is an indispensable part of intelligent telecommunication networks. Nevertheless, due to the frequent user mobility and complex network scheduling mechanisms, cellular traffic often inherits complicated spatial-temporal patterns, making the prediction incredibly challenging. Although recent advanced algorithms such as graph-based prediction approaches have been proposed, they frequently model spatial dependencies based on static or dynamic graphs and neglect the coexisting multiple spatial correlations induced by traffic generation. Meanwhile, some works lack the consideration of the diverse cellular traffic patterns, resulting in suboptimal prediction results. In this paper, we propose a novel deep learning network architecture, Adaptive Hybrid Spatial-Temporal Graph Neural Network (AHSTGNN), to tackle the cellular traffic prediction problem. First, we apply adaptive hybrid graph learning to learn the compound spatial correlations among cell towers. Second, we implement a Temporal Convolution Module with multi-periodic temporal data input to capture the nonlinear temporal dependencies. In addition, we introduce an extra Spatial-Temporal Adaptive Module to conquer the heterogeneity lying in cell towers.
Our experiments on two real-world cellular traffic datasets show AHSTGNN outperforms the state-of-the-art by a significant margin, illustrating the superior scalability of our method for spatial-temporal cellular traffic prediction."}, "cited_paper_content": {"title": "Attention Is All You Need", "abstract": "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. 
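A minimal NumPy sketch of the multi-head attention referenced in this record: queries, keys, and values are projected into h subspaces, scaled dot-product attention runs independently in each head, and the head outputs are concatenated; the random weights and sizes below are stand-ins for learned parameters:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, n_heads, rng):
    """Minimal multi-head self-attention (no masking, no biases).

    X : (seq_len, d_model) array; random matrices stand in for the
    learned per-head projection weights.
    """
    seq_len, d_model = X.shape
    assert d_model % n_heads == 0
    d_head = d_model // n_heads
    heads = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        # Scaled dot-product attention within this head's subspace.
        A = softmax(Q @ K.T / np.sqrt(d_head))
        heads.append(A @ V)
    Wo = rng.standard_normal((d_model, d_model))
    return np.concatenate(heads, axis=-1) @ Wo  # concat heads, project out

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))  # 5 nodes/tokens, d_model = 8
out = multi_head_attention(X, n_heads=4, rng=rng)
print(out.shape)  # → (5, 8)
```

Because each head only sees a d_model/h-dimensional subspace, the h heads together cost roughly the same as one full-width attention while attending to different learned subspaces, which is the point of the citation in this record.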
We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data."}, "keywords": ["nodes", "multi-head attention"], "citation_intent": "method"} {"citing_id": "2304.02091v1", "cited_id": "0912.2371", "section_title": "Subgraph Problems", "citation": "In fact, one of the fastest methods for Subgraph Isomorphism works via an arithmetic circuit for evaluating the homomorphism polynomial #REFR , allowing for the randomized detection of a subgraph H with |V(H)| = k and of treewidth w in time O(2^k n^{w+1}).", "text_before_citation": ["In our first application in this section, we note a generalisation of the above results. The second result regards subgraph isomorphism.", "Given graphs G and H, the problem of checking whether H is a subgraph of G parameterized by |V(H)| can be either FPT, as when H is a path, or W[1]-hard, as when H is a clique.", "More generally, Subgraph Isomorphism is FPT by |V(H)| if H comes from a family of graphs with bounded treewidth (originally shown using the colour-coding technique of Alon et al. #OTHEREFR ), and there is good evidence that no more general such class exists #OTHEREFR .", "In general, the parameterized complexity of subgraph isomorphism problems has been extensively and meticulously investigated #OTHEREFR ."], "text_after_citation": ["Furthermore, the exponent w + 1 here is optimal, up to plausible conjectures #OTHEREFR .", "We observe that this running time is compatible with an additional constraint that the copy of H found in G should be independent in a given linear matroid."], "citing_paper_content": {"title": "Determinantal Sieving", "abstract": "We introduce a new, remarkably powerful tool to the toolbox of algebraic FPT algorithms, determinantal sieving. Given a polynomial P(x_1, ..., x_n) over a field F of characteristic 2, on a set of variables X = {x_1, ..., x_n}, and a linear matroid M = (X, I) over F of rank k, in 2^k evaluations of P we can sieve for those terms in the monomial expansion of P which are multilinear and whose support is a basis for M. The known tools of multilinear detection and constrained multilinear detection then correspond to the case where M is a uniform matroid, respectively the truncation of a disjoint union of uniform matroids. More generally, let the odd support of a monomial m be the set of variables which have odd degree in m. Using 2^k evaluations of P, we can sieve for those terms m whose odd support spans M. Applying this framework to well-known efficiently computable polynomial families allows us to simplify, generalize and improve on a range of algebraic FPT algorithms, such as:"}, "cited_paper_content": {"title": "Faster Algorithms For Finding And Counting Subgraphs", "abstract": "In this paper we study a natural generalization of both {\\sc $k$-Path} and {\\sc $k$-Tree} problems, namely, the {\\sc Subgraph Isomorphism} problem. In the {\\sc Subgraph Isomorphism} problem we are given two graphs $F$ and $G$ on $k$ and $n$ vertices respectively as an input, and the question is whether there exists a subgraph of $G$ isomorphic to $F$. We show that if the treewidth of $F$ is at most $t$, then there is a randomized algorithm for the {\\sc Subgraph Isomorphism} problem running in time $\\cO^*(2^k n^{2t})$. To do so, we associate a new multivariate {Homomorphism polynomial} of degree at most $k$ with the {\\sc Subgraph Isomorphism} problem and construct an arithmetic circuit of size at most $n^{\\cO(t)}$ for this polynomial. Using this polynomial, we also give a deterministic algorithm to count the number of homomorphisms from $F$ to $G$ that takes $n^{\\cO(t)}$ time and uses polynomial space. 
For the counting version of the {\\sc Subgraph Isomorphism} problem, where the objective is to count the number of distinct subgraphs of $G$ that are isomorphic to $F$, we give a deterministic algorithm running in time and space $\\cO^*({n \\choose k/2}n^{2p})$ or ${n\\choose k/2}n^{\\cO(t \\log k)}$. We also give an algorithm running in time $\\cO^{*}(2^{k}{n \\choose k/2}n^{5p})$ and taking space polynomial in $n$. Here $p$ and $t$ denote the pathwidth and the treewidth of $F$, respectively. Thus our work not only improves on known results on {\\sc Subgraph Isomorphism} but it also extends and generalize most of the known results on {\\sc $k$-Path} and {\\sc $k$-Tree}."}, "keywords": ["Subgraph Isomorphism"], "citation_intent": "method"} {"citing_id": "2303.15901v1", "cited_id": "2003.05991", "section_title": "Introduction", "citation": "A DAE is a type of DNN that is trained to detect and discard noise from input data and reconstruct it back to its original form #REFR .", "text_before_citation": ["Several defence strategies have been proposed to overcome this problem, including defensive distillation, which has been successful in defending DNNs from adversarial attacks in run-time settings #OTHEREFR .", "Nevertheless, one of the drawbacks of the defensive distillation method is that it remains susceptible to data poisoning attacks, in which adversaries aim to impair the model's performance by inserting erroneous data entries into the training set.", "These harmful data points could be carefully crafted to be close to the model's decision boundary to circumvent the countermeasures offered by defensive distillation.", "This study presents a novel method for robustifying a distilled network against data poisoning adversarial threats by integrating a denoising autoencoder (DAE) #OTHEREFR in the defensive distillation cycle.", "Defensive distillation involves training two DNNs, the instructor model (the first model), and distilling its knowledge into the student 
model (the second or distilled model) to make it robust against adversarial examples and previously unseen input #OTHEREFR ."], "text_after_citation": ["This paper is motivated by the fact that the instructor model is not immune to data poisoning adversarial attacks.", "Although the student model has more latitude to reject input modifications because it leverages the \"distilled\" version of the training data, where the training examples are transformed by a temperature parameter T #OTHEREFR .", "Thus, minimising the instructor model's susceptibility to data poisoning attacks is pivotal for developing a reliable, distilled DNN.", "To achieve this, we designed a DAE to detect and reconstruct poisonous adversarial inputs in the training data.", "The defensive distillation method already offers a strong foundation, but when combined with a DAE, the protection mechanism against data poisoning adversarial attacks is significantly strengthened."], "citing_paper_content": {"title": "Denoising Autoencoder-Based Defensive Distillation As An Adversarial Robustness Algorithm", "abstract": "Adversarial attacks significantly threaten the robustness of deep neural networks (DNNs). Despite the multiple defensive methods employed, they are nevertheless vulnerable to poison attacks, where attackers meddle with the initial training data. In order to defend DNNs against such adversarial attacks, this work proposes a novel method that combines the defensive distillation mechanism with a denoising autoencoder (DAE). This technique tries to lower the sensitivity of the distilled model to poison attacks by spotting and reconstructing poisonous adversarial inputs in the training data. We added carefully created adversarial samples to the initial training data to assess the proposed method's performance. Our experimental findings demonstrate that our method successfully identified and reconstructed the poisonous inputs while also considering enhancing the DNN's resilience. 
The proposed approach provides a potent and robust defence mechanism for DNNs in various applications where data poisoning attacks are a concern. Thus, the defensive distillation technique's limitation posed by poisonous adversarial attacks is overcome."}, "cited_paper_content": {"title": "Autoencoders", "abstract": "An autoencoder is a specific type of a neural network, which is mainly designed to encode the input into a compressed and meaningful representation, and then decode it back such that the reconstructed input is as similar as possible to the original one. This chapter surveys the different types of autoencoders that are mainly used today. It also describes various applications and use-cases of autoencoders."}, "keywords": ["DNN", "input data"], "citation_intent": "background"} {"citing_id": "2303.05648v1", "cited_id": "1906.06365", "section_title": "B. Machine Learning-Based Methods", "citation": "In #REFR , a function-aggregation method is proposed, which assumes that one utility function can capture only a part of users' preferences, and the whole preferences can be infinitely approximated by introducing enough utility functions.", "text_before_citation": ["Recently, many machine learning-based methods have been proposed to specially address choice problems.", "These methods do not need an extra transformation process and avoid possible preference distortions.", "For example, the pointer neural network (PNN) #OTHEREFR , which is an encoder-decoder network based on recurrent neural network and attention mechanism, is utilized as the mapping function between the available items in the markets and users' choices.", "The available items are fed into PNN one by one, and PNN would point to the item that is predicted as the best choice #OTHEREFR ."], "text_after_citation": ["The function-aggregation method adopts multiple utility functions simultaneously, and the total utility is equal to their weighted sum.", "The restricted Boltzmann machine is adopted in 
#OTHEREFR to model context effects.", "It is proved that the restricted Boltzmann machine can be translated into the MNL formulation, together with an additional term accounting for the comparisons among items."], "citing_paper_content": {"title": "Pacos: Modeling Users' Interpretable And Context-Dependent Choices In Preference Reversals", "abstract": "Choice problems refer to selecting the best choices from several items, and learning users' preferences in choice problems is of great significance in understanding the decision making mechanisms and providing personalized services. Existing works typically assume that people evaluate items independently. In practice, however, users' preferences depend on the market in which items are placed, which is known as context effects; and the order of users' preferences for two items may even be reversed, which is referred to as preference reversals. In this work, we identify three factors contributing to context effects: users' adaptive weights, the inter-item comparison, and display positions. We propose a context-dependent preference model named Pacos as a unified framework for addressing three factors simultaneously, and consider two design methods including an additive method with high interpretability and an ANN-based method with high accuracy. We study the conditions for preference reversals to occur and provide a theoretical proof of the effectiveness of Pacos in addressing preference reversals. Experimental results show that the proposed method has better performance than prior works in predicting users' choices, and has great interpretability to help understand the cause of preference reversals."}, "cited_paper_content": {"title": "Predicting Choice With Set-Dependent Aggregation", "abstract": "Providing users with alternatives to choose from is an essential component in many online platforms, making the accurate prediction of choice vital to their success. 
A renewed interest in learning choice models has led to significant progress in modeling power, but most current methods are either limited in the types of choice behavior they capture, cannot be applied to large-scale data, or both. Here we propose a learning framework for predicting choice that is accurate, versatile, theoretically grounded, and scales well. Our key modeling point is that to account for how humans choose, predictive models must capture certain set-related invariances. Building on recent results in economics, we derive a class of models that can express any behavioral choice pattern, enjoy favorable sample complexity guarantees, and can be efficiently trained end-to-end. Experiments on three large choice datasets demonstrate the utility of our approach."}, "keywords": ["users' preferences", "whole preferences"], "citation_intent": "method"} {"citing_id": "2304.12486v2", "cited_id": "1902.06705", "section_title": "Threat Model", "citation": "We define this threat model according to Carlini et al. #REFR by defining the goals, capabilities, and knowledge of the target AI system, that the adversary has.", "text_before_citation": ["In order to have correct metrics, and to assess the validity of our approach in a real-world context, it is essential that we define a threat model using a precise taxonomy."], "text_after_citation": ["Let C(x) : X \u2192 Y be a classifier, and x \u2208 X \u2282 R^d an input to the classifier.", "Let y_true \u2208 Y = {1, 2, ..., N} be the ground truth of the input x, i.e. 
the true class label of x among N classes.", "We call y_pred = C(x) the predicted label for x.", "In our study, we perform untargeted attacks, i.e., the adversary's goal is to generate an adversarial example x_adv for x that is misclassified by C."], "citing_paper_content": {"title": "Evaluating Adversarial Robustness On Document Image Classification", "abstract": "Adversarial attacks and defenses have gained increasing interest on computer vision systems in recent years, but as of today, most investigations are limited to natural images. However, many artificial intelligence models actually handle documentary data, which is very different from real world images. Hence, in this work, we try to apply the adversarial attack philosophy on documentary data and to protect models against such attacks. Our methodology is to implement untargeted gradient-based, transfer-based and score-based attacks and evaluate the impact of defenses such as adversarial training, JPEG input compression and grey-scale input transformation on the robustness of ResNet50 and EfficientNetB0 model architectures. To the best of our knowledge, no such work has been conducted by the community in order to study the impact of these attacks on the document image classification task."}, "cited_paper_content": {"title": "On Evaluating Adversarial Robustness", "abstract": "Correctly evaluating defenses against adversarial examples has proven to be extremely difficult. Despite the significant amount of recent work attempting to design defenses that withstand adaptive attacks, few have succeeded; most papers that propose defenses are quickly shown to be incorrect. We believe a large contributing factor is the difficulty of performing security evaluations. In this paper, we discuss the methodological foundations, review commonly accepted best practices, and suggest new methods for evaluating defenses to adversarial examples. 
We hope that both researchers developing defenses as well as readers and reviewers who wish to understand the completeness of an evaluation consider our advice in order to avoid common pitfalls."}, "keywords": ["target AI system"], "citation_intent": "method"} {"citing_id": "2304.02742v1", "cited_id": "1703.10593", "section_title": "A. Image-To-Image Translation", "citation": "One of the most representative GAN-based methods is CycleGAN #REFR , which uses cyclic consistency losses to constrain the entire model into two generative networks.", "text_before_citation": ["The primary idea behind the image-to-image translation task is to establish a mapping between images from two different domains.", "In medical image translation, the main connection between two domains is the underlying anatomical structure information.", "The previous works on the image translation are mainly based on GANs #OTHEREFR and VAEs #OTHEREFR ."], "text_after_citation": ["It can translate a source domain image to the target domain and vice versa.", "Another representative work is the Geometry-consistent generative adversarial network (GcGAN) #OTHEREFR , which provides unilateral unsupervised mapping.", "GcGAN maps the original image into two different predefined geometric transformations and generates two images in a new domain under the corresponding geometric consistency constraints.", "However, GAN-based models have some inherent drawbacks since they are trained by two networks (generator and discriminator) playing against each other, which can render the training difficult and unstable.", "For VAE-based methods, one of the most representative works on unpaired image translation is UNIT #OTHEREFR , which assumes that two domains share a common latent space and the corresponding images in both domains are mapped to the same latent code."], "citing_paper_content": {"title": "Zero-Shot Medical Image Translation Via Frequency-Guided Diffusion Models", "abstract": "Recently, the diffusion model 
has emerged as a superior generative model that can produce high-quality images with excellent realism. There is a growing interest in applying diffusion models to image translation tasks. However, for medical image translation, the existing diffusion models are deficient in accurately retaining structural information since the structure details of source domain images are lost during the forward diffusion process and cannot be fully recovered through learned reverse diffusion, while the integrity of anatomical structures is extremely important in medical images. Training and conditioning diffusion models using paired source and target images with matching anatomy can help. However, such paired data are very difficult and costly to obtain, and may also reduce the robustness of the developed model to out-of-distribution testing data. We propose a frequency-guided diffusion model (FGDM) that employs frequency-domain filters to guide the diffusion model for structure-preserving image translation. Based on its design, FGDM allows zero-shot learning, as it can be trained solely on the data from the target domain, and used directly for source-to-target domain translation without any exposure to the source-domain data during training. We trained FGDM solely on the head-and-neck CT data, and evaluated it on both head-and-neck and lung cone-beam CT (CBCT)-to-CT translation tasks. FGDM outperformed the state-of-the-art methods (GAN-based, VAE-based, and diffusion-based) in all metrics, showing its significant advantages in zero-shot medical image translation."}, "cited_paper_content": {"title": "Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks", "abstract": "Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. 
We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach."}, "keywords": ["representative GAN-based methods"], "citation_intent": "method"} {"citing_id": "2304.01585v1", "cited_id": "1802.00761", "section_title": "Networks", "citation": "The networks use a softmax classifier for person identification, whereas a sigmoid layer for soft-biometrics identification #REFR .", "text_before_citation": ["OBDs from each human limb are allotted a branch of four convolutional layers.", "The convolutional blocks extract descriptive local features from the input OBD data, while the subsequent layers assimilate the global view of the extracted features.", "Prior to concatenation, the outputs of each convolutional block are processed by a fully connected layer or LSTM layer depending on the network type.", "Concatenation is followed by a two-layered fully connected MLP and a classifier layer for tCNN-IMU_MLP.", "In the case of tCNN-IMU_LSTM, the concatenation is followed by two LSTM layers and a classifier layer."], "text_after_citation": ["Soft-biometrics of individuals describe or categorise an individual or a group of individuals #OTHEREFR , e.g., Gender Identity, Age, Weight and Height.", "Attribute representation is a method of 
describing the data semantically #OTHEREFR . An attribute vector a represents a set of soft-biometrics.", "A similar combination of soft-biometrics could represent different persons with similar features.", "The Nearest Neighbour Approach (NNA) is used for soft-biometrics-based identification.", "The NNA calculates the distance between a prediction attribute vector a and an attribute representation A, with all the different combinations of soft-biometrics."], "citing_paper_content": {"title": "Multi-Channel Time-Series Person And Soft-Biometric Identification", "abstract": "Multi-channel time-series datasets are popular in the context of human activity recognition (HAR). On-body device (OBD) recordings of human movements are often preferred for HAR applications not only for their reliability but as an approach for identity protection, e.g., in industrial settings. Contradictory, the gait activity is a biometric, as the cyclic movement is distinctive and collectable. In addition, the gait cycle has proven to contain soft-biometric information of human groups, such as age and height. Though general human movements have not been considered a biometric, they might contain identity information. This work investigates person and soft-biometrics identification from OBD recordings of humans performing different activities using deep architectures. Furthermore, we propose the use of attribute representation for soft-biometric identification. We evaluate the method on four datasets of multi-channel time-series HAR, measuring the performance of a person and soft-biometrics identification and its relation concerning performed activities. We find that person identification is not limited to gait activity. The impact of activities on the identification performance was found to be training and dataset specific. 
Soft-biometric based attribute representation shows promising results and emphasises the necessity of larger datasets."}, "cited_paper_content": {"title": "Learning Attribute Representation For Human Activity Recognition", "abstract": "Attribute representations became relevant in image recognition and word spotting, providing support under the presence of unbalance and disjoint datasets. However, for human activity recognition using sequential data from on-body sensors, human-labeled attributes are lacking. This paper introduces a search for attributes that represent favorably signal segments for recognizing human activities. It presents three deep architectures, including temporal-convolutions and an IMU centered design, for predicting attributes. An empirical evaluation of random and learned attribute representations, as well as of the networks, is carried out on two datasets, outperforming the state-of-the-art."}, "keywords": ["soft-biometrics identification", "softmax classifier"], "citation_intent": "method"} {"citing_id": "2303.02223v1", "cited_id": "2003.00803", "section_title": "Related Work", "citation": "Due to these reasons, techniques such as ML and deep learning created a new research direction in the financial literature #REFR .", "text_before_citation": ["Statistical models make some simplifying assumptions to be able to obtain some theoretical guarantees and thus have a simple structure and potentially lower performance compared to machine learning techniques.", "Machine learning models are more general, make fewer simplifying assumptions, and offer better model-fitting capabilities."], "text_after_citation": ["To forecast the future value of financial assets and find the reason for these assets' behavior, numerous machine learning (ML) techniques were employed, such as SVM #OTHEREFR , Support Vector Regression (SVR) #OTHEREFR , Random Forest (RF) #OTHEREFR , and convolutional neural networks (CNN) #OTHEREFR .", "These works show that it is possible to 
use ML techniques to make more accurate forecasts and as alternative approaches to conventional techniques for investigating the behavior of financial assets. #OTHEREFR", "(2019) applied several ML techniques such as single hidden layer feed-forward neural network, MLP, autoencoders, and bag-of-features algorithms to forecast the mid-price movement of the stock price.", "The results indicated that the mentioned ML techniques could forecast price movement. #OTHEREFR", "(2019) organized the stock price prediction techniques into four categories, namely statistical methods, pattern recognition, machine learning, and sentiment analysis."], "citing_paper_content": {"title": "Feature Selection For Forecasting *", "abstract": "This work investigates the importance of feature selection for improving the forecasting performance of machine learning algorithms for financial data. Artificial neural networks (ANN), convolutional neural networks (CNN), long-short term memory (LSTM) networks, as well as linear models were applied for forecasting purposes. The Feature Selection with Annealing (FSA) algorithm was used to select the features from about 1000 possible predictors obtained from 26 technical indicators with specific periods and their lags. In addition to this, the Boruta feature selection algorithm was applied as a baseline feature selection method. The dependent variables consisted of daily logarithmic returns and daily trends of ten financial data sets, including cryptocurrency and different stocks. Experiments indicate that the FSA algorithm increased the performance of ML models regardless of the problem type. The FSA hybrid machine learning models showed better performance in 10 out of 10 data sets for regression and 8 out of 10 data sets for classification. None of the hybrid Boruta models outperformed the hybrid FSA models. However, the BOR-CNN model performance was comparable to the best model for 4 out of 10 data sets for regression estimates. 
BOR-LR and BOR-CNN models showed comparable performance with the best hybrid FSA models in 2 out of 10 datasets for classification. FSA was observed to improve model performance in terms of both better performance metrics and decreased computation time, by providing a lower-dimensional input feature space."}, "cited_paper_content": {"title": "Ascertaining Price Formation In Cryptocurrency Markets With Deeplearning", "abstract": "The cryptocurrency market is amongst the fastest-growing of all the financial markets in the world. Unlike traditional markets, such as equities, foreign exchange and commodities, the cryptocurrency market is considered to have larger volatility and illiquidity. This paper is inspired by the recent success of using deep learning for stock market prediction. In this work, we analyze and present the characteristics of the cryptocurrency market in a high-frequency setting. In particular, we applied a deep learning approach to predict the direction of the mid-price changes on the upcoming tick. We monitored live tick-level data from $8$ cryptocurrency pairs and applied both statistical and machine learning techniques to provide a live prediction. 
We reveal that promising results are possible for cryptocurrencies, and in particular, we achieve a consistent $78\\%$ accuracy on the prediction of the mid-price movement on live exchange rate of Bitcoins vs US dollars."}, "keywords": ["deep learning"], "citation_intent": "background"} {"citing_id": "2303.04178v1", "cited_id": "1807.03819", "section_title": "Resources Needed For Picante", "citation": "To generate 2 22 reduced samples, 2 #REFR /n matrices must be reduced (one n \u00d7 n matrix produces 2n reduced samples, see \u00a7 4.1).", "text_before_citation": ["The total cost of PICANTE is the sum of the resources needed to preprocess data, train the model, and recover the secret.", "Data preprocessing is the most resource intensive part of PICANTE."], "text_after_citation": ["As the dimension increases, the number of matrices scales down linearly.", "To avoid the exponential cost of BKZ-reduction #OTHEREFR , we fix the block size at most \u03b2 = 20 so that the preprocessing step scales polynomially with n and log q.", "See discussion of the parameter choices for preprocessing in \u00a7 5.4 below.", "In practice, to save resources, we choose smaller \u03b2 for larger dimensions.", "Table 5 reports the preprocessing resources (in cpu hours) for each n. Our preprocessing is fully parallelizable."], "citing_paper_content": {"title": "Salsa Picante: A Machine Learning Attack On Lwe With Binary Secrets", "abstract": "The Learning With Errors (LWE) problem is one of the major hard problems in post-quantum cryptography. For example, 1) the only Key Exchange Mechanism KEM standardized by NIST [14] is based on LWE; and 2) current publicly available Homomorphic Encryption (HE) libraries are based on LWE. NIST KEM schemes use random secrets, but homomorphic encryption schemes use binary or ternary secrets, for efficiency reasons. In particular, sparse binary secrets have been proposed, but not standardized [2], for HE. 
Prior work SALSA [49] demonstrated a new machine learning attack on sparse binary secrets for the LWE problem in small dimensions (up to n = 128) and low Hamming weights (up to h = 4). However, this attack assumed access to millions of LWE samples, and was not scaled to higher Hamming weights or dimensions. Our attack, PICANTE, reduces the number of samples required to just m = 4n samples. Moreover, it can recover secrets with much larger dimensions (up to 350) and Hamming weights (roughly n/10, or h = 33 for n = 300). To achieve this, we introduce a preprocessing step which allows us to generate the training data from a linear number of samples and changes the distribution of the training data to improve transformer training. We also improve the distinguisher/secret recovery methods of SALSA and introduce a novel cross-attention recovery mechanism which allows us to read-off the secret directly from the trained models."}, "cited_paper_content": {"title": "Universal Transformers", "abstract": "Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. 
We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions, UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset."}, "keywords": ["2n reduced samples", "one n \u00d7"], "citation_intent": "background"} {"citing_id": "2304.06868v1", "cited_id": "1910.11664", "section_title": "Results And Discussion", "citation": "This result aligns with the intuition that the Fourier tempogram is the most similar to the CQT representation in #REFR as it tends to have upper harmonics but not sub-harmonics.", "text_before_citation": ["Since calibration results correlate with model performance, i.e.", "a model with more linear and less disperse calibration curves leads to more consistent estimations #OTHEREFR , we look at calibration results for each model, depicted in Figure 3 .", "The Fourier tempogram shows more consistent calibration results across data distributions and leads to smoother curves for both synthetic and real data."], "text_after_citation": ["On the contrary, the autocorrelation tempogram results present the most variation, meaning that for similar tempo values, the model struggles to assign the same tempo output.", "Surprisingly, the hybrid tempogram does worse than the Fourier tempogram, 
which suggests that the multiplication of the autocorrelation and Fourier tempograms removes harmonic information that the model relies on to perform tempo estimation.", "The experiments with the different data distributions show that the model does not benefit from the wider range of a log-uniform distribution, and instead is able to extrapolate from a log-normal distribution centered in the lower tempi end (70 bpm).", "When we look at this result closely, an explanation may be found in Figure 2 .", "Given a tempo distribution expressed in linear BPM values (Figure 2 top), the model will be fed a slightly different tempo distribution ( Figure 2 bottom) because of the logarithmic effect of the input tempogram."], "citing_paper_content": {"title": "Tempo Vs. Pitch: Understanding Self-Supervised Tempo Estimation", "abstract": "Self-supervision methods learn representations by solving pretext tasks that do not require human-generated labels, alleviating the need for time-consuming annotations. These methods have been applied in computer vision, natural language processing, environmental sound analysis, and recently in music information retrieval, e.g. for pitch estimation. Particularly in the context of music, there are few insights about the fragility of these models regarding different distributions of data, and how they could be mitigated. In this paper, we explore these questions by dissecting a self-supervised model for pitch estimation adapted for tempo estimation via rigorous experimentation with synthetic data. Specifically, we study the relationship between the input representation and data distribution for self-supervised tempo estimation."}, "cited_paper_content": {"title": "Spice: Self-Supervised Pitch Estimation", "abstract": "We propose a model to estimate the fundamental frequency in monophonic audio, often referred to as pitch estimation. 
We acknowledge the fact that obtaining ground truth annotations at the required temporal and frequency resolution is a particularly daunting task. Therefore, we propose to adopt a self-supervised learning technique, which is able to estimate (relative) pitch without any form of supervision. The key observation is that pitch shift maps to a simple translation when the audio signal is analysed through the lens of the constant-Q transform (CQT). We design a self-supervised task by feeding two shifted slices of the CQT to the same convolutional encoder, and require that the difference in the outputs is proportional to the corresponding difference in pitch. In addition, we introduce a small model head on top of the encoder, which is able to determine the confidence of the pitch estimate, so as to distinguish between voiced and unvoiced audio. Our results show that the proposed method is able to estimate pitch at a level of accuracy comparable to fully supervised models, both on clean and noisy audio samples, yet it does not require access to large labeled datasets"}, "keywords": ["Fourier tempogram"], "citation_intent": "result"} {"citing_id": "2304.04445v1", "cited_id": "1202.2336", "section_title": "Prdos", "citation": "To ensure that this oracle has query time O(log h), we generalize the argument of Wulff-Nilsen #REFR from full oracles to partial ones.", "text_before_citation": ["The two paths P_{u,u}, P_{v,v} have the property that", "w(P_{u,u}), w(P_{v,v}) \u2264 h \u2022 d_G(u, v).", "Intuitively, given a query (u, v), the partial PRDO provides a u \u2212 v path P_{u,v} (rather than paths P_{u,u}, P_{v,v}) if and only if the Thorup-Zwick oracle does so while employing only the first h levels of the oracle.", "We note that using the first h levels of the TZ oracle indeed guarantees almost all the aforementioned properties.", "The only problem is that the query time of such partial oracle is O(h), rather than O(log h)."], "text_after_citation": ["In the second step, to
find a low-stretch path between u and v , we construct an interactive emulator for the set S.", "An interactive emulator is an oracle that similarly to PRDOs, provides approximate shortest paths between two queried vertices.", "However, as opposed to a PRDO, the provided path does not use actual edges in the original graph, but rather employs virtual edges that belong to some sparse low-stretch emulator of the graph.", "Our construction of interactive emulator is based on the distance oracle of Mendel and Naor #OTHEREFR . Recall that the Mendel-Naor oracle is not path-reporting.", "Built for N-vertex graph and a parameter k_1, this oracle provides stretch O(k_1), has size O(N^{1+1/k_1}), and has query time O(1)."], "citing_paper_content": {"title": "Path-Reporting Distance Oracles With Near-Logarithmic Stretch And Linear Size", "abstract": "Given an n-vertex undirected graph G = (V, E, w), and a parameter k \u2265 1, a path-reporting distance oracle (or PRDO) is a data structure of size S(n, k), that given a query (u, v) \u2208 V^2, returns an f(k)-approximate shortest u \u2212 v path P in G within time q(k) + O(|P|). Here S(n, k), f(k) and q(k) are arbitrary (hopefully slowly-growing) functions. A landmark PRDO due to Thorup and Zwick [54] has S(n, k) = O(k \u2022 n^{1+1/k}), f(k) = 2k \u2212 1 and q(k) = O(k). Wulff-Nilsen [57] devised an improved query algorithm for this oracle with q(k) = O(log k). The size of this oracle is \u2126(n log n) for all k. Elkin and Pettie [28] devised a PRDO with S(n, k) = O(log k \u2022 n^{1+1/k}), f(k) = O(k^{log_{4/3} 7}) and q(k) = O(log k). Neiman and Shabat [44] recently devised an improved PRDO with S(n, k) = O(n^{1+1/k}), f(k) = O(k^{log_{4/3} 4}) and q(k) = O(log k).
These oracles (of [28, 44]) can be much sparser than O(n log n) (the oracle of [44] can have linear size), but their stretch is polynomially larger than the optimal bound of 2k \u2212 1. On the other hand, a long line of non-path-reporting distance oracles culminated in a celebrated result by Chechik [12], in which S(n, k) = O(n^{1+1/k}), f(k) = 2k \u2212 1 and q(k) = O(1). In this paper we make dramatic progress in bridging the gap between path-reporting and non-path-reporting distance oracles. In particular, we devise a PRDO with size S(n, k) = O k log log n log n"}, "cited_paper_content": {"title": "Approximate Distance Oracles With Improved Query Time", "abstract": "Given an undirected graph $G$ with $m$ edges, $n$ vertices, and non-negative edge weights, and given an integer $k\\geq 2$, we show that a $(2k-1)$-approximate distance oracle for $G$ of size $O(kn^{1 + 1/k})$ and with $O(\\log k)$ query time can be constructed in $O(\\min\\{kmn^{1/k},\\sqrt km + kn^{1 + c/\\sqrt k}\\})$ time for some constant $c$. This improves the $O(k)$ query time of Thorup and Zwick. Furthermore, for any $0<\\epsilon \\leq 1$, we give an oracle of size $O(kn^{1 + 1/k})$ that answers $((2 + \\epsilon)k)$-approximate distance queries in $O(1/\\epsilon)$ time. At the cost of a $k$-factor in size, this improves the $128k$ approximation achieved by the constant query time oracle of Mendel and Naor and approaches the best possible tradeoff between size and stretch, implied by a widely believed girth conjecture of Erd\\H{o}s.
We can match the $O(n^{1 + 1/k})$ size bound of Mendel and Naor for any constant $\\epsilon>0$ and $k = O(\\log n/\\log\\log n)$."}, "keywords": ["full oracles"], "citation_intent": "method"} {"citing_id": "2305.00724v1", "cited_id": "1811.03508", "section_title": "Results And Discussion", "citation": "This is somewhat contrary to the results obtained in LDP paper #REFR , but it is apparently an advantage of RF, since it considers each feature separately, while calculating tree splits.", "text_before_citation": ["Additionally, for each value, we also calculate the absolute average difference between its result and the best result for a given hyperparameter on each dataset.", "The lower the absolute difference, the better, since it means that a given hyperparameter value, on average, gives the best results among all its possible values. results are presented in Table 3 .", "For the number of bins, we can clearly select 50 bins as the optimal value.", "While 30 bins gave the best results the same number of times, on average they performed worse compared to the optimal hyperparameter value.", "Similarly, for normalization it is evident that we do not need to perform any kind of normalization, since using no normalization obtained the best results on majority of datasets and on average."], "text_after_citation": ["For aggregation method, the results are very close, both for number of wins and average difference compared to the best result.", "In this case, the choice does not matter that much, and we choose the simpler histogram method.", "The linear scale obtained much better results on average than the log scale, so the choice is obvious.", "Overall, this means that we can confidently recommend default values for all LDP hyperparameters, and tuning them is not particularly helpful.", "This dramatically decreases the computational cost, while having little effect on accuracy on average, which is a desirable tradeoff in a baseline method. 
Table 3 ."], "citing_paper_content": {"title": "Strengthening Structural Baselines For Graph Classification Using Local Topological Profile", "abstract": "We present the analysis of the topological graph descriptor Local Degree Profile (LDP), which forms a widely used structural baseline for graph classification. Our study focuses on model evaluation in the context of the recently developed fair evaluation framework, which defines rigorous routines for model selection and evaluation for graph classification, ensuring reproducibility and comparability of the results. Based on the obtained insights, we propose a new baseline algorithm called Local Topological Profile (LTP), which extends LDP by using additional centrality measures and local vertex descriptors. The new approach provides the results outperforming or very close to the latest GNNs for all datasets used. Specifically, state-of-the-art results were obtained for 4 out of 9 benchmark datasets. We also consider computational aspects of LDP-based feature extraction and model construction to propose practical improvements affecting execution speed and scalability. This allows for handling modern, large datasets and extends the portfolio of benchmarks used in graph representation learning. As the outcome of our work, we obtained LTP as a simple to understand, fast and scalable, still robust baseline, capable of outcompeting modern graph classification models such as Graph Isomorphism Network (GIN). We provide open-source implementation at GitHub."}, "cited_paper_content": {"title": "A Simple Yet Effective Baseline For Non-Attribute Graph Classification", "abstract": "Graphs are complex objects that do not lend themselves easily to typical learning tasks. Recently, a range of approaches based on graph kernels or graph neural networks have been developed for graph classification and for representation learning on graphs in general. 
As the developed methodologies become more sophisticated, it is important to understand which components of the increasingly complex methods are necessary or most effective. As a first step, we develop a simple yet meaningful graph representation, and explore its effectiveness in graph classification. We test our baseline representation for the graph classification task on a range of graph datasets. Interestingly, this simple representation achieves similar performance as the state-of-the-art graph kernels and graph neural networks for non-attributed graph classification. Its performance on classifying attributed graphs is slightly weaker as it does not incorporate attributes. However, given its simplicity and efficiency, we believe that it still serves as an effective baseline for attributed graph classification. Our graph representation is efficient (linear-time) to compute. We also provide a simple connection with the graph neural networks. Note that these observations are only for the task of graph classification while existing methods are often designed for a broader scope including node embedding and link prediction. The results are also likely biased due to the limited amount of benchmark datasets available. Nevertheless, the good performance of our simple baseline calls for the development of new, more comprehensive benchmark datasets so as to better evaluate and analyze different graph learning methods. 
Furthermore, given the computational efficiency of our graph summary, we believe that it is a good candidate as a baseline method for future graph classification (or even other graph learning) studies."}, "keywords": ["LDP paper"], "citation_intent": "result"} {"citing_id": "2304.01046v1", "cited_id": "2002.04326", "section_title": "Dataset", "citation": "The dataset we used was the ReClor #REFR training and validation sets, which contain 5138 samples.", "text_before_citation": [], "text_after_citation": ["Each data point has a context, a question, and 4 answer choices.", "Exactly 1 answer choice is correct; each of the other choices is either logically or factually inaccurate.", "The ReClor questions are sourced from past GMAT and LSAT exams and high-quality practice exams."], "citing_paper_content": {"title": "Polytuplet Loss: A Reverse Approach To Training Reading Comprehension And Logical Reasoning Models", "abstract": "Throughout schooling, students are tested on reading comprehension and logical reasoning. Students have developed various strategies for completing such exams, some of which are generally thought to outperform others. One such strategy involves emphasizing relative accuracy over absolute accuracy and can theoretically produce the correct answer without full knowledge of the information required to solve the question. This paper examines the effectiveness of applying such a strategy to train transfer learning models to solve reading comprehension and logical reasoning questions. The models were evaluated on the ReClor dataset, a challenging reading comprehension and logical reasoning benchmark. While previous studies targeted logical reasoning skills, we focus on a general training method and model architecture. We propose the polytuplet loss function, an extension of the triplet loss function, to ensure prioritization of learning the relative correctness of answer choices over learning the true accuracy of each choice. 
Our results indicate that models employing polytuplet loss outperform existing baseline models. Although polytuplet loss is a promising alternative to other contrastive loss functions, further research is required to quantify the benefits it may present."}, "cited_paper_content": {"title": "Reclor: A Reading Comprehension Dataset Requiring Logical Reasoning", "abstract": "Recent powerful pre-trained language models have achieved remarkable performance on most of the popular datasets for reading comprehension. It is time to introduce more challenging datasets to push the development of this field towards more comprehensive reasoning of text. In this paper, we introduce a new Reading Comprehension dataset requiring logical reasoning (ReClor) extracted from standardized graduate admission examinations. As earlier studies suggest, human-annotated datasets usually contain biases, which are often exploited by models to achieve high accuracy without truly understanding the text. In order to comprehensively evaluate the logical reasoning ability of models on ReClor, we propose to identify biased data points and separate them into EASY set while the rest as HARD set. Empirical results show that the state-of-the-art models have an outstanding ability to capture biases contained in the dataset with high accuracy on EASY set. However, they struggle on HARD set with poor performance near that of random guess, indicating more research is needed to essentially enhance the logical reasoning ability of current models.
Jia et al.", "text_before_citation": ["And most of efforts have been devoted to the last two directions.", "For strategies of data valuation, the commonly-used approach is based on marginal contribution (MC).", "As the basic method for data valuation, LOO is used to evaluate the value of training data point X i by observational change of model performance when leaving out that data point from the training dataset.", "To overcome inaccuracy and strict desirability of LOO, SV originated from Cooperative Game Theory (CGT) is widely used to measure the contribution of data #OTHEREFR .", "Considering the joining sequence of each training data point, SV needs to calculate the marginal performance of all possible subsets in which the time complexity is exponential."], "text_after_citation": ["#OTHEREFR propose an efficient approximation approach to estimate the exact SV for K-nearest-neighor (KNN).", "Other works also apply these to other fields such as collaboration machine learning #OTHEREFR .", "Though several works devotes to accelerate the computation of traditional SV, it is still impossible to use these approaches in practical due to their inevitable expensive computation.", "When it comes to performance metric of data valuation, some recent studies start to consider data-driven metric rather than model-driven metric. Tay et al.", "#OTHEREFR propose Cosine Gradient Shapley Value (CGSV) which leverages cosine similarity between gradient vectors of the model to evaluate the contribution of each participant in collaborative machine learning."], "citing_paper_content": {"title": "Matching-Based Data Valuation For Generative Model", "abstract": "Data valuation is critical in machine learning, as it helps enhance model transparency and protect data properties. Existing data valuation methods have primarily focused on discriminative models, neglecting deep generative models that have recently gained considerable attention. 
Similar to discriminative models, there is an urgent need to assess data contributions in deep generative models as well. However, previous data valuation approaches mainly relied on discriminative model performance metrics and required model retraining. Consequently, they cannot be applied directly and efficiently to recent deep generative models, such as generative adversarial networks and diffusion models, in practice. To bridge this gap, we formulate the data valuation problem in generative models from a similarity-matching perspective. Specifically, we introduce Generative Model Valuator (GMValuator), the first model-agnostic approach for any generative models, designed to provide data valuation for generation tasks. We have conducted extensive experiments to demonstrate the effectiveness of the proposed method. To the best of their knowledge, GMValuator is the first work that offers a training-free, post-hoc data valuation strategy for deep generative models. Keywords Generative Model \u2022 Data Valuation 1 Introduction Recently, applications of generative models have exploded across many fields. For example, ChatGPT [1] is capable of automatically generating code by utilizing a generative model, and Stable Diffusion [2] can generate photo-realistic images. As a form of unsupervised machine learning, generative models learn the probabilities distribution of the training datasets from which they sample the synthetic data, which is similar to the original training dataset. Data is regarded as the fuel for the model artificial intelligence (i.e., deep learning) [3]. Obtaining high-quality generative models requires a large amount of data, and including informative data samples related to the task is also an important ingredient in the training process of deep generative models. From a data privacy perspective, personal data is protected by various regulations including General Data Protection Regulation (GDPR) [4] and become a valuable asset. 
Privacy Calculus [5] requires assessing the trade-off between potential benefits and expected costs or risks of disclosing private data. Recent AI Generated Content further arouse people's attention[6]. Therefore, measuring the value of the collected data in generative model training is vital to achieving the training objectives. Unfortunately, there is no existing studies devoting to data valuation for generative model. Most of the previous data valuation methods are for discriminate model [7]. For example, some performance metrics (e.g., loss value, accuracy) of the machine learning model are always used to calculate the score function while the strategy of data valuation is Leave-one-out (LOO), Shapley Value (SV) and Banzhaf Index (BI), etc [8, 9, 10]."}, "cited_paper_content": {"title": "Data Shapley: Equitable Valuation Of Data For Machine Learning", "abstract": "As data becomes the fuel driving technological and economic growth, a fundamental challenge is how to quantify the value of data in algorithmic predictions and decisions. For example, in healthcare and consumer markets, it has been suggested that individuals should be compensated for the data that they generate, but it is not clear what is an equitable valuation for individual data. In this work, we develop a principled framework to address data valuation in the context of supervised machine learning. Given a learning algorithm trained on $n$ data points to produce a predictor, we propose data Shapley as a metric to quantify the value of each training datum to the predictor performance. Data Shapley value uniquely satisfies several natural properties of equitable data valuation. We develop Monte Carlo and gradient-based methods to efficiently estimate data Shapley values in practical settings where complex learning algorithms, including neural networks, are trained on large datasets. 
In addition to being equitable, extensive experiments across biomedical, image and synthetic data demonstrate that data Shapley has several other benefits: 1) it is more powerful than the popular leave-one-out or leverage score in providing insight on what data is more valuable for a given learning task; 2) low Shapley value data effectively capture outliers and corruptions; 3) high Shapley value data inform what type of new data to acquire to improve the predictor."}, "keywords": ["Monte-Carlo", "large dataset"], "citation_intent": "method"} {"citing_id": "2304.06275v1", "cited_id": "1803.08024", "section_title": "Cross-Modal Retrieval", "citation": "To capture fine-grained interplay between modalities, SCAN #REFR introduce stacked cross attention to enable attention with context from both image and sentence.", "text_before_citation": ["The extra label information is employed to boost the discrimination of common representations.", "However, the lack of annotations limited the practicability in the real world.", "Our proposed approach falls in the category of unsupervised methods.", "For example, Wang #OTHEREFR uses a twobranch neural networks to learn the joint embeddings of multimodal data.", "Inspired by hard negative mining, VSE++ #OTHEREFR uses hard negatives to improve the retrieval performance."], "text_after_citation": ["Recently, motivated by the powerful learning ability of graph model, GSMN #OTHEREFR and SGRAF #OTHEREFR construct graph structure for multimodal data to benefit the learning of finegrained correspondence.", "Although existing works have achieved remarkable results, they usually depend on the correct correspondence among multimodal data and cannot tackle the noise issue.", "Thus, it is significant to explore how to learn cross-modal retrieval with noisy correspondence, but which is rarely touched in previous studies."], "citing_paper_content": {"title": "Noisy Correspondence Learning With Meta Similarity Correction", "abstract": "Despite 
the success of multimodal learning in cross-modal retrieval task, the remarkable progress relies on the correct correspondence among multimedia data. However, collecting such ideal data is expensive and time-consuming. In practice, most widely used datasets are harvested from the Internet and inevitably contain mismatched pairs. Training on such noisy correspondence datasets causes performance degradation because the cross-modal retrieval methods can wrongly enforce the mismatched data to be similar. To tackle this problem, we propose a Meta Similarity Correction Network (MSCN) to provide reliable similarity scores. We view a binary classification task as the meta-process that encourages the MSCN to learn discrimination from positive and negative meta-data. To further alleviate the influence of noise, we design an effective data purification strategy using meta-data as prior knowledge to remove the noisy samples. Extensive experiments are conducted to demonstrate the strengths of our method in both synthetic and real-world noises, including Flickr30K, MS-COCO, and Conceptual Captions. Our code is publicly available."}, "cited_paper_content": {"title": "Stacked Cross Attention For Image-Text Matching", "abstract": "In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture limited number of semantic alignments which is less interpretable.
In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves the state-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from image query, and 18.2% relatively in image retrieval with text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN."}, "keywords": ["stacked cross attention"], "citation_intent": "background"} {"citing_id": "2304.00948v1", "cited_id": "2002.05227", "section_title": "Ii. Related Work", "citation": "In #REFR , the Riemannian Brownian motion model was constructed over the manifold by using VAE (R-VAE).", "text_before_citation": ["A normal prior, on the other hand, causes the approximate posterior to be over-regularized, leading to a less effective learned latent representation of the input.", "Manifold learning.", "The goal of manifold learning approaches is to infer latent representations that reflect the inherent geometric structure of data.", "Such methods are mainly used for data visualization and dimensionality reduction and recently are used to improve the performance of generative models #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR .", "Traditional non-linear manifold learning methods like local tangent space alignment #OTHEREFR and linear embedding #OTHEREFR maintain local structures and extract the features of data manifolds."], "text_after_citation": ["The Riemannian structure overcomes the identifiability problem by providing a meaningful representation which is not affected by reparametrizations.", "However, Riemannian manifold-based models cannot accurately find the shortest path between 
two samples that are far away from each other.", "Generally, Riemannian geometry methods use Ordinary Differential Equation (ODE) to find the shortest interpolation curve in the latent space.", "ODE is computationally expensive and cannot find the shortest interpolation curve more consistently than straight-length methods.", "The interpolation among data samples is widely used to attain a smooth transformation from one sample to another."], "citing_paper_content": {"title": "Vtae: Variational Transformer Autoencoder With Manifolds Learning", "abstract": "Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables and these models use a non-linear function (generator) to map latent samples into the data space. On the other hand, the non-linearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning. This weak projection, however, can be addressed by a Riemannian metric, and we show that geodesics computation and accurate interpolations between data samples on the Riemannian manifold can substantially improve the performance of deep generative models. In this paper, a Variational spatial-Transformer AutoEncoder (VTAE) is proposed to minimize geodesics on a Riemannian manifold and improve representation learning. In particular, we carefully design the variational autoencoder with an encoded spatial-Transformer to explicitly expand the latent variable model to data on a Riemannian manifold, and obtain global context modelling. Moreover, to have smooth and plausible interpolations while traversing between two different objects' latent representations, we propose a geodesic interpolation network different from the existing models that use linear interpolation with inferior performance. 
Experiments on benchmarks show that our proposed model can improve predictive accuracy and versatility over a range of computer vision tasks, including image interpolations, and reconstructions."}, "cited_paper_content": {"title": "Variational Autoencoders With Riemannian Brownian Motion Priors", "abstract": "Variational Autoencoders (VAEs) represent the given data in a low-dimensional latent space, which is generally assumed to be Euclidean. This assumption naturally leads to the common choice of a standard Gaussian prior over continuous latent variables. Recent work has, however, shown that this prior has a detrimental effect on model capacity, leading to subpar performance. We propose that the Euclidean assumption lies at the heart of this failure mode. To counter this, we assume a Riemannian structure over the latent space, which constitutes a more principled geometric view of the latent codes, and replace the standard Gaussian prior with a Riemannian Brownian motion prior. We propose an efficient inference scheme that does not rely on the unknown normalizing factor of this prior. 
Finally, we demonstrate that this prior significantly increases model capacity using only one additional scalar parameter."}, "keywords": ["manifold", "Riemannian Brownian motion"], "citation_intent": "method"} {"citing_id": "2303.11101v2", "cited_id": "1911.05722", "section_title": "Byol Method:", "citation": "BYOL method is highly sensitive to the exponential moving average (EMA) value of a momentum encoder #REFR .", "text_before_citation": ["We used an Adam optimizer #OTHEREFR with a learning rate of 1e-3 and 2 regularization parameter 1e-6."], "text_after_citation": ["Therefore, we followed the same EMA scheduler as in the original implementation #OTHEREFR , of which the momentum starts from 0.996 to 1.", "We should also note that we did not use the EMA scheduler for the experiments with short epochs.", "SwAV method: We mostly followed the same settings as the SimCLR method, such as an SGD optimizer with a learning rate of 0.1 and weight decay of 1e-4.", "In the case of pretraining on ImageNet, we used the temperature value of 0.1 and an epsilon value of 0.02, while we froze the 3,000-dimensional prototypes for one epoch.", "For every other cases, we froze the 100-dimensional prototypes for ten epochs."], "citing_paper_content": {"title": "Coreset Sampling From Open-Set For Fine-Grained Self-Supervised Learning", "abstract": "Deep learning in general domains has constantly been extended to domain-specific tasks requiring the recognition of fine-grained characteristics. However, real-world applications for fine-grained tasks suffer from two challenges: a high reliance on expert knowledge for annotation and necessity of a versatile model for various downstream tasks in a specific domain (e.g., prediction of categories, bounding boxes, or pixel-wise annotations). Fortunately, the recent self-supervised learning (SSL) is a promising approach to pretrain a model without annotations, serving as an effective initialization for any downstream tasks. 
Since SSL does not rely on the presence of annotations, it generally utilizes a large-scale unlabeled dataset, referred to as an open-set. In this sense, we introduce a novel Open-Set Self-Supervised Learning problem under the assumption that a large-scale unlabeled open-set is available, as well as the fine-grained target dataset, during a pretraining phase. In our problem setup, it is crucial to consider the distribution mismatch between the open-set and target dataset. Hence, we propose the SimCore algorithm to sample a coreset, the subset of an open-set that has a minimum distance to the target dataset in the latent space. We demonstrate that SimCore significantly improves representation learning performance through extensive experimental settings, including eleven fine-grained datasets and seven open-sets in various downstream tasks."}, "cited_paper_content": {"title": "Momentum Contrast For Unsupervised Visual Representation Learning", "abstract": "We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks."}, "keywords": ["momentum encoder"], "citation_intent": "method"} {"citing_id": "2305.00656v1", "cited_id": "1908.06148", "section_title": "E. 
Discussion", "citation": "This is in contrast to the results reported in FiFTy #REFR , where the inference time of FiFTy can be significantly larger.", "text_before_citation": ["In summary, the results show that all three models have similar inference times, with minimal variance between runs."], "text_after_citation": ["Our models are optimized for efficient inference by utilizing cutting-edge techniques to minimize computational overhead and reduce latency.", "As a result, we were able to achieve inference times that are comparable to state-of-the-art models while still maintaining high accuracy.", "This demonstrates the effectiveness of our proposed models in real-world applications where fast and efficient inference is crucial.", "Similar to previous work #OTHEREFR , we observe that files with high entropy are difficult to classify because there is no statistical trace that the convolutional kernel can extract.", "Moreover, many files are container types that contain other files as embedded objects, e.g., pdf files that can contain embedded jpg images. As a result, classifiers behave erratically."], "citing_paper_content": {"title": "File Fragment Classification Using Light-Weight Convolutional Neural Networks", "abstract": "In digital forensics, file fragment classification is an important step toward completing the file carving process. There exist several techniques to identify the type of file fragments without relying on meta-data, such as using features like header/footer and N-gram to identify the fragment type. Recently, convolutional neural network (CNN) models have been used to build classification models to achieve this task. However, the number of parameters in CNNs tends to grow exponentially as the number of layers increases. This results in a dramatic increase in training and inference time. In this paper, we propose lightweight file fragment classification models based on depthwise separable CNNs. 
The evaluation results show that our proposed models provide faster inference time with accuracy comparable to the state-of-the-art CNN-based models. In particular, our models were able to achieve an accuracy of 79% on the FFT-75 dataset with nearly 100K parameters and 164M FLOPs, which is 4x smaller and 6x faster than the state-of-the-art classifier in the literature."}, "cited_paper_content": {"title": "Fifty: Large-Scale File Fragment Type Identification Using Neural Networks", "abstract": "We present FiFTy, a modern file type identification tool for memory forensics and data carving. In contrast to previous approaches based on hand-crafted features, we design a compact neural network architecture, which uses a trainable embedding space, akin to successful natural language processing models. Our approach dispenses with explicit feature extraction which is a bottleneck in legacy systems. We evaluate the proposed method on a novel dataset with 75 file types - the most diverse and balanced dataset reported to date. FiFTy consistently outperforms all baselines in terms of speed, accuracy and individual misclassification rates. We achieved an average accuracy of 77.5% with processing speed of approx 38 sec/GB, which is better and more than an order of magnitude faster than the previous state-of-the-art tool - Sceadan (69% at 9 min/GB). Our tool and the corresponding dataset are available publicly online."}, "keywords": ["inference time", "FiFTy"], "citation_intent": "result"} {"citing_id": "2303.09668v1", "cited_id": "1409.0575", "section_title": "B. 
Implementation Details", "citation": "For the feature extractor, we use HG-FEN with ResNet-50 as the backbone and the Imagenet-pretrained model #REFR as the initialized weights.", "text_before_citation": ["All the experiments are implemented using PyTorch and run on a desktop with 11th Gen Intel(R) Core(TM) i7-11700K @ 3.60GHz and a single NVIDIA GeForce RTX 3090 GPU.", "We directly apply the publicly available detector of YOLOX #OTHEREFR trained by #OTHEREFR for MOT17, MOT20, and ablation study on MOT17."], "text_after_citation": ["We train our HG-FEN model on MOT17 and MOT20; its parameters are updated using the Adam optimizer #OTHEREFR with weight decay of 5 \u00d7 10 \u22124 .", "During the training procedure, the initial learning rate is 3.5\u00d710 \u22124 , input batch size is set as 64 and the resolution of every image is 256 \u00d7 128. Training lasts 120 epochs in total.", "For the tracker, we set four tracklet states, including tentative, confirmed, deleted and lost state.", "To initialize a new tracklet, in the first frame of each video sequence we set the tracklet with a confidence score larger than the new tracklet threshold of 0.7 to the confirmed state, while subsequent frames in the case where the condition is met will be set to the tentative state.", "For the lost tracklets, we keep them for 30 frames in case they appear again. We set \u03bb = 0.95 in Eq. #OTHEREFR ."], "citing_paper_content": {"title": "Rt-Track: Robust Tricks For Multi-Pedestrian Tracking", "abstract": "Object tracking is divided into single-object tracking (SOT) and multi-object tracking (MOT). MOT aims to maintain the identities of multiple objects across a series of continuous video sequences. In recent years, MOT has made rapid progress. However, modeling the motion and appearance models of objects in complex scenes still faces various challenging issues. 
In this paper, we design a novel direction consistency method for smooth trajectory prediction (STP-DC) to increase the modeling of motion information and overcome the lack of robustness in previous methods in complex scenes. Existing methods use pedestrian reidentification (Re-ID) to model appearance; however, they extract more background information which lacks discriminability in occlusion and crowded scenes. We propose a hyper-grain feature embedding network (HG-FEN) to enhance the modeling of appearance models, thus generating robust appearance descriptors. We also propose other robustness techniques, including CF-ECM for storing robust appearance information and SK-AS for improving association accuracy. To achieve state-of-the-art performance in MOT, we propose a robust tracker named Rt-track, incorporating various tricks and techniques. It achieves 79.5 MOTA, 76.0 IDF1 and 62.1 HOTA on the test set of MOT17. Rt-track also achieves 77.9 MOTA, 78.4 IDF1 and 63.3 HOTA on MOT20, surpassing all published methods."}, "cited_paper_content": {"title": "Imagenet Large Scale Visual Recognition Challenge", "abstract": "The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. 
We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements."}, "keywords": ["Imagenet-pretrained model"], "citation_intent": "method"} {"citing_id": "2303.03909v1", "cited_id": "1711.10275", "section_title": "B. Network Structure", "citation": "We then fuse them by performing convolutions, and utilize the same number of layers of 3D sparse deconvolutions #REFR as those convolutions used in the instance detection module to resume the point-level features hierarchically.", "text_before_citation": [", where C represents the number of instance categories, U represents the number of instances,x and\u1ef9 represent coordinate quantization offsets, z represents the height, l, m and q represent 3D bounding box size and \u03c9 represents orientation.", "Each score in heatmap\u00ca cij represents the confidence at location (i, j) belonging to the instance class c.", "For more details about CenterHead, please refer to #OTHEREFR .", "3) Upsample Fusion Module: In upsample fusion module, we aim to resume point-level features by fusing the spatio-temporal features and the instance features.", "We directly concatenate the spatio-temporal features and the instance features as the input."], "text_after_citation": ["To strengthen the instance information in the final features and maintain more details learned in different layers, we propagate the instance features from multi-resolution point clouds and form the instance pyramid.", "After concatenating with the corresponding spatio-temporal features, we inject them into the upsample fusion module.", "Finally, the upsample fusion module outputs the point-", "wise features F = {f i \u2208 R 3 } M i=1", ", which are then used to determine the moving label of LiDAR points after passing a softmax function."], "citing_paper_content": {"title": "Insmos: Instance-Aware Moving Object Segmentation In Lidar Data", "abstract": "Identifying moving objects is a crucial capability for 
autonomous navigation, consistent map generation, and future trajectory prediction of objects. In this paper, we propose a novel network that addresses the challenge of segmenting moving objects in 3D LiDAR scans. Our approach not only predicts point-wise moving labels but also detects instance information of main traffic participants. Such a design helps determine which instances are actually moving and which ones are temporarily static in the current scene. Our method exploits a sequence of point clouds as input and quantifies them into 4D voxels. We use 4D sparse convolutions to extract motion features from the 4D voxels and inject them into the current scan. Then, we extract spatio-temporal features from the current scan for instance detection and feature fusion. Finally, we design an upsample fusion module to output point-wise labels by fusing the spatio-temporal features and predicted instance information. We evaluated our approach on the LiDAR-MOS benchmark based on SemanticKITTI and achieved better moving object segmentation performance compared to state-of-the-art methods, demonstrating the effectiveness of our approach in integrating instance information for moving object segmentation. Furthermore, our method shows superior performance on the Apollo dataset with a pre-trained model on SemanticKITTI, indicating that our method generalizes well in different scenes. The code and pre-trained models of our method will be released at https://github.com/nubot-nudt/InsMOS."}, "cited_paper_content": {"title": "3D Semantic Segmentation With Submanifold Sparse Convolutional Networks", "abstract": "Convolutional networks are the de-facto standard for analyzing spatio-temporal data such as images, videos, and 3D shapes. Whilst some of this data is naturally dense (e.g., photos), many other data sources are inherently sparse. Examples include 3D point clouds that were obtained using a LiDAR scanner or RGB-D camera. 
Standard \"dense\" implementations of convolutional networks are very inefficient when applied on such sparse data. We introduce new sparse convolutional operations that are designed to process spatially-sparse data more efficiently, and use them to develop spatially-sparse convolutional networks. We demonstrate the strong performance of the resulting models, called submanifold sparse convolutional networks (SSCNs), on two tasks involving semantic segmentation of 3D point clouds. In particular, our models outperform all prior state-of-the-art on the test set of a recent semantic segmentation competition."}, "keywords": ["point-level features", "3D sparse deconvolutions"], "citation_intent": "method"} {"citing_id": "2304.12579v1", "cited_id": "1907.04595", "section_title": "Experiments", "citation": "The reasons behind a larger learning rate resulting in a smaller generalization error have been explored in #REFR ; Barrett & Dherin (2020).", "text_before_citation": ["Further discussions on why a small learning rate leads to a larger generalization error can be found in #OTHEREFR ; Barrett & Dherin (2020).", "The complexity of learning trajectory correlates with the generalization error under mild learning rate In Figure 3 , we carry out experiments under various settings.", "Each data point in the figure represents the average of three repeated experiments.", "The results demonstrate that both the generalization error and C(J t ) increase as the level of label noise is raised (Figure 3 Left).", "The another experiments measure C(J t ) and generalization error for different learning rate and discover that C(J t ) can capture the trend generalization error."], "text_after_citation": ["Additionally, Appendix A.7 discusses why a larger learning rate can lead to a smaller C(J t )."], "citing_paper_content": {"title": "Learning Trajectories Are Generalization Indicators", "abstract": "This paper aims to investigate the relation between learning trajectories of the Deep 
Neural Networks (DNNs) and their corresponding generalization capabilities when being optimized with gradient descent and stochastic gradient descent algorithms. In this paper, we construct a Linear Approximation Function to model the trajectory information and we propose a new generalization bound with richer trajectory information based on this modelling. Our proposed generalization bound relies on the complexity of the learning trajectory and the ratio between the bias and diversity of the training set. Experimental results indicate that the proposed method effectively captures the generalization trend across various iterations, learning rates, and label noise levels across the entire training process."}, "cited_paper_content": {"title": "Towards Explaining The Regularization Effect Of Initial Large Learning Rate In Training Neural Networks", "abstract": "Stochastic gradient descent with a large initial learning rate is widely used for training modern neural net architectures. Although a small initial learning rate allows for faster training and better test performance initially, the large learning rate achieves better generalization soon after the learning rate is annealed. Towards explaining this phenomenon, we devise a setting in which we can prove that a two layer network trained with large initial learning rate and annealing provably generalizes better than the same network trained with a small learning rate from the start. The key insight in our analysis is that the order of learning different types of patterns is crucial: because the small learning rate model first memorizes low-noise, hard-to-fit patterns, it generalizes worse on hard-to-generalize, easier-to-fit patterns than its large learning rate counterpart. 
This concept translates to a larger-scale setting: we demonstrate that one can add a small patch to CIFAR-10 images that is immediately memorizable by a model with small initial learning rate, but ignored by the model with large learning rate until after annealing. Our experiments show that this causes the small learning rate model's accuracy on unmodified images to suffer, as it relies too much on the patch early on."}, "keywords": ["smaller generalization error", "larger learning rate"], "citation_intent": "background"} {"citing_id": "2304.10248v1", "cited_id": "0906.0483", "section_title": "Introduction", "citation": "However, this is no longer true when these vectors are not orthogonal; in fact, subtracting a best rank-1 approximation from X can even yield a tensor of higher rank #REFR .", "text_before_citation": ["One example is in latent variable model learning #OTHEREFR , where the vectors x i appearing in the decomposition are directly related to the model's parameters (for instance, each x i is the mean of a Gaussian component in a mixture model).", "If the vectors x i in (1) were orthogonal, then one could retrieve them by resorting to a greedy deflation procedure, first introduced by Hotelling for matrices #OTHEREFR : a best rank-1 approximation of X is computed and then subtracted from X, and the process is repeated r times.", "(J. H. de M. Goulart's work was supported by the ANR LabEx CIMI (ANR-11-LABX-0040) within the French Programme \"Investissements d'Avenir\".)", "Algorithmically, each such approximation can be computed by power iteration #OTHEREFR ."], "text_after_citation": ["In some applications, this can in principle be circumvented by transforming X in such a way that it becomes a rank-r symmetric orthogonal decomposition, as long as r \u2264 n.", "For instance, in latent variable model learning the eigendecomposition of a matrix of second-order statistics can be exploited to obtain a whitening matrix W \u2208 R r\u00d7n such that the vectors x i = W x i \u2208 R r , i = 1, . . . , r, are pairwise orthogonal.", "An analysis of an algorithm employing this technique coupled with tensor power iteration was carried out in #OTHEREFR , including a robust estimation result quantifying the performance in the case one observes Y = X + E, in terms of the spectral norm of the perturbation E."], "citing_paper_content": {"title": "Hotelling Deflation On Large Symmetric Spiked Tensors", "abstract": "This article studies the deflation algorithm applied to the estimation of a symmetric low-rank model contained in a large-dimensional tensor corrupted by additive Gaussian noise. More precisely, we provide a precise characterization of the high-dimensional performance of deflation in terms of the alignments of the vectors obtained by successive rank-1 approximations and of their weights, assuming non-trivial (fixed) correlations between the components of the model. Our analysis allows one to understand the deflation mechanism in the presence of noise and can be exploited to design more efficient estimation methods."}, "cited_paper_content": {"title": "Subtracting A Best Rank-1 Approximation May Increase Tensor Rank", "abstract": "It has been shown that a best rank-R approximation of an order-k tensor may not exist when R>1 and k>2. This poses a serious problem to data analysts using tensor decompositions. It has been observed numerically that, generally, this issue cannot be solved by consecutively computing and subtracting best rank-1 approximations. The reason for this is that subtracting a best rank-1 approximation generally does not decrease tensor rank. In this paper, we provide a mathematical treatment of this property for real-valued 2x2x2 tensors, with symmetric tensors as a special case. Regardless of the symmetry, we show that for generic 2x2x2 tensors (which have rank 2 or 3), subtracting a best rank-1 approximation results in a tensor that has rank 3 and lies on the boundary between the rank-2 and rank-3 sets. 
Hence, for a typical tensor of rank 2, subtracting a best rank-1 approximation increases the tensor rank."}, "keywords": ["best rank-1 approximation"], "citation_intent": "background"} {"citing_id": "2304.04358v1", "cited_id": "1702.05379", "section_title": "Introduction", "citation": "As a result, Wikipedia articles become the best bet for most users when searching answers for factual queries on the Web #REFR .", "text_before_citation": ["Information acquisition is one of the fundamental daily needs of human beings.", "Acquiring information from the Web is undoubtedly a convenient and efficient way.", "However, with the exponential growth of the Web, information on the Web becomes scattered and evolves quickly, making it challenging for users to acquire the expected information quickly."], "text_after_citation": ["The reason is that Wikipedia articles provide credible content in which most claims can be supported by references from reputable sources.", "While Wikipedia is a good source of answers for factual queries, the need for manual editing (crowd-sourcing and editor checking) curbs its growth of coverage on a broader range of information needs. 
What if Wikipedia articles could be automatically generated?", "In this paper, we introduce a new task, WEBBRAIN, exploring the capacity of generating short factual articles for queries via a large web corpus.", "Given a factual query, the goal of the task is to enable a system to mine supporting evidence from the Web and generate a short factual article in which the claims are supported by the mined evidence (defined in Section 3.1).", "One of the potential generation targets for WEBBRAIN is the first section of a new Wiki page, based on which we can further explore generating long factual articles (e.g., a complete Wiki page)."], "citing_paper_content": {"title": "Webbrain: Learning To Generate Factually Correct Articles For Queries By Grounding On Large Web Corpus", "abstract": "In this paper, we introduce a new NLP task: generating short factual articles with references for queries by mining supporting evidence from the Web. In this task, called WEBBRAIN, the ultimate goal is to generate a fluent, informative, and factually-correct short article (e.g., a Wikipedia article) for a factual query unseen in Wikipedia. To enable experiments on WEBBRAIN, we construct a large-scale dataset WebBrain-Raw by extracting English Wikipedia articles and their crawlable Wikipedia references. WebBrain-Raw is ten times larger than the previous biggest peer dataset, which can greatly benefit the research community. From WebBrain-Raw, we construct two task-specific datasets: WebBrain-R and WebBrain-G, which are used to train in-domain retriever and generator, respectively. Besides, we empirically analyze the performances of the current state-of-the-art NLP techniques on WEBBRAIN and introduce a new framework ReGen, which enhances the generation factualness by improved evidence retrieval and task-specific pre-training for generation. Experimental results show that ReGen outperforms all baselines in both automatic and human evaluations. 
Our code and datasets are released at https://github.com/qhjqhj00/WebBrain."}, "cited_paper_content": {"title": "Why We Read Wikipedia", "abstract": "Wikipedia is one of the most popular sites on the Web, with millions of users relying on it to satisfy a broad range of information needs every day. Although it is crucial to understand what exactly these needs are in order to be able to meet them, little is currently known about why users visit Wikipedia. The goal of this paper is to fill this gap by combining a survey of Wikipedia readers with a log-based analysis of user activity. Based on an initial series of user surveys, we build a taxonomy of Wikipedia use cases along several dimensions, capturing users' motivations to visit Wikipedia, the depth of knowledge they are seeking, and their knowledge of the topic of interest prior to visiting Wikipedia. Then, we quantify the prevalence of these use cases via a large-scale user survey conducted on live Wikipedia with almost 30,000 responses. Our analyses highlight the variety of factors driving users to Wikipedia, such as current events, media coverage of a topic, personal curiosity, work or school assignments, or boredom. Finally, we match survey responses to the respondents' digital traces in Wikipedia's server logs, enabling the discovery of behavioral patterns associated with specific use cases. For instance, we observe long and fast-paced page sequences across topics for users who are bored or exploring randomly, whereas those using Wikipedia for work or school spend more time on individual articles focused on topics such as science. 
Our findings advance our understanding of reader motivations and behavior on Wikipedia and can have implications for developers aiming to improve Wikipedia's user experience, editors striving to cater to their readers' needs, third-party services (such as search engines) providing access to Wikipedia content, and researchers aiming to build tools such as recommendation engines."}, "keywords": ["Wikipedia articles"], "citation_intent": "background"} {"citing_id": "2303.07287v1", "cited_id": "1906.05247", "section_title": "Application In Multi-Armed Bandit Problem", "citation": "Many previous UCB algorithms based on non-asymptotic inference in the literature assume that the sub-G parameter is a preset constant, see the algorithm in #REFR for instance.", "text_before_citation": ["(2022) ), so Algorithm 1 can be seen as an optimal algorithm.", "Compared with the traditional vanilla UCB, we do improve the constant.", "When Y k \u223c N (\u00b5 k , 1), the constant factor in regret bound in #OTHEREFR", "(2002) is 256, which is larger than 16(2 + \u221a 2) 2 in our theorem.", "When the UCB has unknown sub-G parameters, Theorem 4 first studies a feasible UCB algorithm with sub-G parameter plugging estimation."], "text_after_citation": ["Next, we give a simulation for Theorem 4 in two sub-G cases to verify the performance of estimated norms. Similar to #OTHEREFR", "(2019) ; , we design the three methods as follows:", "1.", "Use our method \u03d5(Y T k (t) ) with Estimated Norm in Theorem 4;", "2."], "citing_paper_content": {"title": "Tight Non-Asymptotic Inference Via Sub-Gaussian Intrinsic Moment Norm", "abstract": "In non-asymptotic statistical inferences, variance-type parameters of sub-Gaussian distributions play a crucial role. However, direct estimation of these parameters based on the empirical moment generating function (MGF) is infeasible. 
To this end, we recommend using a sub-Gaussian intrinsic moment norm [Buldygin and Kozachenko (2000), Theorem 1.3] through maximizing a series of normalized moments. Importantly, the recommended norm can not only recover the exponential moment bounds for the corresponding MGFs, but also lead to tighter Hoeffding's sub-Gaussian concentration inequalities. In practice, we propose an intuitive way of checking sub-Gaussian data with a finite sample size by the sub-Gaussian plot. The intrinsic moment norm can be robustly estimated via a simple plug-in approach. Our theoretical results are applied to non-asymptotic analysis, including the multi-armed bandit."}, "cited_paper_content": {"title": "Bootstrapping Upper Confidence Bound", "abstract": "Upper Confidence Bound (UCB) method is arguably the most celebrated one used in online decision making with partial information feedback. Existing techniques for constructing confidence bounds are typically built upon various concentration inequalities, which thus lead to over-exploration. In this paper, we propose a non-parametric and data-dependent UCB algorithm based on the multiplier bootstrap. To improve its finite sample performance, we further incorporate second-order correction into the above construction. In theory, we derive both problem-dependent and problem-independent regret bounds for multi-armed bandits under a much weaker tail assumption than the standard sub-Gaussianity. 
Numerical results demonstrate significant regret reductions by our method, in comparison with several baselines in a range of multi-armed and linear bandit problems."}, "keywords": ["non-asymptotic inference"], "citation_intent": "background"} {"citing_id": "2303.02861v1", "cited_id": "1902.00751", "section_title": "Introduction", "citation": "There thus has been a growing interest in developing parameter-efficient methods for model tuning #REFR , where the goal is to learn only a small number of additional parameters per task while achieving performance comparable to full finetuning.", "text_before_citation": ["Finetuning pretrained language models (PLMs) has led to significant improvements across various downstream NLP tasks #OTHEREFR .", "However, the conventional paradigm of full task-specific finetuning (FT) is difficult to scale to multiple tasks, given that modern PLMs can have hundreds of millions (or even billions) of parameters."], "text_after_citation": ["Prompt tuning (PT), which prepends tunable continuous prompt vectors to the input, has emerged as a promising approach for parameter-efficient transfer learning with PLMs #OTHEREFR .", "PT freezes the PLM parameters and only learns a small set of task-specific prompt vectors.", "However, despite their impressive performance, there is still a large gap between prompt tuning and full finetuning .", "Additionally, this approach is sensitive to initialization and often requires more training time than finetuning #OTHEREFR .", "Recent work has proposed to address these issues by transferring prompt vectors from various tasks #OTHEREFR ."], "citing_paper_content": {"title": "Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning", "abstract": "Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks. 
However, existing methods typically learn soft prompt vectors from scratch, and it has not been clear how to exploit the rich cross-task knowledge with prompt vectors in a multitask learning setting. We propose multitask prompt tuning (MPT), which first learns a single transferable prompt by distilling knowledge from multiple task-specific source prompts. We then learn multiplicative low rank updates to this shared prompt to efficiently adapt it to each downstream target task. Extensive experiments on 23 NLP datasets demonstrate that our proposed approach outperforms the state-of-the-art methods, including the full finetuning baseline in some cases, despite only tuning 0.035% as many task-specific parameters. 1 * Work done during an internship at MIT-IBM Watson AI Lab."}, "cited_paper_content": {"title": "Parameter-Efficient Transfer Learning For Nlp", "abstract": "Fine-tuning large pre-trained models is an effective transfer mechanism in NLP. However, in the presence of many downstream tasks, fine-tuning is parameter inefficient: an entire new model is required for every task. As an alternative, we propose transfer with adapter modules. Adapter modules yield a compact and extensible model; they add only a few trainable parameters per task, and new tasks can be added without revisiting previous ones. The parameters of the original network remain fixed, yielding a high degree of parameter sharing. To demonstrate adapter's effectiveness, we transfer the recently proposed BERT Transformer model to 26 diverse text classification tasks, including the GLUE benchmark. Adapters attain near state-of-the-art performance, whilst adding only a few parameters per task. On GLUE, we attain within 0.4% of the performance of full fine-tuning, adding only 3.6% parameters per task. 
By contrast, fine-tuning trains 100% of the parameters per task."}, "keywords": ["model tuning"], "citation_intent": "background"} {"citing_id": "2303.16880v1", "cited_id": "1909.11500", "section_title": "", "citation": "Our model is also deeply related to the socalled hidden-manifold model #REFR , which has been used as an analytically solvable model of feedforward neural networks fitting datapoints that live on a low dimensional sub-manifold of their embedding space.", "text_before_citation": ["Most theoretical studies of (generalized) HMs assume simple distributions for the patterns [2, 3] , while in prac- * Corresponding author; matteo.negri@uniroma1.it tical applications the patterns are linearly or non-linearly encoded from and decoded to a different space [16] .", "In this work, we addressed this limitation by proposing a generative model for the patterns where each pattern is produced by the linear combinations of a fixed vocabulary of what we call factors weighted by pattern specific coefficients, followed by an elementwise non-linearity.", "We analyze the model in the high-dimensional regime using the replica method for the statistical physics of disordered systems.", "This data-generating process generalizes the structure of linear superposition proposed in [17] , where it was discussed in relation to the mapping between a Hopfield network and a restricted Boltzmann machine.", "A similar linear (but dense) mapping has been discussed in [18, #OTHEREFR ."], "text_after_citation": ["In fact, this lowdimensional latent structure is typical of many real-world datasets, e.g. 
the ones made of natural images.", "Here we do not modify the Hebb rule, as we will see that it is enough to produce a new behaviour of the model, in conjunction with the structure of correlation that we choose.", "In fact, we observe that if the correlations in the data are strong enough the model switches from a storage phase to a learning phase, in the sense that attractors appear corresponding to the factors in the data.", "We argue that this behaviour opens up a new paradigm for this model and shows that it may have some phenomenology in common with neural networks.", "Model definition."], "citing_paper_content": {"title": "The Hidden-Manifold Hopfield Model And A Learning Phase Transition", "abstract": "The Hopfield model has a long-standing tradition in statistical physics, being one of the few neural networks for which a theory is available. Extending the theory of Hopfield models for correlated data could help understand the success of deep neural networks, for instance describing how they extract features from data. Motivated by this, we propose and investigate a generalized Hopfield model that we name Hidden-Manifold Hopfield Model : we generate the couplings from P = \u03b1N examples with the Hebb rule using a non-linear transformation of D = \u03b1DN random vectors that we call factors, with N the number of neurons. Using the replica method, we obtain a phase diagram for the model that shows a phase transition where the factors hidden in the examples become attractors of the dynamics; this phase exists above a critical value of \u03b1 and below a critical value of \u03b1D. We call this behaviour learning transition."}, "cited_paper_content": {"title": "Modelling The Influence Of Data Structure On Learning In Neural Networks: The Hidden Manifold Model", "abstract": "The lack of crisp mathematical models that capture the structure of real-world data sets is a major obstacle to the detailed theoretical understanding of deep neural networks. 
Here, we introduce a generative model for data sets that we call the hidden manifold model (HMM). The idea is to have high-dimensional inputs lie on a lower-dimensional manifold, with labels that depend only on their position within this manifold, akin to a single layer decoder or generator in a generative adversarial network. We first demonstrate the effect of structured data sets by experimentally comparing the dynamics and the performance of two-layer neural networks trained on three different data sets: (i) an unstructured synthetic data set containing random i.i.d. inputs, (ii) a structured data set drawn from the HMM and (iii) a simple canonical data set containing MNIST images. We pinpoint two phenomena related to the dynamics of the networks and their ability to generalise that only appear when training on structured data sets, and we experimentally demonstrate that training networks on data sets drawn from the HMM reproduces both the phenomena seen during training on real dataset. Our main theoretical result is that we show that the learning dynamics in the hidden manifold model is amenable to an analytical treatment by proving a \"Gaussian Equivalence Theorem\", opening the way to further detailed theoretical studies. In particular, we show how the dynamics of stochastic gradient descent for a two-layer network is captured by a set of ordinary differential equations that track the generalisation error at all times."}, "keywords": ["socalled hidden-manifold model", "feedforward neural networks"], "citation_intent": "method"} {"citing_id": "2303.03540v1", "cited_id": "2004.02843", "section_title": "I. 
Introduction", "citation": "CODEGNN #REFR is a graph neural network (GNN) that utilizes both sequential and AST representation of source code.", "text_before_citation": ["It is possible that this status quo is caused by the lack of information about the performance of optimizers in ML4SE tasks, as it is unclear how to infer the optimizer effectiveness for the ML4SE domain.", "This motivated us to study the efficiency of various optimizers on various ML4SE problems.", "In our work, we consider four models.", "CODE2SEQ #OTHEREFR and TREELSTM #OTHEREFR are recurrent neural networks (RNNs) that use Abstract Syntax Trees (AST) together with the source code itself.", "Both models are loosely based on the LSTM model, but CODE2SEQ derives a path-based representation of code from the AST, while TREELSTM uses the AST information as is."], "text_after_citation": ["Finally, CODETRANSFORMER #OTHEREFR is an augmented transformer model for code, which combines information from multiple relations between tokens to calculate attention.", "To test these models, we use two code-to-text generation problems as benchmarks: documentation generation and method name generation.", "We chose these two ML4SE related problems to check the consistency of the optimizer performance.", "Moreover, these two problems are often used as the benchmarking problems for ML4SE models #OTHEREFR , #OTHEREFR , #OTHEREFR - #OTHEREFR .", "For the documentation generation problem, we use the Python and Java parts of the CodeXGLUE dataset #OTHEREFR (JAVA-CODEXGLUE, PYTHON-CODEXGLUE), thus checking whether the programming language of the processed source code affects the optimizer performance."], "citing_paper_content": {"title": "Judging Adam: Studying The Performance Of Optimization Methods On Ml4Se Tasks", "abstract": "Solving a problem with a deep learning model requires researchers to optimize the loss function with a certain optimization method. 
The research community has developed more than a hundred different optimizers, yet there is scarce data on optimizer performance in various tasks. In particular, none of the benchmarks test the performance of optimizers on source code-related problems. However, existing benchmark data indicates that certain optimizers may be more efficient for particular domains. In this work, we test the performance of various optimizers on deep learning models for source code and find that the choice of an optimizer can have a significant impact on the model quality, with up to twofold score differences between some of the relatively well-performing optimizers. We also find that RAdam optimizer (and its modification with the Lookahead envelope) is the best optimizer that almost always performs well on the tasks we consider. Our findings show a need for a more extensive study of the optimizers in code-related tasks, and indicate that the ML4SE community should consider using RAdam instead of Adam as the default optimizer for coderelated deep learning tasks."}, "cited_paper_content": {"title": "Improved Code Summarization Via A Graph Neural Network", "abstract": "Automatic source code summarization is the task of generating natural language descriptions for source code. Automatic code summarization is a rapidly expanding research area, especially as the community has taken greater advantage of advances in neural network and AI technologies. In general, source code summarization techniques use the source code as input and outputs a natural language description. Yet a strong consensus is developing that using structural information as input leads to improved performance. The first approaches to use structural information flattened the AST into a sequence. Recently, more complex approaches based on random AST paths or graph neural networks have improved on the models using flattened ASTs. 
However, the literature still does not describe the use of a graph neural network together with source code sequence as separate inputs to a model. Therefore, in this paper, we present an approach that uses a graph-based neural architecture that better matches the default structure of the AST to generate these summaries. We evaluate our technique using a data set of 2.1 million Java method-comment pairs and show improvement over four baseline techniques, two from the software engineering literature, and two from machine learning literature."}, "keywords": ["CODEGNN", "graph neural network"], "citation_intent": "background"} {"citing_id": "2303.16502v1", "cited_id": "2002.12410", "section_title": "Unified Analysis", "citation": "Let {x k } k\u22650 be the iterates produced by SGD (Algorithm in (6)), where stochastic gradients are unbiased (i.e., satisfy #REFR ).", "text_before_citation": ["The main building block of the unified analysis from #OTHEREFR is the following parametric assumption on the gradient estimate g k and the problem itself. Assumption 1 (Assumption 4.1 from #OTHEREFR )."], "text_after_citation": ["Assume that there exist non-negative constants A, B, C, D 1 , D 2 \u2265 0, \u03c1 \u2208 (0, 1] and a (possibly) random non-negative sequence {\u03c3 #OTHEREFR k } k\u22650 such that the following two relations hold", "EQUATION", "E_k \u03c3_{k+1}^2 \u2264 (1 \u2212 \u03c1) \u03c3_k^2 + 2C (f(x_k) \u2212 f(x^*)) + D_2 . (9)", "The above assumption is motivated by the analysis of different SGD-type methods and can be derived for standard setups.", "The simplest example is Gradient Descent (GD) applied to the minimization problem of L-smooth function f ."], "citing_paper_content": {"title": "Unified Analysis Of Sgd-Type Methods", "abstract": "This note focuses on a simple approach to the unified analysis of SGD-type methods from [21] for strongly convex smooth optimization problems. 
The similarities in the analyses of different stochastic first-order methods are discussed along with the existing extensions of the framework. The limitations of the analysis and several alternative approaches are mentioned as well."}, "cited_paper_content": {"title": "On Biased Compression For Distributed Learning", "abstract": "In the last few years, various communication compression techniques have emerged as an indispensable tool helping to alleviate the communication bottleneck in distributed learning. However, despite the fact {\\em biased} compressors often show superior performance in practice when compared to the much more studied and understood {\\em unbiased} compressors, very little is known about them. In this work we study three classes of biased compression operators, two of which are new, and their performance when applied to (stochastic) gradient descent and distributed (stochastic) gradient descent. We show for the first time that biased compressors can lead to linear convergence rates both in the single node and distributed settings. Our {\\em distributed} SGD method enjoys the ergodic rate $\\mathcal{O}\\left(\\frac{\\delta L \\exp(-K) }{\\mu} + \\frac{(C + D)}{K\\mu}\\right)$, where $\\delta$ is a compression parameter which grows when more compression is applied, $L$ and $\\mu$ are the smoothness and strong convexity constants, $C$ captures stochastic gradient noise ($C=0$ if full gradients are computed on each node) and $D$ captures the variance of the gradients at the optimum ($D=0$ for over-parameterized models). Further, via a theoretical study of several synthetic and empirical distributions of communicated gradients, we shed light on why and by how much biased compressors outperform their unbiased variants. 
Finally, we propose a new highly performing biased compressor---combination of Top-$k$ and natural dithering---which in our experiments outperforms all other compression techniques."}, "keywords": ["stochastic gradients"], "citation_intent": "background"} {"citing_id": "2303.12408v2", "cited_id": "2003.08934", "section_title": "Optimization", "citation": "The images of EgoNeRF are synthesized by applying the volume rendering equation along the camera ray #REFR and the optional environment map.", "text_before_citation": [], "text_after_citation": ["Specifically, the points x i = o + t i d along the camera ray from camera position o and ray direction d are accumulated to find the pixel value \u0108", "\u0108 = \u2211_{i=1}^{N} \u03c4_i (1 \u2212 e^{\u2212\u03c3(x_i)\u03b4_i}) c(x_i, d) + \u03c4_{N+1} c_env(d). (11) N = N_c + N_f", "is the number of samples as described in Sec. 4.1.", "\u03c3(x) and c(x, d) are obtained from our balanced feature grids in Eq. (5).", "Since the size of our feature grid is exponentially increasing along the r direction, we distribute N c coarse samples exponentially rather than uniformly. The second term in Eq. 11 is fetched from the environment map"], "citing_paper_content": {"title": "Balanced Spherical Grid For Egocentric View Synthesis", "abstract": "Figure 1. We propose a practical solution to reconstruct large-scale scenes from a short egocentric video. (a) Our scalable capturing setup observes the holistic environment by casually swiping a selfie stick with an omnidirectional camera attached. (b) Then we optimize our balanced spherical feature grids which are tailored for the outward-looking setup."}, "cited_paper_content": {"title": "Nerf: Representing Scenes As Neural Radiance Fields For View Synthesis", "abstract": "We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. 
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\\theta, \\phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons."}, "keywords": ["camera ray", "volume"], "citation_intent": "method"} {"citing_id": "2305.00817v1", "cited_id": "1911.10692", "section_title": "Loss Function Based Mitigation.", "citation": "In support of this review of prior work on racial bias mitigation a summary table of related work is provided to compare overall relative performance on the RFW dataset #REFR (Table 3 ).", "text_before_citation": ["L = L + \u00d7 L + \u00d7 L \u210e L = , ( , ) 2 , 0 < ( , ) < (7)", "where is the upper bound of the penalty interval, and is the number of feature representation vectors pairs whose cosine similarity lies within the interval (0, ). 
domains.", "In another example in face recognition, #OTHEREFR introduces Cross-Domain Triplet (CDT) loss based on the triplet loss #OTHEREFR and uses similarity metrics from one domain to learn compact feature clusters of identities by incorporating them into another domain.", "Relative performance for both CDT and MFR on the RFW dataset are shown in Table 3 .", "This section presents a brief overview of face representation learning, including the potential sources of biases and mitigation studies within this stage of the face recognition processing pipeline (Fig. 3) ."], "text_after_citation": [], "citing_paper_content": {"title": "Racial Bias Within Face Recognition: A Survey", "abstract": "Facial recognition is one of the most academically studied and industrially developed areas within computer vision where we readily find associated applications deployed globally. This widespread adoption has uncovered significant performance variation across subjects of different racial profiles leading to focused research attention on racial bias within face recognition spanning both current causation and future potential solutions. In support, this study provides an extensive taxonomic review of research on racial bias within face recognition exploring every aspect and stage of the face recognition processing pipeline. Firstly, we discuss the problem definition of racial bias, starting with race definition, grouping strategies, and the societal implications of using race or race-related groupings. Secondly, we divide the common face recognition processing pipeline into four stages: image acquisition, face localisation, face representation, face verification and identification, and review the relevant corresponding literature associated with each stage. 
The overall aim is to provide comprehensive coverage of the racial bias problem with respect to each and every stage of the face recognition processing pipeline whilst also highlighting the potential pitfalls and limitations of contemporary mitigation strategies that need to be considered within future research endeavours or commercial applications alike."}, "cited_paper_content": {"title": "Mitigate Bias In Face Recognition Using Skewness-Aware Reinforcement Learning", "abstract": "Racial equality is an important theme of international human rights law, but it has been largely obscured when the overall face recognition accuracy is pursued blindly. More facts indicate racial bias indeed degrades the fairness of recognition system and the error rates on non-Caucasians are usually much higher than Caucasians. To encourage fairness, we introduce the idea of adaptive margin to learn balanced performance for different races based on large margin losses. A reinforcement learning based race balance network (RL-RBN) is proposed. We formulate the process of finding the optimal margins for non-Caucasians as a Markov decision process and employ deep Q-learning to learn policies for an agent to select appropriate margin by approximating the Q-value function. Guided by the agent, the skewness of feature scatter between races can be reduced. Besides, we provide two ethnicity aware training datasets, called BUPT-Globalface and BUPT-Balancedface dataset, which can be utilized to study racial bias from both data and algorithm aspects. Extensive experiments on RFW database show that RL-RBN successfully mitigates racial bias and learns more balanced performance for different races."}, "keywords": ["racial bias"], "citation_intent": "background"} {"citing_id": "2305.02093v1", "cited_id": "1402.5886", "section_title": "Experimental Results", "citation": "UFODT performs even better than VFDT and EFDT on LED, Heart and Fico datasets. 
2018] as in #REFR .", "text_before_citation": ["We compute the average number of queries per online session to compare the costs of algorithms.", "We also evaluate the generalization power of classifiers via holdout test sets.", "Additionally, (in the appendix F) we measure the prediction performance on training sets during learning together with various other aspects of our framework. Datasets.", "We have used three stationary datasets in our experiments that are standard binary classification datasets taken from UCI repository #OTHEREFR .", "Furthermore, we conduct experiments on the ProPublica recidivism (Compas) dataset #OTHEREFR and the Fair Isaac (Fico) credit risk dataset [FICO et al., Figure 2 : Test utilities during training: UFODT reaches test utilities comparable with those from VFDT and EFDT but with significantly lower costs."], "text_after_citation": ["In Compas dataset, we predict the individuals arrested after two years of release, and in Fico we predict if an individual will default on a loan.", "For concept drifting experiments, we adopt the non-stationary Stagger dataset #OTHEREFR , where each data has three nominal attributes and the target concept will change abruptly at some point.", "For extensions to continuous features (as well as for feature selection in the appendix), we use Prima Indians Diabetes Dataset [Smith et al., 1988] Algorithms.", "The VFDT algorithm #OTHEREFR ] is used as a classical baseline ODT model.", "We also compare our method with the EFDT algorithm #OTHEREFR ."], "citing_paper_content": {"title": "Efficient Online Decision Tree Learning With Active Feature Acquisition", "abstract": "Constructing decision trees online is a classical machine learning problem. Existing works often assume that features are readily available for each incoming data point. However, in many real world applications, both feature values and the labels are unknown a priori and can only be obtained at a cost. 
For example, in medical diagnosis, doctors have to choose which tests to perform (i.e., making costly feature queries) on a patient in order to make a diagnosis decision (i.e., predicting labels). We provide a fresh perspective to tackle this practical challenge. Our framework consists of an active planning oracle embedded in an online learning scheme for which we investigate several information acquisition functions. Specifically, we employ a surrogate information acquisition function based on adaptive submodularity to actively query feature values with a minimal cost, while using a posterior sampling scheme to maintain a low regret for online prediction. We demonstrate the efficiency and effectiveness of our framework via extensive experiments on various real-world datasets. Our framework also naturally adapts to the challenging setting of online learning with concept drift and is shown to be competitive with baseline models while being more flexible."}, "cited_paper_content": {"title": "Near Optimal Bayesian Active Learning For Decision Making", "abstract": "How should we gather information to make effective decisions? We address Bayesian active learning and experimental design problems, where we sequentially select tests to reduce uncertainty about a set of hypotheses. Instead of minimizing uncertainty per se, we consider a set of overlapping decision regions of these hypotheses. Our goal is to drive uncertainty into a single decision region as quickly as possible. We identify necessary and sufficient conditions for correctly identifying a decision region that contains all hypotheses consistent with observations. We develop a novel Hyperedge Cutting (HEC) algorithm for this problem, and prove that it is competitive with the intractable optimal policy. Our efficient implementation of the algorithm relies on computing subsets of the complete homogeneous symmetric polynomials. 
Finally, we demonstrate its effectiveness on two practical applications: approximate comparison-based learning and active localization using a robot manipulator."}, "keywords": ["Fico datasets"], "citation_intent": "result"} {"citing_id": "2304.09595v1", "cited_id": "1912.02292", "section_title": "C.1 Analysis Of The Relationship Between Gnn Size And Error", "citation": "Second, the training error does not approach zero, indicating that even a large model is not over-parameterized #REFR .", "text_before_citation": ["This inequality suggests that smaller model sizes typically lead to lower test errors when training a model from scratch.", "This forms the basis for the effectiveness of our delta tuning method.", "We conducted experiments on six small molecular datasets, varying model sizes in two ways: (1) by varying embedding dimensions while fixing the MLP middle dimension at twice the embedding dimensions, and (2) by fixing embedding dimensions at 300 and varying the middle dimensions.", "The results are presented in Figures 6 and 7 . For all datasets, we made two observations.", "First, the test error initially decreases and then increases as the model size grows."], "text_after_citation": ["These observations suggest that our tasks fall within the scope of the classical regime and reducing the size of the parameter space, unless too small, can help improve generalization ability.", "It is important to note that the phenomenon in question occurs exclusively in the context of training from scratch.", "To leverage the knowledge gained from pre-training, employing a larger model is necessary.", "To enhance the generalization ability of this large model, the full fine-tuning approach can be replaced with delta tuning. 
As illustrated in Fig.", "3(a) , embedding dimensions between 200 to 300 exhibit the highest performance for full fine-tuning, albeit with inferior test error when training from scratch."], "citing_paper_content": {"title": "Adaptergnn: Efficient Delta Tuning Improves Generalization Ability In Graph Neural Networks", "abstract": "Fine-tuning pre-trained models has recently yielded remarkable performance gains in graph neural networks (GNNs). In addition to pre-training techniques, inspired by the latest work in the natural language fields, more recent work has shifted towards applying effective fine-tuning approaches, such as parameter-efficient tuning (delta tuning). However, given the substantial differences between GNNs and transformer-based models, applying such approaches directly to GNNs proved to be less effective. In this paper, we present a comprehensive comparison of delta tuning techniques for GNNs and propose a novel delta tuning method specifically designed for GNNs, called AdapterGNN. AdapterGNN preserves the knowledge of the large pre-trained model and leverages highly expressive adapters for GNNs, which can adapt to downstream tasks effectively with only a few parameters, while also improving the model's generalization ability on the downstream tasks. Extensive experiments show that AdapterGNN achieves higher evaluation performance (outperforming full fine-tuning by 1.4% and 5.5% in the chemistry and biology domains respectively, with only 5% of its parameters tuned) and lower generalization gaps compared to full fine-tuning. Moreover, we empirically show that a larger GNN model can have a worse generalization ability, which differs from the trend observed in large language models. 
We have also provided a theoretical justification that delta tuning can improve the generalization ability of GNNs by applying generalization bounds."}, "cited_paper_content": {"title": "Deep Double Descent: Where Bigger Models And More Data Hurt", "abstract": "We show that a variety of modern deep learning tasks exhibit a \"double-descent\" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance."}, "keywords": ["large model"], "citation_intent": "background"} {"citing_id": "2304.06244v1", "cited_id": "2002.04881", "section_title": "Related Works", "citation": "Further, they proposed to learn \"flat manifolds\" with VAEs #REFR , whose decoder essentially preserves the Euclidean distance between points in the latent space and the decoded points in the data space.", "text_before_citation": ["The encoders in commonly used traditional codecs such as H.264 #OTHEREFR and HEVC #OTHEREFR are also equipped with an exhaustive search procedure to select the optimal block partitioning and coding modes for image frame.", "More recently, the idea of iterative and optimization-based encoding is becoming increasingly prominent in nonlinear transform coding #OTHEREFR , as well as computer vision in the form of implicit neural representations #OTHEREFR .", "It is therefore interesting to see whether ideas from vector quantization and implicit neural representations may prove fruitful for further reducing the decoding complexity in NTC.", "Manifold learning 
A distantly related line of work is in metric learning with deep generative models, where the idea is to learn a latent representation of the data such that distance in the latent space preserves the similarity in the data space. Chen et al.", "#OTHEREFR proposes the use of the Riemannian distance metric induced by a decoding transform of a latent variable to measure similarity in the data space."], "text_after_citation": ["Their method is based on regularizing the Jacobian of the VAE to be constant, essentially resulting in a linear (affine) decoder with similar behavior to what we observed in learned synthesis transforms in Section 3.1."], "citing_paper_content": {"title": "Asymmetrically-Powered Neural Image Compression With Shallow Decoders", "abstract": "Neural image compression methods have seen increasingly strong performance in recent years. However, they suffer orders of magnitude higher computational complexity compared to traditional codecs, which stands in the way of real-world deployment. This paper takes a step forward in closing this gap in decoding complexity by adopting shallow or even linear decoding transforms. To compensate for the resulting drop in compression performance, we exploit the often asymmetrical computation budget between encoding and decoding, by adopting more powerful encoder networks and iterative encoding. We theoretically formalize the intuition behind, and our experimental results establish a new frontier in the trade-off between rate-distortion and decoding complexity for neural image compression. Specifically, we achieve rate-distortion performance competitive with the established mean-scale hyperprior architecture of Minnen et al. (2018), while reducing the overall decoding complexity by 80 %, or over 90 % for the synthesis transform alone. Our code can be found at https://github.com/mandt-lab/shallow-ntc. 
1"}, "cited_paper_content": {"title": "Learning Flat Latent Manifolds With Vaes", "abstract": "Measuring the similarity between data points often requires domain knowledge. This can in parts be compensated by relying on unsupervised methods such as latent-variable models, where similarity/distance is estimated in a more compact latent space. Prevalent is the use of the Euclidean metric, which has the drawback of ignoring information about similarity of data stored in the decoder, as captured by the framework of Riemannian geometry. Alternatives---such as approximating the geodesic---are often computationally inefficient, rendering the methods impractical. We propose an extension to the framework of variational auto-encoders allows learning flat latent manifolds, where the Euclidean metric is a proxy for the similarity between data points. This is achieved by defining the latent space as a Riemannian manifold and by regularising the metric tensor to be a scaled identity matrix. Additionally, we replace the compact prior typically used in variational auto-encoders with a recently presented, more expressive hierarchical one---and formulate the learning problem as a constrained optimisation problem. 
We evaluate our method on a range of data-sets, including a video-tracking benchmark, where the performance of our unsupervised approach nears that of state-of-the-art supervised approaches, while retaining the computational efficiency of straight-line-based approaches."}, "keywords": ["latent space"], "citation_intent": "background"} {"citing_id": "2303.05194v1", "cited_id": "2002.05709", "section_title": "Cross-Domain Contrastive Loss", "citation": "Although the CDC loss could in principle be applied to encoder features directly, we first project the dense features to a dedicated 128-dimensional embedding space, as per standard practice #REFR .", "text_before_citation": ["The cross-domain contrastive (CDC) loss is the central component of CMA.", "It incentivizes the encoder to learn features which discriminatively reflect the semantics, but are invariant to the visual condition.", "To this end, spatial target image features represent anchors, which are pulled towards spatially corresponding reference image features-the positives.", "The anchors and positives are assumed to represent similar semantics in distinct visual conditions.", "Simultaneously, the anchors are contrasted to other target image features-the negatives-to prevent mode collapse."], "text_after_citation": ["The projection head PROJ consists of two 1\u00d71 convolutions with a ReLU non-linearity in between.
As shown in Fig.", "2 , the embeddings of the trainable model PROJ_\u03b8 \u2022 ENC_\u03b8 serve as anchors, while-as proposed in #OTHEREFR -positives and negatives are obtained by an exponential moving average model PROJ_ema \u2022 ENC_ema to improve their consistency.", "Furthermore, we use a queue to accumulate negatives #OTHEREFR .", "This enables the use of a large number of negatives during instance discrimination, which encourages the learning of meaningful representations by making the discrimination more challenging.", "Finally, the positives are spatially warped to align them with the anchors, as detailed in Sec. 3.2."], "citing_paper_content": {"title": "Contrastive Model Adaptation For Cross-Condition Robustness In Semantic Segmentation", "abstract": "Standard unsupervised domain adaptation methods adapt models from a source to a target domain using labeled source data and unlabeled target data jointly. In model adaptation, on the other hand, access to the labeled source data is prohibited, i.e., only the source-trained model and unlabeled target data are available. We investigate normal-to-adverse condition model adaptation for semantic segmentation, whereby image-level correspondences are available in the target domain. The target set consists of unlabeled pairs of adverse- and normal-condition street images taken at GPS-matched locations. Our method-CMA-leverages such image pairs to learn condition-invariant features via contrastive learning. In particular, CMA encourages features in the embedding space to be grouped according to their condition-invariant semantic content and not according to the condition under which respective inputs are captured. To obtain accurate cross-domain semantic correspondences, we warp the normal image to the viewpoint of the adverse image and leverage warp-confidence scores to create robust, aggregated features.
With this approach, we achieve state-of-the-art semantic segmentation performance for model adaptation on several normal-to-adverse adaptation benchmarks, such as ACDC and Dark Zurich. We also evaluate CMA on a newly procured adverse-condition generalization benchmark and report favorable results compared to standard unsupervised domain adaptation methods, despite the comparative handicap of CMA due to source data inaccessibility. Code is available at https://github.com/brdav/cma."}, "cited_paper_content": {"title": "A Simple Framework For Contrastive Learning Of Visual Representations", "abstract": "This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. 
When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels."}, "keywords": ["dense features", "dedicated 128-dimensional embedding"], "citation_intent": "method"} {"citing_id": "2303.07109v1", "cited_id": "1912.01603", "section_title": "Transformer", "citation": "This is in contrast to related works (#REFR; Chen et al., 2022) that require the full world model during inference. 2.", "text_before_citation": ["The Transformer-XL architecture #OTHEREFR is much more computationally efficient than vanilla transformers at inference time and introduces relative positional encodings, which remove the dependence on absolute time steps.", "Our contributions: The contributions of this work can be summarized as follows:", "1.", "We present a new autoregressive world model based on the Transformer-XL #OTHEREFR architecture and a model-free agent trained in latent imagination.", "Running our policy is computationally efficient, as the transformer is not needed at inference time."], "text_after_citation": ["Our world model is provided with information on how much reward has already been emitted by feeding back predicted rewards into the world model. As shown in our ablation study, this improves performance. 3.", "We rewrite the balanced KL divergence loss of #OTHEREFR", "(2021) to allow us to fine-tune the relative weight of the involved entropy and cross-entropy terms. 4.", "We introduce a new thresholded entropy loss that stabilizes the policy's entropy during training and thereby simplifies the selection of hyperparameters that behave well across different games.
5.", "We propose a new effective sampling procedure for the growing dataset of experience, which balances the training distribution to shift the focus towards the latest experience."], "citing_paper_content": {"title": "Transformer-Based World Models Are Happy With 100K Interactions", "abstract": "Deep neural networks have been successful in many reinforcement learning settings. However, compared to human learners they are overly data hungry. To build a sample-efficient world model, we apply a transformer to real-world episodes in an autoregressive manner: not only the compact latent states and the taken actions but also the experienced or predicted rewards are fed into the transformer, so that it can attend flexibly to all three modalities at different time steps. The transformer allows our world model to access previous states directly, instead of viewing them through a compressed recurrent state. By utilizing the Transformer-XL architecture, it is able to learn long-term dependencies while staying computationally efficient. Our transformer-based world model (TWM) generates meaningful, new experience, which is used to train a policy that outperforms previous model-free and model-based reinforcement learning algorithms on the Atari 100k benchmark."}, "cited_paper_content": {"title": "Dream To Control: Learning Behaviors By Latent Imagination", "abstract": "To select effective actions in complex environments, intelligent agents need to generalize from past experience. World models can represent knowledge about the environment to facilitate such generalization. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks purely by latent imagination. 
We efficiently learn behaviors by backpropagating analytic gradients of learned state values through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance."}, "keywords": ["full world model"], "citation_intent": "result"} {"citing_id": "2304.12904v1", "cited_id": "1910.01108", "section_title": "Experiments", "citation": "We use DistilBERT #REFR as the pre-trained language model for the first stage models and for reranking Electra-large and T5-3B.", "text_before_citation": ["Note that this could be extended to other models, but we chose the three that we had most familiarity with.", "In more detail, we use DPR-cls #OTHEREFR as our dense retriever, SPLADE-max #OTHEREFR as our learned sparse retriever and RankT5-3b encoder only #OTHEREFR , monoElectra-large #OTHEREFR as our cross encoding rerankers.", "We train models on either the \"titled\" or the official MS MARCO-passage dataset. Titled passages are generated using \"Title [SEP] Passage\". We use only contrastive training (i.e.", "no distillation) and hard-negatives #OTHEREFR to train the first stage models and the negatives from SPLADE to train the reranker.", "First-stage retrieval training uses 8 queries per batch, and 32 negatives per query, and trains for 3 epochs."], "text_after_citation": ["Statistical significance is computed only when we directly compare the original vs \"title\" datasets (i.e.", "only Tables 1 and 4 and only on the same model with different corpus), with Student's t-test and \u2264 0.05."], "citing_paper_content": {"title": "The Tale Of Two Ms Marco -And Their Unfair Comparisons", "abstract": "The MS MARCO-passage dataset has been the main large-scale dataset open to the IR community and it has fostered successfully the development of novel neural retrieval models over the years. 
But, it turns out that two different corpora of MS MARCO are used in the literature, the official one and a second one where passages were augmented with titles, mostly due to the introduction of the Tevatron code base. However, the addition of titles actually leaks relevance information, while breaking the original guidelines of the MS MARCO-passage dataset. In this work, we investigate the differences between the two corpora and demonstrate empirically that they make a significant difference when evaluating a new method. In other words, we show that if a paper does not properly report which version is used, reproducing fairly its results is basically impossible. Furthermore, given the current status of reviewing, where monitoring state-of-the-art results is of great importance, having two different versions of a dataset is a large problem. This is why this paper aims to report the importance of this issue so that researchers can be made aware of this problem and appropriately report their results."}, "cited_paper_content": {"title": "Distilbert, A Distilled Version Of Bert: Smaller, Faster, Cheaper And Lighter", "abstract": "As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pre-training phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. 
To leverage the inductive biases learned by larger models during pre-training, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study."}, "keywords": ["pre-trained language model"], "citation_intent": "method"} {"citing_id": "2303.17078v1", "cited_id": "1905.11075", "section_title": "Discussion", "citation": "In the past, many advances have been driven in the field of fluid mechanics #REFR , and this is likely to continue.", "text_before_citation": ["Understanding how these solution operators vary with system parameters is an important avenue of ongoing research #OTHEREFR .", "Similarly, machine learning may be used to accelerate traditional scientific computing workflows, for example by flexible super-resolution or learning of improved solution stencils.", "However, there are several challenges with these approaches, foremost the fact that traditional numerical algorithms are extremely mature and scalable, so that machine learning solutions are expected to compete with decades of progress.", "In all of the cases explored in this perspective, progress will be accelerated by a diverse and robust set of benchmark problems with which to assess new solutions #OTHEREFR .", "In addition, we must stress that these techniques are primarily tools to be used by human experts for scientific discovery."
automated machine learning algorithms, when applied to science and engineering applications, this is still primarily a human endeavor.", "However, progress in the field of PDEs, enabled by machine learning, is undeniable.", "Despite this progress, there is still much we don't know about PDEs."], "citing_paper_content": {"title": "Machine Learning For Partial Differential Equations", "abstract": "Partial differential equations (PDEs) are among the most universal and parsimonious descriptions of natural physical laws, capturing a rich variety of phenomenology and multi-scale physics in a compact and symbolic representation. This review will examine several promising avenues of PDE research that are being advanced by machine learning, including: 1) the discovery of new governing PDEs and coarse-grained approximations for complex natural and engineered systems, 2) learning effective coordinate systems and reduced-order models to make PDEs more amenable to analysis, and 3) representing solution operators and improving traditional numerical algorithms. In each of these fields, we summarize key advances, ongoing challenges, and opportunities for further development."}, "cited_paper_content": {"title": "Machine Learning For Fluid Mechanics", "abstract": "The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of past history, current developments, and emerging opportunities of machine learning for fluid mechanics. 
It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications."}, "keywords": ["fluid mechanics"], "citation_intent": "background"} {"citing_id": "2304.05894v1", "cited_id": "1803.01616", "section_title": "Sdsbm Works With Little Data.", "citation": "When the number of observations is high, the \"no coupling\" baseline #REFR reaches the performance of SDSBM.", "text_before_citation": ["4b, we vary the number of observations available for each item from 100 to 10,000, distributed over 100 epochs.", "Thus, in the most challenging situation, there is only one observation per epoch used to determine dynamic memberships over 3 clusters. In the expression of , is set to 0.05. We see in Fig. 4b", "that for both patterns, the predictive power of SDSBM remains high in such conditions.", "Moreover, the RMSE on the true dynamic memberships in this case is fairly low, and decreases rapidly as the number of observations increases.", "This is due to SDSBM linking the time slices together during the training phase: the smoothing constraint makes it so that every time slice indirectly benefits from the training data of its neighbours."], "text_after_citation": ["This is because, as the number of observations in each slice goes to infinity, the models rely less on temporal neighbours.", "However, even for 10,000 observations per item (100 observations per epoch), SDSBM yields better results.
As an illustration, the results in Fig.", "1 have been obtained using only 5 observations per epoch.", "4.1.3 SDSBM handles highly stochastic interaction patterns.", "Finally, we control the deterministic character of the block-interaction matrix by varying ."], "citing_paper_content": {"title": "Dynamic Mixed Membership Stochastic Block Model For Weighted Labeled Networks", "abstract": "Most real-world networks evolve over time. Existing literature proposes models for dynamic networks that are either unlabeled or assumed to have a single membership structure. On the other hand, a new family of Mixed Membership Stochastic Block Models (MMSBM) allows to model static labeled networks under the assumption of mixed-membership clustering. In this work, we propose to extend this latter class of models to infer dynamic labeled networks under a mixed membership assumption. Our approach takes the form of a temporal prior on the model's parameters. It relies on the single assumption that dynamics are not abrupt. We show that our method significantly differs from existing approaches, and allows to model more complex systems-dynamic labeled networks. We demonstrate the robustness of our method with several experiments on both synthetic and real-world datasets. A key interest of our approach is that it needs very few training data to yield good results. The performance gain under challenging conditions broadens the variety of possible applications of automated learning tools-as in social sciences, which comprise many fields where small datasets are a major obstacle to the introduction of machine learning methods."}, "cited_paper_content": {"title": "Tensorial And Bipartite Block Models For Link Prediction In Layered Networks And Temporal Networks", "abstract": "Many real-world complex systems are well represented as multilayer networks; predicting interactions in those systems is one of the most pressing problems in predictive network science.
To address this challenge, we introduce two stochastic block models for multilayer and temporal networks; one of them uses nodes as its fundamental unit, whereas the other focuses on links. We also develop scalable algorithms for inferring the parameters of these models. Because our models describe all layers simultaneously, our approach takes full advantage of the information contained in the whole network when making predictions about any particular layer. We illustrate the potential of our approach by analyzing two empirical datasets---a temporal network of email communications, and a network of drug interactions for treating different cancer types. We find that modeling all layers simultaneously does result, in general, in more accurate link prediction. However, the most predictive model depends on the dataset under consideration; whereas the node-based model is more appropriate for predicting drug interactions, the link-based model is more appropriate for predicting email communication."}, "keywords": ["SDSBM", "\"no coupling\" baseline"], "citation_intent": "background"} {"citing_id": "2303.11577v1", "cited_id": "1903.00104", "section_title": "Unsteady Lid-Driven Flow Problem", "citation": "According to #REFR , we work on inferring the newly defined effective reaction rate k_f = v_A k_{f,r} as well as a_r .", "text_before_citation": ["EQUATION", "where C_A and C_B are the concentrations of solutes A and B respectively, q = 0.5 is the Darcy velocity, \u03c8 = 0.4 is the porosity, D = 1 \u00d7 10^{\u22128} is the diffusion coefficient, and v_A = a_r and v_B = \u22121 are the stoichiometric coefficients.", "The constant k_{f,r} is the chemical reaction rate and a_r is the order of the chemical reaction, both of which are difficult to measure and will be inferred from the observations of C_A .", "To obtain the observations, numerical simulations are conducted using a second-order finite difference
method for spatial discretization and the second-order backward difference method for implicitly marching in time.", "In simulations, the mesh size is \u2206x = 0.0125 and the time step size is \u2206t = 0.005."], "text_after_citation": ["We set the exact values to k_f = 1.577 and a_r = 2. The high-fidelity observations are reported in Table 3.", "It is shown that the inferred results are highly accurate, with a small relative error and variance.", "Our results significantly outperform the reported results in #OTHEREFR .", "It is worth noting that only 50 residual points are used for training the networks, while 30000 residual points are used in #OTHEREFR .", "In order to take a further look at the prediction details, Fig."
The proposed multi-fidelity approach is validated on forward and inverse problems for steady and unsteady problems described by partial differential equations."}, "cited_paper_content": {"title": "A Composite Neural Network That Learns From Multi-Fidelity Data: Application To Function Approximation And Inverse Pde Problems", "abstract": "Currently the training of neural networks relies on data of comparable accuracy but in real applications only a very small set of high-fidelity data is available while inexpensive lower fidelity data may be plentiful. We propose a new composite neural network (NN) that can be trained based on multi-fidelity data. It is comprised of three NNs, with the first NN trained using the low-fidelity data and coupled to two high-fidelity NNs, one with activation functions and another one without, in order to discover and exploit nonlinear and linear correlations, respectively, between the low-fidelity and the high-fidelity data. We first demonstrate the accuracy of the new multi-fidelity NN for approximating some standard benchmark functions but also a 20-dimensional function that is not easy to approximate with other methods, e.g. Gaussian process regression. Subsequently, we extend the recently developed physics-informed neural networks (PINNs) to be trained with multi-fidelity data sets (MPINNs). MPINNs contain four fully-connected neural networks, where the first one approximates the low-fidelity data, while the second and third construct the correlation between the low- and high-fidelity data and produce the multi-fidelity approximation, which is then used in the last NN that encodes the partial differential equations (PDEs). Specifically, by decomposing the correlation into a linear and nonlinear part, the present model is capable of learning both the linear and complex nonlinear correlations between the low- and high-fidelity data adaptively. 
By training the MPINNs, we can: (1) obtain the correlation between the low- and high-fidelity data, (2) infer the quantities of interest based on a few scattered data, and (3) identify the unknown parameters in the PDEs. In particular, we employ the MPINNs to learn the hydraulic conductivity field for unsaturated flows as well as the reactive models for reactive transport. The results demonstrate that MPINNs can achieve relatively high accuracy based on a very small set of high-fidelity data. Despite the relatively low dimension and limited number of fidelities (two-fidelity levels) for the benchmark problems in the present study, the proposed model can be readily extended to very high-dimensional regression and classification problems involving multi-fidelity data."}, "keywords": ["reaction rate"], "citation_intent": "background"} {"citing_id": "2303.15100v2", "cited_id": "2003.00104", "section_title": "Tokenization Analysis -Datasets", "citation": "To further explore the number of word pieces per entity type, we isolate the unique entities #REFR .", "text_before_citation": ["We choose the biomedical (ADE) dataset to explore the effect of tokenization in a special domain.", "The ADE dataset contains entities of Drugs and Adverse Effects (AE) and has labels for the relations between them.", "The tokenizer of cased BERT #OTHEREFR and bioclinical BERT (b-BERT) #OTHEREFR is based on the WordPiece algorithm, while ALBERT #OTHEREFR adopts the SentencePiece algorithm. In Tab.", "2, we present the effect of tokenization on the average sentence length, in terms of word pieces (subwords), for each dataset.", "The sentence length increases by approximately 12 tokens, up to 58%, after the tokenization in the biomedical domain."], "text_after_citation": ["Then, we find the unique words that are part of each entity type and tokenize the unique entities and words using the different tokenizers to notice the difference in the length and the addition of the word pieces. 
In Tab.", "1 the last column represents the average tokenized word length per entity type, and the Out type describes the words that are not part of an entity of interest.", "In the ADE dataset, the length of the drug and AE entities increases substantially, and the drug entities are split into more word pieces.", "Particularly, a word that is part of a drug entity is split into approximately 4 word pieces, on average, when using the tokenizer of cased BERT and b-BERT.", "The tokenizer of ALBERT tends to split the entities of interest into fewer pieces."], "citing_paper_content": {"title": "An Information Extraction Study: Take In Mind The Tokenization!", "abstract": "Current research on the advantages and trade-offs of using characters, instead of tokenized text, as input for deep learning models, has evolved substantially. New token-free models remove the traditional tokenization step; however, their efficiency remains unclear. Moreover, the effect of tokenization is relatively unexplored in sequence tagging tasks. To this end, we investigate the impact of tokenization when extracting information from documents and present a comparative study and analysis of subword-based and character-based models. Specifically, we study Information Extraction (IE) from biomedical texts. The main outcome is twofold: tokenization patterns can introduce inductive bias that results in state-of-the-art performance, and the character-based models produce promising results; thus, transitioning to token-free IE models is feasible."}, "cited_paper_content": {"title": "Arabert: Transformer-Based Model For Arabic Language Understanding", "abstract": "The Arabic language is a morphologically rich language with relatively few resources and a less explored syntax compared to English. Given these limitations, Arabic Natural Language Processing (NLP) tasks like Sentiment Analysis (SA), Named Entity Recognition (NER), and Question Answering (QA), have proven to be very challenging to tackle. 
Recently, with the surge of transformer-based models, language-specific BERT-based models have proven to be very efficient at language understanding, provided they are pre-trained on a very large corpus. Such models were able to set new standards and achieve state-of-the-art results for most NLP tasks. In this paper, we pre-trained BERT specifically for the Arabic language in the pursuit of achieving the same success that BERT did for the English language. The performance of AraBERT is compared to multilingual BERT from Google and other state-of-the-art approaches. The results showed that the newly developed AraBERT achieved state-of-the-art performance on most tested Arabic NLP tasks. The pretrained araBERT models are publicly available on https://github.com/aub-mind/arabert hoping to encourage research and applications for Arabic NLP."}, "keywords": ["word pieces", "entity type"], "citation_intent": "method"} {"citing_id": "2304.02991v1", "cited_id": "1505.04597", "section_title": "Base 2D/3D Architecture", "citation": "The 2D branch processes images to obtain a pixel-wise prediction given x_2D and it consists of a standard 2D U-Net #REFR .", "text_before_citation": ["We build our contributions upon the two independent branches (2D and 3D) architecture proposed in #OTHEREFR ."], "text_after_citation": ["On the other hand, the 3D branch takes point clouds as input to estimate the class of each point of x_3D and it is implemented as a 3D sparse convolutional network #OTHEREFR .", "Thanks to the fact that 2D-3D correspondences are known, 3D points can be projected into the image plane to supervise the 2D branch, as supervision is provided only for the sparse 3D points.", "We denote the 3D semantic labels projected into 2D with the symbol y_{3D\u21922D} .", "As argued by #OTHEREFR , such a design choice allows one to take advantage of the strengths of each input modality, and final predictions can be obtained by averaging the outputs of the two branches to achieve an
effective ensemble.", "In our work, we adopt the same framework, and we give an intuitive explanation of why this design choice is particularly effective."], "citing_paper_content": {"title": "Exploiting The Complementarity Of 2D And 3D Networks To Address Domain-Shift In 3D Semantic Segmentation", "abstract": "2D Receptive Field Re-Projected into 3D Figure 1. 3D (top) and 2D (bottom) networks processing point clouds and images of the same scene extract features that contain complementary information. Indeed, 2D and 3D effective receptive fields [34] centered on a point n focus on different portions of the scene, i.e., 2D or 3D neighborhoods respectively. Thus, corresponding features have different content by construction. We exploit this property to reduce the domain gap in 3D semantic segmentation."}, "cited_paper_content": {"title": "U-Net: Convolutional Networks For Biomedical Image Segmentation", "abstract": "There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. 
The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net ."}, "keywords": ["standard 2D U-Net"], "citation_intent": "method"} {"citing_id": "2303.12711v1", "cited_id": "1610.05683", "section_title": "Sampling From The Von Mises-Fisher Distribution", "citation": "The gradient for this sampling procedure can be computed using the reparameterization trick for acceptance-rejection sampling schemes as proposed by #REFR and further defined for the vMF distribution by .", "text_before_citation": [".", "The next step is to construct a Householder reflection matrix H, defined as H = I \u2212 2hh T , where I is the identity matrix and h = (e 1 \u2212 \u00b5) / \u2016e 1 \u2212 \u00b5\u2016 , with modal vector e 1 = (1, 0, \u2022 \u2022 \u2022 , 0).", "Applying this Householder transform to z' essentially reflects it across the hyperplane that lies between \u00b5 and e 1 , resulting in z = Hz' , a direction vector sampled from the vMF distribution.", "Algorithm 1 vMF Sampling 1: input: dimension m, mean \u00b5, concentration \u03ba 2: Acceptance-rejection sampling: \u03c9 \u223c g(\u03c9 | \u03ba, m) \u221d exp(\u03c9\u03ba)(1 \u2212 \u03c9 2 ) (m\u22123)/2 3: Sample v from the uniform distribution on the sphere: v \u223c U (S m\u22122 ) 4: Householder transform: z' \u2190 (\u03c9; \u221a 1 \u2212 \u03c9 2 v) , H \u2190 Householder (e 1 , \u00b5) 5: return : Hz'"], "text_after_citation": [], "citing_paper_content": {"title": "Geometry-Aware Latent Representation Learning For Modeling Disease Progression Of Barrett'S Esophagus", "abstract": "First and foremost, I would like to thank my supervisors, Erik Bekkers, Onno de Boer and Sybren Meijer. Erik for his valuable supervision and feedback, and for always supplying me with many interesting research ideas. His insights introduced me to many new ideas that have also fueled my enthusiasm for research in general. 
Furthermore, Onno and Sybren for introducing me to the field of histopathology and offering me an interesting look into the medical world. Working with you all made me see how technology and AI can directly impact clinical practice, and affirmed for me the importance of the medical applications of my work. I would also like to thank Sharvaree Vadgama for our weekly paper meetings at the beginning of this project, which greatly helped me in learning the technical preliminaries necessary to conduct this study. Moreover, I of course want to thank my family for supporting me unconditionally. And finally, I thank my friends from the former master AI room, who were always there struggling with me during this period and who made working on this thesis an almost enjoyable experience."}, "cited_paper_content": {"title": "Reparameterization Gradients Through Acceptance-Rejection Sampling Algorithms", "abstract": "Variational inference using the reparameterization trick has enabled large-scale approximate Bayesian inference in complex probabilistic models, leveraging stochastic optimization to sidestep intractable expectations. The reparameterization trick is applicable when we can simulate a random variable by applying a differentiable deterministic function on an auxiliary random variable whose distribution is fixed. For many distributions of interest (such as the gamma or Dirichlet), simulation of random variables relies on acceptance-rejection sampling. The discontinuity introduced by the accept-reject step means that standard reparameterization tricks are not applicable. We propose a new method that lets us leverage reparameterization gradients even when variables are outputs of a acceptance-rejection sampling algorithm. Our approach enables reparameterization on a larger class of variational distributions. In several studies of real and synthetic data, we show that the variance of the estimator of the gradient is significantly lower than other state-of-the-art methods. 
This leads to faster convergence of stochastic gradient variational inference."}, "keywords": ["reparameterization trick", "acceptance-rejection sampling schemes"], "citation_intent": "method"} {"citing_id": "2303.03382v1", "cited_id": "1906.02107", "section_title": "Why Should We Care About Threshold Networks?", "citation": "Specifically, these networks have significantly lower memory footprint, less computational complexity, and consume less energy #REFR .", "text_before_citation": ["Neural networks with threshold activations are highly desirable due to the following reasons:", "\u2022 Since the threshold activation (1) is restricted to take values in {0, s}, threshold neural network models are far more suitable for hardware implementations #OTHEREFR ."], "text_after_citation": ["\u2022 Modern neural networks have an extremely large number of full-precision trainable parameters, so that several computational barriers emerge during hardware implementations. One approach to mitigate these issues is reducing the network size by grouping the parameters via a hash function #OTHEREFR .", "(Figure caption residue: comparison with the non-convex training heuristic STE; markers indicate the time taken to solve the convex programs; for non-convex STE, the training is repeated with 5 different initializations; in each case, the convex training algorithms achieve a lower objective than all the non-convex heuristics, see Appendix B.5 for details.)"], "citing_paper_content": {"title": "Globally Optimal Training Of Neural Networks With Threshold Activation Functions", "abstract": "Threshold activation functions are highly preferable in neural networks due to their efficiency in hardware implementations. Moreover, their mode of operation is more interpretable and resembles that of biological neurons. 
However, traditional gradient based algorithms such as Gradient Descent cannot be used to train the parameters of neural networks with threshold activations since the activation function has zero gradient except at a single non-differentiable point. To this end, we study weight decay regularized training problems of deep neural networks with threshold activations. We first show that regularized deep threshold network training problems can be equivalently formulated as a standard convex optimization problem, which parallels the LASSO method, provided that the last hidden layer width exceeds a certain threshold. We also derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network. We corroborate our theoretical results with various numerical experiments."}, "cited_paper_content": {"title": "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization", "abstract": "Optimization of Binarized Neural Networks (BNNs) currently relies on real-valued latent weights to accumulate small update steps. In this paper, we argue that these latent weights cannot be treated analogously to weights in real-valued networks. Instead their main role is to provide inertia during training. We interpret current methods in terms of inertia and provide novel insights into the optimization of BNNs. We subsequently introduce the first optimizer specifically designed for BNNs, Binary Optimizer (Bop), and demonstrate its performance on CIFAR-10 and ImageNet. 
Together, the redefinition of latent weights as inertia and the introduction of Bop enable a better understanding of BNN optimization and open up the way for further improvements in training methodologies for BNNs."}, "keywords": ["networks", "less computational complexity"], "citation_intent": "background"} {"citing_id": "2303.05496v1", "cited_id": "1703.06103", "section_title": "Real-World Knowledge Graph Reasoning", "citation": "We also include Hit@10 as an additional metric following knowledge graph embedding literature and GraIL #REFR .", "text_before_citation": ["SpaLoc also has comparable performance with KE methods on the FB15K-237 datasets, outperforming GraIL by a large margin.", "Comparing SpaLoc with node embedding-based methods (TransE) and GNN-based methods (GraIL) that only consider binary edges, we see that our hyperedge-based model enables better relation prediction that requires reasoning about other entities.", "The necessity of hyperedges is further supported by Appendix E, where we show that setting the maximum arity of SpaLoc to 2 (i.e., removing hyperedges) significantly degrades the performance.", "Notably, in contrast to other methods for the transductive setting that store entity embeddings for all knowledge graph nodes, SpaLoc directly uses the inductive learning setting.", "That is, SpaLoc does not store entity information about each knowledge graph node. Table 5 : Results of transductive link prediction on real-world knowledge graphs."], "text_after_citation": ["The scores and standard errors are reported in appendix Table 8.", "SpaLoc performs pretty well in the classification setting and ranking setting when negative candidate sets are small (Table 5 ).", "However, we find inductive reasoning methods, including SpaLoc, GraIL #OTHEREFR , and TACT #OTHEREFR , perform poorly in the traditional knowledge graph ranking setting #OTHEREFR (i.e., ranking the positive triplets against negative triplets with all replacements of head/tail entities).", "The ranking 
scores are close to zero when the negative candidate sets are large.", "We attribute this phenomenon to the fact that the inductive models do not store entity information."], "citing_paper_content": {"title": "Sparse And Local Networks For Hypergraph Reasoning", "abstract": "Reasoning about the relationships between entities from input facts (e.g., whether Ari is a grandparent of Charlie) generally requires explicit consideration of other entities that are not mentioned in the query (e.g., the parents of Charlie). In this paper, we present an approach for learning to solve problems of this kind in large, real-world domains, using sparse and local hypergraph neural networks (SpaLoc). SpaLoc is motivated by two observations from traditional logic-based reasoning: relational inferences usually apply locally (i.e., involve only a small number of individuals), and relations are usually sparse (i.e., only hold for a small percentage of tuples in a domain). We exploit these properties to make learning and inference efficient in very large domains by (1) using a sparse tensor representation for hypergraph neural networks, (2) applying a sparsification loss during training to encourage sparse representations, and (3) subsampling based on a novel information sufficiency-based sampling process during training. SpaLoc achieves state-of-the-art performance on several real-world, large-scale knowledge graph reasoning benchmarks, and is the first framework for applying hypergraph neural networks on real-world knowledge graphs with more than 10k nodes."}, "cited_paper_content": {"title": "Modeling Relational Data With Graph Convolutional Networks", "abstract": "Knowledge bases play a crucial role in many applications, for example question answering and information retrieval. Despite the great effort invested in creating and maintaining them, even the largest representatives (e.g., Yago, DBPedia or Wikidata) are highly incomplete. 
We introduce relational graph convolutional networks (R-GCNs) and apply them to two standard knowledge base completion tasks: link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing attributes of entities). R-GCNs are a generalization of graph convolutional networks, a recent class of neural networks operating on graphs, and are developed specifically to deal with highly multi-relational data, characteristic of realistic knowledge bases. Our methods achieve competitive performance on standard benchmarks for both tasks, demonstrating especially promising results on the challenging FB15k-237 subset of Freebase."}, "keywords": ["knowledge graph"], "citation_intent": "method"} {"citing_id": "2303.10766v1", "cited_id": "1504.00325", "section_title": "Introduction", "citation": "The experimental results on the standard MSCOCO image captioning dataset #REFR show an improved or comparable performance compared with several state-of-the-art methods in fair conditions.", "text_before_citation": ["This allows the captioning system to exploit both spatial and semantic features simultaneously, and makes end-to-end training of the system possible.", "Finally, a multi-modal reward function is proposed for DRL in a two-phase supervised-reinforced training process of the captioning system.", "This reward function consists of CIDEr, as the language reward, and an image-caption similarity as vision reward.", "The language reward measures the similarity between the generated and ground truth captions.", "The vision reward measures the similarity between the generated caption and the image contents, obtained from an embedding network for mapping both caption and image into a common latent space."], "text_after_citation": ["Detailed experimental analyses reveal the effectiveness of the visual relationships information on improving the performance of the captioning system based on common evaluation metrics.", 
"Moreover, the results indicate that using the proposed multi-modal reward for DRL with SCST algorithm outperforms the original version of this algorithm.", "In summary, the main contributions of this study are as follows:", "\u2022 Scene graph-based visual relationship triplets are introduced as high-level semantic features to improve the image captioning results.", "\u2022 An AoA-based deep neural architecture is proposed for fusion of visual relationships and spatial features extracted from images in the decoder of the captioning system."], "citing_paper_content": {"title": "Multi-Modal Reward For Visual Relationships-Based Image Captioning", "abstract": "Deep neural networks have achieved promising results in automatic image captioning due to their effective representation learning and context-based content generation capabilities. As a prominent type of deep features used in many of the recent image captioning methods, the well-known bottomup features provide a detailed representation of different objects of the image in comparison with the feature maps directly extracted from the raw image. However, the lack of high-level semantic information about the relationships between these objects is an important drawback of bottom-up features, despite their expensive and resource-demanding extraction procedure. To take advantage of visual relationships in caption generation, this paper proposes a deep neural network architecture for image captioning based on fusing the visual relationships information extracted from an image's scene graph with the spatial feature maps of the image. A multi-modal reward function is then introduced for deep reinforcement learning of the proposed network using a combination of language and vision similarities in a common embedding space. The results of extensive experimentation on the MSCOCO dataset show the effectiveness of using visual relationships in the proposed captioning method. 
Moreover, the results clearly indicate that the proposed multi-modal reward in deep reinforcement learning leads to better model optimization, outperforming several state-of-the-art image captioning algorithms, while using light and easy to extract image features. A detailed experimental study of the components constituting the proposed method is also presented."}, "cited_paper_content": {"title": "Microsoft Coco Captions: Data Collection And Evaluation Server", "abstract": "In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided."}, "keywords": ["captioning dataset"], "citation_intent": "result"} {"citing_id": "2303.17597v2", "cited_id": "1807.01697", "section_title": "Evaluation Metrics", "citation": "We follow #REFR and use the mean CE (mCE) as the primary metric in comparing models' robustness.", "text_before_citation": ["Corruption Error (CE)."], "text_after_citation": ["To normalize the severity effects, we choose Cen-terPoint #OTHEREFR and MinkUNet #OTHEREFR as the baseline models for the 3D detectors and segmentors, respectively. The CE and mCE scores are calculated as follows:", "EQUATION", "where Acc i,l denotes task-specific accuracy scores, i.e., mIoU, AP, NDS, or APH(L2), on corruption type i at severity level l.", "N = 8 is the total number of corruption types. 
Resilience Rate (RR).", "We define mean RR (mRR) as the relative robustness indicator for measuring how much accuracy a model can retain when evaluated on the corruption sets. The RR and mRR scores are calculated as follows."], "citing_paper_content": {"title": "Robo3D: Towards Robust And Reliable 3D Perception Against Corruptions", "abstract": "Figure 1: Taxonomy of the Robo3D benchmark. We simulate eight corruption types from three categories: 1) Severe weather conditions, such as fog, rain, and snow; 2) External disturbances that are caused by motion blur or result in the missing of LiDAR beams; and 3) Internal sensor failure, including LiDAR crosstalk, possible incomplete echo, and cross-sensor scenarios. Each corruption is further split into three levels (light, moderate, and heavy) based on its severity."}, "cited_paper_content": {"title": "Benchmarking Neural Network Robustness To Common Corruptions And Surface Variations", "abstract": "In this paper we establish rigorous benchmarks for image classifier robustness. Our first benchmark, ImageNet-C, standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications. Then we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations. Unlike recent robustness research, this benchmark evaluates performance on common corruptions and perturbations not worst-case adversarial perturbations. We find that there are negligible changes in relative corruption robustness from AlexNet classifiers to ResNet classifiers. Afterward we discover ways to enhance corruption and perturbation robustness. We even find that a bypassed adversarial defense provides substantial common perturbation robustness. 
Together our benchmarks may aid future work toward networks that robustly generalize."}, "keywords": ["models' robustness"], "citation_intent": "method"} {"citing_id": "2303.12818v1", "cited_id": "1809.00846", "section_title": "Regularization And Batchnorm", "citation": "The algorithm in #REFR produces accuracies that are comparable to BatchNorm, but unlike the case with BatchNorm, they found that introducing dropout layers improved performance further.", "text_before_citation": ["In the 2018 paper #OTHEREFR , population statistics were used instead of batch statistics, and a regularization term was included for the parameter.", "In these experiments, it was noted that for batches of size larger than 32, the population statistics function as well as the batch statistics.", "It was also noted that BatchNorm introduces Gaussian noise into the mean and variance parameters."], "text_after_citation": ["They also state that BatchNorm has very similar effects to norm regularization where the p-norm is defined as", "\u2016x\u2016 p = ( \u2211 i=1 n |x i | p ) 1/p .", "If norm regularization were sufficient, there would be no need for an optimization such as BatchNorm, since regularization is less computationally intensive.", "However, this regularization claim is not consistent with other empirical studies, such as that presented in the 2018 paper #OTHEREFR ."], "citing_paper_content": {"title": "An Empirical Analysis Of The Shift And Scale Parameters In Batchnorm", "abstract": "Batch Normalization (BatchNorm) is a technique that improves the training of deep neural networks, especially Convolutional Neural Networks (CNN). It has been empirically demonstrated that BatchNorm increases performance, stability, and accuracy, although the reasons for such improvements are unclear. BatchNorm includes a normalization step as well as trainable shift and scale parameters. 
In this paper, we empirically examine the relative contribution to the success of BatchNorm of the normalization step, as compared to the re-parameterization via shifting and scaling. To conduct our experiments, we implement two new optimizers in PyTorch, namely, a version of BatchNorm that we refer to as AffineLayer, which includes the re-parameterization step without normalization, and a version with just the normalization step, that we call BatchNorm-minus. We compare the performance of our AffineLayer and BatchNorm-minus implementations to standard BatchNorm, and we also compare these to the case where no batch normalization is used. We experiment with four ResNet architectures (ResNet18, ResNet34, ResNet50, and ResNet101) over a standard image dataset and multiple batch sizes. Among other findings, we provide empirical evidence that the success of BatchNorm may derive primarily from improved weight initialization."}, "cited_paper_content": {"title": "Towards Understanding Regularization In Batch Normalization", "abstract": "Batch Normalization (BN) improves both convergence and generalization in training neural networks. This work understands these phenomena theoretically. We analyze BN by using a basic block of neural networks, consisting of a kernel layer, a BN layer, and a nonlinear activation function. This basic network helps us understand the impacts of BN in three aspects. First, by viewing BN as an implicit regularizer, BN can be decomposed into population normalization (PN) and gamma decay as an explicit regularization. Second, learning dynamics of BN and the regularization show that training converged with large maximum and effective learning rate. Third, generalization of BN is explored by using statistical mechanics. 
Experiments demonstrate that BN in convolutional neural networks shares the same traits of regularization as the above analyses."}, "keywords": ["BatchNorm"], "citation_intent": "result"} {"citing_id": "2303.15904v1", "cited_id": "1905.04804", "section_title": "Arxiv:2303.15904V1 [Cs.Cv] 28 Mar 2023", "citation": "Results are reported using ResNet-50 as backbone on the YTVIS 2019 #REFR benchmark. Video Mask: using YTVIS video mask labels.", "text_before_citation": ["To further enforce temporal consistency through the video clip, TK-Loss is employed in a cyclic manner instead of using dense frame-wise connections. This greatly reduces memory cost with negligible performance drop.", "We extensively evaluate our MaskFreeVIS on four large-scale VIS benchmarks, i.e., YouTube-VIS 2019/2021 #OTHEREFR , OVIS #OTHEREFR , and BDD100K MOTS #OTHEREFR .", "MaskFreeVIS achieves competitive VIS performance without using any video masks or even image mask labels on all datasets.", "Validated on various methods and backbones, MaskFreeVIS achieves 91.25% of the performance of its fully-supervised counterparts, even outperforming a few recent fully-supervised methods #OTHEREFR on the popular YTVIS benchmark.", "Our simple yet effective design greatly narrows the performance gap between weakly-supervised and fully-supervised video instance segmentation. Table 1 . Mask annotation requirement for state-of-the-art VIS methods."], "text_after_citation": ["Image Mask: using COCO #OTHEREFR image mask labels for image-based pretraining.", "Pseudo Video: using Pseudo Videos from COCO images for joint training #OTHEREFR . MaskFreeVIS achieves 91.5% (42.5 vs.", "46.4) of its fully-supervised baseline performance (Mask2Former) without using any masks during training.", "It further demonstrates that expensive video masks, or even image masks, are not necessary for training high-performing VIS models.", "Our contributions are summarized as follows: (i) To utilize temporal information, we develop a new parameter-free Temporal KNN-patch Loss, which leverages temporal mask consistency using unsupervised one-to-k patch correspondence. We extensively analyze the TK-Loss through ablative experiments."], "citing_paper_content": {"title": "Mask-Free Video Instance Segmentation", "abstract": "Figure 1. Video instance segmentation (VIS) results of our MaskFreeVIS, trained without using any video or image mask annotation. By achieving a remarkable 42.5% mask AP on the YouTube-VIS val dataset, with a ResNet-50 backbone, our approach demonstrates that high-performing VIS can be learned even without any mask annotations."}, "cited_paper_content": {"title": "Video Instance Segmentation", "abstract": "In this paper we present a new computer vision task, named video instance segmentation. The goal of this new task is simultaneous detection, segmentation and tracking of instances in videos. In other words, it is the first time that the image instance segmentation problem is extended to the video domain. To facilitate research on this new task, we propose a large-scale benchmark called YouTube-VIS, which consists of 2883 high-resolution YouTube videos, a 40-category label set and 131k high-quality instance masks. In addition, we propose a novel algorithm called MaskTrack R-CNN for this task. Our new method introduces a new tracking branch to Mask R-CNN to jointly perform the detection, segmentation and tracking tasks simultaneously. Finally, we evaluate the proposed method and several strong baselines on our new dataset. Experimental results clearly demonstrate the advantages of the proposed algorithm and reveal insight for future improvement. 
We believe the video instance segmentation task will motivate the community along the line of research for video understanding."}, "keywords": ["Video Mask"], "citation_intent": "method"} {"citing_id": "2304.11033v1", "cited_id": "1901.03206", "section_title": "Mutability By", "citation": "Worse yet, the protocol introduces an additional issue in that it requires a majority of miners to act faithfully and actually perform the (legally mandated) mutations-something that it cannot guarantee by design #REFR .", "text_before_citation": ["create and formally prove an editable blockchain protocol #OTHEREFR .", "While any user can propose edits, the protocol ensures consensus-based voting on the proposals to prevent arbitrary edits.", "This also means that no trusted third party is required.", "The protocol is compatible with any consensus mechanism and even offers accountability of the performed edits.", "#OTHEREFR While this solution removes the need for a trusted third party, it does not solve the other issue of redactable and forgetting blockchains: every individual node in the network still needs to be trusted, as mutations are published as chain updates as well."], "text_after_citation": ["In contrast to that, our solution functions even with of adversarial network participants, as noted above.", "Furthermore, we do not depend on the honesty of the miners and, better still, do not require any changes of the blockchain software."], "citing_paper_content": {"title": "Decentralized Inverse Transparency With Blockchain", "abstract": "Employee data can be used to facilitate work, but their misusage may pose risks for individuals. Inverse transparency therefore aims to track all usages of personal data, allowing individuals to monitor them to ensure accountability for potential misusage. This necessitates a trusted log to establish an agreed-upon and non-repudiable timeline of events. 
The unique properties of blockchain facilitate this by providing immutability and availability. For power asymmetric environments such as the workplace, permissionless blockchain is especially beneficial as no trusted third party is required. Yet, two issues remain: (1) In a decentralized environment, no arbiter can facilitate and attest to data exchanges. Simple peer-to-peer sharing of data, conversely, lacks the required non-repudiation. (2) With data governed by privacy legislation such as the GDPR, the core advantage of immutability becomes a liability. After a rightful request, an individual's personal data need to be rectified or deleted, which is impossible in an immutable blockchain. To solve these issues, we present Kovacs, a decentralized data exchange and usage logging system for inverse transparency built on blockchain. Its new-usage protocol ensures non-repudiation, and therefore accountability, for inverse transparency. Its one-time pseudonym generation algorithm guarantees unlinkability and enables proof of ownership, which allows data subjects to exercise their legal rights regarding their personal data. With our implementation, we show the viability of our solution. The decentralized communication impacts performance and scalability, but exchange duration and storage size are still reasonable. More importantly, the provided information security meets high requirements. We conclude that Kovacs realizes decentralized inverse transparency through secure and GDPR-compliant use of permissionless blockchain. 
CCS Concepts: \u2022 Computer systems organization \u2192 Peer-to-peer architectures; \u2022 Security and privacy \u2192 Distributed systems security; Privacy-preserving protocols; Cryptography."}, "cited_paper_content": {"title": "Redactable Blockchain In The Permissionless Setting", "abstract": "Bitcoin is an immutable permissionless blockchain system that has been extensively used as a public bulletin board by many different applications that heavily rely on its immutability. However, Bitcoin's immutability is not without its fair share of demerits. Interpol exposed the existence of harmful and potentially illegal documents, images and links in the Bitcoin blockchain, and since then there have been several qualitative and quantitative analyses on the types of data currently residing in the Bitcoin blockchain. Although there is a lot of attention on blockchains, surprisingly the previous solutions proposed for data redaction in the permissionless setting are far from feasible, and require additional trust assumptions. Hence, the problem of harmful data still poses a huge challenge for law enforcement agencies like Interpol (Tziakouris, IEEE SP). Our protocol uses consensus-based voting on redaction proposals: if a redaction gathers enough votes, the operation is performed on the chain. As an extra feature, our protocol offers public verifiability and accountability for the redacted chain. Moreover, we provide formal security definitions and proofs showing that our protocol is secure against redactions that were not agreed by consensus. 
Additionally, we show the viability of our approach with a proof-of-concept implementation that shows only a tiny overhead in the chain validation of our protocol when compared to an immutable one."}, "keywords": ["protocol"], "citation_intent": "background"} {"citing_id": "2303.01925v1", "cited_id": "1211.0358", "section_title": "Discussion And Limitations", "citation": "Another is to extend our framework and instead represent H with a deep GP prior #REFR .", "text_before_citation": ["The performance of the HGP on task 2 is somewhat underwhelming, especially for larger K, relative to NODE.", "We believe some of this effect can be attributed to our choice of GP prior.", "We use a stationary GP prior, which is likely non-optimal for most Hamiltonian systems, which are typically nonstationary.", "The assumption of stationarity likely leads to the poor generalisation at new phase space points, since the GP prior will revert to zero-mean functions there, which is not the case for the NN models.", "One option to improve the suitability of the GP prior is via the use of specifically designed kernels to represent symmetries in the system #OTHEREFR , or kernel structure learning #OTHEREFR ."], "text_after_citation": ["Deep GPs are adept at modelling complex, nonstationary functions, and so would be well suited to the task.", "Using the SVI scheme proposed by #OTHEREFR would make integration of deep GPs into our framework relatively straightforward, and would be interesting to undertake as part of future work.", "Control.", "The present method has significant potential at improving Bayesian online #OTHEREFR or policy-based RL (Yildiz et al., 2021) by incorporating Hamiltonian inductive biases.", "This requires expanding the model towards Hamiltonian systems with external forces, which relax the energy conservation assumption."], "citing_paper_content": {"title": "Learning Energy Conserving Dynamics Efficiently With Hamiltonian Gaussian Processes", "abstract": 
"Hamiltonian mechanics is one of the cornerstones of natural sciences. Recently there has been significant interest in learning Hamiltonian systems in a free-form way directly from trajectory data. Previous methods have tackled the problem of learning from many short, low-noise trajectories, but learning from a small number of long, noisy trajectories, whilst accounting for model uncertainty has not been addressed. In this work, we present a Gaussian process model for Hamiltonian systems with efficient decoupled parameterisation, and introduce an energy-conserving shooting method that allows robust inference from both short and long trajectories. We demonstrate the method's success in learning Hamiltonian systems in various data settings."}, "cited_paper_content": {"title": "Deep Gaussian Processes", "abstract": "In this paper we introduce deep Gaussian process (GP) models. Deep GPs are a deep belief network based on Gaussian process mappings. The data is modeled as the output of a multivariate GP. The inputs to that Gaussian process are then governed by another GP. A single layer model is equivalent to a standard GP or the GP latent variable model (GP-LVM). We perform inference in the model by approximate variational marginalization. This results in a strict lower bound on the marginal likelihood of the model which we use for model selection (number of layers and nodes per layer). Deep belief networks are typically applied to relatively large data sets using stochastic gradient descent for optimization. Our fully Bayesian treatment allows for the application of deep models even when data is scarce. 
Model selection by our variational bound shows that a five-layer hierarchy is justified even when modelling a digit data set containing only 150 examples."}, "keywords": ["deep GP"], "citation_intent": "method"} {"citing_id": "2303.10538v1", "cited_id": "1803.08475", "section_title": "Running Time Discussion", "citation": "As discussed in #REFR , running time is important but hard to compare since it is affected by many factors.", "text_before_citation": [], "text_after_citation": ["We report the clock time for solving all the test instances in Table 1 .", "For the UTSP (our method) and the state-of-the-art learning-based method Att-GCRN #OTHEREFR , we run the search algorithm on exactly the same environment (one Intel Xeon Gold 6326) for a fair comparison.", "For the other baselines, we directly refer to the results from #OTHEREFR .", "So the times there are only for indicative purposes, since the computing hardware is not the same."], "citing_paper_content": {"title": "Unsupervised Learning For Solving The Travelling Salesman Problem", "abstract": "We propose UTSP, an unsupervised learning (UL) framework for solving the Travelling Salesman Problem (TSP). We train a Graph Neural Network (GNN) using a surrogate loss. The GNN outputs a heat map representing the probability for each edge to be part of the optimal path. We then apply local search to generate our final prediction based on the heat map. Our loss function consists of two parts: one pushes the model to find the shortest path and the other serves as a surrogate for the constraint that the route should form a Hamiltonian Cycle. Experimental results show that UTSP outperforms the existing data-driven TSP heuristics.
Our approach is parameter efficient as well as data efficient: the model takes \u223c 10% of the number of parameters and \u223c 0.2% of training samples compared with reinforcement learning or supervised learning methods."}, "cited_paper_content": {"title": "Attention, Learn To Solve Routing Problems!", "abstract": "The recently presented idea to learn heuristics for combinatorial optimization problems is promising as it can save costly development. However, to push this idea towards practical implementation, we need better models and better ways of training. We contribute in both directions: we propose a model based on attention layers with benefits over the Pointer Network and we show how to train this model using REINFORCE with a simple baseline based on a deterministic greedy rollout, which we find is more efficient than using a value function. We significantly improve over recent learned heuristics for the Travelling Salesman Problem (TSP), getting close to optimal results for problems up to 100 nodes. 
With the same hyperparameters, we learn strong heuristics for two variants of the Vehicle Routing Problem (VRP), the Orienteering Problem (OP) and (a stochastic variant of) the Prize Collecting TSP (PCTSP), outperforming a wide range of baselines and getting results close to highly optimized and specialized algorithms."}, "keywords": ["running time"], "citation_intent": "background"} {"citing_id": "2304.08994v1", "cited_id": "1804.01654", "section_title": "Introduction", "citation": "This bounding box is subdivided and serves as initial mesh, which is refined by an iterative mesh refinement as proposed in #REFR .", "text_before_citation": ["Transportation logistics and warehousing are a central part of every supply chain and play an important strategic role in the Industry 4.0 era #OTHEREFR .", "However, several challenges need to be faced by companies working in the logistics sector: clients demand cheaper, faster and more precisely scheduled deliveries while, at the same time, cities and highways are congested and environmental concerns are of rising importance.", "Figure 1. We take an RGB image as input and use Cube R-CNN's Cube Head #OTHEREFR to estimate a 3D bounding box."], "text_after_citation": ["For training and evaluation we present Parcel3D, a novel dataset of normal and damaged parcels with full 3D annotations.", "To tackle these challenges, process automation has huge potential #OTHEREFR .", "Key processes for automation in logistics are identification, digital measurement, damage detection and tampering recognition of packaging units, all of which we work towards with the approach presented in this work.
Identification is necessary for process documentation and parcel tracking.", "Damage and tampering detection can be utilized to increase the safety and security along the supply chain #OTHEREFR ."], "citing_paper_content": {"title": "Parcel3D: Shape Reconstruction From Single Rgb Images For Applications In Transportation Logistics", "abstract": "We focus on enabling damage and tampering detection in logistics and tackle the problem of 3D shape reconstruction of potentially damaged parcels. As input we utilize single RGB images, which corresponds to use-cases where only simple handheld devices are available, e.g. for postmen during delivery or clients on delivery. We present a novel synthetic dataset, named Parcel3D, that is based on the Google Scanned Objects (GSO) dataset and consists of more than 13,000 images of parcels with full 3D annotations. The dataset contains intact, i.e. cuboid-shaped, parcels and damaged parcels, which were generated in simulations. We work towards detecting mishandling of parcels by presenting a novel architecture called CubeRefine R-CNN, which combines estimating a 3D bounding box with an iterative mesh refinement. We benchmark our approach on Parcel3D and an existing dataset of cuboid-shaped parcels in real-world scenarios. Our results show that, while training on Parcel3D enables transfer to the real world, enabling reliable deployment in real-world scenarios is still challenging. CubeRefine R-CNN yields competitive performance in terms of Mesh AP and is the only model that directly enables deformation assessment by 3D mesh comparison and tampering detection by comparing viewpoint invariant parcel side surface representations. Dataset and code are available at https://a-nau.github.io/parcel3d."}, "cited_paper_content": {"title": "Pixel2Mesh: Generating 3D Mesh Models From Single Rgb Images", "abstract": "We propose an end-to-end deep learning architecture that produces a 3D shape in triangular mesh from a single color image.
Limited by the nature of deep neural networks, previous methods usually represent a 3D shape in volume or point cloud, and it is non-trivial to convert them to the more ready-to-use mesh model. Unlike the existing methods, our network represents 3D mesh in a graph-based convolutional neural network and produces correct geometry by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image. We adopt a coarse-to-fine strategy to make the whole deformation procedure stable, and define various mesh-related losses to capture properties of different levels to guarantee visually appealing and physically accurate 3D geometry. Extensive experiments show that our method not only qualitatively produces mesh models with better details, but also achieves higher 3D shape estimation accuracy compared to the state-of-the-art."}, "keywords": ["iterative mesh refinement"], "citation_intent": "method"} {"citing_id": "2304.14122v1", "cited_id": "1903.12395", "section_title": "C. Comparison With State-Of-The-Arts", "citation": "Compared with recent DIL #REFR , our method still achieves better results on MARS and DukeMCMT-VID.", "text_before_citation": ["Meanwhile, with pyramid spatial-temporal learning, PSTA #OTHEREFR gains a remarkable 91.5% and 98.3% Rank-1 accuracy on MARS and DukeMCMT-VID datasets.", "Different from PSTA, we propose a novel spatial-temporal complementary learning framework to obtain richer video representations.", "Thus, we obtain better performance than PSTA on MARS and DukeMCMT-VID, which validates the effectiveness of our proposed method.", "When compared with some methods that utilize Transformer, our method also shows great superiority in performance.", "For example, compared with STT #OTHEREFR , our method has 1.2% and 3.6% gains in mAP and Rank-1 accuracy."], "text_after_citation": ["Besides, TMT #OTHEREFR introduces a multi-view Transformer to extract comprehensive video representations.
However, our method still performs better than TMT.", "We note that all these methods add Transformer layers after CNN backbones to obtain enhanced representations.", "Different from them, we deeply couple CNNs and Transformers, and propose a complementary content attention for spatial complementary learning.", "Besides, the hierarchical temporal aggregation is leveraged to progressively integrate temporal information.", "In this way, we utilize two kinds of typical visual features from the same videos and achieve spatial-temporal complementary learning for more informative representations."], "citing_paper_content": {"title": "Deeply-Coupled Convolution-Transformer With Spatial-Temporal Complementary Learning For Video-Based Person Re-Identification", "abstract": "Advanced deep Convolutional Neural Networks (CNNs) have shown great success in video-based person Re-Identification (Re-ID). However, they usually focus on the most obvious regions of persons with a limited global representation ability. Recently, it has been witnessed that Transformers explore the inter-patch relations with global observations for performance improvements. In this work, we take both sides and propose a novel spatial-temporal complementary learning framework named Deeply-Coupled Convolution-Transformer (DCCT) for high-performance video-based person Re-ID. Firstly, we couple CNNs and Transformers to extract two kinds of visual features and experimentally verify their complementarity. Further, in the spatial dimension, we propose a Complementary Content Attention (CCA) to take advantage of the coupled structure and guide independent features for spatial complementary learning. In the temporal dimension, a Hierarchical Temporal Aggregation (HTA) is proposed to progressively capture the inter-frame dependencies and encode temporal information. Besides, a gated attention is utilized to deliver aggregated temporal information into the CNN and Transformer branches for temporal complementary learning.
Finally, we introduce a self-distillation training strategy to transfer the superior spatial-temporal knowledge to backbone networks for higher accuracy and efficiency. In this way, two kinds of typical features from the same videos are integrated for more informative representations. Extensive experiments on four public Re-ID benchmarks demonstrate that our framework attains better performance than most state-of-the-art methods."}, "cited_paper_content": {"title": "Few-Shot Deep Adversarial Learning For Video-Based Person Re-Identification", "abstract": "Video-based person re-identification (re-ID) refers to matching people across camera views from arbitrary unaligned video footage. Existing methods rely on supervision signals to optimise a projected space under which the distances between inter/intra-videos are maximised/minimised. However, this demands exhaustively labelling people across camera views, rendering them unable to be scaled in large networked cameras. Also, it is noticed that learning effective video representations with view invariance is not explicitly addressed, for which features exhibit different distributions otherwise. Thus, matching videos for person re-ID demands flexible models to capture the dynamics in time-series observations and learn view-invariant representations with access to limited labeled training samples. In this paper, we propose a novel few-shot deep learning approach to video-based person re-ID, to learn comparable representations that are discriminative and view-invariant. The proposed method is developed on the variational recurrent neural networks (VRNNs) and trained adversarially to produce latent variables with temporal dependencies that are highly discriminative yet view-invariant in matching persons.
Through extensive experiments conducted on three benchmark datasets, we empirically show the capability of our method in creating view-invariant temporal features and the state-of-the-art performance achieved by our method."}, "keywords": ["DukeMCMT-VID", "recent DIL"], "citation_intent": "result"} {"citing_id": "2304.10060v1", "cited_id": "1805.10074", "section_title": "Main Results", "citation": "Recently, under the same capacity assumption, the work #REFR studies the mean square convergence of the averaging online estimator with least square loss and multiple passes.", "text_before_citation": ["A small value of \u03b2 implies a fast polynomially decaying rate at least achieved by the eigenvalues {\u03bb_k}_{k\u2208N}.", "One can refer to Theorem 5 in our recent work #OTHEREFR , which provides a characterization of the relationship between the capacity assumption (9) in this paper and the decaying rate of integral operator eigenvalues.", "If the eigenvalues decay exponentially, the index \u03b2 can be arbitrarily close to zero.", "Polynomial decay of the eigenvalues is typical for Sobolev smooth kernels on domains in Euclidean spaces, and the parameter \u03b2 depends on the smoothness of the kernel K.", "Exponentially decaying eigenvalues of the integral operator, in contrast, are typical for analytic kernels on domains in Euclidean spaces."], "text_after_citation": ["Under the regularity condition (7) on the target function f_\u03c1 and the capacity assumption (9) on the hypothesis space H_K , we obtain the following sharp capacity dependent results for strong convergence in H_K .", "Theorem 2. Let {f_t}_{t=1}^{T+1} be defined by algorithm #OTHEREFR with a windowing function W .", "Assume that W satisfies (2) and (3) with some p > 0, the regularity condition #OTHEREFR holds with r > 1/2 and the capacity assumption (9) holds with 0 < \u03b2 < 1.
Choose step size \u03b7 = (1/\u03b7_0) T^{(1\u22122r\u2212\u03b2)/(2r+\u03b2)} with \u03b7_0 \u2265 max{C_W \u03ba^2, 1/e + 2\u03ba^2 W\u2032_+(0)^2}."], "citing_paper_content": {"title": "Optimality Of Robust Online Learning \u2020", "abstract": "In this paper, we study an online learning algorithm with a robust loss function L_\u03c3 for regression over a reproducing kernel Hilbert space (RKHS). The loss function L_\u03c3 involving a scaling parameter \u03c3 > 0 can cover a wide range of commonly used robust losses. The proposed algorithm is then a robust alternative for online least squares regression aiming to estimate the conditional mean function. For properly chosen \u03c3 and step size, we show that the last iterate of this online algorithm can achieve optimal capacity independent convergence in the mean square distance. Moreover, if additional information on the underlying function space is known, we also establish optimal capacity dependent rates for strong convergence in RKHS. To the best of our knowledge, both of the two results are new to the existing literature of online learning."}, "cited_paper_content": {"title": "Statistical Optimality Of Stochastic Gradient Descent On Hard Learning Problems Through Multiple Passes", "abstract": "We consider stochastic gradient descent (SGD) for least-squares regression with potentially several passes over the data. While several passes have been widely reported to perform practically better in terms of predictive performance on unseen data, the existing theoretical analysis of SGD suggests that a single pass is statistically optimal. While this is true for low-dimensional easy problems, we show that for hard problems, multiple passes lead to statistically optimal predictions while single pass does not; we also show that in these hard models, the optimal number of passes over the data increases with sample size.
In order to define the notion of hardness and show that our predictive performances are optimal, we consider potentially infinite-dimensional models and notions typically associated to kernel methods, namely, the decay of eigenvalues of the covariance matrix of the features and the complexity of the optimal predictor as measured through the covariance matrix. We illustrate our results on synthetic experiments with non-linear kernel methods and on a classical benchmark with a linear model."}, "keywords": ["online estimator", "least square loss"], "citation_intent": "background"} {"citing_id": "2304.11514v1", "cited_id": "1602.03602", "section_title": "I. Introduction", "citation": "Unmanned aerial vehicle (UAV), thanks to its low cost, high flexibility and high probability of line-of-propagation (LoP) links, has become an attractive means of improving air-ground transmission quality #REFR .", "text_before_citation": ["Nowadays, wireless networks serve a wide range of civilian and military applications and have become an essential part of our routine #OTHEREFR , #OTHEREFR .", "Improving the performance of wireless communication has become a popular research topic."], "text_after_citation": ["In general, UAVs can not only act as base stations #OTHEREFR , relays #OTHEREFR , etc, to transmit or forward signals, but also for data collection #OTHEREFR and positioning #OTHEREFR . 
Currently, UAV transmission technology has been widely explored.", "For instance, the authors in #OTHEREFR considered a cellular network deployment in which UAV-to-UAV launch-receive pairs used the same spectrum as the uplink of cellular ground users, and the performances of underlay and overlay spectrum sharing mechanisms were analyzed and compared.", "In #OTHEREFR , to enhance the throughput of a single-cell multiuser orthogonal frequency division multiple access network with a single UAV while guaranteeing user fairness, an efficient method was proposed that outperformed the random and cellular schemes in terms of user fairness and sum rate.", "UAV-enabled relay under malicious jamming was considered in #OTHEREFR , and the successive convex approximation (SCA) algorithm was employed to maximize the end-to-end throughput.", "However, UAV networks also present some challenges that affect their performance."], "citing_paper_content": {"title": "Joint Beamforming And Phase Shift Design For Hybrid-Irs-And-Uav-Aided Directional Modulation Network", "abstract": "Recently, intelligent reflecting surface (IRS) and unmanned aerial vehicle (UAV) have been introduced into wireless communication systems to enhance the performance of air-ground transmission. To make a good balance between performance, cost, and power consumption, a hybrid-IRS-and-UAV-assisted directional modulation (DM) network is investigated in this paper, where the hybrid IRS consists of passive and active reflecting elements. To maximize the achievable rate, three optimization algorithms, called maximum signal-to-noise ratio (SNR)-fractional programming (FP) (Max-SNR-FP), maximum SNR-equal amplitude reflecting (EAR) (Max-SNR-EAR), and maximum SNR-majorization-minimization (MM) (Max-SNR-MM), are proposed to jointly design the beamforming vector and phase shift matrix (PSM) of hybrid IRS by alternately optimizing one while fixing the other.
The Max-SNR-FP method employs the successive convex approximation and FP methods to derive the beamforming vector and hybrid IRS PSM. The Max-SNR-EAR method adopts the maximum signal-to-leakage-noise ratio method and the criteria of phase alignment and EAR to design them. In addition, the Max-SNR-MM method utilizes the MM criterion to derive the IRS PSM. Simulation results show that the rates harvested by the proposed three methods are slightly lower than those of active IRS with higher power consumption, which are 35 percent higher than those of no IRS and random phase IRS, while passive IRS achieves only about a 17 percent rate gain over the latter. Moreover, compared to Max-SNR-FP, the proposed Max-SNR-EAR and Max-SNR-MM methods achieve an obvious complexity reduction at the price of a slight performance loss."}, "cited_paper_content": {"title": "Wireless Communications With Unmanned Aerial Vehicles: Opportunities And Challenges", "abstract": "Wireless communication systems that include unmanned aerial vehicles promise to provide cost-effective wireless connectivity for devices without infrastructure coverage. Compared to terrestrial communications or those based on high-altitude platforms, on-demand wireless systems with low-altitude UAVs are in general faster to deploy, more flexibly reconfigured, and likely to have better communication channels due to the presence of short-range line-of-sight links. However, the utilization of highly mobile and energy-constrained UAVs for wireless communications also introduces many new challenges.
In this article, we provide an overview of UAV-aided wireless communications by introducing the basic networking architecture and main channel characteristics, highlighting the key design considerations as well as the new opportunities to be exploited."}, "keywords": ["Unmanned aerial vehicle"], "citation_intent": "background"} {"citing_id": "2303.15916v2", "cited_id": "1802.06739", "section_title": "Experiment 5: T-Sne Visualization Of Generated Data", "citation": "Similar to the previous dataset, the DPWGAN #REFR produces a distribution with an offset, resulting in poor performance.", "text_before_citation": ["between the private and public data, which is reflected in the accuracy, as the public classifier cannot perform well on the private dataset. Furthermore, the two classes are not separated well.", "In contrast to that, Figure 1d shows that the generated data of the GSWGAN #OTHEREFR , although it is different from the original distribution, separated the two classes well.", "In addition, this model was able to perform well, although the distribution shows some differences.", "For the accuracy results related to this figure, the reader is referred to Table 2 .", "The last dataset is visualized in Figure 1g and Figure 1h ."], "text_after_citation": ["In contrast to that, the GSWGAN #OTHEREFR generated a dataset that shows a similar distribution within the T-SNE plot, resulting in high performance for the classifier trained on that dataset.", "One general finding across all datasets was that the DPWGAN #OTHEREFR produces data that is less similar to the original data.", "This can be explained by the fact that differential privacy is applied to both the discriminator and generator, resulting in a worse performance of the discriminator.", "This is not the case for the GSWGAN #OTHEREFR as the privacy constraints are only applied to the generator.", "The stronger discriminator seems to improve the quality of the generated data."], "citing_paper_content":
{"title": "From Private To Public: Benchmarking Gans In The Context Of Private Time Series Classification", "abstract": "Deep learning has proven to be successful in various domains and for different tasks. However, when it comes to private data, several restrictions make it difficult to use deep learning approaches in these application fields. Recent approaches try to generate data privately instead of applying a privacy-preserving mechanism directly on top of the classifier. The solution is to create public data from private data in a manner that preserves the privacy of the data. In this work, two very prominent GAN-based architectures were evaluated in the context of private time series classification. In contrast to previous work, mostly limited to the image domain, the scope of this benchmark was the time series domain. The experiments show that especially GSWGAN performs well across a variety of public datasets, outperforming the competitor DPWGAN. An analysis of the generated datasets further validates the superiority of GSWGAN in the context of time series generation."}, "cited_paper_content": {"title": "Differentially Private Generative Adversarial Network", "abstract": "Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interest due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information.
To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level."}, "keywords": ["previous dataset", "DPWGAN"], "citation_intent": "result"} {"citing_id": "2304.08979v1", "cited_id": "1809.02789", "section_title": "Evaluation Dataset", "citation": "The answers (yes/no) are marked by human annotators if certain Wikipedia pages contain sufficient information to address the questions. \u2022 OpenbookQA (OQA) #REFR is a multiple-choice reasoning dataset. The questions are derived from 1,326 core science facts.", "text_before_citation": ["We employ ten widely used benchmark QA datasets in our study, including BoolQ #OTHEREFR , OpenbookQA (OQA) #OTHEREFR , RACE #OTHEREFR , ARC #OTHEREFR , CommonsenseQA (CQA) #OTHEREFR , SQuAD1 #OTHEREFR , SQuAD2 #OTHEREFR , NarrativeQA (NQA) #OTHEREFR , ELI5 #OTHEREFR , and TruthfulQA (TQA) #OTHEREFR .", "These datasets encompass a broad range of QA capabilities, such as reading comprehension (BoolQ, SQuAD1/2, RACE), reasoning (OQA, ARC), commonsense (CQA), full document comprehension (NQA, ELI5), and truthfulness (TQA).", "Furthermore, they comprise all four QA tasks #OTHEREFR , including yes/no (BoolQ), multiple-choice (OQA, RACE, ARC, CQA), extractive (SQuAD 1/2), and abstractive tasks (NQA, ELI5, TQA).", "They thus offer a solid foundation to comprehensively evaluate ChatGPT's reliability in various real-world QA scenarios.", "Their details are outlined below and summarized in Table 2. \u2022 BoolQ #OTHEREFR is a yes/no reading comprehension dataset.
The questions are derived from aggregated Google searches."], "text_after_citation": ["The answers consist of 4 candidates, of which only one is correct, requiring reasoning between questions and the given science facts and common knowledge.", "\u2022 RACE #OTHEREFR is a multiple-choice reading comprehension dataset.", "The questions are derived from English exams for Chinese students.", "The answers include 4 candidates, of which only one is correct, requiring reading comprehension of English passages.", "\u2022 ARC #OTHEREFR is a multiple-choice reasoning dataset."], "citing_paper_content": {"title": "In Chatgpt We Trust? Measuring And Characterizing The Reliability Of Chatgpt", "abstract": "The way users acquire information is undergoing a paradigm shift with the advent of ChatGPT. Unlike conventional search engines, ChatGPT retrieves knowledge from the model itself and generates answers for users. ChatGPT's impressive question-answering (QA) capability has attracted more than 100 million users within a short period of time but has also raised concerns regarding its reliability. In this paper, we perform the first large-scale measurement of ChatGPT's reliability in the generic QA scenario with a carefully curated set of 5,695 questions across ten datasets and eight domains. We find that ChatGPT's reliability varies across different domains, especially underperforming in law and science questions. We also demonstrate that system roles, originally designed by OpenAI to allow users to steer ChatGPT's behavior, can impact ChatGPT's reliability. We further show that ChatGPT is vulnerable to adversarial examples, and even a single character change can negatively affect its reliability in certain cases. We believe that our study provides valuable insights into ChatGPT's reliability and underscores the need for strengthening the reliability and security of large language models (LLMs)."}, "cited_paper_content": {"title": "Can A Suit Of Armor Conduct Electricity? 
A New Dataset For Open Book Question Answering", "abstract": "We present a new kind of question answering dataset, OpenBookQA, modeled after open book exams for assessing human understanding of a subject. The open book that comes with our questions is a set of 1329 elementary level science facts. Roughly 6000 questions probe an understanding of these facts and their application to novel situations. This requires combining an open book fact (e.g., metals conduct electricity) with broad common knowledge (e.g., a suit of armor is made of metal) obtained from other sources. While existing QA datasets over documents or knowledge bases, being generally self-contained, focus on linguistic understanding, OpenBookQA probes a deeper understanding of both the topic---in the context of common knowledge---and the language it is expressed in. Human performance on OpenBookQA is close to 92%, but many state-of-the-art pre-trained QA methods perform surprisingly poorly, worse than several simple neural baselines we develop. Our oracle experiments designed to circumvent the knowledge retrieval bottleneck demonstrate the value of both the open book and additional facts. 
We leave it as a challenge to solve the retrieval problem in this multi-hop setting and to close the large gap to human performance."}, "keywords": ["multiple-choice reasoning dataset"], "citation_intent": "method"} {"citing_id": "2304.11546v1", "cited_id": "1904.02689", "section_title": "Guide Line", "citation": "Then, the relation between response range and grazing angle could be described in Equation #REFR .", "text_before_citation": ["However, in some cases it is observed that the response is very scattered, and the response peak is often indistinguishable from other non-response points.", "Figure 3 helps to understand the relation between the response range and grazing angle.", "The black dotted line denotes the detected vectorized lane (hereinafter referred to as \"vector line\"), and the green dotted lines alongside indicate the range of pixels that generate a response on the heatmap.", "The greener the background is, the more positive the sample is. d is the radius of the range.", "The tangents of two guide lines are illustrated in orange and magenta, with grazing angles \u03b11 and \u03b12, respectively."], "text_after_citation": ["The bigger the angle (close to 90\u00b0), the tighter the response range.", "As the grazing angle gets smaller, the range scatters rapidly. This reflects the results in Figure 1a .", "Therefore, the guide line principle is to \"reduce the number of lanes of small grazing angles\".", "EQUATION"], "citing_paper_content": {"title": "Canet: Curved Guide Line Network With Adaptive Decoder For Lane Detection", "abstract": "Lane detection is challenging due to the complicated on-road scenarios and line deformation from different camera perspectives. Lots of solutions were proposed, but cannot deal with \"corner lanes\" well. To address this problem, this paper proposes a new top-down deep learning lane detection approach, CANET.
A lane instance is first responded by the heatmap on the U-shaped \"curved guide line\" at global semantic level, thus the corresponding features of each lane are aggregated at the response point. Then CANET obtains the heatmap response of the entire lane through conditional convolution, and finally decodes the point set to describe lanes via adaptive decoder. The prototype is implemented with Pytorch, and evaluated against 3 well-known datasets extensively. The experimental results show that CANET reaches SOTA in different metrics. Our code will be released soon."}, "cited_paper_content": {"title": "Yolact: Real-Time Instance Segmentation", "abstract": "We present a simple, fully-convolutional model for real-time instance segmentation that achieves 29.8 mAP on MS COCO at 33.5 fps evaluated on a single Titan Xp, which is significantly faster than any previous competitive approach. Moreover, we obtain this result after training on only one GPU. We accomplish this by breaking instance segmentation into two parallel subtasks: (1) generating a set of prototype masks and (2) predicting per-instance mask coefficients. Then we produce instance masks by linearly combining the prototypes with the mask coefficients. We find that because this process doesn't depend on repooling, this approach produces very high-quality masks and exhibits temporal stability for free. Furthermore, we analyze the emergent behavior of our prototypes and show they learn to localize instances on their own in a translation variant manner, despite being fully-convolutional. Finally, we also propose Fast NMS, a drop-in 12 ms faster replacement for standard NMS that only has a marginal performance penalty."}, "keywords": ["grazing angle", "Equation"], "citation_intent": "background"} {"citing_id": "2303.16129v1", "cited_id": "1909.11875", "section_title": "F. 
Security And Privacy", "citation": "Fortunately, several privacy-preserving distributed learning frameworks, such as FL #REFR , have been proposed to empower privacy-preserving AIGC model fine-tuning and inference at mobile AIGC networks.", "text_before_citation": ["For example, AI-generated text can be used by malicious users to complete phishing emails, thus compromising the security and privacy of normal users #OTHEREFR .", "To ensure secure AIGC services, providers must choose trusted AIGC solutions and train AI models in a secure manner while providing secure hints and answers to AIGC service users.", "1) Privacy-preserving AIGC Service Provisioning: During the lifecycle of providing AIGC services, privacy information in large-scale datasets and user requests needs to be kept secure to prevent privacy breaches.", "In mobile AIGC networks, the generation and storage of data for AIGC model training occur at edge servers and mobile devices #OTHEREFR .", "Unlike resourceful cloud data centers, edge and mobile layers have limited defense capacities against various attacks."], "text_after_citation": ["In preserving user privacy in AIGC networks, FL is a distributed ML approach that allows users to transmit local models instead of data during model training #OTHEREFR - #OTHEREFR . 
Specifically, as illustrated in Fig.", "20 , there are two major approaches to employing FL in AIGC networks: \u2022 Secure aggregation: While FL is being learned, the mobile devices send local updates to edge servers for global aggregation.", "During global aggregation, authenticated encryption allows the use of secret sharing mechanisms.", "\u2022 Differential privacy: Differential privacy can prevent FL servers from identifying the owners of a local update.", "Differential privacy is similar to secure aggregation in that it prevents FL servers from identifying owners of local updates."], "citing_paper_content": {"title": "Unleashing The Power Of Edge-Cloud Generative Ai In Mobile Networks: A Survey Of Aigc Services", "abstract": "Artificial Intelligence-Generated Content (AIGC) is an automated method for generating, manipulating, and modifying valuable and diverse data using AI algorithms creatively. This survey paper focuses on the deployment of AIGC applications, e.g., ChatGPT and Dall-E, at mobile edge networks, namely mobile AIGC networks, that provide personalized and customized AIGC services in real time while maintaining user privacy. We begin by introducing the background and fundamentals of generative models and the lifecycle of AIGC services at mobile AIGC networks, which includes data collection, training, finetuning, inference, and product management. We then discuss the collaborative cloud-edge-mobile infrastructure and technologies required to support AIGC services and enable users to access AIGC at mobile edge networks. Furthermore, we explore AIGC-driven creative applications and use cases for mobile AIGC networks. Additionally, we discuss the implementation, security, and privacy challenges of deploying mobile AIGC networks. 
Finally, we highlight some future research directions and open issues for the full realization of mobile AIGC networks."}, "cited_paper_content": {"title": "Federated Learning In Mobile Edge Networks: A Comprehensive Survey", "abstract": "In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications. Traditional Machine Learning (ML) approaches require the data to be centralized in a cloud server. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. 
Finally, we discuss the important challenges, open issues and future research directions in FL."}, "keywords": ["privacy-preserving AIGC model", "mobile AIGC networks"], "citation_intent": "background"} {"citing_id": "2303.12408v2", "cited_id": "2003.08934", "section_title": "D. Inward-Facing Dataset", "citation": "EgoNeRF shows comparable results in the Synthetic-NeRF dataset #REFR , which contains 8 synthetic objects.", "text_before_citation": ["The spherical grid of EgoNeRF aligns nicely with outward-facing scenes, not inward-facing images of typical NeRF settings.", "We optionally report results from widely-used datasets for novel view synthesis in Tab. 3. Table 5 . Out of distribution test.", "r is the distance between the center of training camera trajectory and position of test view."], "text_after_citation": ["In mip-NeRF 360 dataset #OTHEREFR , which contains inward-facing objects but has unbounded background scenes, EgoNeRF outperforms other baselines except mip-NeRF 360."], "citing_paper_content": {"title": "Balanced Spherical Grid For Egocentric View Synthesis", "abstract": "Figure 1. We propose a practical solution to reconstruct large-scale scenes from a short egocentric video. (a) Our scalable capturing setup observes the holistic environment by casually swiping a selfie stick with an omnidirectional camera attached. (b) Then we optimize our balanced spherical feature grids which are tailored for the outward-looking setup."}, "cited_paper_content": {"title": "Nerf: Representing Scenes As Neural Radiance Fields For View Synthesis", "abstract": "We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. 
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\\theta, \\phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons."}, "keywords": ["Synthetic-NeRF dataset", "8 synthetic objects"], "citation_intent": "result"} {"citing_id": "2303.06507v1", "cited_id": "1708.00171", "section_title": "D. Learning A Varying Noise Model", "citation": "They show that an Inverse-Wishart distribution can be applied as the prior on the covariance, which results in a Student's t distribution over the noise #REFR .", "text_before_citation": ["We first review their methodology for uncorrelated noise prediction (i.e., \u03b8(\u03c8 * |D) = W(\u03c8 * |D)), then present how we adapt their methodology. 
Vega-Brown et al.", "#OTHEREFR formulate an estimator for \u03b8* given a target feature, \u03c8*.", "A joint posterior between \u03b8* and \u03b8_{1:N} is formed using Bayes' rule and \u03b8_{1:N} is then marginalized out, resulting in", "p(\u03b8*|\u03c8*, D) \u221d \u220f_{i=1}^{N} p(y_i|x_i, \u03b8*, \u03c8_i, \u03c8*) p(\u03b8*|\u03c8*), (15)", "where we note the uncorrelated factorization of the likelihood terms."], "text_after_citation": ["In our work we will apply an uninformative prior, p(\u03b8*|\u03c8*) \u221d 1, which is an adaptation of their prior work, CELLO #OTHEREFR , #OTHEREFR .", "The key innovation of Vega-Brown et al.", "#OTHEREFR is their choice in modelling the likelihood in (15) (referred to as the extended likelihood), as", "EQUATION", "where h(\u03c8_i, \u03c8*) is a kernel function."], "citing_paper_content": {"title": "Towards Consistent Batch State Estimation Using A Time-Correlated Measurement Noise Model", "abstract": "In this paper, we present an algorithm for learning time-correlated measurement covariances for application in batch state estimation. We parameterize the inverse measurement covariance matrix to be block-banded, which conveniently factorizes and results in a computationally efficient approach for correlating measurements across the entire trajectory. We train our covariance model through supervised learning using the groundtruth trajectory. In applications where the measurements are time-correlated, we demonstrate improved performance in both the mean posterior estimate and the covariance (i.e., improved estimator consistency). We use an experimental dataset collected using a mobile robot equipped with a laser rangefinder to demonstrate the improvement in performance. 
We also verify estimator consistency in a controlled simulation using a statistical test over several trials."}, "cited_paper_content": {"title": "Probe-Gk: Predictive Robust Estimation Using Generalized Kernels", "abstract": "Many algorithms in computer vision and robotics make strong assumptions about uncertainty, and rely on the validity of these assumptions to produce accurate and consistent state estimates. In practice, dynamic environments may degrade sensor performance in predictable ways that cannot be captured with static uncertainty parameters. In this paper, we employ fast nonparametric Bayesian inference techniques to more accurately model sensor uncertainty. By setting a prior on observation uncertainty, we derive a predictive robust estimator, and show how our model can be learned from sample images, both with and without knowledge of the motion used to generate the data. We validate our approach through Monte Carlo simulations, and report significant improvements in localization accuracy relative to a fixed noise model in several settings, including on synthetic data, the KITTI dataset, and our own experimental platform."}, "keywords": ["covariance"], "citation_intent": "background"} {"citing_id": "2303.13396v1", "cited_id": "1905.05055", "section_title": "C. 
Thresholds Used In Metrics", "citation": "For a given algorithm, varying the threshold values can result in distinct performance profiles, e.g., a precision-recall curve, and several thresholds may be used together for the purposes of evaluation and comparison, as is common practice in the object detection literature #REFR .", "text_before_citation": ["We provide Segment-to-text IoU (IoU st ) scores with several \u03c4 CLIP threshold values in Figure 12 and Table 6 .", "Selecting the threshold \u03c4 CLIP is more challenging, since there is no established consensus or user studies to rely on.", "Figure 13 shows histograms of CLIP similarity scores between groundtruth image segments and their corresponding ground-truth labels in Pascal Context and Pascal VOC datasets.", "Given the distributions, we select \u03c4 CLIP = 0.1 to be on the safe side to report Segment-to-text IoU scores in the main experiment.", "It is important to note that for our zero-guidance segmentation problem, the thresholds \u03c4 CLIP and \u03c4 SBERT are used in the label reassignment verification process (Section 4.2), which is part of the evaluation not the segmentation algorithm itself."], "text_after_citation": ["IoU threshold (\u03c4 IoU ).", "We use \u03c4 IoU = 0.5, which is commonly used in object detection tasks to determine if a predicted bounding box is \"correct\" compared to the ground"], "citing_paper_content": {"title": "Zero-Guidance Segmentation Using Zero Segment Labels", "abstract": "CLIP has enabled new and exciting joint vision-language applications, one of which is open-vocabulary segmentation, which can locate any segment given an arbitrary text query. In our research, we ask whether it is possible to discover semantic segments without any user guidance in the form of text queries or predefined classes, and label them using natural language automatically? 
We propose a novel problem zero-guidance segmentation and the first baseline that leverages two pre-trained generalist models, DINO and CLIP, to solve this problem without any finetuning or segmentation dataset. The general idea is to first segment an image into small over-segments, encode them into CLIP's visual-language space, translate them into text labels, and merge semantically similar segments together. The key challenge, however, is how to encode a visual segment into a segment-specific embedding that balances global and local context information, both useful for recognition. Our main contribution is a novel attention-masking technique that balances the two contexts by analyzing the attention layers inside CLIP. We also introduce several metrics for the evaluation of this new task. With CLIP's innate knowledge, our method can precisely locate the Mona Lisa painting among a museum crowd (Figure 1"}, "cited_paper_content": {"title": "Object Detection In 20 Years: A Survey", "abstract": "Object detection, as of one the most fundamental and challenging problems in computer vision, has received great attention in recent years. Its development in the past two decades can be regarded as an epitome of computer vision history. If we think of today's object detection as a technical aesthetics under the power of deep learning, then turning back the clock 20 years we would witness the wisdom of cold weapon era. This paper extensively reviews 400+ papers of object detection in the light of its technical evolution, spanning over a quarter-century's time (from the 1990s to 2019). A number of topics have been covered in this paper, including the milestone detectors in history, detection datasets, metrics, fundamental building blocks of the detection system, speed up techniques, and the recent state of the art detection methods. 
This paper also reviews some important detection applications, such as pedestrian detection, face detection, text detection, etc, and makes an in-deep analysis of their challenges as well as technical improvements in recent years."}, "keywords": ["object detection literature"], "citation_intent": "method"} {"citing_id": "2303.17111v1", "cited_id": "1804.07723", "section_title": "Classification Module", "citation": "To process the masked image, we resort to the partial convolution operator (PConv) #REFR , whose convolution kernel is renormalized to be applied only on unmasked pixels.", "text_before_citation": ["Partial Convolution.", "Unlike prior work #OTHEREFR whose ultimate goal is to localize the forgery mask, we reuse the forgery mask to help HiFi-Net learn the optimal feature for classifying fine-grained forged attributes.", "Specifically, we generate a binary maskM, then overlayM with the input image as X M to obtain the masked image", "X mask \u2208 R 3\u00d7W0\u00d7H0 ."], "text_after_citation": ["The idea is to have feature maps only describe pixels at the manipulated region.", "PConv acts as conditioned dot product for each kernel, conditioned on the mask. Denoting W par as the convolution kernel, we have:", "EQUATION", "where the dot product is \"renormalized\" to account for zeros in the mask.", "At different layers, we update and propa- gate the new maskM according to the following equation:"], "citing_paper_content": {"title": "Hierarchical Fine-Grained Image Forgery Detection And Localization", "abstract": "Differences in forgery attributes of images generated in CNN-synthesized and image-editing domains are large, and such differences make a unified image forgery detection and localization (IFDL) challenging. To this end, we present a hierarchical fine-grained formulation for IFDL representation learning. Specifically, we first represent forgery attributes of a manipulated image with multiple labels at different levels. 
Then we perform fine-grained classification at these levels using the hierarchical dependency between them. As a result, the algorithm is encouraged to learn both comprehensive features and inherent hierarchical nature of different forgery attributes, thereby improving the IFDL representation. Our proposed IFDL framework contains three components: multi-branch feature extractor, localization and classification modules. Each branch of the feature extractor learns to classify forgery attributes at one level, while localization and classification modules segment the pixel-level forgery region and detect imagelevel forgery, respectively. Lastly, we construct a hierarchical fine-grained dataset to facilitate our study. We demonstrate the effectiveness of our method on 7 different benchmarks, for both tasks of IFDL and forgery attribute classification. Our source code and dataset can be found: github.com/CHELSEA234/HiFi-IFDL."}, "cited_paper_content": {"title": "Image Inpainting For Irregular Holes Using Partial Convolutions", "abstract": "Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, using convolutional filter responses conditioned on both valid pixels as well as the substitute values in the masked holes (typically the mean value). This often leads to artifacts such as color discrepancy and blurriness. Post-processing is usually used to reduce such artifacts, but are expensive and may fail. We propose the use of partial convolutions, where the convolution is masked and renormalized to be conditioned on only valid pixels. We further include a mechanism to automatically generate an updated mask for the next layer as part of the forward pass. Our model outperforms other methods for irregular masks. 
We show qualitative and quantitative comparisons with other methods to validate our approach."}, "keywords": ["unmasked pixels"], "citation_intent": "method"} {"citing_id": "2303.00280v2", "cited_id": "1707.00418", "section_title": "Related Work", "citation": "The authors in #REFR propose DNN architecture for solving the multilabel classification task, which incorporates the construction of label embeddings with feature and label interdependency awareness.", "text_before_citation": ["The work #OTHEREFR examines the same problem statement of multi-label classification in an event stream as we do.", "The authors' model targets capturing temporal and probabilistic dependencies between concurrent event types by encoding historical information with a transformer and then leveraging a conditional mixture of Bernoulli experts.", "Approaches to leveraging label dependencies.", "The authors of #OTHEREFR construct a model called C-Tran for a multi-label image classification task that leverages Transformer architecture that encourages capturing the dependencies among image features and target labels.", "The key idea is to train the model with label masking."], "text_after_citation": ["A label-correlation sensitive loss improves the efficiency of the constructed model.", "Another popular way to consider label relationships is to use Graph Neural Networks as a part of the pipeline.", "Namely, #OTHEREFR captures the correlation between the labels in the task of Multi-Label Text Classification by adopting Graph Attention Network (GAT).", "They predict the final set of labels combining feature vectors from BiLSTM and attended label features from GAT.", "Event sequences processing also tries to derive dependencies between different event types and consider specific attention mechanisms #OTHEREFR ."], "citing_paper_content": {"title": "Label Attention Network For Sequential Multi-Label Classification: You Were Looking At A Wrong Self-Attention", "abstract": "Most of the available 
user information can be represented as a sequence of timestamped events. Each event is assigned a set of categorical labels whose future structure is of great interest. For instance, our goal is to predict a group of items in the next customer's purchase or tomorrow's client transactions. This is a multi-label classification problem for sequential data. Modern approaches focus on transformer architecture for sequential data introducing self-attention for the elements in a sequence. In that case, we take into account events' time interactions but lose information on label inter-dependencies. Motivated by this shortcoming, we propose leveraging a self-attention mechanism over labels preceding the predicted step. As our approach is a Label-Attention NETwork, we call it LANET. Experimental evidence suggests that LANET outperforms the established models' performance and greatly captures interconnections between labels. For example, the micro-AUC of our approach is 0.9536 compared to 0.7501 for a vanilla transformer. We provide an implementation of LANET to facilitate its wider usage."}, "cited_paper_content": {"title": "Learning Deep Latent Spaces For Multi-Label Classification", "abstract": "Multi-label classification is a practical yet challenging task in machine learning related fields, since it requires the prediction of more than one label category for each input instance. We propose a novel deep neural networks (DNN) based model, Canonical Correlated AutoEncoder (C2AE), for solving this task. Aiming at better relating feature and label domain data for improved classification, we uniquely perform joint feature and label embedding by deriving a deep latent space, followed by the introduction of label-correlation sensitive loss function for recovering the predicted label outputs. 
Our C2AE is achieved by integrating the DNN architectures of canonical correlation analysis and autoencoder, which allows end-to-end learning and prediction with the ability to exploit label dependency. Moreover, our C2AE can be easily extended to address the learning problem with missing labels. Our experiments on multiple datasets with different scales confirm the effectiveness and robustness of our proposed method, which is shown to perform favorably against state-of-the-art methods for multi-label classification."}, "keywords": ["multilabel classification task", "label embeddings"], "citation_intent": "background"} {"citing_id": "2304.08376v1", "cited_id": "1503.09016", "section_title": "1", "citation": "We will achieve this goal by designing a method for finding a nontrivial pair of subsets having equal sum and then, like in #REFR , applying the algorithm recursively to obtain 4, 8, 16, etc. disjoint subsets with equal sum.", "text_before_citation": ["5 Zero sum subsequences in Z n p", "In this section, we assume that our input is a sequence of vectors from Z n p .", "We also assume that p is an odd prime as for p = 2 a zero sum subsequence can be obtained from n + 1 vectors in the form of a zero linear combination.", "As subsequences can be represented as subsets of the index set, it will not be too misleading to use the term (sub)set for a (sub)sequence.", "Our strategy will be finding p pairwise disjoint subsets of input vectors having equal sums."], "text_after_citation": ["Note that a pair of disjoint subsets with equal sum can be interpreted as a representation of the zero vector by a linear combination of the input vectors with nonzero coefficients 1 or \u22121 only.", "Based on this, it will be convenient to use the term signed subsets and signed subset sums.", "A signed subset of a set S of vectors is formally a function from S to the set {0, 1, \u22121}.", "The support of such a signed subset is the set of elements on which the function takes nonzero 
values.", "With some sloppiness, we use the term signed subset sum to refer both to the signed subset and to the value of the signed sum."], "citing_paper_content": {"title": "Zero Sum Subsequences And Hidden Subgroups", "abstract": "We propose a method for solving the hidden subgroup problem in nilpotent groups. The main idea is iteratively transforming the hidden subgroup to its images in the quotient groups by the members of a central series, eventually to its image in the commutative quotient of the original group; and then using an abelian hidden subgroup algorithm to determine this image. Knowing this image allows one to descend to a proper subgroup unless the hidden subgroup is the full group. The transformation relies on finding zero sum subsequences of sufficiently large sequences of vectors over finite prime fields. We present a new deterministic polynomial time algorithm for the latter problem in the case when the size of the field is constant. The consequence is a polynomial time exact quantum algorithm for the hidden subgroup problem in nilpotent groups having constant nilpotency class and whose order only have prime factors also bounded by a constant."}, "cited_paper_content": {"title": "On Solving Systems Of Diagonal Polynomial Equations Over Finite Fields", "abstract": "We present an algorithm to solve a system of diagonal polynomial equations over finite fields when the number of variables is greater than some fixed polynomial of the number of equations whose degree depends only on the degree of the polynomial equations. Our algorithm works in time polynomial in the number of equations and the logarithm of the size of the field, whenever the degree of the polynomial equations is constant. 
As a consequence we design polynomial time quantum algorithms for two algebraic hidden structure problems: for the hidden subgroup problem in certain semidirect product p-groups of constant nilpotency class, and for the multi-dimensional univariate hidden polynomial graph problem when the degree of the polynomials is constant."}, "keywords": ["subsets", "algorithm"], "citation_intent": "method"} {"citing_id": "2303.09375v2", "cited_id": "1912.04958", "section_title": "Avatar Generation Model", "citation": "The encoder E is based on the StyleGAN2 #REFR discriminator architecture and compresses the input image I rgb to a vector v of dimension 512.", "text_before_citation": ["This RGB texture allows us to explicitly save information about high-frequency details and original colors, which are hard to preserve when mapping the whole image to a vector of limited dimensionality (as discussed below).", "We additionally apply inpainting of small gaps with averaging neighbor pixels to fill the gaps in T rgb .", "We also save the binary map of sampled pixels to the B smp and the map of the sampled and inpainted pixels to the B fill .", "The main part of the neural texture is T gen .", "It has the number of channels L = 16 and is generated using the encoder-generator architecture T gen = G(E(I rgb ))."], "text_after_citation": ["The generator G( v) has the architecture of the StyleGAN2 generator and converts the vector v into a T gen neural texture with the number of channels L = 16 as in StylePeople #OTHEREFR .", "The final neural texture used in our method has a dimension of 256 \u00d7 256 \u00d7 21 and consists of the concatenation of: the generated texture T gen (256 \u00d7 256 \u00d7 16), the texture T rgb sampled from the RGB image (256 \u00d7 256 \u00d7 3) and the two binary segmentation maps (B smp and B fill ):", "EQUATION", "We note that such an approach with the explicit use of RGB channels as part of the neural texture was originally used in #OTHEREFR .", "We use 
the neural renderer \u03b8(R(F UV , T )) to translate the rasterized image R(F UV , T ) with L channels into I rend output RGB image."], "citing_paper_content": {"title": "Dinar: Diffusion Inpainting Of Neural Textures For One-Shot Human Avatars", "abstract": "We present DINAR, an approach for creating realistic rigged fullbody avatars from single RGB images. Similarly to previous works, our method uses neural textures combined with the SMPL-X body model to achieve photorealistic quality of avatars while keeping them easy to animate and fast to infer. To restore the texture, we use a latent diffusion model and show how such model can be trained in the neural texture space. The use of the diffusion model allows us to realistically reconstruct large unseen regions such as the back of a person given the frontal view. The models in our pipeline are trained using 2D images and videos only. In the experiments, our approach achieves state-of-the-art rendering quality and good generalization to new poses and viewpoints. In particular, the approach improves state-of-the-art on the SnapshotPeople public benchmark."}, "cited_paper_content": {"title": "Analyzing And Improving The Image Quality Of Stylegan", "abstract": "The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. 
We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality."}, "keywords": ["StyleGAN2 discriminator architecture"], "citation_intent": "method"} {"citing_id": "2303.12383v1", "cited_id": "1106.1819", "section_title": "Figure 4: Reusing Subtrees Of D-Dnnf", "citation": "Smoothing As discussed in Section 3.2, computing the cardinalities on an unsmooth d-DNNF causes computational overhead as the algorithm needs to keep track of the set of variables during the traversal #REFR .", "text_before_citation": [], "text_after_citation": ["To ensure smoothness, we add designated nodes for each unsmooth \u2228-node as seen in Figure 3 in a preprocessing step.", "Hereby, for each missing variable v \u2208 missingVariables_\u03a6 in a child subtree \u03a6 of the \u2228-node, we add a disjunction v \u2228 \u00acv resulting in the following smooth disjunction on the right side.", "\\bigvee_{\u03a6 \u2208 children} \u03a6 \u2192 \\bigvee_{\u03a6 \u2208 children} (\u03a6 \u2227 \\bigwedge_{v \u2208 missingVariables_\u03a6} (v \u2228 \u00acv))", "The added subformulas are tautologies as v \u2228 \u00acv \u2261 \u22a4 holds for every variable v.", "Hence, when adding smoothing nodes to a subformula \u03a6, the result \u03a6 \u2227 \u22a4 is equivalent to \u03a6."], "citing_paper_content": {"title": "Exploiting D-Dnnfs For Repetitive Counting Queries On Feature Models", "abstract": "Feature models are commonly used to specify the valid configurations of a product line. In industry, feature models are often complex due to a large number of features and constraints. Thus, a multitude of automated analyses have been proposed. 
Many of those rely on computing the number of valid configurations which typically depends on solving a #SAT problem, a computationally expensive operation. Further, most counting-based analyses require numerous #SAT computations on the same feature model. In particular, many analyses depend on multiple computations for evaluating the number of valid configurations that include certain features or conform to partial configurations. Instead of using expensive repetitive computations on highly similar formulas, we aim to improve the performance by reusing knowledge between these computations. In this work, we are the first to propose reusing d-DNNFs for performing efficient repetitive queries on features and partial configurations. Our empirical evaluation shows that our approach is up to 8,300 times faster (99.99% CPU-time saved) than the state of the art of repetitively invoking #SAT solvers. Applying our tool ddnnife reduces runtimes from days to minutes compared to using #SAT solvers."}, "cited_paper_content": {"title": "A Knowledge Compilation Map", "abstract": "We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. We then provide a knowledge compilation map, which analyzes a large number of existing target compilation languages according to their succinctness and their polytime transformations and queries. We argue that such analysis is necessary for placing new compilation approaches within the context of existing ones. 
We also go beyond classical, flat target compilation languages based on CNF and DNF, and consider a richer, nested class based on directed acyclic graphs (such as OBDDs), which we show to include a relatively large number of target compilation languages."}, "keywords": ["computational overhead", "traversal"], "citation_intent": "background"} {"citing_id": "2304.01008v1", "cited_id": "1804.03641", "section_title": "Matching Prediction", "citation": "Owens and Efros #REFR also utilize this pretext task, but instead take temporal video frames and audio as input.", "text_before_citation": ["EQUATION", "where the pseudo-label z i is a one-hot vector representing whether the inputs are matched.", "Matching prediction is widely used for modeling audiovisual correspondence (AVC).", "AVC was introduced by L 3 -Net #OTHEREFR , which uses a fused representation from audio and video to make a binary prediction of whether the audioimage pair is from the same video clip.", "This strategy is adopted by AVE-Net #OTHEREFR by only using Euclidean distance alignment without fusion, leading to localization of the object that sounds within an image."], "text_after_citation": ["Also, they sample video frames from the same video to construct unaligned pairs.", "This makes the pretext task more difficult and thus learns betteraligned representation and more accurate localization.", "In order to achieve better audio-visual localization and separation, researchers have proposed more complex pretext tasks.", "One such approach is the mix-and-separate method #OTHEREFR , which combines audio signals from different videos to create an input mixture with known constituent source signals.", "The network is then trained to separate the audio sources based on corresponding video frames by predicting binary spectrogram masks for each sound source."], "citing_paper_content": {"title": "Self-Supervised Multimodal Learning: A Survey", "abstract": "Multimodal learning, which aims to understand and 
analyze information from multiple modalities, has achieved substantial progress in the supervised regime in recent years. However, the heavy dependence on data paired with expensive human annotations impedes scaling up models. Meanwhile, given the availability of large-scale unannotated data in the wild, self-supervised learning has become an attractive strategy to alleviate the annotation bottleneck. Building on these two directions, self-supervised multimodal learning (SSML) provides ways to leverage supervision from raw multimodal data. In this survey, we provide a comprehensive review of the state-of-the-art in SSML, which we categorize along three orthogonal axes: objective functions, data alignment, and model architectures. These axes correspond to the inherent characteristics of self-supervised learning methods and multimodal data. Specifically, we classify training objectives into instance discrimination, clustering, and masked prediction categories. We also discuss multimodal input data pairing and alignment strategies during training. Finally, we review model architectures including the design of encoders, fusion modules, and decoders, which are essential components of SSML methods. We review downstream multimodal application tasks, reporting the concrete performance of the state-of-the-art image-text models and multimodal video models, and also review real-world applications of SSML algorithms in diverse fields such as healthcare, remote sensing, and machine translation. Finally, we discuss challenges and future directions for SSML. 
A collection of related resources can be found at: https://github.com/ys-zong/awesome-self-supervised-multimodal-learning."}, "cited_paper_content": {"title": "Audio-Visual Scene Analysis With Self-Supervised Multisensory Features", "abstract": "The thud of a bouncing ball, the onset of speech as lips open\u2014when visual and audio events occur together, it suggests that there might be a common, underlying event that produced both signals. In this paper, we argue that the visual and audio components of a video signal should be modeled jointly using a fused multisensory representation. We propose to learn such a representation in a self-supervised way, by training a neural network to predict whether video frames and audio are temporally aligned. We use this learned representation for three applications: (a) sound source localization, i.e. visualizing the source of sound in a video; (b) audio-visual action recognition; and (c) on/off-screen audio source separation, e.g. removing the off-screen translator\u2019s voice from a foreign official\u2019s speech. Code, models, and video results are available on our webpage: http://andrewowens.com/multisensory."}, "keywords": ["pretext task", "temporal video frames"], "citation_intent": "background"} {"citing_id": "2304.09793v1", "cited_id": "1610.08336", "section_title": "Camera Poses Estimation", "citation": "We first evaluate the performance of motion-compensation methods on rotational sequences #REFR by measuring the Root Mean Square (RMS) of angular velocity errors, as presented in Tab. 3.", "text_before_citation": [], "text_after_citation": ["CMax #OTHEREFR exhibits good performance for the 3-DoF rotational motion of event cameras, with the lowest time complexity among the evaluated methods.", "DMin #OTHEREFR extends CMax to high-dimensional feature spaces and applies entropy minimization to the projected events, resulting in an improvement in the performance of approximately 20%. 
However, DMin is computationally expensive.", "To address this issue, approximate DMin uses a truncated kernel to balance performance and efficiency.", "ST-PPP #OTHEREFR offers an alternative solution by employing a probabilistic model, achieving the highest performance among the evaluated methods, with a 39% improvement in the shapes sequence.", "Then, we assess the performance of both deep learning and motion-compensation methods on the outdoor day 1 sequence by measuring the APRE, ARRE, and AEE metrics. As shown in Tab."], "citing_paper_content": {"title": "Event-Based Simultaneous Localization And Mapping: A Comprehensive Survey", "abstract": "In recent decades, visual simultaneous localization and mapping (vSLAM) has gained significant interest in both academia and industry. It estimates camera motion and reconstructs the environment concurrently using visual sensors on a moving robot. However, conventional cameras are limited by hardware, including motion blur and low dynamic range, which can negatively impact performance in challenging scenarios like high-speed motion and high dynamic range illumination. Recent studies have demonstrated that event cameras, a new type of bio-inspired visual sensor, offer advantages such as high temporal resolution, dynamic range, low power consumption, and low latency. This paper presents a timely and comprehensive review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks. The review covers the working principle of event cameras and various event representations for preprocessing event data. It also categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods, with detailed discussions and practical guidance for each approach. 
Furthermore, the paper evaluates the state-of-the-art methods on various benchmarks, highlighting current challenges and future opportunities in this emerging research area. A public repository will be maintained to keep track of the rapid developments in this field at https://github.com/kun150kun/ESLAM-survey."}, "cited_paper_content": {"title": "The Event-Camera Dataset And Simulator: Event-Based Data For Pose Estimation, Visual Odometry, And Slam", "abstract": "New vision sensors, such as the dynamic and active-pixel vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array. These sensors have great potential for high-speed robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of event-based sensors: low latency, high temporal resolution, and very high dynamic range. However, new algorithms are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of asynchronous brightness changes (called \u201cevents\u201d) and synchronous grayscale frames. For this purpose, we present and release a collection of datasets captured with a DAVIS in a variety of synthetic and real environments, which we hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision applications. In addition to global-shutter intensity images and asynchronous events, we provide inertial measurements and ground-truth camera poses from a motion-capture system. The latter allows comparing the pose accuracy of ego-motion estimation algorithms quantitatively. All the data are released both as standard text files and binary files (i.e. rosbag). 
This paper provides an overview of the available data and describes a simulator that we release open-source to create synthetic event-camera data."}, "keywords": ["motion-compensation methods"], "citation_intent": "method"} {"citing_id": "2304.11141v1", "cited_id": "1909.06982", "section_title": "Ii. The Proposed H2Tf", "citation": "Here, L(Y, X ) denotes the fidelity term and \u03c8(Z (k) ) represents the low-rank characterization of Z (k) (which denotes the k-th frontal (spatial) slice of Z \u2208 R h\u00d7w\u00d7b #REFR ).", "text_before_citation": ["A. The t-SVD framework", "We first introduce the general formulation of t-SVD.", "Suppose that the noisy HSI Y \u2208 R h\u00d7w\u00d7b admits Y = X + N , where X denotes the clean HSI and N denotes noise.", "To infer the underlying clean HSI X from the observed Y, t-SVD method generally formulates the following model:", "EQUATION"], "text_after_citation": ["\u03c6 \u03b8 (\u2022) : R h\u00d7w\u00d7b \u2192 R h\u00d7w\u00d7b denotes a transform with learnable parameters \u03b8, which transforms the low-rank representation Z into the original domain.", "Sometimes the transform \u03c6 \u03b8 (\u2022) may not be learnable (e.g., the fixed DFT #OTHEREFR ), and in those situations the optimization variable only includes Z.", "The philosophy of the t-SVD model (1) is to minimize the rank in the transformed domain, which can model the implicit low-rankness of HSI.", "There are naturally two key building blocks for exactly modeling the implicit low-rankness, i.e., the selection of the transform \u03c6 \u03b8 (\u2022) and the exact low-rank characterization \u03c8(\u2022) of the transformed frontal slice Z (k) .", "Most t-SVD-based methods focus on the design of different transforms \u03c6 \u03b8 (\u2022) (see examples in #OTHEREFR , #OTHEREFR , #OTHEREFR ), but all of them pay less attention to the exact characterization of the transformed frontal slice."], "citing_paper_content": {"title": "H2Tf For Hyperspectral Image 
Denoising: Where Hierarchical Nonlinear Transform Meets Hierarchical Matrix Factorization", "abstract": "Recently, tensor singular value decomposition (t-SVD) has emerged as a promising tool for hyperspectral image (HSI) processing. In the t-SVD, there are two key building blocks: (i) the low-rank enhanced transform and (ii) the accompanying low-rank characterization of transformed frontal slices. Previous t-SVD methods mainly focus on the developments of (i), while neglecting the other important aspect, i.e., the exact characterization of transformed frontal slices. In this letter, we exploit the potentiality in both building blocks by leveraging the Hierarchical nonlinear transform and the Hierarchical matrix factorization to establish a new Tensor Factorization (termed as H2TF). Compared to shallow counterparts, e.g., low-rank matrix factorization or its convex surrogates, H2TF can better capture complex structures of transformed frontal slices due to its hierarchical modeling abilities. We then suggest the H2TF-based HSI denoising model and develop an alternating direction method of multipliers-based algorithm to address the resultant model. Extensive experiments validate the superiority of our method over state-of-the-art HSI denoising methods."}, "cited_paper_content": {"title": "Framelet Representation Of Tensor Nuclear Norm For Third-Order Tensor Completion", "abstract": "The main aim of this paper is to develop a framelet representation of the tensor nuclear norm for third-order tensor completion. In the literature, the tensor nuclear norm can be computed by using tensor singular value decomposition based on the discrete Fourier transform matrix, and tensor completion can be performed by the minimization of the tensor nuclear norm which is the relaxation of the sum of matrix ranks from all Fourier transformed matrix frontal slices. 
These Fourier transformed matrix frontal slices are obtained by applying the discrete Fourier transform on the tubes of the original tensor. In this paper, we propose to employ the framelet representation of each tube so that a framelet transformed tensor can be constructed. Because of framelet basis redundancy, the representation of each tube is sparsely represented. When the matrix slices of the original tensor are highly correlated, we expect the corresponding sum of matrix ranks from all framelet transformed matrix frontal slices would be small, and the resulting tensor completion can be performed much better. The proposed minimization model is convex and global minimizers can be obtained. Numerical results on several types of multi-dimensional data (videos, multispectral images, and magnetic resonance imaging data) have tested and shown that the proposed method outperformed the other testing tensor completion methods."}, "keywords": ["k-th frontal (spatial)"], "citation_intent": "background"} {"citing_id": "2304.01823v1", "cited_id": "1511.08777", "section_title": "Finite Presentability Of Minor-Excluded Groups. A Walk In A Graph G Is A Finite Sequence Of Vertices", "citation": "The following result was proved in #REFR Theorem 25] when G is a quasitransitive locally finite planar graph.", "text_before_citation": ["(v k , v k\u22121 . . . , v 2 , v 1 ). If W = (v 1 , . . .", ", v k ) and W \u2032 = (v \u2032 1 , . . .", ", v \u2032 \u2113 ) are two walks such that v k = v \u2032 1 , then their sum is the walk W \u2022 W \u2032 := (v 1 , . . . , v k = v \u2032 1 , . . . 
, v \u2032 \u2113 )", ".", "We will say that a set of closed walks W generates another set of closed walks W \u2032 if every element of W \u2032 can be obtained from elements of W by adding and deleting spurs and repetitions, and performing sums, reflections and rotations."], "text_after_citation": ["We reuse some of the arguments of the proof of #OTHEREFR Proposition 22] and combine them with our structure theorem to extend the result to graphs excluding K \u221e as a minor.", "Theorem 5.3.", "Let G be a locally finite graph excluding K \u221e as a minor and let \u0393 be a group acting quasi-transitively on G.", "Then the set of closed walks of G admits a \u0393-invariant generating set with finitely many \u0393-orbits.", "Proof."], "citing_paper_content": {"title": "The Structure Of Quasi-Transitive Graphs Avoiding A Minor With Applications To The Domino Problem", "abstract": "An infinite graph is quasi-transitive if its vertex set has finitely many orbits under the action of its automorphism group. In this paper we obtain a structure theorem for locally finite quasi-transitive graphs avoiding a minor, which is reminiscent of the Robertson-Seymour Graph Minor Structure Theorem. We prove that every locally finite quasi-transitive graph G avoiding a minor has a treedecomposition whose torsos are finite or planar; moreover the tree-decomposition is canonical, i.e. invariant under the action of the automorphism group of G. As applications of this result, we prove the following. \u2022 Every locally finite quasi-transitive graph attains its Hadwiger number, that is, if such a graph contains arbitrarily large clique minors, then it contains an infinite clique minor. This extends a result of Thomassen (1992) who proved it in the 4-connected case and suggested that this assumption could be omitted. In particular, this shows that a Cayley graph excludes a finite minor if and only if it avoids the countable clique as a minor. 
\u2022 Locally finite quasi-transitive graphs avoiding a minor are accessible (in the sense of Thomassen and Woess), which extends known results on planar graphs to any proper minor-closed family. \u2022 Minor-excluded finitely generated groups are accessible (in the group-theoretic sense) and finitely presented, which extends classical results on planar groups. \u2022 The domino problem is decidable in a minor-excluded finitely generated group if and only if the group is virtually free, which proves the minor-excluded case of a conjecture of Ballier and Stein (2018)."}, "cited_paper_content": {"title": "Planar Transitive Graphs", "abstract": "We prove that the first homology group of every planar locally finite transitive graph $G$ is finitely generated as an $\\Aut(G)$-module and we prove a similar result for the fundamental group of locally finite planar Cayley graphs. Corollaries of these results include Droms's theorem that planar groups are finitely presented and Dunwoody's theorem that planar locally finite transitive graphs are accessible."}, "keywords": ["planar graph"], "citation_intent": "background"} {"citing_id": "2303.07625v1", "cited_id": "1405.0312", "section_title": "Introduction", "citation": "As a consequence, researchers are forced to leverage synthetic data generated from images (e.g., #REFR ) for transformation learning in deep planar tracking, which may result in inferior performance (Figure 2).", "text_before_citation": ["In particular, several benchmarks (e.g., #OTHEREFR ) have been specially developed for evaluating and comparing different planar trackers, which greatly facilitates related research and progress on this topic.", "Despite this, these benchmarks are severely limited in further pushing the frontier of planar object tracking.", "One of the major issues with existing benchmarks is their relatively small scales.", "Especially, in the deep learning era, to unleash the potential of deep planar tracking, it is desired to have a 
large-scale platform. Nevertheless, as displayed in Fig.", "2 , currently all planar tracking benchmarks consist of less than 300 sequences, which is insufficient for large-scale learning of deep planar tracking."], "text_after_citation": ["Summary of planar object tracking datasets, containing POT-280 #OTHEREFR , POT-210 #OTHEREFR , TMT #OTHEREFR , UCSB #OTHEREFR , Metiao #OTHEREFR , POIC #OTHEREFR , and PlanarTrack.", "The circle diameter is in proportion to the number of frames of a dataset. Our PlanarTrack is the largest among all these benchmarks. due to domain gap between different tasks.", "Besides the small-scale issue, another problem is the less challenging scenarios for planar object tracking.", "Early planar tracking datasets (e.g., #OTHEREFR ) are constructed from the indoor laboratories with simple background, which cannot reflect the diverse and complicated scenarios of real world in performance evaluation.", "To deal with this, recent datasets (e.g., #OTHEREFR ) directly collect videos in the wild."], "citing_paper_content": {"title": "Planartrack: A Large-Scale Challenging Benchmark For Planar Object Tracking", "abstract": "Planar object tracking is a critical computer vision problem and has drawn increasing interest owing to its key roles in robotics, augmented reality, etc. Despite rapid progress, its further development, especially in the deep learning era, is largely hindered due to the lack of large-scale challenging benchmarks. Addressing this, we introduce PlanarTrack, a large-scale challenging planar tracking benchmark. Specifically, PlanarTrack consists of 1,000 videos with more than 490K images. All these videos are collected in complex unconstrained scenarios from the wild, which makes Planar-Track, compared with existing benchmarks, more challenging but realistic for real-world applications. 
To ensure the high-quality annotation, each frame in PlanarTrack is manually labeled using four corners with multiple-round careful inspection and refinement. To our best knowledge, Planar-Track, to date, is the largest and most challenging dataset dedicated to planar object tracking. In order to analyze the proposed PlanarTrack, we evaluate 10 planar trackers and conduct comprehensive comparisons and in-depth analysis. Our results, not surprisingly, demonstrate that current top-performing planar trackers degenerate significantly on the challenging PlanarTrack and more efforts are needed to improve planar tracking in the future. In addition, we further derive a variant named PlanarTrack BB for generic object tracking from PlanarTrack. Our evaluation of 10 excellent generic trackers on PlanarTrack BB manifests that, surprisingly, PlanarTrack BB is even more challenging than several popular generic tracking benchmarks and more attention should be paid to handle such planar objects, though they are rigid. All benchmarks and evaluations will be released at the project webpage. * Equal contributions. \u2020 Corresponding author. (a) Example of generic object tracking with rectangular bounding box"}, "cited_paper_content": {"title": "Microsoft Coco: Common Objects In Context", "abstract": "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. 
With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model."}, "keywords": ["deep planar tracking"], "citation_intent": "background"} {"citing_id": "2303.16043v1", "cited_id": "1912.02814", "section_title": "Other Related Work", "citation": "Similar approaches have been presented for other problems, see e.g., Bamberger et al. #REFR for results on \u2206 + 1 coloring.", "text_before_citation": ["(1) Better randomized constructions for network decomposition have been known.", "In particular, the work of Linial and Saks #OTHEREFR presented an O(log 2 n) round randomized algorithm for computing decompositions with O(log n) colors and O(log n) weak diameter.", "Elkin and Neiman #OTHEREFR imported a parallel algorithm of Miller, Peng and Xu #OTHEREFR into the distributed setting and obtained an O(log 2 n) round randomized algorithm for computing decompositions with O(log n) colors and O(log n) strong diameter.", "(2) The deterministic MIS method described in Section 1.1 for using network decompositions in computing symmetry-breaking problems such as MIS would require large messages, as it gathers the topology of each cluster in a center.", "For MIS, one can work with O(log n)-bit messages, by using a derandomization method of Censor-Hillel, Parter, and Schwartzman #OTHEREFR , and that gives a deterministic algorithm with O(log n)-bit messages and round complexity O(cd)\u2022poly(log n)."], "text_after_citation": ["(3) Ghaffari, Harris, and Kuhn #OTHEREFR and Ghaffari, Kuhn and Maus #OTHEREFR showed that one can get a general derandomization method for the LOCAL model using 
network decompositions.", "This method transforms any poly(log n)-round randomized algorithm for any problem whose solution can be checked in poly(log n) rounds into a deterministic algorithm with round complexity O(cd + t) \u2022 poly(log n), assuming we have a deterministic (c, d) network decomposition algorithm with round complexity t."], "citing_paper_content": {"title": "Faster Deterministic Distributed Mis And Approximate Matching", "abstract": "We present an O(log 2 n) round deterministic distributed algorithm for the maximal independent set problem. By known reductions, this round complexity extends also to maximal matching, \u2206 + 1 vertex coloring, and 2\u2206 \u2212 1 edge coloring. These four problems are among the most central problems in distributed graph algorithms and have been studied extensively for the past four decades. This improved round complexity comes closer to the \u2126(log n) lower bound of maximal independent set and maximal matching [Balliu et al. FOCS '19]. The previous best known deterministic complexity for all of these problems was \u0398(log 3 n). Via the shattering technique, the improvement permeates also to the corresponding randomized complexities, e.g., the new randomized complexity of \u2206 + 1 vertex coloring is now O(log 2 log n) rounds. Our approach is a novel combination of the previously known (and seemingly orthogonal) two methods for developing fast deterministic algorithms for these problems, namely global derandomization via network decomposition (see e.g., [Rozhon, Ghaffari STOC'20; Ghaffari, Grunau, Rozhon SODA'21; Ghaffari et al. SODA'23]) and local rounding of fractional solutions (see e.g., [Fischer DISC'17; Harris FOCS'19; Fischer, Ghaffari, Kuhn FOCS'17; Ghaffari, Kuhn FOCS'21; Faour et al. SODA'23]). 
We consider a relaxation of the classic network decomposition concept, where instead of requiring the clusters in the same block to be non-adjacent, we allow each node to have a small number of neighboring clusters. We also show a deterministic algorithm that computes this relaxed decomposition faster than standard decompositions. We then use this relaxed decomposition to significantly improve the integrality of certain fractional solutions, before handing them to the local rounding procedure that now has to do fewer rounding steps. Randomized algorithms, and the pursuit of deterministic algorithms. In the 1980s, Luby [Lub86] and Alon, Babai, and Itai [ABI86] presented a simple and elegant randomized distributed algorithm that computes an MIS in O(log n) rounds, with high probability 1. Due to known reductions, this MIS algorithm led to O(log n) round randomized algorithms for many other key graph problems, including maximal matching, \u2206 + 1 vertex coloring, and (2\u2206 \u2212 1) edge coloring. These problems are often listed as the four fundamental symmetry-breaking problems in distributed graph algorithms and have a wide range of applications. The O(log n)-round randomized algorithm naturally led the researchers to seek a deterministic distributed algorithm with the same round complexity. In his celebrated work [Lin87, Lin92], Linial asked \"can it [MIS] always be found in polylogarithmic time [deterministically]? 
\" He even added that \"getting a deterministic polylog-time algorithm for MIS seems hard.\" Since then, this became known as Linial's MIS question and turned into one of the research foci in distributed graph algorithms."}, "cited_paper_content": {"title": "Efficient Deterministic Distributed Coloring With Small Bandwidth", "abstract": "We show that the $(degree+1)$-list coloring problem can be solved deterministically in $O(D \\cdot \\log n \\cdot\\log^2\\Delta)$ rounds in the \\CONGEST model, where $D$ is the diameter of the graph, $n$ the number of nodes, and $\\Delta$ the maximum degree. Using the recent polylogarithmic-time deterministic network decomposition algorithm by Rozho\\v{n} and Ghaffari [STOC 2020], this implies the first efficient (i.e., $\\poly\\log n$-time) deterministic \\CONGEST algorithm for the $(\\Delta+1)$-coloring and the $(\\mathit{degree}+1)$-list coloring problem. Previously the best known algorithm required $2^{O(\\sqrt{\\log n})}$ rounds and was not based on network decompositions. Our techniques also lead to deterministic $(\\mathit{degree}+1)$-list coloring algorithms for the congested clique and the massively parallel computation (MPC) model. 
For the congested clique, we obtain an algorithm with time complexity $O(\\log\\Delta\\cdot\\log\\log\\Delta)$; for the MPC model, we obtain algorithms with round complexity $O(\\log^2\\Delta)$ for the linear-memory regime and $O(\\log^2\\Delta + \\log n)$ for the sublinear memory regime."}, "keywords": ["problems"], "citation_intent": "method"} {"citing_id": "2304.12666v1", "cited_id": "1911.04252", "section_title": "Effect Of Pretrained Weight", "citation": "In light of our observations that utilizing pretrained weights for both student and teacher networks leads to an improved performance, we note that the Noisy Student #REFR reported no improvement in performance when initializing both models simultaneously.", "text_before_citation": [], "text_after_citation": ["They claim that this approach may sometimes lead to getting stuck in a local optimum, resulting in inferior performance compared to training the student model from scratch.", "We hypothesize that this contrasting result could be due to the fact that they used the same pretrained weight for both teacher and student.", "Since the student model has already inherited all the knowledge that could be learned from the teacher, distilling knowledge with the teacher may not bring additional performance gains.", "A recent study by Allen-Zhu and Li #OTHEREFR revealed that different networks trained with distinct random seeds learn different knowledge.", "Motivated by this observation, we train VGG-16 on CIFAR-100 with 8 different seeds following the same procedure as the baseline in Section 4.1 and utilize them to initialize teacher and student networks."], "citing_paper_content": {"title": "Bayesian Optimization Meets Self-Distillation", "abstract": "Bayesian optimization (BO) has contributed greatly to improving model performance by suggesting promising hyperparameter configurations iteratively based on observations from multiple training trials.
However, only partial knowledge (i.e., the measured performances of trained models and their hyperparameter configurations) from previous trials is transferred. On the other hand, Self-Distillation (SD) only transfers partial knowledge learned by the task model itself. To fully leverage the various knowledge gained from all training trials, we propose the BOSS framework, which combines BO and SD. BOSS suggests promising hyperparameter configurations through BO and carefully selects pre-trained models from previous trials for SD, which are otherwise abandoned in the conventional BO process. BOSS achieves significantly better performance than both BO and SD in a wide range of tasks including general image classification, learning with noisy labels, semi-supervised learning, and medical image analysis tasks."}, "cited_paper_content": {"title": "Self-Training With Noisy Student Improves Imagenet Classification", "abstract": "We present a simple self-training method that achieves 88.4% top-1 accuracy on ImageNet, which is 2.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 61.0% to 83.7%, reduces ImageNet-C mean corruption error from 45.7 to 28.3, and reduces ImageNet-P mean flip rate from 27.8 to 12.2. ::: To achieve this result, we first train an EfficientNet model on labeled ImageNet images and use it as a teacher to generate pseudo labels on 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo labeled images. We iterate this process by putting back the student as the teacher. During the generation of the pseudo labels, the teacher is not noised so that the pseudo labels are as accurate as possible. 
However, during the learning of the student, we inject noise such as dropout, stochastic depth and data augmentation via RandAugment to the student so that the student generalizes better than the teacher."}, "keywords": ["pretrained weights", "teacher networks"], "citation_intent": "result"} {"citing_id": "2303.01515v1", "cited_id": "1704.00447", "section_title": "Related Work", "citation": "For pMRI: Variational network (VN) #REFR introduced a gradient descent method by applying given sensitivities {s_i}.", "text_before_citation": ["Most existing deep-learning (DL) based methods render end-to-end neural networks mapping from the partial k-space data to the reconstructed images #OTHEREFR .", "The common issue with this class of methods is that the DNNs require an excessive amount of data to train, and the resulting networks perform similarly to \"black-boxes\" which are difficult to interpret and modify.", "In recent years, a class of DL based methods improve over end-to-end training by selecting the scheme of an iterative optimization algorithm, prescribing a phase number T, and mapping each iteration of the scheme to one phase of the network.", "These methods are often known as learned optimization algorithms (LOAs) #OTHEREFR .", "For instance, ADMM-Net #OTHEREFR , ISTA-Net+ #OTHEREFR , and cascade network #OTHEREFR are for regular MRI reconstruction."], "text_after_citation": ["MoDL #OTHEREFR proposed a recursive network by unrolling the conjugate gradient algorithm using a weight sharing strategy.", "Blind-PMRI-Net #OTHEREFR designed three network blocks to alternately update multi-channel images, sensitivity maps and the reconstructed MR image using an iterative algorithm based on half-quadratic splitting.", "The network in #OTHEREFR developed a Bayesian framework for joint MRI-PET reconstruction.
VS-Net #OTHEREFR derived a variable splitting optimization method.", "However, existing methods still face the lack of accurate coil sensitivity maps and proper regularization in the pMRI problem.", "Recently, a method called DeepcomplexMRI #OTHEREFR developed an end-to-end learning without explicitly using coil sensitivity maps to recover channel-wise images, and then combine to a single channel image in testing."], "citing_paper_content": {"title": "Optimization-Based Deep Learning Methods For Magnetic Resonance Imaging Reconstruction And Synthesis", "abstract": ""}, "cited_paper_content": {"title": "Learning A Variational Network For Reconstruction Of Accelerated Mri Data", "abstract": "Purpose: To allow fast and high-quality reconstruction of clinical accelerated multi-coil MR data by learning a variational network that combines the mathematical structure of variational models with deep learning. Theory and Methods: Generalized compressed sensing reconstruction formulated as a variational model is embedded in an unrolled gradient descent scheme. All parameters of this formulation, including the prior model defined by filter kernels and activation functions as well as the data term weights, are learned during an offline training procedure. The learned model can then be applied online to previously unseen data. Results: The variational network approach is evaluated on a clinical knee imaging protocol. The variational network reconstructions outperform standard reconstruction algorithms in terms of image quality and residual artifacts for all tested acceleration factors and sampling patterns. Conclusion: Variational network reconstructions preserve the natural appearance of MR images as well as pathologies that were not included in the training data set. 
Due to its high computational performance, i.e., reconstruction time of 193 ms on a single graphics card, and the omission of parameter tuning once the network is trained, this new approach to image reconstruction can easily be integrated into clinical workflow."}, "keywords": ["Variational network"], "citation_intent": "method"} {"citing_id": "2303.15495v1", "cited_id": "1912.02574", "section_title": "Ii. Related Works", "citation": "In this paper, they presented a stochastic optimization toolchain along with sensitivity analyses for choosing the optimal hyperparameters; they formulated the problem as a single-objective optimization task and solved it using a greedy algorithm, a genetic algorithm (GA), and a particle swarm optimization (PSO) algorithm #REFR .", "text_before_citation": ["Dubey #OTHEREFR introduced a public transportation decision support system for short-term and long-term prediction of bus arrival times.", "This study used the real-world historical data of two Nashville bus system routes.", "The approach of this research combined clustering analysis and Kalman filters with a shared route segment model to produce more accurate arrival time predictions; based on their results, compared to the basic arrival time prediction model that Nashville MTA was using, their system reduced arrival time prediction errors by 25% on average when predicting the arrival delay an hour ahead and 47% when predicting within a 15-minute future time window #OTHEREFR . S. Basak, F. Sun, S.
Sengupta, and A.", "Dubey have conducted a similar study #OTHEREFR , using unsupervised clustering mechanisms to optimize transit on-time performance.", "As a local case study, they analyzed the monthly and seasonal delays of the Nashville metro region and clustered months with similar patterns."], "text_after_citation": ["According to the newest research in #OTHEREFR , dynamic data-driven application systems (DDDAS) that use real-time sensors and a data-driven decision support system can provide online model learning and multi-time-scale analytics to enhance the intelligence of the system.", "As part of their study, the authors analyzed an online bus arrival prediction system in Nashville using historical and real-time streaming data, which can be packaged as modular, distributed, and resilient micro-services.", "The long-term delay analysis service excludes noise from outliers in historical data to identify delay patterns associated with different hours, days, and seasons for specific time points and route segments.", "City planners can use the feedback data generated by these analytics services to improve bus schedules and increase rider satisfaction #OTHEREFR .", "In addition, another study by S. Nannapaneni and A."], "citing_paper_content": {"title": "A Novel Neural Network Approach For Predicting The Arrival Time Of Buses For Smart On-Demand Public Transit", "abstract": "Among the major public transportation systems in cities, bus transit has its problems, including the need for more accuracy and reliability when estimating bus arrival times for riders. This can lead to delays and decreased ridership, especially in cities where public transportation is heavily relied upon. A common issue is that the arrival times of buses do not match the schedules, resulting in latency for fixed schedules.
According to the study in this paper on New York City bus data, there is an average mismatch of around eight minutes (491 seconds) between bus arrivals and the actual scheduled times. This research paper presents a novel AI-based data-driven approach for estimating the arrival times of buses at each transit point (station). Our approach is based on a fully connected neural network and can predict the arrival time collectively across all bus lines in large metropolitan areas. Our neural-net data-driven approach provides a new way to estimate the arrival time of the buses, which can lead to a more efficient and smarter way to bring the bus transit to the general public. Our evaluation of the network bus system with more than 200 bus lines, and 2 million data points, demonstrates an estimation error of less than 40 seconds for arrival times. The inference time per validation set data point is less than 0.006 ms."}, "cited_paper_content": {"title": "Data-Driven Optimization Of Public Transit Schedule", "abstract": "Bus transit systems are the backbone of public transportation in the United States. An important indicator of the quality of service in such infrastructures is on-time performance at stops, with published transit schedules playing an integral role governing the level of success of the service. However, there are relatively few optimization architectures leveraging stochastic search that focus on optimizing bus timetables with the objective of maximizing the probability of bus arrivals at timepoints with delays within desired on-time ranges. In addition to this, there is a lack of substantial research considering monthly and seasonal variations of delay patterns integrated with such optimization strategies.
To address these, this paper makes the following contributions to the corpus of studies on transit on-time performance optimization: (a) an unsupervised clustering mechanism is presented which groups months with similar seasonal delay patterns, (b) the problem is formulated as a single-objective optimization task and a greedy algorithm, a genetic algorithm (GA) as well as a particle swarm optimization (PSO) algorithm are employed to solve it, (c) a detailed discussion of empirical results comparing the algorithms is provided and sensitivity analyses on the hyper-parameters of the heuristics are presented along with execution times, which will help practitioners looking at similar problems. The analyses conducted are insightful in the local context of improving public transit scheduling in the Nashville metro region as well as informative from a global perspective as an elaborate case study which builds upon the growing corpus of empirical studies using nature-inspired approaches to transit schedule optimization."}, "keywords": ["optimal hyperparameters", "particle swarm optimization"], "citation_intent": "method"} {"citing_id": "2303.03443v1", "cited_id": "1810.01969", "section_title": "Forward Algorithm.", "citation": "The details of how it is implemented can be found under Algorithm A.1 of #REFR , but the main idea is to successively compute the distribution of the underlying state of H after each sample via Bayesian updates.", "text_before_citation": ["The Forward Algorithm is prevalent in the preprocessing, compression and decompression algorithms.", "Therefore, we provide an overview of what the algorithm does and how it works in this subsection.", "Given a Markov source H with \u2113 states, let the first j samples be Y_1, . . . , Y_j \u223c H^n. For a setting y_1, . . . , y_{j\u22121}, the Forward Algorithm computes the distribution of Y_n given that (Y_1, . . . , Y_{j\u22121}) = (y_1, . . . , y_{j\u22121})."], "text_after_citation": ["Each update takes time O(\u2113^2) (since there are roughly this many possible transitions to consider), and we must do j of these updates, for a total runtime of O(j\u2113^2). This is implemented using dynamic programming."], "citing_paper_content": {"title": "Improving The Runtime Of Algorithmic Polarization Of Hidden Markov Models", "abstract": "We improve the runtime of the linear compression scheme for hidden Markov sources presented in a 2018 paper of Guruswami, Nakkiran, and Sudan. Under the previous scheme, compressing a message of length n takes O(n log n) runtime, and decompressing takes O(n^{1+\u03b4}) runtime for any fixed \u03b4 > 0. We present how to improve the runtime of the decoding scheme to O(n log n) by caching intermediate results to avoid repeating computation."}, "cited_paper_content": {"title": "Algorithmic Polarization For Hidden Markov Models", "abstract": "Using a mild variant of polar codes we design linear compression schemes compressing Hidden Markov sources (where the source is a Markov chain, but whose state is not necessarily observable from its output), and to decode from Hidden Markov channels (where the channel has a state and the error introduced depends on the state). We give the first polynomial time algorithms that manage to compress and decompress (or encode and decode) at input lengths that are polynomial $\\it{both}$ in the gap to capacity and the mixing time of the Markov chain. Prior work achieved capacity only asymptotically in the limit of large lengths, and polynomial bounds were not available with respect to either the gap to capacity or mixing time. Our results operate in the setting where the source (or the channel) is $\\it{known}$.
If the source is $\\it{unknown}$ then compression at such short lengths would lead to effective algorithms for learning parity with noise -- thus our results are the first to suggest a separation between the complexity of the problem when the source is known versus when it is unknown."}, "keywords": ["Algorithm A.1"], "citation_intent": "method"} {"citing_id": "2304.03706v1", "cited_id": "2004.02554", "section_title": "Related Work", "citation": "For the special case of additive valuations with binary marginals, the algorithm of Aziz #REFR improves upon the guarantees of Aleksandrov et al.", "text_before_citation": ["#OTHEREFR gave a randomized polynomial-time algorithm that outputs an EF1 allocation, while being envy-free ex-ante.", "Ensuing work by Aziz #OTHEREFR showed that there exists a similar randomized algorithm that additionally implements the well-known Probabilistic Serial fractional outcome described by Bogomolnaia and Moulin #OTHEREFR .", "This randomized algorithm also preserves a weak notion of efficiency. 
Babaioff et al.", "#OTHEREFR also study the case of additive valuations and find a distribution over ex-post proportional up to one item and #OTHEREFR 2 -MMS allocations that is ex-ante proportional, i.e., the expected value of each agent's bundle is at least a 1/n-fraction of her value for the set of all items.", "Best-of-both-worlds results have also been analyzed for other settings, including the case where the agents have binary marginals, and the case of additive valuations with arbitrary entitlements."], "text_after_citation": ["#OTHEREFR and is group-strategyproof, ex-ante fractionally-PO and envy-free, and ex-post fractionally PO and EF1.", "In a similar vein, for additive valuations with binary marginals, Halpern et al.", "#OTHEREFR independently showed that there is a distribution over ex-post Nash-welfare-maximizing allocations that also ex-ante maximizes the fractional Nash welfare, implying the same fairness guarantees as Aziz #OTHEREFR for this setting. For matroid rank valuations, Babaioff et al.", "#OTHEREFR present a randomized truthful mechanism that is ex-ante envy-free and ex-post Lorenz dominating (and thus ex-post Nash-welfare-maximizing and EFX for this specific class).", "For the case of agents with arbitrary entitlements, both Hoefer et al. #OTHEREFR and Aziz et al."], "citing_paper_content": {"title": "Breaking The Envy Cycle: Best-Of-Both-Worlds Guarantees For Subadditive Valuations", "abstract": "We study best-of-both-worlds guarantees for the fair division of indivisible items among agents with subadditive valuations. Our main result establishes the existence of a random allocation that is simultaneously ex-ante 1/2-envy-free, ex-post 1/2-EFX and ex-post EF1, for every instance with subadditive valuations. We achieve this result by a novel polynomial-time algorithm that randomizes the well-established envy cycles procedure in a way that provides ex-ante fairness.
Notably, this is the first best-of-both-worlds fairness guarantee for subadditive valuations, even when considering only EF1 without EFX."}, "cited_paper_content": {"title": "Simultaneously Achieving Ex-Ante And Ex-Post Fairness", "abstract": "We present a polynomial-time algorithm that computes an ex-ante envy-free lottery over envy-free up to one item (EF1) deterministic allocations. It has the following advantages over a recently proposed algorithm: it does not rely on the linear programming machinery including separation oracles; it is SD-efficient (both ex-ante and ex-post); and the ex-ante outcome is equivalent to the outcome returned by the well-known probabilistic serial rule. As a result, we answer a question raised by Freeman, Shah, and Vaish (2020) whether the outcome of the probabilistic serial rule can be implemented by ex-post EF1 allocations. In the light of a couple of impossibility results that we prove, our algorithm can be viewed as satisfying a maximal set of properties. Under binary utilities, our algorithm is also ex-ante group-strategyproof and ex-ante Pareto optimal. Finally, we also show that checking whether a given random allocation can be implemented by a lottery over EF1 and Pareto optimal allocations is NP-hard."}, "keywords": ["additive valuations"], "citation_intent": "method"} {"citing_id": "2305.01622v1", "cited_id": "1906.09788", "section_title": "C. 
Online Path Generation", "citation": "Note that #REFR was designed for trajectory optimization, and we slightly reformulate it to conduct path smoothing.", "text_before_citation": ["The output of Algorithm 2 consists of paths originating", "; 2 L \u2190 \u2205; 3 foreach G z of G do 4 D \u2190 \u2205; 5 l init \u2190 InitialGuessSearch(G z ) ; 6 {s i |i = 1, ..., N } \u2190 StationSampling(l init ); 7 for i = 1, ..., N do 8 D i \u2190 LateralClustering(s i ); 9 D \u2190 D \u222a D i 10 end 11 L z \u2190 DPSearch(D, {s i |i = 1, ..., N }, G z ); 12 L z \u2190 NonMaximumSuppress(L z ); 13 L \u2190 L \u222a L z ; 14 end 15 return L;", "from each entry lane.", "Due to the discretization of the grid, the path is not smooth enough for control.", "To this end, we utilize a local path smoothing #OTHEREFR based on Quadratic Programming (QP) to smooth the path."], "text_after_citation": ["An example of the path smoothing process is given in Fig. 4d ."], "citing_paper_content": {"title": "Flowmap: Path Generation For Automated Vehicles In Open Space Using Traffic Flow", "abstract": "There is extensive literature on perceiving road structures by fusing various sensor inputs such as lidar point clouds and camera images using deep neural nets. Leveraging the latest advance of neural architects (such as transformers) and bird-eye-view (BEV) representation, the road cognition accuracy keeps improving. However, how to cognize the \"road\" for automated vehicles where there is no well-defined \"roads\" remains an open problem. For example, how to find paths inside intersections without HD maps is hard since there is neither an explicit definition for \"roads\" nor explicit features such as lane markings. The idea of this paper comes from a proverb: it becomes a way when people walk on it. Although there are no \"roads\" from sensor readings, there are \"roads\" from tracks of other vehicles. 
In this paper, we propose FlowMap, a path generation framework for automated vehicles based on traffic flows. FlowMap is built by extending our previous work RoadMap [1], a lightweight semantic map, with an additional traffic flow layer. A path generation algorithm on traffic flow fields (TFFs) is proposed to generate human-like paths. The proposed framework is validated using real-world driving data and is amenable to generating paths for super complicated intersections without using HD maps."}, "cited_paper_content": {"title": "Safe Trajectory Generation For Complex Urban Environments Using Spatio-Temporal Semantic Corridor", "abstract": "Planning safe trajectories for autonomous vehicles in complex urban environments is challenging since there are numerous semantic elements (such as dynamic agents, traffic lights, and speed limits) to consider. These semantic elements may have different mathematical descriptions, such as obstacle, constraint, and cost. It is non-trivial to tune the effects from different combinations of semantic elements for a stable and generalizable behavior. In this letter, we propose a novel unified spatio-temporal semantic corridor (SSC) structure, which provides a level of abstraction for different types of semantic elements. The SSC consists of a series of mutually connected collision-free cubes with dynamical constraints posed by the semantic elements in the spatio-temporal domain. The trajectory generation problem then boils down to a general quadratic programming formulation. Thanks to the unified SSC representation, our framework can generalize to any combination of semantic elements. Moreover, our formulation provides a theoretical guarantee that the entire trajectory is safe and constraint-satisfied, by using the convex hull and hodograph properties of piecewise Bezier curve parameterization. 
We also release the code of our method to accommodate benchmarking."}, "keywords": ["path smoothing", "trajectory optimization"], "citation_intent": "method"} {"citing_id": "2304.06537v1", "cited_id": "1706.04599", "section_title": "Introduction", "citation": "A model is called perfectly calibrated if the predictive confidence of the model represents a good approximation of its actual probability of correctness #REFR .", "text_before_citation": ["With the development of deep neural networks, great progress has been made in image classification.", "In addition to performance, the uncertainty estimate of a given model is also receiving increasing attention, as the confidence of a model is expected to accurately reflect its performance."], "text_after_citation": ["Model calibration is particularly important in safety-critical applications, such as autonomous driving, medical diagnosis, and robotics #OTHEREFR .", "For example, if a prediction with low confidence is more likely to be wrong, we can take countermeasures to avoid unknown risks.", "Most existing calibration techniques assume that the distribution of training data is balanced, i.e., each class has a similar number of training instances, so that each class is treated equally #OTHEREFR .", "As shown in Fig. 1, the traditional calibration pipeline uses a balanced training set to train the classification model and a balanced validation set to obtain the calibration model, respectively.", "The target test set is in the same distribution as the training/validation set."], "citing_paper_content": {"title": "Transfer Knowledge From Head To Tail: Uncertainty Calibration Under Long-Tailed Distribution", "abstract": "How to estimate the uncertainty of a given model is a crucial problem.
Current calibration techniques treat different classes equally and thus implicitly assume that the distribution of training data is balanced, but ignore the fact that real-world data often follows a long-tailed distribution. In this paper, we explore the problem of calibrating the model trained from a long-tailed distribution. Due to the difference between the imbalanced training distribution and balanced test distribution, existing calibration methods such as temperature scaling can not generalize well to this problem. Specific calibration methods for domain adaptation are also not applicable because they rely on unlabeled target domain instances which are not available. Models trained from a long-tailed distribution tend to be more overconfident to head classes. To this end, we propose a novel knowledge-transferring-based calibration method by estimating the importance weights for samples of tail classes to realize long-tailed calibration. Our method models the distribution of each class as a Gaussian distribution and views the source statistics of head classes as a prior to calibrate the target distributions of tail classes. We adaptively transfer knowledge from head classes to get the target probability density of tail classes. The importance weight is estimated by the ratio of the target probability density over the source probability density. Extensive experiments on CIFAR-10-LT, MNIST-LT, CIFAR-100-LT, and ImageNet-LT datasets demonstrate the effectiveness of our method."}, "cited_paper_content": {"title": "On Calibration Of Modern Neural Networks", "abstract": "Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. 
Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions."}, "keywords": ["predictive confidence"], "citation_intent": "background"} {"citing_id": "2303.05234v1", "cited_id": "1811.06186", "section_title": "Related Work 2.1. Gait Recognition", "citation": "Silhouette-based methods GaitSet #REFR aggregates temporal information in silhouette sequences using a statistical function to adapt to different frame rates.", "text_before_citation": ["Gait recognition methods can be classified into two main categories: appearance-based #OTHEREFR and modelbased methods #OTHEREFR depending on the type of data input.", "Appearance-based methods usually rely on silhouette sequences #OTHEREFR or their transformation, such as Gait Energy Image (GEI) #OTHEREFR and Chrono-Gait Image (CGI) #OTHEREFR .", "Model-based methods, on the other hand, represent the human body as mesh #OTHEREFR or a set of keypoints #OTHEREFR ."], "text_after_citation": ["GaitPart #OTHEREFR uses Focal Convolutional Layer to extract fine-grained local features and Micro-motion Template Builder with different window sizes to extract local temporal information.", "GaitGL #OTHEREFR uses 3D Convolution to extract global and local spatiotemporal information.", "LagrangeGait #OTHEREFR adds a local motion extractor and a viewpoint branch based on GaitGL to get more discriminative local temporal information.", "Pose-based methods PoseGait #OTHEREFR employs 3D pose information to 
generate multi-feature vectors and uses a CNN to extract the gait information in both spatial and temporal dimensions.", "GaitGraph #OTHEREFR and GaitGraph2 #OTHEREFR adopt Graph Convolutional Network for gait recognition, treating keypoints as nodes and limbs as edges to form a topology graph."], "citing_paper_content": {"title": "Gpgait: Generalized Pose-Based Gait Recognition", "abstract": "Recent works on pose-based gait recognition have demonstrated the potential of using such simple information to achieve results comparable to silhouette-based methods. However, the generalization ability of pose-based methods on different datasets is undesirably inferior to that of silhouette-based ones, which has received little attention but hinders the application of these methods in real-world scenarios. To improve the generalization ability of pose-based methods across datasets, we propose a Generalized Pose-based Gait recognition (GPGait) framework. First, a Human-Oriented Transformation (HOT) and a series of Human-Oriented Descriptors (HOD) are proposed to obtain a unified pose representation with discriminative multi-features. Then, given the slight variations in the unified representation after HOT and HOD, it becomes crucial for the network to extract local-global relationships between the keypoints. To this end, a Part-Aware Graph Convolutional Network (PAGCN) is proposed to enable efficient graph partition and local-global spatial feature extraction. Experiments on four public gait recognition datasets, CASIA-B, OUMVLP-Pose, Gait3D and GREW, show that our model demonstrates better and more stable cross-domain capabilities compared to existing skeleton-based methods, achieving comparable recognition results to silhouette-based ones.
The code will be released."}, "cited_paper_content": {"title": "Gaitset: Regarding Gait As A Set For Cross-View Gait Recognition", "abstract": "As a unique biometric feature that can be recognized at a distance, gait has broad applications in crime prevention, forensic identification and social security. To portray a gait, existing gait recognition methods utilize either a gait template, where temporal information is hard to preserve, or a gait sequence, which must keep unnecessary sequential constraints and thus loses the flexibility of gait recognition. In this paper we present a novel perspective, where a gait is regarded as a set consisting of independent frames. We propose a new network named GaitSet to learn identity information from the set. Based on the set perspective, our method is immune to permutation of frames, and can naturally integrate frames from different videos which have been filmed under different scenarios, such as diverse viewing angles, different clothes/carrying conditions. Experiments show that under normal walking conditions, our single-model method achieves an average rank-1 accuracy of 95.0% on the CASIA-B gait dataset and an 87.1% accuracy on the OU-MVLP gait dataset. These results represent new state-of-the-art recognition accuracy. On various complex scenarios, our model exhibits a significant level of robustness. It achieves accuracies of 87.2% and 70.4% on CASIA-B under bag-carrying and coat-wearing walking conditions, respectively. These outperform the existing best methods by a large margin. The method presented can also achieve a satisfactory accuracy with a small number of frames in a test sample, e.g., 82.5% on CASIA-B with only 7 frames. 
The source code has been released at https://github.com/AbnerHqC/GaitSet."}, "keywords": ["GaitSet"], "citation_intent": "method"} {"citing_id": "2303.04244v1", "cited_id": "1911.12409", "section_title": "Data Normalization", "citation": "Processing to remove location and orientation variability has been used in prior work #REFR , and to save space we present our version of normalization in the Appendix.", "text_before_citation": ["Before training, temporal windows of body pose data are normalized by centering, rotating and scale normalizing the data values.", "Centering and rotation use a body-specific procedure to make the 3D point data invariant to the body's location and facing direction in the capture space."], "text_after_citation": [], "citing_paper_content": {"title": "A Light-Weight Contrastive Approach For Aligning Human Pose Sequences", "abstract": "We present a simple unsupervised method for learning an encoder mapping short 3D pose sequences into embedding vectors suitable for sequence-to-sequence alignment by dynamic time warping. Training samples consist of temporal windows of frames containing 3D body points such as mocap markers or skeleton joints. A lightweight, 3-layer encoder is trained using a contrastive loss function that encourages embedding vectors of augmented sample pairs to have cosine similarity 1, and similarity 0 with all other samples in a minibatch. When multiple scripted training sequences are available, temporal alignments inferred from an initial round of training are harvested to extract additional, cross-performance match pairs for a second phase of training to refine the encoder. In addition to being simple, the proposed method is fast to train, making it easy to adapt to new data using different marker sets or skeletal joint layouts. 
Experimental results illustrate ease of use, transferability, and utility of the learned embeddings for comparing and analyzing human behavior sequences."}, "cited_paper_content": {"title": "Predict&Cluster: Unsupervised Skeleton Based Action Recognition", "abstract": "We propose a novel system for unsupervised skeleton-based action recognition. Given inputs of body keypoints sequences obtained during various movements, our system associates the sequences with actions. Our system is based on an encoder-decoder recurrent neural network, where the encoder learns a separable feature representation within its hidden states formed by training the model to perform prediction task. We show that according to such unsupervised training the decoder and the encoder self-organize their hidden states into a feature space which clusters similar movements into the same cluster and distinct movements into distant clusters. Current state-of-the-art methods for action recognition are strongly supervised, i.e., rely on providing labels for training. Unsupervised methods have been proposed, however, they require camera and depth inputs (RGB+D) at each time step. In contrast, our system is fully unsupervised, does not require labels of actions at any stage, and can operate with body keypoints input only. Furthermore, the method can perform on various dimensions of body keypoints (2D or 3D) and include additional cues describing movements. We evaluate our system on three extensive action recognition benchmarks with different number of actions and examples. 
Our results outperform prior unsupervised skeleton-based methods, unsupervised RGB+D based methods on cross-view tests and while being unsupervised have similar performance to supervised skeleton-based action recognition."}, "keywords": ["orientation variability", "normalization"], "citation_intent": "method"} {"citing_id": "2304.02309v1", "cited_id": "1803.07100", "section_title": "Efficient Learning", "citation": "While these results are not state-of-the-art (100% reported by #REFR ), this test proves that our model can cope with a larger dataset.", "text_before_citation": ["To remove this limitation, we perform a second set of tests where we choose for each class an optimized template stimulus (indicated by Optimized).", "This stimulus is determined by iterating through the training dataset and retaining the class-specific training picture that results in the highest training accuracy after a single training epoch.", "Table 4 shows the accuracy of the classifications for the different training schemes.", "Using the first images in the dataset for each class (condition First), we obtain a testing accuracy of 73%.", "This was in the range of the accuracies for training with individual avatars, which range from 66% to 79%."], "text_after_citation": ["When the model has access to every domain (MD-NRE-I), the test accuracy reaches a value of 92.15%, which exceeds the accuracy (89.02%) reported in the original paper on the FERG dataset #OTHEREFR .", "Moreover, the best transfer learning model (MD-NRE-Bonnie) reaches 84.42% accuracy.", "This is an encouraging transfer learning result for a classifier trained with only 12 images, and where all the expressions are from a single avatar type."], "citing_paper_content": {"title": "Multi-Domain Norm-Referenced Encoding Enables Data Efficient Transfer Learning Of Facial Expression Recognition *", "abstract": "People can innately recognize human facial expressions in unnatural forms, such as when depicted on the unusual faces 
drawn in cartoons or when applied to an animal's features. However, current machine learning algorithms struggle with out-of-domain transfer in facial expression recognition (FER). We propose a biologically-inspired mechanism for such transfer learning, which is based on norm-referenced encoding, where patterns are encoded in terms of difference vectors relative to a domain-specific reference vector. By incorporating domain-specific reference frames, we demonstrate high data efficiency in transfer learning across multiple domains. Our proposed architecture provides an explanation for how the human brain might innately recognize facial expressions on varying head shapes (humans, monkeys, and cartoon avatars) without extensive training. Norm-referenced encoding also allows the intensity of the expression to be read out directly from neural unit activity, similar to face-selective neurons in the brain. Our model achieves a classification accuracy of 92.15% on the FERG dataset with extreme data efficiency. We train our proposed mechanism with only 12 images, including a single image of each class (facial expression) and one image per domain (avatar). In comparison, the authors of the FERG dataset achieved a classification accuracy of 89.02% with their FaceExpr model, which was trained on 43,000 images."}, "cited_paper_content": {"title": "Vgan-Based Image Representation Learning For Privacy-Preserving Facial Expression Recognition", "abstract": "Reliable facial expression recognition plays a critical role in human-machine interactions. However, most of the facial expression analysis methodologies proposed to date pay little or no attention to the protection of a user's privacy. In this paper, we propose a Privacy-Preserving Representation-Learning Variational Generative Adversarial Network (PPRL-VGAN) to learn an image representation that is explicitly disentangled from the identity information. 
At the same time, this representation is discriminative from the standpoint of facial expression recognition and generative as it allows expression-equivalent face image synthesis. We evaluate the proposed model on two public datasets under various threat scenarios. Quantitative and qualitative results demonstrate that our approach strikes a balance between the preservation of privacy and data utility. We further demonstrate that our model can be effectively applied to other tasks such as expression morphing and image completion."}, "keywords": ["model"], "citation_intent": "result"} {"citing_id": "2304.12486v2", "cited_id": "1904.04433", "section_title": "Conclusion And Future Works", "citation": "It seems to be the case with other blackbox attacks, since the perturbation is generated randomly using probabilistic laws that don't take into account the fact that document images are brighter than other images #REFR .", "text_before_citation": ["Compressing and then decompressing input images of a model using JPEG protocol improves the model robustness against adversarial images, but not consistently.", "On the other hand, the adversarial training of both models using the method of Kurakin et al. #OTHEREFR , strongly improves robustness of both models.", "This training method is quite easy to implement and does not greatly affect the test accuracy of models on our classification task.", "Therefore, this method seems far more effective on a document classification task than against an image classification task, as we can see in Dong et al. 
#OTHEREFR .", "The black-box attack we evaluated generates blurry examples that appear darker than legitimate document images."], "text_after_citation": ["There are many ways to improve the robustness of a model that would only use the visual modality of a document #OTHEREFR .", "However, state-of-the-art approaches to document classification take advantage of other information modalities, such as the layout of the document, and the text it contains 3 .", "Therefore, after this work on evaluating the robustness of visual models, it would be interesting to evaluate the transferability of the generated examples to a multimodal model such as DocFormer or LayoutLMv2, which use optical character recognition (OCR) and transformer layers.", "Furthermore, we could explore the possibility of designing adversarial attacks to which these models are more sensitive, for example by targeting OCR prediction errors #OTHEREFR that affect the textual modality and may also affect the robustness of such models #OTHEREFR .", "Dealing with the added modality given by text means that more approaches can be explored attacking only one modality or both."], "citing_paper_content": {"title": "Evaluating Adversarial Robustness On Document Image Classification", "abstract": "Adversarial attacks and defenses have gained increasing interest on computer vision systems in recent years, but as of today, most investigations are limited to natural images. However, many artificial intelligence models actually handle documentary data, which is very different from real world images. Hence, in this work, we try to apply the adversarial attack philosophy on documentary data and to protect models against such attacks. Our methodology is to implement untargeted gradient-based, transfer-based and score-based attacks and evaluate the impact of defenses such as adversarial training, JPEG input compression and grey-scale input transformation on the robustness of ResNet50 and EfficientNetB0 model architectures. 
To the best of our knowledge, no such work has been conducted by the community in order to study the impact of these attacks on the document image classification task."}, "cited_paper_content": {"title": "Efficient Decision-Based Black-Box Adversarial Attacks On Face Recognition", "abstract": "Face recognition has obtained remarkable progress in recent years due to the great improvement of deep convolutional neural networks (CNNs). However, deep CNNs are vulnerable to adversarial examples, which can cause fateful consequences in real-world face recognition applications with security-sensitive purposes. Adversarial attacks are widely studied as they can identify the vulnerability of the models before they are deployed. In this paper, we evaluate the robustness of state-of-the-art face recognition models in the decision-based black-box attack setting, where the attackers have no access to the model parameters and gradients, but can only acquire hard-label predictions by sending queries to the target model. This attack setting is more practical in real-world face recognition systems. To improve the efficiency of previous methods, we propose an evolutionary attack algorithm, which can model the local geometry of the search directions and reduce the dimension of the search space. Extensive experiments demonstrate the effectiveness of the proposed method that induces a minimum perturbation to an input face image with fewer queries. 
We also apply the proposed method to attack a real-world face recognition system successfully."}, "keywords": ["document images", "blackbox attacks"], "citation_intent": "background"} {"citing_id": "2305.01732v1", "cited_id": "1812.11941", "section_title": "Related Work", "citation": "Depth from Single Image: Convolutional networks within an encoder-decoder paradigm #REFR are the standard prototype architecture for dense depth prediction from a single image.", "text_before_citation": ["Synthetic dataset generation from computer games #OTHEREFR has been used extensively for training computer vision algorithms like semantic segmentation #OTHEREFR , object detection #OTHEREFR , and depth estimation #OTHEREFR . Mohammad et al.", "#OTHEREFR have also used the GTA-V game to generate a synthetic RGB-D dataset but with relative depth at the focus.", "The resolution of the dataset used for training is 256\u00d7256 which is much smaller than the publicly available datasets.", "Furthermore, they need a preprocessing phase, such as histogram equalization, to use the datasets before feeding them to the training.", "For training, they use a Resnet architecture and process the RGB image and GT depth with resolution 256 \u00d7 256."], "text_after_citation": ["The building blocks of such a network consist of convolutional and sub-sampling as their core elements.", "However, CNN as an encoder suffers from a local receptive field problem #OTHEREFR , leading to less global representation learning at higher resolutions.", "Several algorithms adapt different techniques to learn features at different resolutions to address this issue like dilated convolutions #OTHEREFR or parallel multi-scale feature aggregation #OTHEREFR .", "Recently, transformer architectures such as vision transformer (ViT) #OTHEREFR or data-efficient image transformers (DeiT) #OTHEREFR have outperformed CNN architectures in image recognition #OTHEREFR , object detection #OTHEREFR , and semantic segmentation #OTHEREFR 
.", "Inspired by the success of transformers in various topics, Ren\u00e9 et al."], "citing_paper_content": {"title": "High-Resolution Synthetic Rgb-D Datasets For Monocular Depth Estimation", "abstract": "Accurate depth maps are essential in various applications, such as autonomous driving, scene reconstruction, point-cloud creation, etc. However, monocular-depth estimation (MDE) algorithms often fail to provide enough texture & sharpness, and also are inconsistent for homogeneous scenes. These algorithms mostly use CNN or vision transformer-based architectures requiring large datasets for supervised training. But, MDE algorithms trained on available depth datasets do not generalize well and hence fail to perform accurately in diverse real-world scenes. Moreover, the ground-truth depth maps are either lower resolution or sparse leading to relatively inconsistent depth maps. In general, acquiring a high-resolution ground truth dataset with pixel-level precision for accurate depth prediction is an expensive, and time-consuming challenge. In this paper, we generate a high-resolution synthetic depth dataset (HRSD) of dimension 1920 \u00d7 1080 from Grand Theft Auto (GTA-V), which contains 100,000 color images and corresponding dense ground truth depth maps. The generated datasets are diverse and have scenes from indoors to outdoors, from homogeneous surfaces to textures. For experiments and analysis, we train the DPT algorithm, a state-of-the-art transformer-based MDE algorithm on the proposed synthetic dataset, which significantly increases the accuracy of depth maps on different scenes by 9%. 
Since the synthetic datasets are of higher resolution, we propose adding a feature extraction module in the transformer's encoder and incorporating an attention-based loss, further improving the accuracy by 15%."}, "cited_paper_content": {"title": "High Quality Monocular Depth Estimation Via Transfer Learning", "abstract": "Accurate depth estimation from images is a fundamental task in many applications including scene understanding and reconstruction. Existing solutions for depth estimation often produce blurry approximations of low resolution. This paper presents a convolutional neural network for computing a high-resolution depth map given a single RGB image with the help of transfer learning. Following a standard encoder-decoder architecture, we leverage features extracted using high performing pre-trained networks when initializing our encoder along with augmentation and training strategies that lead to more accurate results. We show how, even for a very simple decoder, our method is able to achieve detailed high-resolution depth maps. Our network, with fewer parameters and training iterations, outperforms state-of-the-art on two datasets and also produces qualitatively better results that capture object boundaries more faithfully. Code and corresponding pre-trained weights are made publicly available."}, "keywords": ["dense depth prediction"], "citation_intent": "background"} {"citing_id": "2303.13582v1", "cited_id": "1804.08328", "section_title": "A.1. Experiments On Tanks And Temples", "citation": "As shown in Table A5 , SCADE trained with the same out-of-domain prior that we used for the other datasets (which was trained on Taskonomy #REFR ) outperforms the baselines on the Tanks and Temples dataset as well. 
Moreover, Figure A11 shows qualitative results.", "text_before_citation": ["We conduct further experiments to test the robustness of SCADE.", "We evaluate on three scenes from the Tanks and Temples #OTHEREFR dataset, namely three large indoor rooms - Church, Courtroom and Auditorium scenes.", "The training set consists of 21, 26 and 21 sparse views for the Church, Courtroom and Auditorium scenes respectively, and the test set consists of 8 sparse views, so the amount of data is similar to that used in prior work #OTHEREFR .", "We also followed similar data preprocessing steps as prior work #OTHEREFR and ran SfM #OTHEREFR on all images to obtain camera poses for training."], "text_after_citation": ["As shown, SCADE is able to recover objects better than the baselines such as the table in the Church, the group of chairs in the Courtroom (second column), and the rows of seats in the Auditorium (clearer in the side-view seats on the second column).", "Moreover, results also show that SCADE avoids clouds of dust such as the lights on the wall of the Church (second column), painting on the wall of the Courtroom (last column) and details on the repetitive seats of the auditorium."], "citing_paper_content": {"title": "Scade: Nerfs From Space Carving With Ambiguity-Aware Depth Estimates", "abstract": "Figure 1. SCADE Overview. We present SCADE, a novel technique for NeRF reconstruction under sparse, unconstrained views for in-the-wild indoor scenes. We leverage generalizable monocular depth priors and address the inherent ambiguities of monocular depth by exploiting our ambiguity-aware depth estimates (left). Our approach accounts for multimodality of both distributions using our novel space carving loss that seeks to disambiguate and find the common mode to fuse the information between different views (middle). SCADE enables better photometric reconstruction especially in highly ambiguous scenes, e.g. 
non-opaque surfaces (right)."}, "cited_paper_content": {"title": "Taskonomy: Disentangling Task Transfer Learning", "abstract": "Do visual tasks have a relationship, or are they unrelated? For instance, could having surface normals simplify estimating the depth of an image? Intuition answers these questions positively, implying existence of a structure among visual tasks. Knowing this structure has notable values; it is the concept underlying transfer learning and provides a principled way for identifying redundancies across tasks, e.g., to seamlessly reuse supervision among related tasks or solve many tasks in one system without piling up the complexity. We propose a fully computational approach for modeling the structure of space of visual tasks. This is done via finding (first and higher-order) transfer learning dependencies across a dictionary of twenty-six 2D, 2.5D, 3D, and semantic tasks in a latent space. The product is a computational taxonomic map for task transfer learning. We study the consequences of this structure, e.g. nontrivial emerged relationships, and exploit them to reduce the demand for labeled data. For example, we show that the total number of labeled datapoints needed for solving a set of 10 tasks can be reduced by roughly 2/3 (compared to training independently) while keeping the performance nearly the same. 
We provide a set of tools for computing and probing this taxonomical structure including a solver that users can employ to devise efficient supervision policies for their use cases."}, "keywords": ["datasets", "Taskonomy"], "citation_intent": "result"} {"citing_id": "2303.14572v1", "cited_id": "1407.6756", "section_title": "Application: Improved 3Sum In Preprocessed Universes", "citation": "Kopelowitz, Pettie and Porat #REFR gave a simple randomized reduction from 3SUM to O(log n) instances of 3SUM-Convolution via hashing.", "text_before_citation": ["As one immediate application, we can solve 3SUM with preprocessed universe, improving Chan and Lewenstein's previous solution which required O(n^{13/7}) query time #OTHEREFR , and also improving Corollary 5.3 regardless of the value of \u03c9: the query algorithm does not use fast matrix multiplication but uses FFT instead, though randomization is now needed in the preprocessing algorithm.", "Corollary 10.4.", "We can preprocess sets A, B, and C of n integers in O(n^2) Las Vegas randomized time, so that given any subsets A\u2032 \u2286 A, B\u2032 \u2286 B, and C\u2032 \u2286 C, we can solve All-Nums-3SUM on (A\u2032, B\u2032, C\u2032) in O(n^{11/6}) time.", "Proof."], "text_after_citation": ["The same approach works in the preprocessed universe setting, and transforms the input into O(log n) instances where A, B, and C are indexed sets.", "During preprocessing, we apply Theorem 10.3 to A \u222a (\u2212B), producing subsets A^{(\u03bb)} and a set R of pairs.", "During a query with given subsets A\u2032 \u2286 A, B\u2032 \u2286 B, and C\u2032 \u2286 C, we first examine each pair (a, \u2212b) \u2208 R and check whether a \u2208 A\u2032, b \u2208 B\u2032, and a + b \u2208 C\u2032. 
This takes O(n^2/s) time.", "Next, for each \u03bb, we compute (A^{(\u03bb)} \u2229 A\u2032) + ((\u2212A^{(\u03bb)}) \u2229 B\u2032) by known FFT-based algorithms #OTHEREFR ; the running time is near-linear in the output size, which is bounded by |A^{(\u03bb)} \u2212 A^{(\u03bb)}|.", "For each output value c, we check whether c \u2208 C\u2032."], "citing_paper_content": {"title": "Fredman'S Trick Meets Dominance Product: Fine-Grained Complexity Of Unweighted Apsp, 3Sum Counting, And More", "abstract": "In this paper we carefully combine Fredman's trick [SICOMP'76] and Matou\u0161ek's approach for dominance product [IPL'91] to obtain powerful results in fine-grained complexity: \u2022 Under the hypothesis that APSP for undirected graphs with edge weights in {1, 2, ..., n} requires n^{3\u2212o(1)} time (when \u03c9 = 2), we show a variety of conditional lower bounds, including an n^{7/3\u2212o(1)} lower bound for unweighted directed APSP and an n^{2.2\u2212o(1)} lower bound for computing the Minimum Witness Product between two n \u00d7 n Boolean matrices, even if \u03c9 = 2, improving upon their trivial n^2 lower bounds. Our techniques can also be used to reduce the unweighted directed APSP problem to other problems. In particular, we show that (when \u03c9 = 2), if unweighted directed APSP requires n^{2.5\u2212o(1)} time, then Minimum Witness Product requires n^{7/3\u2212o(1)} time."}, "cited_paper_content": {"title": "Higher Lower Bounds From The 3Sum Conjecture", "abstract": "The 3SUM conjecture has proven to be a valuable tool for proving conditional lower bounds on dynamic data structures and graph problems. This line of work was initiated by P\\v{a}tra\\c{s}cu (STOC 2010) who reduced 3SUM to an offline SetDisjointness problem. However, the reduction introduced by P\\v{a}tra\\c{s}cu suffers from several inefficiencies, making it difficult to obtain tight conditional lower bounds from the 3SUM conjecture. 
In this paper we address many of the deficiencies of P\\v{a}tra\\c{s}cu's framework. We give new and efficient reductions from 3SUM to offline SetDisjointness and offline SetIntersection (the reporting version of SetDisjointness) which lead to polynomially higher lower bounds on several problems. Using our reductions, we are able to show the essential optimality of several algorithms, assuming the 3SUM conjecture. (1) Chiba and Nishizeki's $O(m\\alpha)$-time algorithm (SICOMP 1985) for enumerating all triangles in a graph with arboricity/degeneracy $\\alpha$ is essentially optimal, for any $\\alpha$. (2) Bj{\\o}rklund, Pagh, Williams, and Zwick's algorithm (ICALP 2014) for listing $t$ triangles is essentially optimal (assuming the matrix multiplication exponent is $\\omega=2$). (3) Any static data structure for SetDisjointness that answers queries in constant time must spend $\\Omega(N^{2-o(1)})$ time in preprocessing, where $N$ is the size of the set system. These statements were unattainable via P\\v{a}tra\\c{s}cu's reductions. We also introduce several new reductions from 3SUM to pattern matching problems and dynamic graph problems. Of particular interest are new conditional lower bounds for dynamic versions of Maximum Cardinality Matching, which introduce a new technique for obtaining amortized lower bounds."}, "keywords": ["O(log n) instances"], "citation_intent": "background"} {"citing_id": "2304.11400v1", "cited_id": "1704.02422", "section_title": "Results On Single-Coil Mri Reconstruction", "citation": "It is worth noting that even compared to the second best model DCCNN #REFR , PSNR is improved by 0.41 dB. 
This effectively illustrates the excellence of EAMRI.", "text_before_citation": ["In Table 1 , we show the quantitative results for all model reconstructions on the Calgary single-coil brain dataset.", "According to the table, we can observe that EAMRI achieves the best results for the quantification metrics under acceleration factor 4 and requires fewer model parameters (123K)."], "text_after_citation": ["Moreover, in Fig.", "6 , we provide a visual comparison of the reconstruction results of these models.", "We can see that EAMRI has fewer bright spots in the heatmaps, which means less error between the EAMRI reconstructed image and the ground truth image.", "Meanwhile, according to the zoomed-in images of the selected areas, we can observe that our EAMRI can reconstruct cleaner and more accurate edges. This further confirms the validity of EAMRI.", "Both the quantitative and the qualitative results for the single-coil MRI reconstruction demonstrate the effectiveness of EAMRI."], "citing_paper_content": {"title": "Fast Mri Reconstruction Via Edge Attention", "abstract": "Fast and accurate MRI reconstruction is a key concern in modern clinical practice. Recently, numerous Deep-Learning methods have been proposed for MRI reconstruction, however, they usually fail to reconstruct sharp details from the subsampled k-space data. To solve this problem, we propose a lightweight and accurate Edge Attention MRI Reconstruction Network (EAMRI) to reconstruct images with edge guidance. Specifically, we design an efficient Edge Prediction Network to directly predict accurate edges from the blurred image. Meanwhile, we propose a novel Edge Attention Module (EAM) to guide the image reconstruction utilizing the extracted edge priors, as inspired by the popular self-attention mechanism. EAM first projects the input image and edges into Q_image, K_edge, and V_image, respectively. 
Then EAM pairs the Q_image with K_edge along the channel dimension, such that 1) it can search globally for the high-frequency image features that are activated by the edge priors; 2) the overall computation burdens are largely reduced compared with the traditional spatial-wise attention. With the help of EAM, the predicted edge priors can effectively guide the model to reconstruct high-quality MR images with accurate edges. Extensive experiments show that our proposed EAMRI outperforms other methods with fewer parameters and can recover more accurate edges."}, "cited_paper_content": {"title": "A Deep Cascade Of Convolutional Neural Networks For Dynamic Mr Image Reconstruction", "abstract": "Inspired by recent advances in deep learning, we propose a framework for reconstructing dynamic sequences of 2-D cardiac magnetic resonance (MR) images from undersampled data using a deep cascade of convolutional neural networks (CNNs) to accelerate the data acquisition process. In particular, we address the case where data are acquired using aggressive Cartesian undersampling. First, we show that when each 2-D image frame is reconstructed independently, the proposed method outperforms state-of-the-art 2-D compressed sensing approaches, such as dictionary learning-based MR image reconstruction, in terms of reconstruction error and reconstruction speed. Second, when reconstructing the frames of the sequences jointly, we demonstrate that CNNs can learn spatio-temporal correlations efficiently by combining convolution and data sharing approaches. We show that the proposed method consistently outperforms state-of-the-art methods and is capable of preserving anatomical structure more faithfully up to 11-fold undersampling. 
Moreover, reconstruction is very fast: each complete dynamic sequence can be reconstructed in less than 10 s and, for the 2-D case, each image frame can be reconstructed in 23 ms, enabling real-time applications."}, "keywords": ["second best model", "DCCNN"], "citation_intent": "result"} {"citing_id": "2304.00858v1", "cited_id": "1905.00875", "section_title": "Self-Supervised Action Representation Learning", "citation": "To learn the contextual coherence in action representation, Lai and Xie #REFR matched pixelwise correspondence from the spatial-temporal color information. Han et al.", "text_before_citation": ["#OTHEREFR first proposed a convolutional auto-encoder to construct the latent space from the encoder output.", "By operating the high-level features, the model is functional in many areas, such as action interpolation and comparison.", "Later on, with a hierarchical RNN auto-encoder in both spatial and temporal domains, Wang et al.", "#OTHEREFR developed a high-quality representation space that is motivated for precise action modeling.", "As a vision-based learning task, the effectiveness of self-supervised representation is frequently explored in RGB-based action understanding."], "text_after_citation": ["#OTHEREFR exploited action representations from multiple modalities of RGB streams and optical flow.", "However, these RGB-based action recognition models usually learn contrastive representation from background or visual consistency #OTHEREFR .", "With the visual information unavailable, the challenge of learning self-supervised skeletonbased action representation mainly comes from the diverse pose information under different view observations #OTHEREFR .", "In self-supervised skeleton-based action representation learning, existing works such as #OTHEREFR mainly focus on preserving the action-dependent features as much as possible to identify samples #OTHEREFR .", "For example, an adversarial discriminator is used to assist the auto-encoder to rectify 
the reconstructed action for a more distinctive representation #OTHEREFR ."], "citing_paper_content": {"title": "Focalized Contrastive View-Invariant Learning For Self-Supervised Skeleton-Based Action Recognition", "abstract": "Learning view-invariant representation is a key to improving feature discrimination power for skeleton-based action recognition. Existing approaches cannot effectively remove the impact of viewpoint due to the implicit view-dependent representations. In this work, we propose a self-supervised framework called Focalized Contrastive View-invariant Learning (FoCoViL), which significantly suppresses the view-specific information on the representation space where the viewpoints are coarsely aligned. By maximizing mutual information with an effective contrastive loss between multi-view sample pairs, FoCoViL associates actions with common view-invariant properties and simultaneously separates the dissimilar ones. We further propose an adaptive focalization method based on pairwise similarity to enhance contrastive learning for a clearer cluster boundary in the learned space. Different from many existing self-supervised representation learning work that rely heavily on supervised classifiers, FoCoViL performs well on both unsupervised and supervised classifiers with superior recognition performance. Extensive experiments also show that the proposed contrastive-based focalization generates a more discriminative latent representation."}, "cited_paper_content": {"title": "Self-Supervised Learning For Video Correspondence Flow", "abstract": "The objective of this paper is self-supervised learning of feature embeddings that are suitable for matching correspondences along the videos, which we term correspondence flow. By leveraging the natural spatial-temporal coherence in videos, we propose to train a ``pointer'' that reconstructs a target frame by copying pixels from a reference frame. 
::: We make the following contributions: First, we introduce a simple information bottleneck that forces the model to learn robust features for correspondence matching, and prevent it from learning trivial solutions, \\eg matching based on low-level colour information. Second, to tackle the challenges from tracker drifting, due to complex object deformations, illumination changes and occlusions, we propose to train a recursive model over long temporal windows with scheduled sampling and cycle consistency. Third, we achieve state-of-the-art performance on DAVIS 2017 video segmentation and JHMDB keypoint tracking tasks, outperforming all previous self-supervised learning approaches by a significant margin. Fourth, in order to shed light on the potential of self-supervised learning on the task of video correspondence flow, we probe the upper bound by training on additional data, \\ie more diverse videos, further demonstrating significant improvements on video segmentation."}, "keywords": ["action representation", "pixelwise correspondence"], "citation_intent": "background"} {"citing_id": "2304.04726v1", "cited_id": "1803.05407", "section_title": "Representing Model Uncertainty", "citation": "Intuitively, ensembles and checkpoint averages also reflect the idea of different views and interpretations of the data and, therefore, provide a framework for uncertainty modeling. Stochastic Weight Averaging (SWA, #REFR ) and SWA-Gaussian (SWAG, Maddox et al. 
(2019)) both build on this idea.", "text_before_citation": ["The approach to uncertainty modeling that we consider is related to the well-established technique of model ensembling.", "Stochastic optimization procedures applied in training deep neural networks are non-deterministic and depend on hyper-parameters and initial seeds.", "Ensembles have been used as a pragmatic solution to average over several solutions, and the positive impact on model performance pushed ensembling into the standard toolbox of deep learning.", "Related to ensembling is the technique of checkpoint averaging (refer to e.g.", "#OTHEREFR , which is also known to improve performance."], "text_after_citation": ["SWA proposes using the first moments of the parameters of the solutions traversed by the optimizer during the optimization process, as mean estimates of the model parameters.", "Using such mean values has been argued to result in finding wider optima, providing better generalization to unseen data.", "On top of these mean estimations procured by SWA, SWAG then adds a low-rank plus diagonal approximation of covariances, which, when combined with the aforementioned mean estimations, provides us with corresponding Gaussian posterior approximations over model parameters.", "Posterior distribution approximations learned this way then represent our epistemic uncertainty about the model #OTHEREFR , meaning the uncertainty stemming from not knowing the perfect values of the model parameters, since we do not have infinite data to train on.", "During test time, instead of making estimates from a single model with deterministic parameters, we sample N different models from the approximated posteriors for each model parameter, and use the average of their prediction distributions as the model response."], "citing_paper_content": {"title": "Uncertainty-Aware Natural Language Inference With Stochastic Weight Averaging", "abstract": "This paper introduces Bayesian uncertainty modeling using
Stochastic Weight Averaging-Gaussian (SWAG) in Natural Language Understanding (NLU) tasks. We apply the approach to standard tasks in natural language inference (NLI) and demonstrate the effectiveness of the method in terms of prediction accuracy and correlation with human annotation disagreements. We argue that the uncertainty representations in SWAG better reflect subjective interpretation and the natural variation that is also present in human language understanding. The results reveal the importance of uncertainty modeling, an often neglected aspect of neural language modeling, in NLU tasks."}, "cited_paper_content": {"title": "Averaging Weights Leads To Wider Optima And Better Generalization", "abstract": "Deep neural networks are typically trained by optimizing a loss function with an SGD variant, in conjunction with a decaying learning rate, until convergence. We show that simple averaging of multiple points along the trajectory of SGD, with a cyclical or constant learning rate, leads to better generalization than conventional training. We also show that this Stochastic Weight Averaging (SWA) procedure finds much broader optima than SGD, and approximates the recent Fast Geometric Ensembling (FGE) approach with a single model. Using SWA we achieve notable improvement in test accuracy over conventional SGD training on a range of state-of-the-art residual networks, PyramidNets, DenseNets, and Shake-Shake networks on CIFAR-10, CIFAR-100, and ImageNet. In short, SWA is extremely easy to implement, improves generalization, and has almost no computational overhead."}, "keywords": ["uncertainty modeling", "Stochastic Weight Averaging"], "citation_intent": "background"} {"citing_id": "2303.09472v1", "cited_id": "1911.07783", "section_title": "B. 
Evaluation On Real-World Sr", "citation": "We evaluate all methods on the dataset provided in the challenge of Real-World Super-Resolution: NTIRE2020 Track1 and Tracks #REFR .", "text_before_citation": ["Specifically, we adopt the same loss functions of Real-ESRGAN #OTHEREFR , which further introduce perceptual loss and adversarial loss to the basic L 1 loss.", "We set the learning rate of the KDSR T to 2 \u00d7 10 \u22124 .", "We further validate the effectiveness of KDSR on Real-World datasets.", "For optimization, we use Adam with \u03b2 1 = 0.9, \u03b2 2 = 0.99.", "In both two stages of training, we set the batch size to 64, with the input patch size being 64."], "text_after_citation": ["In addition, we also validate our DiffIR on RealSRSet #OTHEREFR .", "Since NTIRE2020 Track1 and RealSRSet datasets provide a paired validation set, we use the LPIPS #OTHEREFR , DISTS #OTHEREFR , and PSNR for the evaluation.", "The quantitative results are shown in Tab. 6.", "We can see that DiffIR S2 outperforms SOTA real-world SR method KDSR S -GAN on LPIPS, DISTS, and PSNR, consuming fewer computational costs.", "In addition, we can see that DiffIR S2 outperforms classic real-world SR method Real-ESRGAN on LPIPS, DISTS, and PSNR, only consuming its 63% Mult-Adds."], "citing_paper_content": {"title": "Diffir: Efficient Diffusion Model For Image Restoration", "abstract": "Diffusion model (DM) has achieved SOTA performance by modeling the image synthesis process into a sequential application of a denoising network. However, different from image synthesis generating each pixel from scratch, most pixels of image restoration (IR) are given. Thus, for IR, traditional DMs running massive iterations on a large model to estimate whole images or feature maps is inefficient. To address this issue, we propose an efficient DM for IR (DiffIR), which consists of a compact IR prior extraction network (CPEN), dynamic IR transformer (DIRformer), and denoising network.
Specifically, DiffIR has two training stages: pretraining and training DM. In pretraining, we input ground-truth images into CPEN S1 to capture a compact IR prior representation (IPR) to guide DIRformer. In the second stage, we train the DM to directly estimate the same IPR as pretrained CPEN S1 only using LQ images. We observe that since the IPR is only a compact vector, DiffIR can use fewer iterations than traditional DM to obtain accurate estimations and generate more stable and realistic results. Since the iterations are few, our DiffIR can adopt a joint optimization of CPEN S2 , DIRformer, and denoising network, which can further reduce the estimation error influence. We conduct extensive experiments on several IR tasks and achieve SOTA performance while consuming less computational costs."}, "cited_paper_content": {"title": "Aim 2019 Challenge On Real-World Image Super-Resolution: Methods And Results", "abstract": "This paper reviews the AIM 2019 challenge on real world super-resolution. It focuses on the participating methods and final results. The challenge addresses the real world setting, where paired true high and low-resolution images are unavailable. For training, only one set of source input images is therefore provided in the challenge. In Track 1: Source Domain the aim is to super-resolve such images while preserving the low level image characteristics of the source input domain. In Track 2: Target Domain a set of high-quality images is also provided for training, that defines the output domain and desired quality of the super-resolved images. To allow for quantitative evaluation, the source input images in both tracks are constructed using artificial, but realistic, image degradations. The challenge is the first of its kind, aiming to advance the state-of-the-art and provide a standard benchmark for this newly emerging task.
In total 7 teams competed in the final testing phase, demonstrating new and innovative solutions to the problem."}, "keywords": ["dataset", "Resolution"], "citation_intent": "method"} {"citing_id": "2304.01242v1", "cited_id": "2003.01332", "section_title": "Ii. Related Work", "citation": "HGT #REFR uses node-type and edge-type dependent parameters to characterize the heterogeneous attention over each edge based on metarelation, which achieves the state-of-the-art in the mission of knowledge embedding.", "text_before_citation": ["Knowledge graph embedding aims to project entities and relations in the knowledge graph into a low-dimensional space.", "Graph Neural Networks (GNNs), such as GCN #OTHEREFR and GAT #OTHEREFR , use information aggregation from nearby nodes to learn graph representations. Wang et al.", "#OTHEREFR proposed AM-GCN to improve the embedding capability of GCN.", "This is because GCN may struggle to learn complex correlation information between node features and topological structures. 
However, in real-world applications, knowledge graphs can be heterogeneous.", "RGCN #OTHEREFR is used as the graph embedding model in MedRec #OTHEREFR which achieves state-of-the-art performance in drug recommendation."], "text_after_citation": ["Current studies on knowledge-based recommendation suffer a lot from the problem of graph sparsity, which severely reduces the effectiveness of identifying valuable items for users.", "Meanwhile, heterogeneous graph neural network becomes popular for its capacity to incorporate both structured and unstructured information.", "But the use of heterogeneous network in recommendation systems has not been extensively investigated."], "citing_paper_content": {"title": "Enhancing Clinical Evidence Recommendation With Multi-Channel Heterogeneous Learning On Evidence Graphs", "abstract": "Clinical evidence encompasses the associations and impacts between patients, interventions (such as drugs or physiotherapy), problems, and outcomes. The goal of recommending clinical evidence is to provide medical practitioners with relevant information to support their decision-making processes and to generate new evidence. Our specific task focuses on recommending evidence based on clinical problems. However, the direct connections between certain clinical problems and related evidence are often sparse, creating a challenge of link sparsity. Additionally, to recommend appropriate evidence, it is essential to jointly exploit both topological relationships among evidence and textual information describing them. To address these challenges, we define two knowledge graphs: an Evidence Co-reference Graph and an Evidence Text Graph, to represent the topological and linguistic relations among evidential elements, respectively. We also introduce a multi-channel heterogeneous learning model and a fusional attention mechanism to handle the coreference-text heterogeneity in evidence recommendation.
Our experiments demonstrate that our model outperforms state-of-the-art methods on open data."}, "cited_paper_content": {"title": "Heterogeneous Graph Transformer", "abstract": "Recent years have witnessed the emerging success of graph neural networks (GNNs) for modeling structured data. However, most GNNs are designed for homogeneous graphs, in which all nodes and edges belong to the same types, making them infeasible to represent heterogeneous structures. In this paper, we present the Heterogeneous Graph Transformer (HGT) architecture for modeling Web-scale heterogeneous graphs. To model heterogeneity, we design node- and edge-type dependent parameters to characterize the heterogeneous attention over each edge, empowering HGT to maintain dedicated representations for different types of nodes and edges. To handle dynamic heterogeneous graphs, we introduce the relative temporal encoding technique into HGT, which is able to capture the dynamic structural dependency with arbitrary durations. To handle Web-scale graph data, we design the heterogeneous mini-batch graph sampling algorithm---HGSampling---for efficient and scalable training. 
Extensive experiments on the Open Academic Graph of 179 million nodes and 2 billion edges show that the proposed HGT model consistently outperforms all the state-of-the-art GNN baselines by 9%--21% on various downstream tasks."}, "keywords": ["heterogeneous attention"], "citation_intent": "method"} {"citing_id": "2303.16202v1", "cited_id": "1811.10541", "section_title": "Related Work", "citation": "HiPPI #REFR is a computationally efficient method that takes geometric relations into account while generalising permutation synchronization but is still limited in resolution.", "text_before_citation": ["When more than two shapes of the same class exist, stronger geometric cues can be leveraged to improve results by matching all of them simultaneously.", "Unfortunately, the already very high problem complexity increases even further the more shapes are used.", "Hence, existing multi-shape matching methods limit the total number of shapes and their resolution #OTHEREFR , work in spectral space #OTHEREFR , or relax the permutation constraints #OTHEREFR .", "Early multi-matching methods computed pair-wise matchings and subsequently used permutation synchronisation to establish cycle consistency #OTHEREFR .", "Still, permutation synchronisation requires the eigendecomposition of a matrix with quadratically increasing dimensions #OTHEREFR ."], "text_after_citation": ["Instead of looking at permutations directly, ZoomOut #OTHEREFR reduces the dimensionality of the problem by projecting it onto the spectral decomposition.", "This idea has been extended to take cycle consistency within the spectral space into account #OTHEREFR , which does not guarantee a point-wise consistent matching.", "To circumvent this issue, IsoMuSh #OTHEREFR jointly optimises point and functional correspondences.", "The method detangles the optimisation into smaller subproblems by using a so-called universe shape that all shapes are mapped to instead of each other, as Cao and Bernard do #OTHEREFR .", 
"Using a universe is similar to requiring a template shape, as many learning-based approaches do #OTHEREFR : Both synchronise all correspondences by matching them through a unified space."], "citing_paper_content": {"title": "Ccuantumm: Cycle-Consistent Quantum-Hybrid Matching Of Multiple Shapes", "abstract": "Jointly matching multiple, non-rigidly deformed 3D shapes is a challenging, NP-hard problem. A perfect matching is necessarily cycle-consistent: Following the pairwise point correspondences along several shapes must end up at the starting vertex of the original shape. Unfortunately, existing quantum shape-matching methods do not support multiple shapes and even less cycle consistency. This paper addresses the open challenges and introduces the first quantum-hybrid approach for 3D shape multimatching; in addition, it is also cycle-consistent. Its iterative formulation is admissible to modern adiabatic quantum hardware and scales linearly with the total number of input shapes. Both these characteristics are achieved by reducing the N-shape case to a sequence of three-shape matchings, the derivation of which is our main technical contribution. Thanks to quantum annealing, high-quality solutions with low energy are retrieved for the intermediate NP-hard objectives. On benchmark datasets, the proposed approach significantly outperforms extensions to multi-shape matching of a previous quantum-hybrid two-shape matching method and is on-par with classical multi-matching methods. Our source code is available at 4dqv.mpiinf.mpg.de/CCuantuMM/."}, "cited_paper_content": {"title": "Hippi: Higher-Order Projected Power Iterations For Scalable Multi-Matching", "abstract": "The matching of multiple objects (e.g. shapes or images) is a fundamental problem in vision and graphics. In order to robustly handle ambiguities, noise and repetitive patterns in challenging real-world settings, it is essential to take geometric consistency between points into account.
Computationally, the multi-matching problem is difficult. It can be phrased as simultaneously solving multiple (NP-hard) quadratic assignment problems (QAPs) that are coupled via cycle-consistency constraints. The main limitations of existing multi-matching methods are that they either ignore geometric consistency and thus have limited robustness, or they are restricted to small-scale problems due to their (relatively) high computational cost. We address these shortcomings by introducing a Higher-order Projected Power Iteration method, which is (i) efficient and scales to tens of thousands of points, (ii) straightforward to implement, (iii) able to incorporate geometric consistency, (iv) guarantees cycle-consistent multi-matchings, and (v) comes with theoretical convergence guarantees. Experimentally we show that our approach is superior to existing methods."}, "keywords": ["permutation synchronization"], "citation_intent": "method"} {"citing_id": "2303.11964v1", "cited_id": "1607.04247", "section_title": "Simulation Of The Stable Undershoot Via Domination By A Mixture Of Densities", "citation": "In the following two subsections we describe how to sample from the laws given by the densities \u03c8 The bivariate density \u03c8 #REFR s can be written as the product of a marginal log-concave density and an exponential density with deterministic shift and a random scale.", "text_before_citation": ["The random running time of Algorithm 5 has exponential moments.", "Moreover, the expected running time of Algorithm 5 is bounded above by", "$\\kappa_5 (1-\\alpha)^{-1} \\big( 1 + \\alpha (1-\\alpha)^{-1} \\log_+^2(s^{-1}) + \\log_+^2\\big( 1/(s^{\\alpha/(1-\\alpha)} - \\alpha^{(1-2\\alpha)/(1-\\alpha)} (1-\\alpha)) \\big) + \\alpha \\log_+^2(s) + |\\log(\\alpha)| + (1-\\alpha)^{-2} + \\log N \\big) ,$", "where the constant \u03ba 5 > 0 depends on neither s = (\u03b8t) \u22121/\u03b1 w \u2208 (0, \u221e) (see line 1 of Algorithm 5), t > 0, \u03b8 \u2208 (0, \u221e)
nor \u03b1 \u2208 (0, 1).", "The proof of Proposition 2.5 is given in Subsection 4.3.4 below."], "text_after_citation": ["The latter is easy to simulate and the former can also be simulated via the general Algorithm 11 by virtue of being log-concave.", "Require: Parameters \u03b1 \u2208 (0, 1) and s > 0", "1: Sample Y s with density y \u2192 exp(\u2212\u03c3 \u03b1 (y)s \u2212r ) via Algorithm 11 2: Sample E \u223c Exp(1) and return (s \u2212r + E/\u03c3 \u03b1 (Y s ), Y s )", "Proposition 2.6.", "Algorithm 6 samples from the density \u03c8 #OTHEREFR s ."], "citing_paper_content": {"title": "Fast Exact Simulation Of The First Passage Of A Tempered Stable Subordinator Across A Non-Increasing Function", "abstract": "We construct a fast exact algorithm for the simulation of the first-passage time, jointly with the undershoot and overshoot, of a tempered stable subordinator over an arbitrary non-increasing absolutely continuous function. We prove that the running time of our algorithm has finite exponential moments and provide bounds on its expected running time with explicit dependence on the characteristics of the process and the initial value of the function. The expected running time grows at most cubically in the stability parameter (as it approaches either 0 or 1) and is linear in the tempering parameter and the initial value of the function. Numerical performance, based on the implementation in the dedicated GitHub repository, exhibits a good agreement with our theoretical bounds. We provide numerical examples to illustrate the performance of our algorithm in Monte Carlo estimation."}, "cited_paper_content": {"title": "Accurate And Efficient Numerical Calculation Of Stable Densities Via Optimized Quadrature And Asymptotics", "abstract": "Stable distributions are an important class of infinitely-divisible probability distributions, of which two special cases are the Cauchy distribution and the normal distribution. 
Aside from a few special cases, the density function for stable distributions has no known analytic form, and is expressible only through the variate's characteristic function or other integral forms. In this paper we present numerical schemes for evaluating the density function for stable distributions, its gradient, and distribution function in various parameter regimes of interest, some of which had no pre-existing efficient method for their computation. The novel evaluation schemes consist of optimized generalized Gaussian quadrature rules for integral representations of the density function, complemented by various asymptotic expansions near various values of the shape and argument parameters. We report several numerical examples illustrating the efficiency of our methods. The resulting code has been made available online."}, "keywords": ["deterministic shift", "bivariate density"], "citation_intent": "background"} {"citing_id": "2304.14298v1", "cited_id": "1805.01934", "section_title": "Table 13", "citation": "And numerical results are surprisingly good, i.e., 27.2 AP with SID #REFR , which outperforms baseline by 7.4 points. 
This implies the superiority of using RAW images.", "text_before_citation": ["And ResNet-50-FPN #OTHEREFR Faster R-CNN 22.1 35.6 23.5 GLADNet #OTHEREFR Faster R-CNN 15.4 24.9 16.4 Retinex-Net #OTHEREFR Faster R-CNN 19.6 31.1 21.5 EnlightenGAN #OTHEREFR Faster R-CNN 21.1 34.8 21.9 Zero-DCE #OTHEREFR Faster R-CNN 22.0 35.9 23.5", "Enhance + Denoise HE #OTHEREFR ) + SGN #OTHEREFR Faster R-CNN 25.1 39.8 26.9 GLADNet #OTHEREFR ) + SGN #OTHEREFR Faster R-CNN 24.1 39.1 25.3 Retinex-Net #OTHEREFR ) + SGN #OTHEREFR Faster R-CNN 25.5 41.0 27.4 EnlightenGAN #OTHEREFR , we intuitively expect performance improvement, but the accuracies stay the same (with histogram equalization #OTHEREFR and Zero-DCE #OTHEREFR ) or even decrease (with GLADNet #OTHEREFR , Retinex-Net #OTHEREFR and EnlightenGAN #OTHEREFR ).", "We guess the reason is that these enhancers only improve the overall brightness but cannot handle the noise.", "To verify it, we further introduce denoiser to the pipeline, and the overall accuracy significantly increases as expected, e.g., Zero-DCE #OTHEREFR plus SGN #OTHEREFR leads to 6.7 AP gain.
Notice that these methods for comparison use camera outputs.", "Then, we also perform experiments with SID #OTHEREFR and REDI #OTHEREFR , which can restore sRGB images from low-light RAW images."], "text_after_citation": ["Though these enhancing and denoising steps boost the low-light instance segmentation performance remarkably, our method achieves the best quantitative results without extra preprocessing steps.", "Besides, the inference speed of the proposed method outperforms all other pipelines.", "And its speed is very close to the original Mask R-CNN #OTHEREFR .", "Moreover, qualitative results illustrated in Figure 12 show the proposed method can consistently recall most of the targets even in challenging scenarios."], "citing_paper_content": {"title": "Instance Segmentation In The Dark", "abstract": "Existing instance segmentation techniques are primarily tailored for high-visibility inputs, but their performance significantly deteriorates in extremely low-light environments. In this work, we take a deep look at instance segmentation in the dark and introduce several techniques that substantially boost the low-light inference accuracy. The proposed method is motivated by the observation that noise in low-light images introduces high-frequency disturbances to the feature maps of neural networks, thereby significantly degrading performance. To suppress this "feature noise", we propose a novel learning method that relies on an adaptive weighted downsampling layer, a smooth-oriented convolutional block, and disturbance suppression learning. These components effectively reduce feature noise during downsampling and convolution operations, enabling the model to learn disturbance-invariant features. Furthermore, we discover that high-bit-depth RAW images can better preserve richer scene information in low-light conditions compared to typical camera sRGB outputs, thus supporting the use of RAW-input algorithms.
Our analysis indicates that high bit"}, "cited_paper_content": {"title": "Learning To See In The Dark", "abstract": "Imaging in low light is challenging due to low photon count and low SNR. Short-exposure images suffer from noise, while long exposure can induce blur and is often impractical. A variety of denoising, deblurring, and enhancement techniques have been proposed, but their effectiveness is limited in extreme conditions, such as video-rate imaging at night. To support the development of learning-based pipelines for low-light image processing, we introduce a dataset of raw short-exposure low-light images, with corresponding long-exposure reference images. Using the presented dataset, we develop a pipeline for processing low-light images, based on end-to-end training of a fully-convolutional network. The network operates directly on raw sensor data and replaces much of the traditional image processing pipeline, which tends to perform poorly on such data. We report promising results on the new dataset, analyze factors that affect performance, and highlight opportunities for future work. The results are shown in the supplementary video at this https URL"}, "keywords": ["RAW images"], "citation_intent": "result"} {"citing_id": "2304.00173v1", "cited_id": "2003.07962", "section_title": "Downstream Las Decoder / Deliberation", "citation": "Figure 3 also ends with an LAS decoder, except this one can optionally attend to the continuous encoder features as well, as is done in previous deliberation work #REFR . Gradients do not flow back through embedded N-best.", "text_before_citation": ["A fitting baseline to this experiment is second-pass deliberation ASR #OTHEREFR .", "Typically, a deliberation system generates first-pass hypotheses using a fast decoder, like RNN-T, then embeds its Nbest hyps and attends to them with a second-pass full-context LAS decoder.", "We have therefore constructed a comparable deliberation baseline model shown in Figure 3 . 
This model is analogous to our full pipeline, i.e.", "Figures 1 & 2 put together, and is designed to have a similar total model size and encoder latency.", "It starts with the same frozen base encoder, then trains a first-pass RNN-T decoder to obtain the N-best hyps, which stands to be compared to the Lego-Features in terms of informativeness and modularity."], "text_after_citation": [], "citing_paper_content": {"title": "Lego-Features: Exporting Modular Encoder Features For Streaming And Deliberation Asr", "abstract": "In end-to-end (E2E) speech recognition models, a representational tight-coupling inevitably emerges between the encoder and the decoder. We build upon recent work that has begun to explore building encoders with modular encoded representations, such that encoders and decoders from different models can be stitched together in a zero-shot manner without further fine-tuning. While previous research only addresses full-context speech models, we explore the problem in a streaming setting as well. Our framework builds on top of existing encoded representations, converting them to modular features, dubbed as Lego-Features, without modifying the pre-trained model. The features remain interchangeable when the model is retrained with distinct initializations. Though sparse, we show that the Lego-Features are powerful when tested with RNN-T or LAS decoders, maintaining high-quality downstream performance. They are also rich enough to represent the first-pass prediction during twopass deliberation. In this scenario, they outperform the N-best hypotheses, since they do not need to be supplemented with acoustic features to deliver the best results. Moreover, generating the Lego-Features does not require beam search or auto-regressive computation. 
Overall, they present a modular, powerful and cheap alternative to the standard encoder output, as well as the N-best hypotheses."}, "cited_paper_content": {"title": "Deliberation Model Based Two-Pass End-To-End Speech Recognition", "abstract": "End-to-end (E2E) models have made rapid progress in automatic speech recognition (ASR) and perform competitively relative to conventional models. To further improve the quality, a two-pass model has been proposed to rescore streamed hypotheses using the non-streaming Listen, Attend and Spell (LAS) model while maintaining a reasonable latency. The model attends to acoustics to rescore hypotheses, as opposed to a class of neural correction models that use only first-pass text hypotheses. In this work, we propose to attend to both acoustics and first-pass hypotheses using a deliberation network. A bidirectional encoder is used to extract context information from first-pass hypotheses. The proposed deliberation model achieves 12% relative WER reduction compared to LAS rescoring in Google Voice Search (VS) tasks, and 23% reduction on a proper noun test set. Compared to a large conventional model, our best model performs 21% relatively better for VS. 
In terms of computational complexity, the deliberation decoder has a larger size than the LAS decoder, and hence requires more computations in second-pass decoding."}, "keywords": ["LAS decoder"], "citation_intent": "background"} {"citing_id": "2303.00844v1", "cited_id": "1009.3525", "section_title": "Literature Review", "citation": "A similar result was derived in #REFR from a probabilistic point of view where the signal support is assumed to be formed by two subsets with different probability of occurrence.", "text_before_citation": ["Weights have been employed in sparse recovery methods for various purposes.", "For instance, in the seminal work #OTHEREFR , the authors propose to solve a sequence of (re)weighted \u2113 1 minimization problems to enhance sparse signal recovery.", "In our context, weights can generally be thought of as a way of incorporating prior information about the signal into a sparse recovery model.", "In adaptive LASSO #OTHEREFR , a data-driven but careful choice of weights is shown to admit near oracle properties.", "Works such as #OTHEREFR show that replacing the \u2113 1 -norm with its weighted version can improve recovery assuming that accurate (partial) support knowledge is provided."], "text_after_citation": ["Further studies of weighted \u2113 1 minimization and its impactful application in the context of function approximation from pointwise samples and uncertainty quantification include #OTHEREFR .", "The notion of weighted sparsity was formalized in #OTHEREFR .", "Weighted sparsity is related to structured sparsity (see #OTHEREFR ).", "In fact it allows one to promote structures (rather than being a structure itself).", "For example, in the context of highdimensional function approximation (see #OTHEREFR and references therein) weights are able to promote so-called sparsity in lower sets, which largely contributes to mitigating the curse of dimensionality in the sample complexity."], "citing_paper_content": {"title": "The Greedy 
Side Of The Lasso: New Algorithms For Weighted Sparse Recovery Via Loss Function-Based Orthogonal Matching Pursuit", "abstract": "We propose a class of greedy algorithms for weighted sparse recovery by considering new loss function-based generalizations of Orthogonal Matching Pursuit (OMP). Given a (regularized) loss function, the proposed algorithms alternate the iterative construction of the signal support via greedy index selection and a signal update based on solving a local data-fitting problem restricted to the current support. We show that greedy selection rules associated with popular weighted sparsity-promoting loss functions admit explicitly computable and simple formulas. Specifically, we consider \u2113 0-and \u2113 1-based versions of the weighted LASSO (Least Absolute Shrinkage and Selection Operator), the Square-Root LASSO (SR-LASSO) and the Least Absolute Deviations LASSO (LAD-LASSO). Through numerical experiments on Gaussian compressive sensing and high-dimensional function approximation, we demonstrate the effectiveness of the proposed algorithms and empirically show that they inherit desirable characteristics from the corresponding loss functions, such as SR-LASSO's noise-blind optimal parameter tuning and LAD-LASSO's fault tolerance. In doing so, our study sheds new light on the connection between greedy sparse recovery and convex relaxation."}, "cited_paper_content": {"title": "Analyzing Weighted $\\Ell_1$ Minimization For Sparse Recovery With Nonuniform Sparse Models", "abstract": "In this paper we introduce a nonuniform sparsity model and analyze the performance of an optimized weighted $\\ell_1$ minimization over that sparsity model. In particular, we focus on a model where the entries of the unknown vector fall into two sets, with entries of each set having a specific probability of being nonzero. We propose a weighted $\\ell_1$ minimization recovery algorithm and analyze its performance using a Grassmann angle approach. 
We compute explicitly the relationship between the system parameters (the weights, the number of measurements, the size of the two sets, the probabilities of being nonzero) so that when i.i.d. random Gaussian measurement matrices are used, the weighted $\\ell_1$ minimization recovers a randomly selected signal drawn from the considered sparsity model with overwhelming probability as the problem dimension increases. This allows us to compute the optimal weights. We demonstrate through rigorous analysis and simulations that for the case when the support of the signal can be divided into two different subclasses with unequal sparsity fractions, the optimal weighted $\\ell_1$ minimization outperforms the regular $\\ell_1$ minimization substantially. We also generalize the results to an arbitrary number of classes."}, "keywords": ["signal support"], "citation_intent": "result"} {"citing_id": "2304.01447v1", "cited_id": "1802.05098", "section_title": "Iterated Prisoner'S Dilemma", "citation": "Additionally, we compared the performance of LOLA-OffPA2 with LOLA-DiCE #REFR , which is designed specifically for this game, and we reported the results in Table 3.", "text_before_citation": ["The LOLA agents can shape the opponent's learning to encourage cooperation and, therefore, converge to TFT #OTHEREFR .", "We evaluate the methods' performances based on the Averaged Episode Reward (AER).", "Cooperate (\u22121, \u22121) (\u22123, 0) Defect (0, \u22123) (\u22123, \u22123)", "In Figure 4, we depict the learning curves for LOLA-OffPA2 and the MADDPG-based methods.", "From this figure, we find that only LOLA-OffPA2 can solve the game, which once again highlights the importance of learning anticipation."], "text_after_citation": ["Although both methods demonstrate high values of AER, our LOLA-OffPA2 is significantly more efficient as its LATC value is much lower than that of LOLA-DiCE.", "Table 5: Comparisons of LOLA-DiCE with our proposed LOLA-OffPA2 in the Exit-Room game, in 
terms of performance (normalized average return in different game levels) and efficiency (learning anticipation time complexity in different reasoning levels)."], "citing_paper_content": {"title": "Off-Policy Action Anticipation In Multi-Agent Reinforcement Learning", "abstract": "Learning anticipation in Multi-Agent Reinforcement Learning (MARL) is a reasoning paradigm where agents anticipate the learning steps of other agents to improve cooperation among themselves. As MARL uses gradient-based optimization, learning anticipation requires using Higher-Order Gradients (HOG), with so-called HOG methods. Existing HOG methods are based on policy parameter anticipation, i.e., agents anticipate the changes in policy parameters of other agents. Currently, however, these existing HOG methods have only been applied to differentiable games or games with small state spaces. In this work, we demonstrate that in the case of non-differentiable games with large state spaces, existing HOG methods do not perform well and are inefficient due to their inherent limitations related to policy parameter anticipation and multiple sampling stages. To overcome these problems, we propose Off-Policy Action Anticipation (OffPA2), a novel framework that approaches learning anticipation through action anticipation, i.e., agents anticipate the changes in actions of other agents, via off-policy sampling. We theoretically analyze our proposed OffPA2 and employ it to develop multiple HOG methods that are applicable to non-differentiable games with large state spaces. We conduct a large set of experiments and illustrate that our proposed HOG methods outperform the existing ones regarding efficiency and performance."}, "cited_paper_content": {"title": "Dice: The Infinitely Differentiable Monte-Carlo Estimator", "abstract": "The score function estimator is widely used for estimating gradients of stochastic objectives in Stochastic Computation Graphs (SCG), e.g., 
in reinforcement learning and meta-learning. While deriving the first-order gradient estimators by differentiating a surrogate loss (SL) objective is computationally and conceptually simple, using the same approach for higher-order gradients is more challenging. Firstly, analytically deriving and implementing such estimators is laborious and not compliant with automatic differentiation. Secondly, repeatedly applying SL to construct new objectives for each order gradient involves increasingly cumbersome graph manipulations. Lastly, to match the first-order gradient under differentiation, SL treats part of the cost as a fixed sample, which we show leads to missing and wrong terms for higher-order gradient estimators. To address all these shortcomings in a unified way, we introduce DiCE, which provides a single objective that can be differentiated repeatedly, generating correct gradient estimators of any order in SCGs. Unlike SL, DiCE relies on automatic differentiation for performing the requisite graph manipulations. We verify the correctness of DiCE both through a proof and through numerical evaluation of the DiCE gradient estimates. We also use DiCE to propose and evaluate a novel approach for multi-agent learning. Our code is available at this https URL"}, "keywords": ["game", "LOLA-DiCE"], "citation_intent": "result"} {"citing_id": "2305.02337v1", "cited_id": "1302.5843", "section_title": "C. 
Ising Model", "citation": "It is a natural starting point for Hamiltonian simulation since it can be used to formulate many computationally hard problems, such as spin glasses, Quadratic Unconstrained Binary Optimization problems (QUBOs), or graph partitioning #REFR .", "text_before_citation": ["This paper will primarily focus on the transverse-field Ising model as a representative of an important class of Hamiltonians.", "In its general form, it models the spins of particles and is described by the Hamiltonian", "EQUATION", "where J ij is the interaction strength between nearest-neighbor spin pairs at sites i, j and g j is an external field pointing perpendicular to the interactions at site j."], "text_after_citation": ["The following constrained version of this type of Hamiltonian will be used to illustrate all proposed concepts and methods throughout the remainder of this paper.", "Example 1. The L-site finite 1D Ising chain is defined by", "EQUATION", "where the parameters are site-independent, i.e., J ij := J and g := g.", "Using the product formula, a single Trotter under this model has the form"], "citing_paper_content": {"title": "Towards Hamiltonian Simulation With Decision Diagrams", "abstract": "This paper proposes a novel approach to Hamiltonian simulation using Decision Diagrams (DDs), which are an exact representation based on exploiting redundancies in representations of quantum states and operations. While the simulation of Hamiltonians has been studied extensively, scaling these simulations to larger or more complex systems is often challenging and may require approximations or new simulation methods altogether. DDs offer such an alternative that has not yet been applied to Hamiltonian simulation. In this work, we investigate the behavior of DDs for this task. 
To this end, we review the basics of DDs such as their construction and present how the relevant operations for Hamiltonian simulation are implemented in this data structure, leading to the first DD-based Hamiltonian simulation approach. Based on several series of evaluations and comparisons, we then discuss insights about the performance of this complementary approach. Overall, these studies show that DDs indeed may offer a promising new data structure which, for certain examples, can provide orders of magnitude of improvement compared to the state-of-the-art, yet also comes with its own, fundamentally different, limitations."}, "cited_paper_content": {"title": "Ising Formulations Of Many Np Problems", "abstract": "We provide Ising formulations for many NP-complete and NP-hard problems, including all of Karp's 21 NP-complete problems. This collects and extends mappings to the Ising model from partitioning, covering and satisfiability. In each case, the required number of spins is at most cubic in the size of the problem. 
This work may be useful in designing adiabatic quantum optimization algorithms."}, "keywords": ["Hamiltonian simulation", "many computationally hard"], "citation_intent": "background"} {"citing_id": "2303.01841v1", "cited_id": "1806.07366", "section_title": "Related Work", "citation": "Much more closely aligned to our work, and a natural fit for irregularly sampled data is research that uses differential equations to model continuous-time processes #REFR .", "text_before_citation": ["Time series modelling in machine learning: There is vast literature on the use of machine learning for time series modelling and we highlight some of the ideas that have been explored to adapt diverse kinds of models for irregular time series data.", "Although not naturally well suited to learning representations of such data, there have been modifications proposed to discrete-time models such as recurrent neural networks #OTHEREFR to handle such data.", "Models such as mTANs #OTHEREFR leverage an attention-based approach to interpolate sequences to create discrete-time data from irregularly sampled data.", "Another strategy has been architectural modifications to the recurrence equations e.g.", "CT-GRU #OTHEREFR , GRU-D #OTHEREFR and Unitary RNNs #OTHEREFR ."], "text_after_citation": ["By parameterizing the derivative of a time series using neural networks and integrating the dynamics over unobserved time points, this class of models is well suited to handle irregularly sampled data.", "This includes models such as ODE-RNN #OTHEREFR , ODE-LSTM #OTHEREFR and Neural CDE #OTHEREFR .", "ODE-based approaches require the use of differential equation solvers during training and inference, which can come at the cost of runtime #OTHEREFR .", "PolyODEs lie in this family of models; specifically, this work proposes a new parameterization of the dynamics function and a practical method for learning that enables this model family to accurately forecast the future and reconstruct the past greatly 
enhancing the scope and utility of the learned embeddings.", "Orthogonal polynomials: PolyODEs are inspired by a rich line of work in orthogonal decomposition of time series data."], "citing_paper_content": {"title": "Anamnesic Neural Differential Equations With Orthogonal Polynomials Projections", "abstract": "Neural ordinary differential equations (Neural ODEs) are an effective framework for learning dynamical systems from irregularly sampled time series data. These models provide a continuous-time latent representation of the underlying dynamical system where new observations at arbitrary time points can be used to update the latent representation of the dynamical system. Existing parameterizations for the dynamics functions of Neural ODEs limit the ability of the model to retain global information about the time series; specifically, a piece-wise integration of the latent process between observations can result in a loss of memory on the dynamic patterns of previously observed data points. We propose PolyODE, a Neural ODE that models the latent continuous-time process as a projection onto a basis of orthogonal polynomials. This formulation enforces long-range memory and preserves a global representation of the underlying dynamical system. Our construction is backed by favourable theoretical guarantees and in a series of experiments, we demonstrate that it outperforms previous works in the reconstruction of past and future data, and in downstream prediction tasks. Our code is available at https://github.com/edebrouwer/polyode."}, "cited_paper_content": {"title": "Neural Ordinary Differential Equations", "abstract": "We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. 
These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models."}, "keywords": ["continuous-time processes", "differential equations"], "citation_intent": "background"} {"citing_id": "2304.11045v1", "cited_id": "1603.09320", "section_title": "Iii. Lightdxml: Architecture", "citation": "Like in Astec, this is achieved in O(log(L)) time per data point using an ANN data structure #REFR .", "text_before_citation": ["Specifically, the goal is now to train label encoder Z l to minimize the mean squared error between feature representations x i (obtained from Step 1) and label embedding l j if the j th label is positive for the data point i.", "By letting P i be the set of all positive labels for the i th data point, i.e., P = {j :", "EQUATION", "In", "Step 3, LightDXML finds the top k = O(log(L)) most hard negative labels for each data point using the shortlist S k by finding the k nearest label points ŷ j to the feature representation x i ."], "text_after_citation": ["Using S k , we now have access to both positive and negative labels for a data point.", "In", "Step 4, making use of S k , LightDXML learns the one-vs-all classifier Z by jointly learning data feature embeddings Ẑ_x, and training on the union of the set of positive labels and a set of shortlisted negative labels (instead of training on all labels), i.e., argmin_{Ẑ_x, Z} L", "(Ẑ_x, Z) = ∑_{i=1}^{N} ∑_{j ∈ P_i ∪ S_k(x_i)} ℓ(x_i, y_{ij}; Ẑ_x, Z).", "(3) It 
should be noted that Problem (3) is more efficient to solve than the original problem (1), due to training on a much reduced set of labels."], "citing_paper_content": {"title": "Light-Weight Deep Extreme Multilabel Classification", "abstract": "Extreme multi-label (XML) classification refers to the task of supervised multi-label learning that involves a large number of labels. Hence, scalability of the classifier with increasing label dimension is an important consideration. In this paper, we develop a method called LightDXML which modifies the recently developed deep learning based XML framework by using label embeddings instead of feature embedding for negative sampling and iterating cyclically through three major phases: (1) proxy training of label embeddings (2) shortlisting of labels for negative sampling and (3) final classifier training using the negative samples. Consequently, LightDXML also removes the requirement of a re-ranker module, thereby, leading to further savings on time and memory requirements. The proposed method achieves the best of both worlds: while the training time, model size and prediction times are on par or better compared to the tree-based methods, it attains much better prediction accuracy that is on par with the deep learning based methods. Moreover, the proposed approach achieves the best tail-label prediction accuracy over most state-of-the-art XML methods on some of the large datasets 1. Code: https://github.com/misterpawan/ LightDXML"}, "cited_paper_content": {"title": "Efficient And Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs", "abstract": "We present a new approach for the approximate K-nearest neighbor search based on navigable small world graphs with controllable hierarchy (Hierarchical NSW, HNSW). 
The proposed solution is fully graph-based, without any need for additional search structures, which are typically used at the coarse search stage of the most proximity graph techniques. Hierarchical NSW incrementally builds a multi-layer structure consisting from hierarchical set of proximity graphs (layers) for nested subsets of the stored elements. The maximum layer in which an element is present is selected randomly with an exponentially decaying probability distribution. This allows producing graphs similar to the previously studied Navigable Small World (NSW) structures while additionally having the links separated by their characteristic distance scales. Starting search from the upper layer together with utilizing the scale separation boosts the performance compared to NSW and allows a logarithmic complexity scaling. Additional employment of a heuristic for selecting proximity graph neighbors significantly increases performance at high recall and in case of highly clustered data. Performance evaluation has demonstrated that the proposed general metric space search index is able to strongly outperform previous opensource state-of-the-art vector-only approaches. 
Similarity of the algorithm to the skip list structure allows straightforward balanced distributed implementation."}, "keywords": ["ANN data structure"], "citation_intent": "method"} {"citing_id": "2304.05642v1", "cited_id": "1810.04805", "section_title": "Introduction", "citation": "Discrete prompt tuning transforms the task into a \"fill-in-the-blank\" format and then utilizes a pre-trained language model to predict the answer, which operates similarly to the masked language model (MLM) #REFR .", "text_before_citation": ["Prompt-based methods can be classified into two categories: discrete prompt tuning (Radford et al., 2018; #OTHEREFR and continuous prompt tuning #OTHEREFR Liu et al., 2021b) ."], "text_after_citation": ["In subsequent research, a recent study #OTHEREFR introduced soft prompts (i.e., prompt embeddings) to replace manual templates, which consist of special tokens with adjustable embeddings.", "We refer to this method as \"prompt tuning\" for simplicity.", "Vanilla prompt tuning, which concatenates prompt embeddings with input tokens in the first layer and updates only the parameters of the prompt embeddings during the training phase, has several limitations #OTHEREFR Su et al., 2021) .", "Firstly, since the effectiveness of prompt embeddings is highly related to the length, it is necessary to use prompt embeddings with hundreds of tokens in length to achieve better downstream task performance, as suggested by #OTHEREFR and (Su et al., 2021) .", "However, longer prompt embeddings also reduce the length space of normal input text."], "citing_paper_content": {"title": "Global Prompt Cell: A Portable Control Module For Effective Prompt Tuning", "abstract": "As a novel approach to tuning pre-trained models, prompt tuning involves freezing the parameters in downstream tasks while inserting trainable embeddings into inputs in the first layer. However, previous methods have mainly focused on the initialization of prompt embeddings. 
The question of how to train and utilize prompt embeddings in a reasonable way has become a limiting factor in the effectiveness of prompt tuning. To address this issue, we introduce the Global Prompt Cell (GPC), a portable control module for prompt tuning that selectively preserves prompt information across all encoder layers. Our experimental results demonstrate a 5.8% improvement on SuperGLUE datasets compared to vanilla prompt tuning."}, "cited_paper_content": {"title": "Bert: Pre-Training Of Deep Bidirectional Transformers For Language Understanding", "abstract": "We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. 
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."}, "keywords": ["Discrete prompt tuning", "pre-trained language model"], "citation_intent": "method"} {"citing_id": "2304.12210v1", "cited_id": "1911.05371", "section_title": "Grouping Similar Images Together:", "citation": "Other improvements to deep clustering include using optimal transport methods in feature space to create more informative clusters #REFR .", "text_before_citation": ["One can learn rich features by grouping semantically similar images together.", "K-means clustering is one of the most widely used methods from classical machine learning.", "A number of studies have adapted k-means to perform SSL with neural models.", "Deep clustering alternates between assigning labels to images by performing k-means in the feature space, and updating the model to respect these assigned class labels #OTHEREFR .", "More recent treatments of this approach use mean-shift updates to push features towards their cluster center, and have been shown to complement BYOL, a method based on two networks with the objective to predict pseudo-labels for each sample (discussed in Section 2.3)."], "text_after_citation": [], "citing_paper_content": {"title": "A Cookbook Of Self-Supervised Learning", "abstract": ""}, "cited_paper_content": {"title": "Self-Labelling Via Simultaneous Clustering And Representation Learning", "abstract": "Combining clustering and representation learning is one of the most promising approaches for unsupervised learning of deep neural networks. However, doing so naively leads to ill posed learning problems with degenerate solutions. 
In this paper, we propose a novel and principled learning formulation that addresses these issues. The method is obtained by maximizing the information between labels and input data indices. We show that this criterion extends standard cross-entropy minimization to an optimal transport problem, which we solve efficiently for millions of input images and thousands of labels using a fast variant of the Sinkhorn-Knopp algorithm. The resulting method is able to self-label visual data so as to train highly competitive image representations without manual labels. Compared to the best previous method in this class, namely DeepCluster, our formulation minimizes a single objective function for both representation learning and clustering; it also significantly outperforms DeepCluster in standard benchmarks."}, "keywords": ["deep clustering"], "citation_intent": "method"} {"citing_id": "2304.04661v1", "cited_id": "1910.05339", "section_title": "B. Log-Based Rca Problem Definition", "citation": "Knowledge Graph based Methods: Amongst knowledge graph based methods, #REFR diagnoses and triages performance failure issues in an online fashion by continuously building a knowledge base out of rules extracted from a random forest constructed over log data using heuristics and domain knowledge.", "text_before_citation": ["#OTHEREFR , #OTHEREFR uses KNN or its supervised versions to identify loglines that led to a failure.", "Knowledge Mining based Methods: #OTHEREFR , #OTHEREFR takes a different approach of summarizing log events into an entity-relation knowledge graph by extracting custom entities and relationships from log lines and mining temporal and procedural dependencies between them from the overall log dump.", "While this gives a more structured representation of the log summary and is an intuitive way of aggregating knowledge from logs, it is also a way to bridge the knowledge gap between the developer community who creates the log data and the site reliability engineers who 
typically consume the log data when investigating incidents.", "However, eventually the end goal of constructing this knowledge graph representation of logs is to facilitate RCA.", "While these works do provide use-cases like case-studies on RCA for this vision, they leave ample scope for research towards a more concrete usage of this kind of knowledge mining in RCA."], "text_after_citation": ["#OTHEREFR constructs a system graph from the combination of KPI metrics and log data.", "Based on the detected anomalies from these data sources, it extracts anomalous subgraphs from it and compares them with the normal system graph to detect the root cause.", "Other works mine normal log patterns #OTHEREFR or time-weighted control flow graphs #OTHEREFR from normal executions and then estimates divergences from them to executions during ongoing failures to suggest root causes.", "#OTHEREFR , #OTHEREFR , #OTHEREFR mines execution sequences or user actions #OTHEREFR either from normal and manually injected failures or from good or bad performing systems, into a knowledge base and utilizes the assumption that similar faults generate similar failures to match and diagnose the type of failure.", "Most of these knowledge based approaches incrementally expand their knowledge or rules to cater to newer incident types over time."], "citing_paper_content": {"title": "Ai For It Operations (Aiops) On Cloud Platforms: Reviews, Opportunities And Challenges", "abstract": "Intelligence for IT operations (AIOps) aims to combine the power of AI with the big data generated by IT Operations processes, particularly in cloud infrastructures, to provide actionable insights with the primary goal of maximizing availability. There are a wide variety of problems to address, and multiple use-cases, where AI capabilities can be leveraged to enhance operational efficiency. Here we provide a review of the AIOps vision, trends, challenges and opportunities, specifically focusing on the underlying AI techniques. 
We discuss in depth the key types of data emitted by IT Operations activities, the scale and challenges in analyzing them, and where they can be helpful. We categorize the key AIOps tasks as: incident detection, failure prediction, root cause analysis and automated actions. We discuss the problem formulation for each task, and then present a taxonomy of techniques to solve these problems. We also identify relatively underexplored topics, especially those that could significantly benefit from advances in AI literature. We also provide insights into the trends in this field, and what are the key investment opportunities."}, "cited_paper_content": {"title": "Decaf: Diagnosing And Triaging Performance Issues In Large-Scale Cloud Services", "abstract": "Large scale cloud services use Key Performance Indicators (KPIs) for tracking and monitoring performance. They usually have Service Level Objectives (SLOs) baked into the customer agreements which are tied to these KPIs. Dependency failures, code bugs, infrastructure failures, and other problems can cause performance regressions. It is critical to minimize the time and manual effort in diagnosing and triaging such issues to reduce customer impact. Large volumes of logs and mixed type of attributes (categorical, continuous) make any automated or manual diagnosing non-trivial. In this paper, we present the design, implementation and experience from building and deploying DeCaf, a system for automated diagnosis and triaging of KPI issues using service logs. It uses machine learning along with pattern mining to help service owners automatically root cause and triage performance issues. We present the learnings and results from case studies on two large scale cloud services in Microsoft where DeCaf successfully diagnosed 10 known and 31 unknown issues. DeCaf also automatically triages the identified issues by leveraging historical data. 
Our key insights are that for any such diagnosis tool to be effective in practice, it should a) scale to large volumes of service logs and attributes, b) support different types of KPIs and ranking functions, c) be integrated into the DevOps processes."}, "keywords": ["log data", "performance failure issues"], "citation_intent": "method"} {"citing_id": "2303.13731v1", "cited_id": "1409.0473", "section_title": "Related Work", "citation": "The attention mechanism #REFR has been used extensively in DL, especially NLP-related tasks, to learn what target tokens the source tokens should \"look at\".", "text_before_citation": ["We refer readers to recent surveys #OTHEREFR , #OTHEREFR for a thorough review of these works.", "Lately, deep transformers demonstrate superior performance than other DL models on 1D sequential data, and multiple visualization works have been introduced for their interpretations #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR , #OTHEREFR .", "The success of transformers has also been extended to 2D images with the seminal work of vision transformers (ViTs) #OTHEREFR .", "However, to the best of our knowledge, no comprehensive visual analyses have been conducted to demystify this type of powerful yet complex models, especially how attention works in the 2D image context. 
Our work tries to fill this gap.", "Attention Visualization."], "text_after_citation": ["Essentially, attention is a matrix where each cell denotes the attention magnitude that the source token (row) pays to the target (column).", "Popular attention visualization techniques include flow maps #OTHEREFR , #OTHEREFR , parallel coordinates plots (PCPs) #OTHEREFR , and heatmaps #OTHEREFR , #OTHEREFR , #OTHEREFR .", "For example, the flow maps used by Dong et al.", "#OTHEREFR connect the source and target tokens with curves, the widths of which denote the attention strengths.", "Vig #OTHEREFR arranges the source and target tokens along two parallel axes (i.e., a simplified PCP) and connects them with line segments in between to show the attention patterns."], "citing_paper_content": {"title": "How Does Attention Work In Vision Transformers? A Visual Analytics Attempt", "abstract": "Vision transformer (ViT) expands the success of transformer models from sequential data to images. The model decomposes an image into many smaller patches and arranges them into a sequence. Multi-head self-attentions are then applied to the sequence to learn the attention between patches. Despite many successful interpretations of transformers on sequential data, little effort has been devoted to the interpretation of ViTs, and many questions remain unanswered. For example, among the numerous attention heads, which one is more important? How strong are individual patches attending to their spatial neighbors in different heads? What attention patterns have individual heads learned? In this work, we answer these questions through a visual analytics approach. Specifically, we first identify what heads are more important in ViTs by introducing multiple pruning-based metrics. Then, we profile the spatial distribution of attention strengths between patches inside individual heads, as well as the trend of attention strengths across attention layers. 
Third, using an autoencoder-based learning solution, we summarize all possible attention patterns that individual heads could learn. Examining the attention strengths and patterns of the important heads, we answer why they are important. Through concrete case studies with experienced deep learning experts on multiple ViTs, we validate the effectiveness of our solution that deepens the understanding of ViTs from head importance, head attention strength, and head attention pattern."}, "cited_paper_content": {"title": "Neural Machine Translation By Jointly Learning To Align And Translate", "abstract": "Neural machine translation is a recently proposed approach to machine translation. Unlike the traditional statistical machine translation, the neural machine translation aims at building a single neural network that can be jointly tuned to maximize the translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consists of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose to extend this by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition."}, "keywords": ["attention mechanism", "DL, especially NLP-related"], "citation_intent": "method"} {"citing_id": "2305.00135v1", "cited_id": "1703.06182", "section_title": "C. 
Semi-Distributed Multi-Agent Rl For Robust Handover Optimization", "citation": "In contrast, the second learning rate, \u03b7 2 , is much smaller and is utilized to slow down the degradation of Q-values associated with previously positive experiences resulting from successful operations #REFR .", "text_before_citation": ["This is particularly crucial in providing uninterrupted and dependable service for XR users, whose needs may change over time, and guarantees a resilient user experience that can promptly recover from any disruptions.", "Moreover, the semi-distributed learning framework used in this approach ensures guaranteed convergence due to the adoption of asynchronous updates, where agents update their policy and value function independently and asynchronously.", "Although the optimality of the solution may not be guaranteed in highly dynamic and non-stationary environments like THz networks, the quick adaptation of deep hysteretic networks enables the system to maintain robust and resilient performance in response to unpredictable user behavior.", "Hysteretic deep recurrent Q-networks leverage two distinct learning rates to handle the complex dynamics of the learning process.", "The first learning rate, \u03b7 1 , is used when the temporal difference (TD)-error is nonnegative."], "text_after_citation": ["By implementing this approach, hysteresis is introduced, allowing subarrays to be more resilient against negative learning, exploration, and concurrent actions.", "This approach significantly improves the stability and robustness of the learning process, leading to better overall performance and expedited convergence.", "Similarly to #OTHEREFR , the deep hysteretic Q-network adopted is with one input layer, two fully connected hidden layers, one RNN hidden layer, a dueling layer, and an output layer.", "The subarray's local observations and the estimated state-action value Q b.n , respectively define the input and output layer of the hysteretic deep 
recurrent Q-network.", "Multi-agent RL frameworks are known to face the challenge of shadowed equilibria, a phenomenon where local observations and non-stationarity cause locally optimal actions to become suboptimal at the global level #OTHEREFR ."], "citing_paper_content": {"title": "Joint Sensing, Communication, And Ai: A Trifecta For Resilient Thz User Experiences", "abstract": "In this paper a novel joint sensing, communication, and artificial intelligence (AI) framework is proposed so as to optimize extended reality (XR) experiences over terahertz (THz) wireless systems. The proposed framework consists of three main components. First, a tensor decomposition framework is proposed to extract unique sensing parameters for XR users and their environment by exploiting then THz channel sparsity. Essentially, THz band's quasi-opticality is exploited and the sensing parameters are extracted from the uplink communication signal, thereby allowing for the use of the same waveform, spectrum, and hardware for both communication and sensing functionalities. Then, the Cramer-Rao lower bound is derived to assess the accuracy of the estimated sensing parameters. Second, a non-autoregressive multi-resolution generative artificial intelligence (AI) framework integrated with an adversarial transformer is proposed to predict missing and future sensing information. The proposed framework offers robust and comprehensive historical sensing information and anticipatory forecasts of future environmental changes, which are generalizable to fluctuations in both known and unforeseen user behaviors and environmental conditions. Third, a multi-agent deep recurrent hysteretic Q-neural network is developed to control the handover policy of reconfigurable intelligent surface (RIS) subarrays, leveraging the informative nature of sensing information to minimize handover cost, maximize the individual quality of personal experiences (QoPEs), and improve the robustness and resilience of THz links. 
Simulation results show a high generalizability of the proposed unsupervised generative AI framework to fluctuations in user behavior and velocity, leading to a 61 % improvement in instantaneous reliability compared to schemes with known channel state information."}, "cited_paper_content": {"title": "Deep Decentralized Multi-Task Multi-Agent Reinforcement Learning Under Partial Observability", "abstract": "Many real-world tasks involve multiple agents with partial observability and limited communication. Learning is challenging in these settings due to local viewpoints of agents, which perceive the world as non-stationary due to concurrently-exploring teammates. Approaches that learn specialized policies for individual tasks face problems when applied to the real world: not only do agents have to learn and store distinct policies for each task, but in practice identities of tasks are often non-observable, making these approaches inapplicable. This paper formalizes and addresses the problem of multi-task multi-agent reinforcement learning under partial observability. 
We introduce a decentralized single-task learning approach that is robust to concurrent interactions of teammates, and present an approach for distilling single-task policies into a unified policy that performs well across multiple related tasks, without explicit provision of task identity."}, "keywords": ["second learning rate"], "citation_intent": "background"} {"citing_id": "2303.17358v1", "cited_id": "1207.6083", "section_title": "Dpp-Based Client Selection", "citation": "Note that DPP is a probabilistic model of repulsion, which has been widely adopted for solving subset sampling problems with diversity constraints in machine learning #REFR .", "text_before_citation": ["With the clients' data profiles, a DPP-based CS strategy can be further devised to avoid selecting similar clients in each round of training."], "text_after_citation": ["And the k-DPP is a variant of DPP, with which the size of sampled subsets is fixed at k #OTHEREFR .", "Particularly, a DPP is a probabilistic model over subsets on a finite set, which can be derived from a positive semi-definite similarity kernel matrix #OTHEREFR .", "For a finite set M with M elements, the similarity kernel matrix L can be expressed as L = {lm,n}M\u00d7M , with lm,n representing the similarity between the m-th and n-th elements in M.", "Meanwhile, the DPP assigns a probability to sub-sampling any subset Y of M, which is proportional to the determinant of the sub-matrix LY regarding the subset Y, i.e.,"], "citing_paper_content": {"title": "Dpp-Based Client Selection For Federated Learning With Non-Iid Data", "abstract": "This paper proposes a client selection (CS) method to tackle the communication bottleneck of federated learning (FL) while concurrently coping with FL's data heterogeneity issue. Specifically, we first analyze the effect of CS in FL and show that FL training can be accelerated by adequately choosing participants to diversify the training dataset in each round of training. 
Based on this, we leverage data profiling and determinantal point process (DPP) sampling techniques to develop an algorithm termed Federated Learning with DPP-based Participant Selection (FL-DP^3S). This algorithm effectively diversifies the participants' datasets in each round of training while preserving their data privacy. We conduct extensive experiments to examine the efficacy of our proposed method. The results show that our scheme attains a faster convergence rate, as well as a smaller communication overhead than several baselines."}, "cited_paper_content": {"title": "Determinantal Point Processes For Machine Learning", "abstract": "Determinantal point processes (DPPs) are elegant probabilistic models of repulsion that arise in quantum physics and random matrix theory. In contrast to traditional structured models like Markov random fields, which become intractable and hard to approximate in the presence of negative correlations, DPPs offer efficient and exact algorithms for sampling, marginalization, conditioning, and other inference tasks. While they have been studied extensively by mathematicians, giving rise to a deep and beautiful theory, DPPs are relatively new in machine learning. Determinantal Point Processes for Machine Learning provides a comprehensible introduction to DPPs, focusing on the intuitions, algorithms, and extensions that are most relevant to the machine learning community, and shows how DPPs can be applied to real-world applications like finding diverse sets of high-quality search results, building informative summaries by selecting diverse sentences from documents, modeling non-overlapping human poses in images or video, and automatically building timelines of important news stories.
It presents the general mathematical background to DPPs along with a range of modeling extensions, efficient algorithms, and theoretical results that aim to enable practical modeling and learning."}, "keywords": ["machine learning", "probabilistic model"], "citation_intent": "method"} {"citing_id": "2303.04388v1", "cited_id": "1405.0312", "section_title": "Experimental Settings", "citation": "The dataset comprises 33k QA pairs from 28k images sourced from the COCO2014 dataset #REFR .", "text_before_citation": ["Dataset.", "For our experiments, we utilized the VQA-X dataset #OTHEREFR , an extension of the VQA-v2 dataset #OTHEREFR , which included explanations for each answer."], "text_after_citation": ["For the dataset division, we used all the images in the COCO2014 training set with 29k QA pairs as our training set.", "We divided the COCO2014 validation set into our validation set and test set according to the proportion of 3:4, which contains 1.5k QA pairs and 2k QA pairs, respectively.", "Compared to traditional vision models with specific tasks such as image classification and image segmentation, for the vision encoder, we only rely on their primary network function to output simple grid features rather than their time-consuming top-down features.", "Thus, to better adapt to the Vision & Language task, we used the CLIP based on the structure of the vision transformer as the vision encoder.", "The CLIP makes the fusion of vision and language features easier by encoding them in the same hidden space."], "citing_paper_content": {"title": "Interpretable Visual Question Answering Referring To Outside Knowledge", "abstract": "We present a novel multimodal interpretable VQA model that can answer the question more accurately and generate diverse explanations. 
Although researchers have proposed several methods that can generate human-readable and fine-grained natural language sentences to explain a model's decision, these methods have focused solely on the information in the image. Ideally, the model should refer to various information inside and outside the image to correctly generate explanations, just as we use background knowledge daily. The proposed method incorporates information from outside knowledge and multiple image captions to increase the diversity of information available to the model. The contribution of this paper is to construct an interpretable visual question answering model using multimodal inputs to improve the rationality of generated results. Experimental results show that our model can outperform state-of-the-art methods regarding answer accuracy and explanation rationality."}, "cited_paper_content": {"title": "Microsoft Coco: Common Objects In Context", "abstract": "We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN.
Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model."}, "keywords": ["dataset"], "citation_intent": "method"} {"citing_id": "2304.06664v1", "cited_id": "1811.10879", "section_title": "Chou, Golovnev, Sudan, And Velusamy [Cgsv21B]", "citation": "Next, in \u00a75.2, we present results due to Chou, Golovnev, Sudan, Velingker, and Velusamy [CGS + 22] in the linear-space streaming setting, which generalize the result we've seen for Max-Cut (discussed in \u00a73.4.3, due to #REFR ).", "text_before_citation": ["Our primary goal is to articulate technical statements of these results, since we'll use them later in Chapters 6 and 7.", "We also give some broad-strokes discussions of the techniques involved, building on our work in the previous two chapters.", "Specifically, in \u00a75.1, we describe the results of Chou, Golovnev, Sudan, and Velusamy [CGSV21a; CGSV21b] on \u221a n-space streaming algorithms, which generalize the results we've already seen for Max-Cut (Theorem 3.1, due to #OTHEREFR ) and Max-DiCut (Theorem 4.1, due to #OTHEREFR ).", "They include a so-called dichotomy theorem, which completely characterizes CSP approximability for \u221a n-space sketching algorithms (see \u00a72.2) and builds on our \"template distribution\" analysis for Max-DiCut in Chapter 4.", "This dichotomy will later be the basis for the [BHP + 22] analysis of Max-BCSP(f ) problems for symmetric f : Z k 2 \u2192 {0, 1}, described in Chapter 7 below."], "text_after_citation": ["We will use these in Chapter 6 to prove linear-space streaming approximation-resistance results for so-called \"ordering constraint satisfaction problems\" from our joint work #OTHEREFR ."], "citing_paper_content": {"title": "On Streaming Approximation Algorithms For Constraint Satisfaction Problems", "abstract": "In this thesis, we explore streaming algorithms for approximating constraint satisfaction problems (CSPs). 
The setup is roughly the following: A computer has limited memory space, sees a long "stream" of local constraints on a set of variables, and tries to estimate how many of the constraints may be simultaneously satisfied. The past ten years have seen a number of works in this area, and this thesis includes both expository material and novel contributions. Throughout, we emphasize connections to the broader theories of CSPs, approximability, and streaming models, and highlight interesting open problems. The first part of our thesis is expository: We present aspects of previous works that completely characterize the approximability of specific CSPs like Max-Cut and Max-DiCut with √ n-space streaming algorithms (on n-variable instances), while characterizing the approximability of all CSPs in √ n space in the special case of "composable" (i.e., sketching) algorithms, and of a particular subclass of CSPs with linear-space streaming algorithms. In the second part of the thesis, we present two of our own joint works. We begin with a work with Madhu Sudan and Santhoshini Velusamy in which we prove linear-space streaming approximation-resistance for all ordering CSPs (OCSPs), which are "CSP-like" problems maximizing over sets of permutations. Previous works considered the maximum acyclic subgraph problem (MAS), the prototypical OCSP; even for MAS, we improve on both the inapproximability factor and the space bound. Next, we present joint work with Joanna Boyland, Michael Hwang, Tarun Prasad, and Santhoshini Velusamy in which we investigate the √ n-space streaming approximability of Boolean CSPs with negations. In particular, we give explicit √ n-space sketching approximability ratios for several families of CSPs, including Max-kAND; develop simpler optimal sketching approximation algorithms for threshold predicates; and show that previous streaming lower bounds are "incomplete" in that they fail to characterize the √ n-space streaming approximability of Max-3AND."}, "cited_paper_content": {"title": "An Optimal Space Lower Bound For Approximating Max-Cut", "abstract": "We consider the problem of estimating the value of MAX-CUT in a graph in the streaming model of computation. At one extreme, there is a trivial 2-approximation for this problem that uses only O(log n) space, namely, count the number of edges and output half of this value as the estimate for the size of the MAX-CUT. On the other extreme, for any fixed ε > 0, if one allows O(n) space, a (1+ε)-approximate solution to the MAX-CUT value can be obtained by storing an O(n)-size sparsifier that essentially preserves MAX-CUT value. Our main result is that any (randomized) single pass streaming algorithm that breaks the 2-approximation barrier requires Ω(n)-space, thus resolving the space complexity of any non-trivial approximations of the MAX-CUT value to within polylogarithmic factors in the single pass streaming model. We achieve the result by presenting a tight analysis of the Implicit Hidden Partition Problem introduced by Kapralov et al. [SODA'17] for an arbitrarily large number of players. In this problem a number of players receive random matchings of Ω(n) size together with random bits on the edges, and their task is to determine whether the bits correspond to parities of some hidden bipartition, or are just uniformly random. Unlike all previous Fourier analytic communication lower bounds, our analysis does not directly use bounds on the l2 norm of Fourier coefficients of a typical message at any given weight level that follow from hypercontractivity.
Instead, we use the fact that graphs received by players are sparse (matchings) to obtain strong upper bounds on the l1 norm of the Fourier coefficients of the messages of individual players using their special structure, and then argue, using the convolution theorem, that similar strong bounds on the l1 norm are essentially preserved (up to an exponential loss in the number of players) once messages of different players are combined. We feel that our main technique is likely of independent interest."}, "keywords": ["linear-space streaming setting", "Max-Cut"], "citation_intent": "background"} {"citing_id": "2304.13013v1", "cited_id": "1804.04235", "section_title": "Preliminaries And Related Work", "citation": "In contrast with Shazeer and Stern #REFR , who only observe instabilities without warmup, we observe instabilities despite a long warmup period.", "text_before_citation": ["Various solutions have been proposed, including freezing the embedding layer #OTHEREFR , adding additional layer normalization #OTHEREFR , or reparametrizing the weights #OTHEREFR .", "In our work we investigate instabilities which arise during CLIP training.", "Unlike the instabilities observed in #OTHEREFR , we find these are not caused by attention entropy collapse.", "Instead, our results indicate that spikes arise when the second moment estimator is out of date for the network's early layers.", "While our analysis and methods build directly on Shazeer and Stern #OTHEREFR (AdaFactor), there are important differences."], "text_after_citation": ["Moreover, in contrast with Shazeer and Stern #OTHEREFR we find that an out-of-date second moment estimator is primarily an issue for the (patch) embedding layer, and measure how well loss spikes are predicted by this event.", "Finally, we note that researchers have moved away from AdaFactor in its original formulation for large-scale training #OTHEREFR , finding AdaFactor to under-perform AdamW #OTHEREFR .", "We believe this is due to the
factored second moment or absence of first moment.", "This is why our focus is AdamW #OTHEREFR which is the de facto standard optimizer for transformers."], "citing_paper_content": {"title": "Stable And Low-Precision Training For Large-Scale Vision-Language Models", "abstract": "We introduce new methods for 1) accelerating and 2) stabilizing training for large language-vision models. 1) Towards accelerating training, we introduce SwitchBack , a linear layer for int8 quantized training which provides a speed-up of 13-25% while matching the performance of bfloat16 training within 0.1 percentage points for the 1B parameter CLIP ViT-Huge-the largest int8 training to date. Our main focus is int8 as GPU support for float8 is rare, though we also analyze float8 training through simulation. While SwitchBack proves effective for float8, we show that standard techniques are also successful if the network is trained and initialized so that large feature magnitudes are discouraged, which we accomplish via layer-scale initialized with zeros. 2) Towards stable training, we analyze loss spikes and find they consistently occur 1-8 iterations after the squared gradients become underestimated by their AdamW second moment estimator. As a result, we recommend an AdamW-Adafactor hybrid, which we refer to as StableAdamW because it avoids loss spikes when training a CLIP ViT-Huge model and outperforms gradient clipping."}, "cited_paper_content": {"title": "Adafactor: Adaptive Learning Rates With Sublinear Memory Cost", "abstract": "In several recently proposed stochastic optimization methods (e.g. RMSProp, Adam, Adadelta), parameter updates are scaled by the inverse square roots of exponential moving averages of squared past gradients. Maintaining these per-parameter second-moment estimators requires memory equal to the number of parameters. 
For the case of neural network weight matrices, we propose maintaining only the per-row and per-column sums of these moving averages, and estimating the per-parameter second moments based on these sums. We demonstrate empirically that this method produces similar results to the baseline. Secondly, we show that adaptive methods can produce larger-than-desired updates when the decay rate of the second moment accumulator is too slow. We propose update clipping and a gradually increasing decay rate scheme as remedies. Combining these methods and dropping momentum, we achieve comparable results to the published Adam regime in training the Transformer model on the WMT 2014 English-German machine translation task, while using very little auxiliary storage in the optimizer. Finally, we propose scaling the parameter updates based on the scale of the parameters themselves."}, "keywords": ["contrast", "Stern"], "citation_intent": "result"} {"citing_id": "2304.02838v1", "cited_id": "1910.00056", "section_title": "Related Work", "citation": "For example, Poirot #REFR threat detection is modeled as an imprecise graph pattern matching problem.", "text_before_citation": ["Provenance graphs have been used extensively in APT detection in recent years due to the superiority of connecting nodes with causal relationships and representing data flow and control flow relationships between system objects.", "Both academics and industry are paying increasing attention to this type of learning method #OTHEREFR .", "APT detection based on provenance graph is mainly divided into three directions: graph matching-based detection, anomaly score-based detection, and tag propagationbased detection.", "The first is graph matching-based detection.", "Due to the substructure in the provenance graph can completely describe malicious behavior, it is a very popular method to use graph matching-based detection method to detect APT attacks."], "text_after_citation": ["A graph matching method is proposed to 
identify attacks in provenance graphs.", "But it needs to construct the attack graph according to the prior knowledge, and cannot detect the unknown attack method.", "UNICORN #OTHEREFR is the first APT intrusion detection system to analyze the local complete system operation.", "The provenance graph is converted into a sequence of feature vectors and uses an automaton to model the clustering.", "However, the effectiveness of detection is negatively impacted by a large number of automatic opportunities, and the ability to describe the feature sequence is weak."], "citing_paper_content": {"title": "Tbdetector:Transformer-Based Detector For Advanced Persistent Threats With Provenance Graph", "abstract": "APT detection is difficult to detect due to the longterm latency, covert and slow multistage attack patterns of Advanced Persistent Threat (APT). To tackle these issues, we propose TBDetector, a transformer-based advanced persistent threat detection method for APT attack detection. Considering that provenance graphs provide rich historical information and have the powerful attacks historic correlation ability to identify anomalous activities, TBDetector employs provenance analysis for APT detection, which summarizes longrunning system execution with space efficiency and utilizes transformer with self-attention based encoder-decoder to extract long-term contextual features of system states to detect slow-acting attacks. Furthermore, we further introduce anomaly scores to investigate the anomaly of different system states, where each state is calculated with an anomaly score corresponding to its similarity score and isolation score. To evaluate the effectiveness of the proposed method, we have conducted experiments on five public datasets, i.e., streamspot, cadets, shellshock, clearscope, and wget baseline. 
Experimental results and comparisons with state-of-the-art methods have exhibited better performance of our proposed method."}, "cited_paper_content": {"title": "Poirot: Aligning Attack Behavior With Kernel Audit Records For Cyber Threat Hunting", "abstract": "Cyber threat intelligence (CTI) is being used to search for indicators of attacks that might have compromised an enterprise network for a long time without being discovered. To have a more effective analysis, CTI open standards have incorporated descriptive relationships showing how the indicators or observables are related to each other. However, these relationships are either completely overlooked in information gathering or not used for threat hunting. In this paper, we propose a system, called POIROT, which uses these correlations to uncover the steps of a successful attack campaign. We use kernel audits as a reliable source that covers all causal relations and information flows among system entities and model threat hunting as an inexact graph pattern matching problem. Our technical approach is based on a novel similarity metric which assesses an alignment between a query graph constructed out of CTI correlations and a provenance graph constructed out of kernel audit log records. We evaluate POIROT on publicly released real-world incident reports as well as reports of an adversarial engagement designed by DARPA, including ten distinct attack campaigns against different OS platforms such as Linux, FreeBSD, and Windows. Our evaluation results show that POIROT is capable of searching inside graphs containing millions of nodes and pinpoint the attacks in a few minutes, and the results serve to illustrate that CTI correlations could be used as robust and reliable artifacts for threat hunting."}, "keywords": ["Poirot threat detection"], "citation_intent": "background"} {"citing_id": "2303.01724v1", "cited_id": "1911.05076", "section_title": "A. 
Hyperbolic Geometry", "citation": "At each x ∈ D^n_c, there is a tangent space T_x D^n_c, which can be viewed as the first-order approximation of the hyperbolic manifold at x #REFR .", "text_before_citation": ["A hyperbolic space is a non-Euclidean space with constant negative curvature.", "There are different but equivalent models to describe the same hyperbolic geometry.", "In this paper, we work with the Poincaré ball model, in which all points are inside a ball.", "The hyperbolic space with constant negative curvature c is denoted by (D^n_c, g^c_x).", "It consists of the n-dimensional hyperbolic manifold D^n_c = {x ∈ R^n : c‖x‖^2 < 1} with the Riemannian metric g^c_x = (λ^c_x)^2 g_E, where λ^c_x = 2/(1 − c‖x‖^2) and g_E = I_n is the Euclidean metric."], "text_after_citation": ["The tangent space is then useful to perform Euclidean operations that we are familiar with but are undefined in hyperbolic spaces.", "A hyperbolic space and the tangent space at a point are connected through the exponential map exp^c_x :", "EQUATION", "EQUATION"], "citing_paper_content": {"title": "Node-Specific Space Selection Via Localized Geometric Hyperbolicity In Graph Neural Networks", "abstract": "Many graph neural networks have been developed to learn graph representations in either Euclidean or hyperbolic space, with all nodes' representations embedded in a single space. However, a graph can have hyperbolic and Euclidean geometries at different regions of the graph. Thus, it is suboptimal to indifferently embed an entire graph into a single space. In this paper, we explore and analyze two notions of local hyperbolicity, describing the underlying local geometry: geometric (Gromov) and model-based, to determine the preferred space of embedding for each node. The two hyperbolicities' distributions are aligned using the Wasserstein metric such that the calculated geometric hyperbolicity guides the choice of the learned model hyperbolicity.
As such our model Joint Space Graph Neural Network (JSGNN) can leverage both Euclidean and hyperbolic spaces during learning by allowing node-specific geometry space selection. We evaluate our model on both node classification and link prediction tasks and observe promising performance compared to baseline models."}, "cited_paper_content": {"title": "Constant Curvature Graph Convolutional Networks", "abstract": "Interest has been rising lately towards methods representing data in non-Euclidean spaces, e.g. hyperbolic or spherical, that provide specific inductive biases useful for certain real-world data properties, e.g. scale-free, hierarchical or cyclical. However, the popular graph neural networks are currently limited in modeling data only via Euclidean geometry and associated vector space operations. Here, we bridge this gap by proposing mathematically grounded generalizations of graph convolutional networks (GCN) to (products of) constant curvature spaces. We do this by i) introducing a unified formalism that can interpolate smoothly between all geometries of constant curvature, ii) leveraging gyro-barycentric coordinates that generalize the classic Euclidean concept of the center of mass. Our class of models smoothly recover their Euclidean counterparts when the curvature goes to zero from either side. 
Empirically, we outperform Euclidean GCNs in the tasks of node classification and distortion minimization for symbolic data exhibiting non-Euclidean behavior, according to their discrete curvature."}, "keywords": ["hyperbolic manifold"], "citation_intent": "background"} {"citing_id": "2303.11620v2", "cited_id": "1011.5553", "section_title": "Introduction", "citation": "An understanding of affine, global (Euclidean), or local (Euclidean) rigidity #REFR of a realization \u0398(S) has importance in several areas such as molecular dynamics [18; 4; 8] or sensor network localization [34; 32] .", "text_before_citation": ["#OTHEREFR Given an alignment S \u2208 O(d)^m of the local views, a consensus representation \u0398(S) of the points can be obtained by averaging the local coordinates of the points due to the (rigidly transformed) views containing them.", "In the noiseless setting where the local views are clean measurements of the data (obtained by applying an unknown rigid transformation to a subset of data points), a perfect alignment of views is possible.", "Equivalently, when the views are noiseless, a value of zero for F is attainable, and an S that achieves it is called a \"perfect alignment\".", "Clearly, a perfect alignment is an optimal one, while the converse may not hold.", "To be consistent with previous works [12; 13] , the consensus representation of the points \u0398(S) due to a perfect alignment S of the views is called a realization of the framework."], "text_after_citation": ["Under a mild assumption on the structure of the local views, [3; 31; 12] characterized the affine rigidity of a realization by the rank of a certain matrix derived from the framework \u0398.", "It was shown in #OTHEREFR that deriving a similar characterization of global rigidity is NP-Hard.", "Nevertheless a characterization of the local rigidity is useful from an algorithmic standpoint as we show in this work.", "Furthermore, necessary and sufficient conditions on the
overlapping structure of the local views for affine rigidity were derived in #OTHEREFR .", "Similar results in the context of local and global rigidity form our second set of contributions."], "citing_paper_content": {"title": "Non-Degenerate Rigid Alignment In A Patch Framework", "abstract": "Given a set of overlapping local views (patches) of a dataset, we consider the problem of finding a rigid alignment of the views that minimizes a 2-norm based alignment error. In general, the views are noisy and a perfect alignment may not exist. In this work, we characterize the non-degeneracy of an alignment in the noisy setting based on the kernel and positivity of a certain matrix. This leads to a polynomial time algorithm for testing the non-degeneracy of a given alignment. Consequently, we focus on Riemannian gradient descent for minimization of the error and obtain a sufficient condition on an alignment for the algorithm to converge (locally) linearly to it. In the case of noiseless views, a perfect alignment exists, resulting in a realization of the points that respects the geometry of the views. Under a mild condition on the views, we show that the non-degeneracy of a perfect alignment is equivalent to the local rigidity of the resulting realization. By specializing the characterization of a non-degenerate alignment to the noiseless setting, we obtain necessary and sufficient conditions on the overlapping structure of the views for a locally rigid realization. Similar results are also obtained in the context of global rigidity."}, "cited_paper_content": {"title": "On Affine Rigidity", "abstract": "We define the notion of affine rigidity of a hypergraph and prove a variety of fundamental results for this notion. 
First, we show that affine rigidity can be determined by the rank of a specific matrix which implies that affine rigidity is a generic property of the hypergraph. Then we prove that if a graph is $(d+1)$-vertex-connected, then it must be \"generically neighborhood affinely rigid\" in $d$-dimensional space. This implies that if a graph is $(d+1)$-vertex-connected then any generic framework of its squared graph must be universally rigid. Our results, and affine rigidity more generally, have natural applications in point registration and localization, as well as connections to manifold learning."}, "keywords": ["local (Euclidean) rigidity"], "citation_intent": "background"} {"citing_id": "2303.11673v1", "cited_id": "1906.07413", "section_title": "Sampling Techniques", "citation": "It is found that data sampling may generate local models that over-fit (under-fit) for the minority (majority) classes #REFR .", "text_before_citation": ["In general, sampling based techniques are easier to operate compared with other groups of methods (e.g., algorithm-centered techniques) and do not require extensive expertise in ML.", "Thus, these methods have become popular when the models are operated by out-of-domain experts.", "Another strength of sampling based techniques is that they are agnostic to the core classification models #OTHEREFR .", "As such, these techniques can be incorporated into any classification model, and even an ensemble of multiple classification models.", "However, straightforward data sampling is not guaranteed to boost the classification accuracy, and sometimes may even worsen the model performance on one client #OTHEREFR ."], "text_after_citation": ["A possible explanation could be that naive data resampling may prevent clients from learning the \"Special Knowledge\" from local dataset in FL, which eventually leads to low accuracy #OTHEREFR .", "Therefore, sampling strategies need to be carefully designed for FL in order to handle class imbalance
effectively.", "Therefore, many attempts have been made to propose sampling strategies that address data imbalance systematically in FL.", "In general, sampling can be done on either the data instance level (i.e., determining if one data instance should be involved in training) or the client level (i.e., determining if the data instances owned by one client should be involved in training).", "In this section, we categorize sampling techniques in FL into three groups and introduce them respectively: (1) data sampling; (2) client sampling; and (3) hybrid data and client sampling."], "citing_paper_content": {"title": "A Survey On Class Imbalance In Federated Learning", "abstract": "Federated learning, which allows multiple client devices in a network to jointly train a machine learning model without direct exposure of clients' data, is an emerging distributed learning technique due to its nature of privacy preservation. However, it has been found that models trained with federated learning usually have worse performance than their counterparts trained in the standard centralized learning mode, especially when the training data is imbalanced. In the context of federated learning, data imbalance may occur either locally on one client device, or globally across many devices. The complexity of different types of data imbalance has posed challenges to the development of federated learning technique, especially considering the need of relieving data imbalance issue and preserving data privacy at the same time. Therefore, in the literature, many attempts have been made to handle class imbalance in federated learning. In this paper, we present a detailed review of recent advancements along this line. We first introduce various types of class imbalance in federated learning, after which we review existing methods for estimating the extent of class imbalance without the need of knowing the actual data to preserve data privacy.
After that, we discuss existing methods for handling class imbalance in FL, where the advantages and disadvantages of these approaches are discussed. We also summarize common evaluation metrics for class imbalanced tasks, and point out potential future directions."}, "cited_paper_content": {"title": "Learning Imbalanced Datasets With Label-Distribution-Aware Margin Loss", "abstract": "Deep learning algorithms can fare poorly when the training dataset suffers from heavy class-imbalance but the testing criterion requires good generalization on less frequent classes. We design two novel methods to improve performance in such scenarios. First, we propose a theoretically-principled label-distribution-aware margin (LDAM) loss motivated by minimizing a margin-based generalization bound. This loss replaces the standard cross-entropy objective during training and can be applied with prior strategies for training with class-imbalance such as re-weighting or re-sampling. Second, we propose a simple, yet effective, training schedule that defers re-weighting until after the initial stage, allowing the model to learn an initial representation while avoiding some of the complications associated with re-weighting or re-sampling. We test our methods on several benchmark vision tasks including the real-world imbalanced dataset iNaturalist 2018.
Our experiments show that either of these methods alone can already improve over existing techniques and their combination achieves even better performance gains."}, "keywords": ["minority (majority) classes"], "citation_intent": "background"} {"citing_id": "2303.17251v1", "cited_id": "2001.10289", "section_title": "Misconception 4", "citation": "In yet some other cases, scientific results about bot activity were later found to largely match with independent platform removals of malicious accounts #REFR Tardelli et al., 2022) , which supports the correctness of the scientific results.", "text_before_citation": ["The previous misconceptions might induce readers to think that social bots research has led to flawed, if not outright useless, results, as claimed by some scholars #OTHEREFR , even in sensationalist terms #OTHEREFR .", "However, we see at least two strong arguments against this thesis.", "Firstly, in spite of the limitations of bot detectors, there have been multiple glaring examples of bot studies that were able to bring to light demonstrably harmful campaigns.", "For example, the detectors developed as part of some scientific endeavors were later deployed on online platforms and used to remove large numbers of malicious accounts #OTHEREFR .", "Similarly, results of some studies on bot activity led platforms to remove the accounts identified as malicious bots (Ferrara, 2022) ."], "text_after_citation": ["These cases represent but some of the success stories of social bots research.", "If researchers had not developed bot detection techniques, we would not have been able to identify pockets of anomalous accounts engaged in well-engineered malicious activities.", "Therefore, even if no universal bot detector exists yet and in spite of the many caveats to consider in bot detection tasks, being able to detect some malicious bots puts us in a more advantageous position than being able to detect none.", "Secondly, the benefits of social bots research extend beyond
the detection of malicious bots.", "For example, research and experimentation on social bots led to the development of neutral bots used for assessing the degree of political polarization on a platform #OTHEREFR , \"news bots\" used for journalistic purposes to curate, aggregate, and disseminate content collected from multiple sources #OTHEREFR , or even bots used for content moderation #OTHEREFR ."], "citing_paper_content": {"title": "Demystifying Misconceptions In Social Bots Research", "abstract": "The science of social bots seeks knowledge and solutions to one of the most debated forms of online misinformation. Yet, social bots research is plagued by widespread biases, hyped results, and misconceptions that set the stage for ambiguities, unrealistic expectations, and seemingly irreconcilable findings. Overcoming such issues is instrumental towards ensuring reliable solutions and reaffirming the validity of the scientific method. In this contribution we revise some recent results in social bots research, highlighting and correcting factual errors as well as methodological and conceptual issues. More importantly, we demystify common misconceptions, addressing fundamental points on how social bots research is discussed. Our analysis surfaces the need to discuss misinformation research in a rigorous, unbiased, and responsible way. This article bolsters such effort by identifying and refuting common fallacious arguments used by both proponents and opponents of social bots research as well as providing indications on the correct methodologies and sound directions for future research in the field."}, "cited_paper_content": {"title": "Charting The Landscape Of Online Cryptocurrency Manipulation", "abstract": "Cryptocurrencies represent one of the most attractive markets for financial speculation. As a consequence, they have attracted unprecedented attention on social media.
Besides genuine discussions and legitimate investment initiatives, several deceptive activities have flourished. In this work, we chart the online cryptocurrency landscape across multiple platforms. To reach our goal, we collected a large dataset, composed of more than 50M messages published by almost 7M users on Twitter, Telegram and Discord, over three months. We performed bot detection on Twitter accounts sharing invite links to Telegram and Discord channels, and we discovered that more than 56% of them were bots or suspended accounts. Then, we applied topic modeling techniques to Telegram and Discord messages, unveiling two different deception schemes - \"pump-and-dump\" and \"Ponzi\" - and identifying the channels involved in these frauds. Whereas on Discord we found a negligible level of deception, on Telegram we retrieved 296 channels involved in pump-and-dump and 432 involved in Ponzi schemes, accounting for a striking 20% of the total. Moreover, we observed that 93% of the invite links shared by Twitter bots point to Telegram pump-and-dump channels, shedding light on a little-known social bot activity. Charting the landscape of online cryptocurrency manipulation can inform actionable policies to fight such abuse."}, "keywords": ["bot activity"], "citation_intent": "result"} {"citing_id": "2305.02704v1", "cited_id": "1403.2079", "section_title": "C. 
Power Control For Secure Transmission 1) Background:", "citation": "Considering a two-link interference channel with one eavesdropper, the authors of #REFR devise a power control algorithm based on an altruistic and egoistic setting.", "text_before_citation": ["The aim of secure transmission is to send messages to legitimate receivers without any information leakage to eavesdroppers.", "Aside from the information-theoretical study #OTHEREFR - #OTHEREFR , the optimization aspect has attracted extensive research interest as well."], "text_after_citation": ["For a wiretap channel with one legitimate receiver, one eavesdropper, and multiple jammers, #OTHEREFR characterizes the power control as a Stackelberg game and thereby proposes an iterative optimization that achieves a Stackelberg equilibrium.", "Moreover, #OTHEREFR suggests a second-order-cone approach to power control for secure transmission in a cell-free MIMO network.", "For the UAV network case, #OTHEREFR proposes optimizing powers via block coordinate descent and successive convex approximation.", "The use of machine learning has become popular in this area, e.g., the deep reinforcement learning approach in #OTHEREFR and the Q-learning approach in #OTHEREFR .", "2) Problem Formulation: Consider L base-stations (BSs) each serving a legitimate downlink user terminal."], "citing_paper_content": {"title": "Mixed Max-And-Min Fractional Programming For Wireless Networks", "abstract": "Fractional programming (FP) plays a crucial role in wireless network design because many relevant problems involve maximizing or minimizing ratio terms. Notice that the maximization case and the minimization case of FP cannot be converted to each other in general, so they have to be dealt with separately in most of the previous studies. Thus, an existing FP method for maximizing ratios typically does not work for the minimization case, and vice versa.
However, the FP objective can be mixed max-and-min, e.g., one may wish to maximize the signal-to-interference-plus-noise ratio (SINR) of the legitimate receiver while minimizing that of the eavesdropper. We aim to fill the gap between max-FP and min-FP by devising a unified optimization framework. The main results are threefold. First, we extend the existing max-FP technique called quadratic transform to the min-FP, and further develop a full generalization for the mixed case. Second, we provide a minorization-maximization (MM) interpretation of the proposed unified approach, thereby establishing its convergence and also obtaining a matrix extension; another result we obtain is a generalized Lagrangian dual transform which facilitates the solving of the logarithmic FP. Finally, we present three typical applications: the age-of-information (AoI) minimization, the Cram\u00e9r-Rao bound minimization for sensing, and the secure data rate maximization, none of which can be efficiently addressed by the previous FP methods. Index Terms-Multi-ratio mixed max-and-min fractional programming (FP); matrix FP; unified approach; age-of-information (AoI); Cram\u00e9r-Rao bound; sensing; secure transmission."}, "cited_paper_content": {"title": "Joint Power Control In Wiretap Interference Channels", "abstract": "Interference in wireless networks degrades the signal quality at the terminals. However, it can potentially enhance the secrecy rate. This paper investigates the secrecy rate in a two-user interference network where one of the users, namely user 1, requires to establish a confidential connection. User 1 wants to prevent an unintended user of the network from decoding its transmission. User 1 has to transmit such that its secrecy rate is maximized while the quality of service at the destination of the other user, user 2, is satisfied, and both users' power limits are taken into account.
We consider two scenarios: 1) user 2 changes its power in favor of user 1, an altruistic scenario, 2) user 2 is selfish and only aims to maintain the minimum quality of service at its destination, an egoistic scenario. It is shown that there is a threshold for user 2's transmission power, only below or above which, depending on the channel qualities, user 1 can achieve a positive secrecy rate. Closed-form solutions are obtained in order to perform joint optimal power control. Further, a new metric called secrecy energy efficiency is introduced. We show that in general, the secrecy energy efficiency of user 1 in an interference channel scenario is higher than that of an interference-free channel."}, "keywords": ["two-link interference channel"], "citation_intent": "method"} {"citing_id": "2305.02528v1", "cited_id": "1906.05332", "section_title": "Results", "citation": "As shown in Table 1 , our self-supervised model achieves comparable performance with supervised HPLFlowNet #REFR on the FT3D_s dataset.", "text_before_citation": ["From the results, it can be found that our model can outperform all compared self-supervised methods in terms of the four metrics on the FT3D_s test data.
Especially, our model brings 8.72% gains for metric AS.", "For the KITTI_s dataset, our model brings substantial improvements on all metrics.", "To be specific, our model outperforms the second best method RCP #OTHEREFR by 8.68% and 6.58% on metrics AS and AR, respectively.", "Besides, it is worth noting that our model can even achieve an EPE metric of 3.62 cm, which is much lower than the EPE (6.19 cm) of recent RigidFlow.", "We also compare our model with some supervised methods, such as FlowNet3D #OTHEREFR and FLOT #OTHEREFR , etc."], "text_after_citation": ["Without any fine-tuning on the KITTI_s dataset, our model can even outperform the supervised methods listed in Table 1 , which proves that our model has better generalization ability.", "For real scenes, most local regions have similar flow patterns.", "Thanks to the dynamic clustering mechanism, our model clusters points with similar flow pattern into the same clusters and encodes the superpoint-level flow into the GRU for flow refinement, thereby leading to satisfactory performance on real scenes.", "Performance on point clouds with occlusions.", "Following the experimental settings used in Self-Point-Flow #OTHEREFR and RigidFlow #OTHEREFR , we train our model on the KITTI_r dataset and evaluate our model on both KITTI_o and KITTI_t datasets."]
To this end, we propose an iterative end-to-end superpoint based scene flow estimation framework, where the superpoints can be dynamically updated to guide the point-level flow prediction. Specifically, our framework consists of a flow guided superpoint generation module and a superpoint guided flow refinement module. In our superpoint generation module, we utilize the bidirectional flow information at the previous iteration to obtain the matching points of points and superpoint centers for soft point-to-superpoint association construction, in which the superpoints are generated for pairwise point clouds. With the generated superpoints, we first reconstruct the flow for each point by adaptively aggregating the superpoint-level flow, and then encode the consistency between the reconstructed flow of pairwise point clouds. Finally, we feed the consistency encoding along with the reconstructed flow into GRU to refine point-level flow. Extensive experiments on several different datasets show that our method can achieve promising performance. Code is available at https://github.com/supersyq/SPFlowNet."}, "cited_paper_content": {"title": "Hplflownet: Hierarchical Permutohedral Lattice Flownet For Scene Flow Estimation On Large-Scale Point Clouds", "abstract": "We present a novel deep neural network architecture for end-to-end scene flow estimation that directly operates on large-scale 3D point clouds. Inspired by Bilateral Convolutional Layers (BCL), we propose novel DownBCL, UpBCL, and CorrBCL operations that restore structural information from unstructured point clouds, and fuse information from two consecutive point clouds. Operating on discrete and sparse permutohedral lattice points, our architectural design is parsimonious in computational cost. Our model can efficiently process a pair of point cloud frames at once with a maximum of 86K points per frame. Our approach achieves state-of-the-art performance on the FlyingThings3D and KITTI Scene Flow 2015 datasets.
Moreover, trained on synthetic data, our approach shows great generalization ability on real-world data and on different point densities without fine-tuning."}, "keywords": ["self-supervised model", "supervised HPLFlowNet"], "citation_intent": "result"} {"citing_id": "2303.06565v1", "cited_id": "1904.08082", "section_title": "Graph Compressor", "citation": "The graph compressor is inspired by #REFR , and it works by computing the attention scores for all sentence nodes, filtering out nodes with the lowest scores, and then masking the rest using their attention scores.", "text_before_citation": ["Given G D and Q D from the graph encoder, the graph compressor aims to \"compress\" the graph by selecting a subset of salient nodes and edges.", "Here we focus on filtering the sentence nodes, because we want to identify key sentences that help generate the summary.", "After the compression, all selected sentence nodes and their linked word and document nodes represent the compressed graph and their embeddings will be used by the text decoder for summary generation."], "text_after_citation": ["Firstly, attention scores of the sentence nodes are calculated based on the updated node embeddings from our proposed graph encoder MGAT(Q D , G D ):", "EQUATION", "where r is the only trainable parameter of the graph compressor which transforms the updated node embedding into a scalar.", "Then, based on these scores, we select sentence nodes with the highest scores:", "EQUATION"], "citing_paper_content": {"title": "Compressed Heterogeneous Graph For Abstractive Multi-Document Summarization", "abstract": "Multi-document summarization (MDS) aims to generate a summary for a number of related documents. We propose HGSUM-an MDS model that extends an encoder-decoder architecture to incorporate a heterogeneous graph to represent different semantic units (e.g., words and sentences) of the documents. 
This contrasts with existing MDS models which do not consider different edge types of graphs and as such do not capture the diversity of relationships in the documents. To preserve only key information and relationships of the documents in the heterogeneous graph, HGSUM uses graph pooling to compress the input graph. And to guide HGSUM to learn the compression, we introduce an additional objective that maximizes the similarity between the compressed graph and the graph constructed from the ground-truth summary during training. HGSUM is trained end-to-end with the graph similarity and standard cross-entropy objectives. Experimental results over MULTI-NEWS, WCEP-100, and ARXIV show that HGSUM outperforms state-of-the-art MDS models. The code for our model and experiments is available at: https://github.com/oaimli/HGSum."}, "cited_paper_content": {"title": "Self-Attention Graph Pooling", "abstract": "Advanced methods of applying deep learning to structured data such as graphs have been proposed in recent years. In particular, studies have focused on generalizing convolutional neural networks to graph data, which includes redefining the convolution and the downsampling (pooling) operations for graphs. The method of generalizing the convolution operation to graphs has been proven to improve performance and is widely used. However, the method of applying downsampling to graphs is still difficult to perform and has room for improvement. In this paper, we propose a graph pooling method based on self-attention. Self-attention using graph convolution allows our pooling method to consider both node features and graph topology. To ensure a fair comparison, the same training procedures and model architectures were used for the existing pooling methods and our method. 
The experimental results demonstrate that our method achieves superior graph classification performance on the benchmark datasets using a reasonable number of parameters."}, "keywords": ["graph compressor"], "citation_intent": "method"} {"citing_id": "2304.13559v2", "cited_id": "1910.13461", "section_title": "Discussion", "citation": "While text-to-table, for example, also uses pre-trained language models as a basis, it is directly initialized with the pre-trained weights of a language model (i.e., BART #REFR ), which has been pre-trained on plain text only.", "text_before_citation": ["While there already exist approaches that transform textual data to tables like text-to-table #OTHEREFR or STable #OTHEREFR , we think that these approaches are not suitable for implementing multi-modal database operations in an MMDB.", "One key difference of our approach compared to text-to-table and STable is that the underlying models have to be trained from scratch for every new text collection."], "text_after_citation": ["Different from that, we carefully design a new pre-training procedure that allows our MMDB-Model to provide more meaningful representations for table extraction and thus realize MMOps on unseen texts with just a few examples.", "Moreover, another key difference is that text-to-table or STable produce the data for an output table using a transformer-based generative decoder #OTHEREFR .", "The first disadvantage here is that the model can \"make up\" values that are not actually present in the input text, but that the model picked up during (pre-)training.
This phenomenon is called hallucination #OTHEREFR .", "On top of that, transformer-based decoders output tables token-by-token in an autoregressive manner, which requires a pass through the decoder for every token in the output table.", "This results in a computationally expensive decoding process as we show in our evaluation."], "citing_paper_content": {"title": "Towards Multi-Modal Dbmss For Seamless Querying Of Texts And Tables", "abstract": "In this paper, we propose Multi-Modal Databases (MMDBs), which is a new class of database systems that can seamlessly query text and tables using SQL. To enable seamless querying of textual data using SQL in an MMDB, we propose to extend relational databases with so-called multi-modal operators (MMOps) which are based on the advances of recent large language models such as GPT-3. The main idea of MMOps is that they allow text collections to be treated as tables without the need to manually transform the data. As we show in our evaluation, our MMDB prototype can not only outperform state-of-the-art approaches such as text-to-table in terms of accuracy and performance but it also requires significantly less training data to fine-tune the model for an unseen text collection."}, "cited_paper_content": {"title": "Bart: Denoising Sequence-To-Sequence Pre-Training For Natural Language Generation, Translation, And Comprehension", "abstract": "We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes.
We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance."}, "keywords": ["language model", "pre-trained language models"], "citation_intent": "method"} {"citing_id": "2305.02109v1", "cited_id": "1909.11875", "section_title": "B. 
Open Issues And Challenges", "citation": "Existing FL implementations generally consider static strategies to allocate spectrum (i.e., PRBs) to FLUs for an entire LTR #REFR , leading to underutilization of spectrum, called spectrum fragmentation.", "text_before_citation": ["Due to temporal variation of channels, static resource allocation (e.g., licensed/unlicensed spectrum and transmit power) of current FL implementations #OTHEREFR can lead to two obscure problems.", "(i) Wireless resource over-provisioning (WROP), referring to surpassing FLUs' QoS requirements upon over-provisioning of wireless resources, causing an increase in expenditure, energy consumption, and interference.", "(ii) Wireless resource under-provisioning (WRUP), referring to the wireless resource deficiency of FLUs at some time instants, which can cause SLA violations for FLSs.", "WRUP can reduce FLSOs' satisfaction, preventing them from running their FLSs in the future.", "To harness WROP/WRUP, we propose two mechanisms in EV-FL to dynamically match resources to FLSOs' demands. 3) Spectrum fragmentation: In FL, a local training round (LTR) of FLUs consists of (i) GM downloading, (ii) LM training, and (iii) LM uploading."], "text_after_citation": ["To our knowledge, we are among the first to identify spectrum fragmentation in FL, meaning that FLUs only utilize wireless resources for GM downloading and LM uploading, while these resources are idle during LM training.", "Spectrum fragmentation under DCC.
Spectrum fragmentation becomes more severe upon considering DCC.", "In DCC, we have two types of LTRs (see Fig.", "2 ): (i) DPU LTR, consisting of three steps: (i-a) GM downloading, (i-b) LM training, and (i-c) dispersing the trained LM to neighboring CHUs.", "(ii) CHU LTR, consisting of five steps: (ii-a) GM downloading, (ii-b) LM training, (ii-c) waiting for DPUs connected to the CHU to perform DPU LTR, (ii-d) performing local aggregation, and (ii-e) aggregated LM uploading to the BS."], "citing_paper_content": {"title": "Synergies Between Federated Learning And O-Ran: Towards An Elastic Virtualized Architecture For Multiple Distributed Machine Learning Services", "abstract": "Federated learning (FL) is the most popular distributed machine learning technique. However, implementation of FL over modern wireless networks faces key challenges caused by (i) dynamics of the network conditions, (ii) coexistence of multiple FL services/tasks in the system, and (iii) concurrent execution of FL services with other network services, which are not jointly considered in prior works. Motivated by these challenges, we introduce a generic FL paradigm over next-generation (NextG) networks, called dynamic multi-service FL (DMS-FL). We identify three unexplored design considerations in DMS-FL: (i) FL service operator accumulation, (ii) wireless resource fragmentation, and (iii) signal strength fluctuations. We take the first steps towards addressing these design considerations through proposing a novel distributed ML architecture called elastic virtualized FL (EV-FL). EV-FL unleashes the full potential of Open RAN (O-RAN) systems and introduces an elastic resource provisioning methodology to execute FL services. It further constitutes a multi-timescale FL management system that introduces three dimensions into existing FL architectures: (i) virtualization, (ii) scalability, and (iii) elasticity. 
Through investigating EV-FL, we reveal a series of open research directions for future work. We finally simulate EV-FL to demonstrate its potential to save wireless resources and increase fairness among FL services."}, "cited_paper_content": {"title": "Federated Learning In Mobile Edge Networks: A Comprehensive Survey", "abstract": "In recent years, mobile devices are equipped with increasingly advanced sensing and computing capabilities. Coupled with advancements in Deep Learning (DL), this opens up countless possibilities for meaningful applications. Traditional Machine Learning (ML) approaches require the data to be centralized in a cloud server. However, this results in critical issues related to unacceptable latency and communication inefficiency. To this end, Mobile Edge Computing (MEC) has been proposed to bring intelligence closer to the edge, where data is produced. However, conventional enabling technologies for ML at mobile edge networks still require personal data to be shared with external parties, e.g., edge servers. Recently, in light of increasingly stringent data privacy legislation and growing privacy concerns, the concept of Federated Learning (FL) has been introduced. In FL, end devices use their local data to train an ML model required by the server. The end devices then send the model updates rather than raw data to the server for aggregation. FL can serve as an enabling technology in mobile edge networks since it enables the collaborative training of an ML model and also enables DL for mobile edge network optimization. However, in a large-scale and complex mobile edge network, heterogeneous devices with varying constraints are involved. This raises challenges of communication costs, resource allocation, and privacy and security in the implementation of FL at scale. In this survey, we begin with an introduction to the background and fundamentals of FL. 
Then, we highlight the aforementioned challenges of FL implementation and review existing solutions. Furthermore, we present the applications of FL for mobile edge network optimization. Finally, we discuss the important challenges, open issues and future research directions in FL."}, "keywords": ["Existing FL implementations", "underutilization"], "citation_intent": "background"} {"citing_id": "2303.00052v1", "cited_id": "1906.02642", "section_title": "Minimum Spanning Tree Games And Approximation", "citation": "From the more theoretical perspective, one can easily see that for MST games the cost function c( \u2022 ) is subadditive, yet it is not submodular in general; see #REFR for a characterization when it is.", "text_before_citation": ["In this section we address a well known special class of games known as minimum cost spanning tree (MST) games #OTHEREFR , where the cost of the outside option for a set of agents is determined by the cost of a minimum cost spanning tree in the subgraph induced by these agents. 
MST games are known to have a non-empty core.", "Moreover, it is known that finding some element in the core is computationally easy and can be done by computing a minimum spanning tree #OTHEREFR .", "The optimization problem that we address asks for the maximal amount that can be charged to the agents while no proper subset of agents would prefer the outside option."], "text_after_citation": ["Recalling that the computation of maximum shareable costs can be done in polynomial time when c( \u2022 ) is submodular, it is a natural question to ask if this still holds for subadditive cost functions.", "In that respect, note that the weighted graph games as studied in #OTHEREFR have polynomial time algorithms to decide non-emptiness of the core whenever c( \u2022 ) is subadditive, hence Theorem 4 applied to weighted graphs games does not give an answer to this question.", "We will see that for MST games, maximizing shareable costs cannot be done efficiently unless P=NP.", "In fact, one could define an even more general class of problems in the spirit of cooperative games with restricted coalition formation, by defining a (downward-closed) set system that describes all those subsets of agents that are able to cooperate and hence have access to an outside option, while all other subsets do not have that option.", "The almost core as studied in this paper arises as the special case where the set system is the (n \u2212 1)-uniform matroid."], "citing_paper_content": {"title": "Algorithmic Solutions For Maximizing Shareable Costs", "abstract": "This paper addresses the optimization problem to maximize the total costs that can be shared among a group of agents, while maintaining stability in the sense of the core constraints of a cooperative transferable utility game, or TU game. This means that all subsets of agents have an outside option at a certain cost, and stability requires that the cost shares are defined so that none of the outside options is preferable. 
When maximizing total shareable costs, the cost shares must satisfy all constraints that define the core of a TU game, except for being budget balanced. The paper gives a fairly complete picture of the computational complexity of this optimization problem, in relation to classical computational problems on the core. We also show that, for games with an empty core, the problem is equivalent to computing minimal core relaxations for several relaxations that have been proposed earlier. As an example for a class of cost sharing games with non-empty core, we address minimum cost spanning tree games. While it is known that cost shares in the core can be found efficiently, we show that the computation of maximal cost shares is NP-hard for minimum cost spanning tree games. We also derive a 2-approximation algorithm. Our work opens several directions for future work."}, "cited_paper_content": {"title": "An Efficient Characterization Of Submodular Spanning Tree Games", "abstract": "Cooperative games are an important class of problems in game theory, where the goal is to distribute a value among a set of players who are allowed to cooperate by forming coalitions. An outcome of the game is given by an allocation vector that assigns a value share to each player. A crucial aspect of such games is submodularity (or convexity). Indeed, convex instances of cooperative games exhibit several nice properties, e.g. regarding the existence and computation of allocations realizing some of the most important solution concepts proposed in the literature. 
For this reason, a relevant question is whether one can give a polynomial time characterization of submodular instances, for prominent cooperative games that are in general non-convex."}, "keywords": ["cost function", "MST games"], "citation_intent": "background"} {"citing_id": "2304.07775v1", "cited_id": "1610.09001", "section_title": "Introduction", "citation": "The aforementioned audio-visual distillation methods #REFR highly relate the cross-modal synchronization to semantic shows that we first erase the irrelevant modality noise.", "text_before_citation": ["Knowledge distillation has become an effective approach to ensemble different data sources to enrich the representation ability of target modality #OTHEREFR . Compared with visual-related modalities (e.g.", "RGB and optical flow), sound naturally contains vivid supervisory information from an independent auditory source, which deserves thorough exploration due to its synchronized nature with vision.", "Inspired by this, crossmodal knowledge distillation has been extensively investigated in audio-visual learning #OTHEREFR , leveraging the synchronization between auditory and visual modalities."], "text_after_citation": ["(c) illustrates that we further capture the audio-visual correspondence and rectify the semantic correlation. 
consistency, and directly transfer knowledge across the audiovisual modalities.", "However, such semantic consistency is hard to guarantee in unconstrained videos, thus, directly transferring knowledge across audio-visual modalities assuming highly-related semantic consistency could harm the distillation performance.", "The similar phenomenon of semantic misalignment is also found in self-supervised multi-modal learning #OTHEREFR but got little attention in cross-modal distillation.", "In this work, we reconsider the conventional semantic consistency assumption and point out that the irrelevant modality noise and differentiated semantic correlation are blamed for the failure of such assumption and harm the crossmodal distillation performance, especially in unconstrained audio-visual scenarios.", "Concretely, in cross-modal distillation, irrelevant modality noise is considered as the signals in one modality that are unrelated to its accompanying target modality."], "citing_paper_content": {"title": "Robust Cross-Modal Knowledge Distillation For Unconstrained Videos", "abstract": "Cross-modal distillation has been widely used to transfer knowledge across different modalities, enriching the representation of the target unimodal one. Recent studies highly relate the temporal synchronization between vision and sound to the semantic consistency for cross-modal distillation. However, such semantic consistency from the synchronization is hard to guarantee in unconstrained videos, due to the irrelevant modality noise and differentiated semantic correlation. To this end, we first propose a Modality Noise Filter (MNF) module to erase the irrelevant noise in teacher modality with cross-modal context. After this purification, we then design a Contrastive Semantic Calibration (CSC) module to adaptively distill useful knowledge for target modality, by referring to the differentiated sample-wise semantic correlation in a contrastive fashion. 
Extensive experiments show that our method could bring a performance boost compared with other distillation methods in both visual action recognition and video retrieval task. We also extend to the audio tagging task to prove the generalization of our method."}, "cited_paper_content": {"title": "Soundnet: Learning Sound Representations From Unlabeled Video", "abstract": "We learn rich natural sound representations by capitalizing on large amounts of unlabeled sound data collected in the wild. We leverage the natural synchronization between vision and sound to learn an acoustic representation using two-million unlabeled videos. Unlabeled video has the advantage that it can be economically acquired at massive scales, yet contains useful signals about natural sound. We propose a student-teacher training procedure which transfers discriminative visual knowledge from well established visual recognition models into the sound modality using unlabeled video as a bridge. Our sound representation yields significant performance improvements over the state-of-the-art results on standard benchmarks for acoustic scene/object classification. 
Visualizations suggest some high-level semantics automatically emerge in the sound network, even though it is trained without ground truth labels."}, "keywords": ["aforementioned audio-visual distillation"], "citation_intent": "method"} {"citing_id": "2303.13809v1", "cited_id": "1904.09675", "section_title": "Meta Evaluation", "citation": "BERTScore #REFR is a neural metric that relies on pre-trained models to compute the semantic similarity with the reference.", "text_before_citation": ["We utilize the accuracy of pairwise system-ranking (Kocmi et al., 2021) of three types of Kendall correlation.", "Specifically, these values are computed by flattening the scores into a single vector and calculating the average correlations over systems, or over segments.", "Baseline We compare LLMs with several commonly used baseline metrics for MT evaluation.", "BLEU #OTHEREFR is the most popular metric that compares the n-gram overlap of the translation with human reference, but it has been criticized for not capturing the full semantic meaning of the translation #OTHEREFR ."], "text_after_citation": ["BLEURT #OTHEREFR and COMET #OTHEREFR are supervised neural metrics that leverage human judgments to train. They have shown a high correlation with human judgments."], "citing_paper_content": {"title": "Error Analysis Prompting Enables Human-Like Translation Evaluation In Large Language Models: A Case Study On Chatgpt", "abstract": "Generative large language models (LLMs), e.g., ChatGPT, have demonstrated remarkable proficiency across several NLP tasks such as machine translation, question answering, text summarization, and natural language understanding. Recent research (Kocmi and Federmann, 2023) has shown that utilizing ChatGPT for assessing the quality of machine translation (MT) achieves state-of-the-art performance at the system level but performs poorly at the segment level.
To further improve the performance of LLMs on MT quality assessment, we conducted an investigation into several prompting methods. Our results indicate that by combining Chain-of-Thoughts (Wei et al., 2022) and Error Analysis (Lu et al., 2022), a new prompting method called Error Analysis Prompting, LLMs like ChatGPT can generate human-like MT evaluations at both the system and segment level. Additionally, we discovered some limitations of ChatGPT as an MT evaluator, such as unstable scoring and biases when provided with multiple translations in a single query. Our findings aim to provide a preliminary experience for appropriately evaluating translation quality on ChatGPT while offering a variety of tricks in designing prompts for in-context learning. We anticipate that this report will shed new light on advancing the field of translation evaluation with LLMs by enhancing both the accuracy and reliability of metrics."}, "cited_paper_content": {"title": "Bertscore: Evaluating Text Generation With Bert", "abstract": "We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task and show that BERTScore is more robust to challenging examples compared to existing metrics."}, "keywords": ["semantic similarity"], "citation_intent": "method"} {"citing_id": "2304.11697v1", "cited_id": "1911.10150", "section_title": "B.
Multi-Modal Fusion For Object Detection 1) Multi-Modal Object Detection:", "citation": "PointPainting #REFR projects LiDAR points into the output of an image-only semantic segmentation network, and appends the class scores to each point.", "text_before_citation": ["To date, several studies have investigated multi-modal fusion for 2D and 3D object detection.", "Frustum PointNets #OTHEREFR extract the 3D bounding frustum of an object by extruding 2D bounding boxes from image detectors.", "PointFusion #OTHEREFR combines a CNN and a PointNet #OTHEREFR architecture respectively to process images and raw point clouds then predict 3D boxes."], "text_after_citation": ["All these fusion methods of RGB and LiDAR achieve high average precision on the benchmarks, however, the coupling or interrelation of two modalities will cause the whole system to fail easily once part of the sensors break down.", "Besides, the methods above only provide a deterministic predict result, making it risky to carry out in the real application.", "2) Adaptive fusion: Several new studies have proposed self-adaptive techniques in computer vision.", "Therefore, the robustness of those tasks can be improved to some extent.", "Adaptnet #OTHEREFR uses a convoluted mixture of deep experts(CMoDE) fusion techniques to learn features from complementary modalities and spectra."], "citing_paper_content": {"title": "Informative Data Selection With Uncertainty For Multi-Modal Object Detection", "abstract": "Noise has always been nonnegligible trouble in object detection by creating confusion in model reasoning, thereby reducing the informativeness of the data. It can lead to inaccurate recognition due to the shift in the observed pattern, that requires a robust generalization of the models. To implement a general vision model, we need to develop deep learning models that can adaptively select valid information from multi-modal data. This is mainly based on two reasons. 
Multi-modal learning can break through the inherent defects of single-modal data, and adaptive information selection can reduce chaos in multi-modal data. To tackle this problem, we propose a universal uncertainty-aware multi-modal fusion model. It adopts a multi-pipeline loosely coupled architecture to combine the features and results from point clouds and images. To quantify the correlation in multimodal information, we model the uncertainty, as the inverse of data information, in different modalities and embed it in the bounding box generation. In this way, our model reduces the randomness in fusion and generates reliable output. Moreover, we conducted a completed investigation on the KITTI 2D object detection dataset and its derived dirty data. Our fusion model is proven to resist severe noise interference like Gaussian, motion blur, and frost, with only slight degradation. The experiment results demonstrate the benefits of our adaptive fusion. Our analysis on the robustness of multi-modal fusion will provide further insights for future research."}, "cited_paper_content": {"title": "Pointpainting: Sequential Fusion For 3D Object Detection", "abstract": "Camera and lidar are important sensor modalities for robotics in general and self-driving cars in particular. The sensors provide complementary information offering an opportunity for tight sensor-fusion. Surprisingly, lidar-only methods outperform fusion methods on the main benchmark datasets, suggesting a gap in the literature. In this work, we propose PointPainting: a sequential fusion method to fill this gap. PointPainting works by projecting lidar points into the output of an image-only semantic segmentation network and appending the class scores to each point. The appended (painted) point cloud can then be fed to any lidar-only method. Experiments show large improvements on three different state-of-the art methods, Point-RCNN, VoxelNet and PointPillars on the KITTI and nuScenes datasets. 
The painted version of PointRCNN represents a new state of the art on the KITTI leaderboard for the bird's-eye view detection task. In ablation, we study how the effects of Painting depends on the quality and format of the semantic segmentation output, and demonstrate how latency can be minimized through pipelining."}, "keywords": ["image-only semantic segmentation", "LiDAR points"], "citation_intent": "method"} {"citing_id": "2304.00359v1", "cited_id": "1905.05172", "section_title": "Implementation Details", "citation": "For each training subject, we sample (a) 5,000 surface points as G s , which are evenly distributed on the mesh surface; (b) 5,000 occupancy-target points as G o in the 3D space, for which we follow the sampling strategy in #REFR .", "text_before_citation": [], "text_after_citation": ["In particular, 15/16 points in G o are evenly sampled on the mesh surface.", "Gaussian perturbation is then applied to them along the surface normal direction.", "The rest 1/16 points are randomly sampled within the predefined 3D space where the mesh lies in.", "We employ Rembg [1] to segment out the background for real-world images and use Kaolin [2] to calculate the SDF from the SMPL-X model.", "We apply PIXIE #OTHEREFR to estimate the SMPL-X model for the segmented human subject."], "citing_paper_content": {"title": "Sesdf: Self-Evolved Signed Distance Field For Implicit 3D Clothed Human Reconstruction", "abstract": "Figure 1. Self-evolved Signed Distance Field (SeSDF): We propose Self-evolved Signed Distance Field (SeSDF), which can flexibly reconstruct 3D clothed human models from a single-view image (top) or uncalibrated multi-view images (bottom). 
SeSDF can robustly recover fine geometry details from any type of poses, which allows us to generate clothed human avatars."}, "cited_paper_content": {"title": "Pifu: Pixel-Aligned Implicit Function For High-Resolution Clothed Human Digitization", "abstract": "We introduce Pixel-aligned Implicit Function (PIFu), a highly effective implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object. Using PIFu, we propose an end-to-end deep learning method for digitizing highly detailed clothed humans that can infer both 3D surface and texture from a single image, and optionally, multiple input images. Highly intricate shapes, such as hairstyles, clothing, as well as their variations and deformations can be digitized in a unified way. Compared to existing representations used for 3D deep learning, PIFu can produce high-resolution surfaces including largely unseen regions such as the back of a person. In particular, it is memory efficient unlike the voxel representation, can handle arbitrary topology, and the resulting surface is spatially aligned with the input image. Furthermore, while previous techniques are designed to process either a single image or multiple views, PIFu extends naturally to arbitrary number of views. We demonstrate high-resolution and robust reconstructions on real world images from the DeepFashion dataset, which contains a variety of challenging clothing types. 
Our method achieves state-of-the-art performance on a public benchmark and outperforms the prior work for clothed human digitization from a single image."}, "keywords": ["3D space", "mesh surface"], "citation_intent": "method"} {"citing_id": "2303.04544v1", "cited_id": "1809.00549", "section_title": "Social Reasoning Facilitating Communication", "citation": "The obverter mechanism was implemented in modern neural networks by Choi, Lazaridou, and de Freitas (2018) and #REFR , who also observed an increase in compositionality.", "text_before_citation": ["The obverter mechanism makes use of the same signal-meaning mapping for language generation and comprehension.", "This guarantees that agents are internally consistent and use the language they learn in the receiver role when they assume the speaker role.", "It can be interpreted as a rudimentary form of social reasoning: an agent's own cognitive capabilities serve as the model of another agent.", "The symmetry of the roles, together with the selection of speaker-receiver pairs at random from a population of agents, effectively creates a transmission chain similar to the iterated learning setting #OTHEREFR .", "This should constitute a pressure for a systematic communication protocol."], "text_after_citation": ["The more complex mechanism of social reasoning was explored in #OTHEREFR .", "Their research belongs to the domain of artificial intelligence and multi-agent cooperation.", "They introduced the concept of social influence as an auxiliary reward for agents that communicate to solve a sequential social dilemma.", "Social influence was measured using counterfactual reasoning: an agent considered alternative signals it could have sent and assessed their likely effect on the actions of another agent.", "In one variant of their simulations, actions of the other agent were predicted using a specialised internal model trained in a supervised manner."], "citing_paper_content": {"title": "Models Of Symbol Emergence In 
Communication: A Conceptual Review And A Guide For Avoiding Local Minima", "abstract": "Computational simulations are a popular method for testing hypotheses about the emergence of communication. This kind of research is performed in a variety of traditions including language evolution, developmental psychology, cognitive science, machine learning, robotics, etc. The motivations for the models are different, but the operationalizations and methods used are often similar. We identify the assumptions and explanatory targets of several most representative models and summarise the known results. We claim that some of the assumptions-such as portraying meaning in terms of mapping, focusing on the descriptive function of communication, modelling signals with amodal tokens-may hinder the success of modelling. Relaxing these assumptions and foregrounding the interactions of embodied and situated agents allows one to systematise the multiplicity of pressures under which symbolic systems evolve. In line with this perspective, we sketch the road towards modelling the emergence of meaningful symbolic communication, where symbols are simultaneously grounded in action and perception and form an abstract system."}, "cited_paper_content": {"title": "Emergence Of Communication In An Interactive World With Consistent Speakers", "abstract": "Training agents to communicate with one another given task-based supervision only has attracted considerable attention recently, due to the growing interest in developing models for human-agent interaction. Prior work on the topic focused on simple environments, where training using policy gradient was feasible despite the non-stationarity of the agents during training. In this paper, we present a more challenging environment for testing the emergence of communication from raw pixels, where training using policy gradient fails. 
We propose a new model and training algorithm, that utilizes the structure of a learned representation space to produce more consistent speakers at the initial phases of training, which stabilizes learning. We empirically show that our algorithm substantially improves performance compared to policy gradient. We also propose a new alignment-based metric for measuring context-independence in emerged communication and find our method increases context-independence compared to policy gradient and other competitive baselines."}, "keywords": ["compositionality", "modern neural networks"], "citation_intent": "method"} {"citing_id": "2304.02834v2", "cited_id": "1712.02463", "section_title": "C. Corrupted Input Detection", "citation": "CURE-TSR #REFR is a traffic sign recognition dataset that includes real-world and simulated challenging conditions of 12 types and five severity levels.", "text_before_citation": ["In addition to the widely accepted OOD and adversarial detection setup, we consider corrupted inputs as another type of anomaly.", "Deployed in the real world, neural networks are known to suffer from imperfect samples due to the data acquisition process and environmental factors, such as motion blur or weather conditions #OTHEREFR - #OTHEREFR .", "We use image classification datasets designed to benchmark the robustness of neural networks under realistic challenging conditions.", "CIFAR-10-C #OTHEREFR consists of 19 diverse corruption types in four categories, including noise, blur, weather, and digital, at five different severity levels that are applied to test images of CIFAR-10 dataset."], "text_after_citation": ["For each dataset, a ResNet model is trained on corruptionfree images, and the gradients are collected from pristine images in the test sets and their corrupted versions.", "We utilize the Mahalanobis method #OTHEREFR for comparison because it showed the best performance among all the compared methods for OOD detection and adversarial detection.", "In 
the experiments, we observed that the AUROC scores are highly saturated for both methods in many cases, particularly for the CIFAR-10-C dataset, which calls for a more comprehensive comparison.", "To better facilitate the performance comparison, we employ the corrected repeated k-fold cross-validated (CV) paired t-test #OTHEREFR as a measure of statistical significance.", "Paired t-test is a statistical test for comparing two different learning schemes, A and B, based on a number of observations; in this case, predictive accuracy a and b."], "citing_paper_content": {"title": "Probing The Purview Of Neural Networks Via Gradient Analysis", "abstract": "We analyze the data-dependent capacity of neural networks and assess anomalies in inputs from the perspective of networks during inference. The notion of data-dependent capacity allows for analyzing the knowledge base of a model populated by learned features from training data. We define purview as the additional capacity necessary to characterize inference samples that differ from the training data. To probe the purview of a network, we utilize gradients to measure the amount of change required for the model to characterize the given inputs more accurately. To eliminate the dependency on ground-truth labels in generating gradients, we introduce confounding labels that are formulated by combining multiple categorical labels. We demonstrate that our gradient-based approach can effectively differentiate inputs that cannot be accurately represented with learned features. We utilize our approach in applications of detecting anomalous inputs, including out-of-distribution, adversarial, and corrupted samples.
Our approach requires no hyperparameter tuning or additional data processing and outperforms state-of-the-art methods by up to 2.7%, 19.8%, and 35.6% of AUROC scores, respectively."}, "cited_paper_content": {"title": "Cure-Tsr: Challenging Unreal And Real Environments For Traffic Sign Recognition", "abstract": "In this paper, we investigate the robustness of traffic sign recognition algorithms under challenging conditions. Existing datasets are limited in terms of their size and challenging condition coverage, which motivated us to generate the Challenging Unreal and Real Environments for Traffic Sign Recognition (CURE-TSR) dataset. It includes more than two million traffic sign images that are based on real-world and simulator data. We benchmark the performance of existing solutions in real-world scenarios and analyze the performance variation with respect to challenging conditions. We show that challenging conditions can decrease the performance of baseline methods significantly, especially if these challenging conditions result in loss or misplacement of spatial information. We also investigate the effect of data augmentation and show that utilization of simulator data along with real-world data enhance the average recognition performance in real-world scenarios. The dataset is publicly available at this https URL"}, "keywords": ["dataset", "traffic sign recognition"], "citation_intent": "background"} {"citing_id": "2304.02509v1", "cited_id": "0807.3917", "section_title": "The Conjecture", "citation": "Since then, the topic gathered significant attention, and the activity sparked with the emergence of polar codes #REFR ; Ar\u0131kan mentioned this as one of the major open problems in coding theory at ITW Dublin in 2010.", "text_before_citation": ["We refer to Section 9 for the formal definition of BMS channels; for now, it is sufficient to consider the BSC, the main case of interest, as BMS are mixtures of BSCs. 
Conjecture 1.", "For any BMS channel P, and any rate R below the capacity of P given by C(P) = I(U, P) = (1/2) Σ_{x∈{0,1}, y∈Y} P(y|x) log_2 [P(y|x) / (P(y|0)/2 + P(y|1)/2)], where Y is the output alphabet of P (with C(P) = 1 − H(ε) when P = BSC(ε)), a sequence of RM(m_i, r_i) codes of rate R_i = Σ_{j≤r_i} (m_i choose j) 2^{−m_i} tending to R can be decoded successfully with high probability, i.e., any of the 2^{n R_i} codewords can be decoded with probability 1 − o_{m_i}(1) despite corruptions from P.", "It is hard to trace back the first appearance of this belief in the literature, but [KKM + 16a] reports that it was likely already present in the late 60s.", "The claim was mentioned explicitly in a 1993 talk by Shu Lin, entitled 'RM Codes are Not So Bad' #OTHEREFR .", "A 1993 paper by Dumer and Farrell also contains a discussion on the matter #OTHEREFR , as well as the 1997 paper of Costello and Forney on the 'road to channel capacity' #OTHEREFR ."], "text_after_citation": ["Due to the broad relevance of RM codes in computer science, electrical engineering and mathematics, the activity scattered across a wide line of works [Dum04, DS06, Dum06, CG05, HKL05, Ari08, Ar\u013109, Ari10, KLP12, ASW15b, ASW15a, KKM + 16a, KKM + 17, MHU14, SSV17, AY19, YA20, SS20, KKM + 16b, Sam18, Sam20, AHN21, HSS21, LHP20, FFHM21, ASY21, RP21, GEE + 21, #OTHEREFR]; see also #OTHEREFR .", "Relations to polar codes: In a breakthrough paper, Ar\u0131kan showed that the explicit class of polar codes achieves Shannon capacity on any BMS channel #OTHEREFR .", "Given the close relationship between polar and RM codes, the belief that RM codes could also be proved to achieve capacity on BMS channels intensified.", "Polar codes are derived from the same square matrix, i.e., the matrix whose rows correspond to evaluations of monomials, which can also be expressed as G_n := (1 1; 0 1)^{⊗m}, but polar codes use a different row selection that is channel dependent.", "One can view the difference
as follows: apply G_n (over F_2) to a vector with i.i.d."], "citing_paper_content": {"title": "A Proof That Reed-Muller Codes Achieve Shannon Capacity On Symmetric Channels", "abstract": "Reed-Muller codes were introduced in 1954, with a simple explicit construction based on polynomial evaluations, and have long been conjectured to achieve Shannon capacity on symmetric channels. Major progress was made towards a proof over the last decades, using combinatorial weight enumerator bounds, a breakthrough on the erasure channel from sharp thresholds, hypercontractivity arguments, and polarization theory. Further major progress recently established that the bit error probability vanishes slowly below capacity. However, when channels allow for errors, the results of Bourgain-Kalai do not apply for converting a vanishing bit to a vanishing block error probability, nor do the known weight enumerator bounds. The conjecture that RM codes achieve Shannon capacity on symmetric channels, with high probability of recovering the codewords, has thus remained open. This paper closes the conjecture's proof. It uses a new recursive boosting framework, which aggregates the decoding of codeword restrictions on 'subspace-sunflowers', handling their dependencies via an L_p Boolean Fourier analysis, and using a list-decoding argument with a weight enumerator bound from Sberlo-Shpilka. The proof does not require a vanishing bit error probability for the base case, but only a non-trivial probability, obtained here for general symmetric codes.
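The two quantities driving Conjecture 1 are easy to compute directly: the rate of RM(m, r) is the number of monomials of degree at most r, Σ_{j≤r} C(m, j), divided by the block length 2^m, and the BSC(ε) capacity is 1 − H(ε) with H the binary entropy. A minimal sketch (function names are illustrative):

```python
import math

def rm_rate(m, r):
    """Rate of the Reed-Muller code RM(m, r):
    dimension sum_{j<=r} C(m, j) over block length 2^m."""
    return sum(math.comb(m, j) for j in range(r + 1)) / 2 ** m

def bsc_capacity(eps):
    """Capacity 1 - H(eps) of the binary symmetric channel BSC(eps),
    with H the binary entropy function."""
    if eps == 0.0 or eps == 1.0:
        return 1.0
    h = -eps * math.log2(eps) - (1 - eps) * math.log2(1 - eps)
    return 1.0 - h
```

For instance, rm_rate(3, 1) = 4/8 = 0.5, the rate of the [8, 4] extended Hamming code RM(3, 1), and a rate-1/2 RM sequence is conjectured (and, per the citing paper, proved) to tolerate any BSC(ε) with 1 − H(ε) > 1/2.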
This gives in particular a shortened and tightened argument for the vanishing bit error probability result of Reeves-Pfister, and with prior works, it implies the strong wire-tap secrecy of RM codes on pure-state classical-quantum channels."}, "cited_paper_content": {"title": "Channel Polarization: A Method For Constructing Capacity-Achieving Codes For Symmetric Binary-Input Memoryless Channels", "abstract": "A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\\{W_N^{(i)}:1\\le i\\le N\\}$ such that, as $N$ becomes large, the fraction of indices $i$ for which $I(W_N^{(i)})$ is near 1 approaches $I(W)$ and the fraction for which $I(W_N^{(i)})$ is near 0 approaches $1-I(W)$. The polarized channels $\\{W_N^{(i)}\\}$ are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC $W$ with $I(W)>0$ and any target rate $R
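The channel polarization described in this last abstract has a simple closed form for the binary erasure channel, which is the standard way to illustrate it: one polarization step sends the erasure probability z to 2z − z² (the '−' channel) and to z² (the '+' channel), and capacity, 1 − z for BEC(z), is exactly conserved at each step. A minimal numerical sketch, not taken from the cited paper:

```python
def polarize_bec(eps, n_levels):
    """Erasure probabilities of the 2^n synthesized channels for BEC(eps).

    One polarization step maps erasure probability z to 2z - z**2
    (the '-' channel) and to z**2 (the '+' channel); the capacity of
    BEC(z) is 1 - z, and total capacity is conserved at every step.
    """
    zs = [eps]
    for _ in range(n_levels):
        zs = [f(z) for z in zs
              for f in (lambda z: 2 * z - z * z, lambda z: z * z)]
    return zs

# After enough levels, the fraction of near-perfect channels (z near 0)
# approaches I(W) = 1 - eps, matching the abstract's statement.
zs = polarize_bec(0.5, 12)
good_fraction = sum(1 for z in zs if z < 1e-3) / len(zs)
```

Sending data only through the indices with z near 0, and freezing the rest, is exactly the channel-dependent row selection that distinguishes polar codes from RM codes in the discussion above.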