| text_with_holes (string, 92–2.79k chars) | text_candidates (string, 57–1.4k chars) | A (string, 6 classes) | B (string, 6 classes) | C (string, 6 classes) | D (string, 6 classes) | label (string, 4 classes) |
|---|---|---|---|---|---|---|
<|MaskedSetence|> <|MaskedSetence|> We remark that in this case, our method is similar to that of [MR3591945], with some differences. First, we consider that $\tilde{T}$ can be nonzero. <|MaskedSetence|> We had to reconsider the proofs, in our view simplifying some of them.
. | **A**: Of course, the numerical scheme and the estimates developed in Section 3.1 hold.
**B**: Also, our scheme is defined by a sequence of elliptic problems, avoiding the annoyance of saddle point systems.
**C**: However, several simplifications are possible when the coefficients have low-contrast, leading to sharpe... | ACB | ABC | ACB | ACB | Selection 3 |
Most relevant to our work is the work presented in [20], where a time series model is used to capture the time-based variation of social-content features. <|MaskedSetence|> <|MaskedSetence|> Without any other handcrafted features, they achieved almost 90% accuracy for events reported on Snopes.com. <|MaskedSetence|> The... | **A**: Like all other deep learning models, the learning process is a black box, so we cannot explain the good performance based only on content features.
**B**: Ma et al. [19] used Recurrent Neural Networks for rumor detection, they batch tweets into time intervals and model th... | CBA | CBA | CBA | ABC | Selection 2 |
To overcome this issue, we set up a threshold of 72 hours. <|MaskedSetence|> On average, the human editors of Snopes need 25.49 hours to verify a rumor and post it. Our system already achieves 87% accuracy in 25 hours. <|MaskedSetence|> Figure 12(a) is a rumor about ‘Okra curing diabetes’ http://www.snopes.com/m... | **A**: We only consider the first candidate within 72 hours before or after the beginning time of the event as the timestamp of humans confirming rumors.
**B**: However, Snopes does not provide any information regarding how they detect the rumor.
**C**: We illustrate two examples here in Figures 12(a) and 12(b).
| ACB | BCA | ACB | ACB | Selection 3 |
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> For GoogleTrends, there are 2,700 and 4,200 instances respectively. We then bin the entities in the two datasets chronologically into 10 different parts. We set up 4 trials with each of the last 4 bins (using the history bins for training on a rolling basis) for ... | **A**: We select a studied time for each event period randomly in the range of 5 days before and after the event time.
**B**: Evaluating methodology.
For RQ1, given an event entity e at time t, we need to classify it into either the Breaking or Anticipated class.
**C**: In total, our training dataset for AOL consists ... | BAC | BAC | BCA | BAC | Selection 1 |
<|MaskedSetence|> Half of the patients are female and ages range from 17 to 66, with a mean age of 41.8 years. Body weight, according to BMI, is normal for half of the patients, four are overweight and one is obese. <|MaskedSetence|> Only one of the patients suffers from diabetes type 2 and all are in ICT therapy. <... | **A**:
Table 1 shows basic patient information.
**B**: The mean BMI value is 26.9.
**C**: In terms of time since being diagnosed with diabetes, patients vary from inexperienced (2 years) to very experienced (35 years), with a mean value of 13.9 years..
| ABC | ABC | ABC | ABC | Selection 2 |
<|MaskedSetence|> (2018). It utilizes several convolutional layers with different dilation factors in parallel to capture multi-scale image information. Additionally, we incorporated scene content via global average pooling over the final encoder output, as motivated by the study of Torralba et al. (2006) who stated t... | **A**:
This representation constitutes the input to an Atrous Spatial Pyramid Pooling (ASPP) module Chen et al.
**B**: These authors augmented multi-scale information with global context and demonstrated performance improvements on semantic segmentation tasks..
**C**: (2017).
| ACB | CBA | ACB | ACB | Selection 1 |
Our strongest positive result about the approximation of the locality number will be derived from the reduction mentioned above (see Section 5.2). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Secondly, due to the results of Section 4, the investigated greedy strategies for computing the locality number can... | **A**: This is mainly motivated by two aspects.
**B**: Firstly, ruling out simple strategies is a natural initial step in the search for approximation algorithms for a new problem.
**C**: However, we shall first investigate in Section 5.1 the approximation performance of several obvious greedy strategies to compute t... | CAB | CAB | CAB | CAB | Selection 1 |
We thank Marc Bellemare and Pablo Castro for their help with Rainbow and Dopamine. The work of Konrad Czechowski, Piotr Kozakowski and Piotr Miłoś was supported by the Polish National Science Center grants UMO-2017/26/E/ST6/00622. <|MaskedSetence|> <|MaskedSetence|> PLG/2019/012497 and PLG/2019/012784. <|MaskedSeten... | **A**: We gratefully acknowledge Polish high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH, PCSS) for providing computer facilities and support within computational grants no.
**B**: The work of Henryk Michalewski was supported by the Polish National Science Center grant UMO-2018/29/B/ST6/... | BAC | BAC | BAC | BAC | Selection 2 |
Figure 10: The Cricket robot tackles a step of height h using the rolling locomotion mode, obviating the need for a transition to the walking mode. The total energy consumed throughout the entire step negotiation process in rolling locomotion stayed below the preset threshold value. This threshold value was established bas...
**B**: The blue line illustrates the total energy consumed (in rolling locomotion mode), while the green line represents the ongoing cumulative energy consumption of the rear legs, indicating it did not exceed the threshold values set by the rear bo... | BAC | ACB | ACB | ACB | Selection 2 |
It should be fairly clear that such assumptions are very unrealistic or undesirable. Advice bits, like all information, are prone to transmission errors. In addition, the known advice models often allow
information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution.... | **A**: In contrast, an online algorithm that does not use advice at all has competitive ratio at most 2, i.e., its output can be at most twice as costly as the optimal one..
**B**: In the traditional advice model, one bit suffices to be optimal: 0 for renting throughout the horizon, 1 for buying right away.
**C**: La... | CBA | CBA | CBA | BAC | Selection 2 |
It is worth noting that the difference in terms of space complexity is also very significant. For classifiers supporting incremental classification, like SS3 or MNB, only a small vector needs to be stored for each user. <|MaskedSetence|> <|MaskedSetence|> However, when working with classifiers not supporting increme... | **A**: Note that storing either all the documents or a $d \times t$ document-term matrix, where $d$ is the number of documents and $t$ the vocabulary size, takes up much more space than a small 2-dimensional vector..
**B**: of every user and then simply update it as more conten... | ACB | CBA | CBA | CBA | Selection 4 |
Compared with other algorithms, the novel SPBLLA algorithm has advantages in learning rate. Various algorithms have been employed in UAV networks in search of the optimal channel selection [31][29], such as the stochastic learning algorithm [30]. The most widely used algorithm, LLA, is an ideal method for NE approachin... | **A**: In large-scale scenarios, more iterations are required, which makes BLLA inefficient.
**B**: The BLLA has been employed by [33], which is modified from LLA to update strategies in each iteration to converge to the NE.
**C**: To achieve it, the works in [34] and [35] have provided a novel synchronous algorithm.... | BAC | BAC | BAC | BAC | Selection 1 |
The sources of DQN variance are Approximation Gradient Error (AGE) [23] and Target Approximation Error (TAE) [24]. In Approximation Gradient Error, the error in estimating the gradient direction of the cost function leads to inaccurate and extremely different predictions on the learning trajectory through different episodes b... | **A**: Dropout methods have the ability to assemble these two solutions, which minimize different sources of variance.
**B**: Dropout methods can achieve a consistence learning trajectory and exact DQN parameters with averaging, which comes inherently with Dropout methods..
**C**: This type of variance leads to converg... | CBA | CAB | CAB | CAB | Selection 4 |
<|MaskedSetence|> (2019), using a dataset defined in Cohen et al. (2018), proposed an image-to-image framework to transform an input image with an object of interest (presence domain), like a tumor, to an image without the tumor (absence domain), i.e. <|MaskedSetence|> This results in capturing detailed structure fro... | **A**: Vorontsov et al.
**B**: translate diseased image to healthy; next, their model learns to add the removed tumor to the new healthy image.
**C**: (2018) proposed a rewiring method for the long skip connections used in U-Net and tested their method on nodule segmentation in the low-dose CT scans of the chest, nuc... | ABC | ABC | BAC | ABC | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data. Bornschein et al. (2020) analyze the performance across different dataset sizes.
Olson et al. (2018) evaluate the performance of modern neural networks using the same... | **A**: (2014) and find that neural networks achieve good results but are not as strong as random forests..
**B**: Zhang et al.
**C**: The generalization performance has been widely studied.
| CAB | CBA | CBA | CBA | Selection 2 |
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... | **A**: (2019).
**B**: (2020), which focuses on value-based reinforcement learning, OPPO attains the same $\sqrt{T}$-regret even in the presence of adversarially chosen reward functions.
**C**: (2020); Zhou et al.
| ACB | ACB | ACB | CAB | Selection 1 |
<|MaskedSetence|> Henry Adams and Dr. <|MaskedSetence|> We also thank Prof. Mikhail Katz and Prof. Michael Lesnick for explaining to us some aspects of their work. We thank Dr. Qingsong Wang for bringing to our attention the paper [76] which was critical for the proof of Theorem 1. Finally, we thank Dr. <|MaskedSete... | **A**: We thank Prof.
**B**: Johnathan Bush for very useful feedback about a previous version of this article.
**C**: Alexey Balitsky for pointing out an imprecision in the statement of Proposition 9.2.
.
| ABC | ACB | ABC | ABC | Selection 4 |
Although our main design goal was to support the investigation of t-SNE projections, most of our views and interaction techniques are not strictly confined to the t-SNE algorithm. <|MaskedSetence|> <|MaskedSetence|> The same goes for other views, such as Neighborhood Preservation or Adaptive PCP: the inspiration and ... | **A**: Its motivation, however, came from the fact that t-SNE is especially known to generate hard-to-interpret shapes in its output [14], so the necessity of exploring and investigating such shapes became more apparent than with other DR methods.
**B**: We argue, though, that more than a decade after its proposal, it... | CBA | CAB | CAB | CAB | Selection 3 |
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> The most representative method of this category is arguably PSO [80], in which each solution evolves with a velocity vector to explore the search domain. Another popular algorithm with differential movement at its core is DE [59], in which new solutions are produ... | **A**: The newly generated solution could compete against previous ones, or against other solutions in the population to achieve a space and remain therein in subsequent search iterations.
**B**: Differential Vector Movement, in which new solutions are produced by a shift or a mutation performed onto a previous soluti... | BAC | CBA | BAC | BAC | Selection 3 |
<|MaskedSetence|> Instead, DEC and SpectralNet work better on large-scale datasets. Although GAE-based models (GAE, MGAE, and GALA) achieve impressive results on graph-type datasets, they fail on the general datasets, which is probably caused by the fact that the graph is constructed by an algorithm rather than pr... | **A**: Classical clustering models work poorly on large-scale datasets.
**B**: If the graph is not updated, the contained information is low-level.
**C**: The adaptive learning will induce the model to exploit the high-level information.
| ABC | ABC | ABC | ACB | Selection 1 |
<|MaskedSetence|> The traffic to the servers is stable and hence can be predicted (Wessels et al., 2003). <|MaskedSetence|> <|MaskedSetence|> In this evaluation we issued queries to a name server at 69.13.54.XXX for three minutes, and plot the IPID values received in responses in Figure 3 - the identical pattern...
Measuring IPID increment rate.
**B**: One example evaluation of IPID sampling on one of the busiest servers is plotted in Figure 3.
**C**: We validate this by sampling the IPID value at the servers which we use for running the test.
| ACB | ACB | ACB | ACB | Selection 1 |
<|MaskedSetence|> However, the model can be modified to be trained on unlabeled data, simply by allowing arbitrary data samples as input to the context layer. This design introduces variation in training inputs, which makes it harder to learn consistent context patterns. <|MaskedSetence|> If the context layer can pro... | **A**:
The current design of the context-based network relies on labeled data because the odor samples for a given class are presented as ordered input to the context layer.
**B**: The full six-gas sensor drift dataset can be used, as well as other unbalanced and therefore realistic datasets..
**C**: For this task, ... | ACB | CBA | ACB | ACB | Selection 1 |
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview). <|MaskedSetence|> While these constructions and the involved proofs are generally deemed quite complicated, the situation for semigroups turns out to be much sim... | **A**: This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17].
**B**: In fact, the construction to generate these semigroups is quite simple [4, Proposition 4.1] (compare also to 3).
**C**: While it is known that the fr... | ACB | ACB | ACB | ACB | Selection 4 |
Since Wu and Mooney (2019) reported that human-based textual explanations Huk Park et al. (2018) gave better results than human-based attention maps for SCR, we train all of the SCR variants on the subset containing textual explanation-based cues. <|MaskedSetence|> For the first phase, which strengthens the influentia... | **A**: For the second phase, we use a learning rate of $10^{-4}$ and a weight of 1000, which is applied alongside the loss term used in the first phase.
**B**: SCR is trained in two phases.
**C**: Then, following Wu and Mooney (2019), for the s... | BCA | BCA | CBA | BCA | Selection 4 |
URL Cross Verification. Legal jurisdictions around the world require organisations to make their privacy policies readily available to their users. <|MaskedSetence|> In order to focus PrivaSeer Corpus on privacy policies that users are intended to read, we cross-verified the URLs of the privacy policies in our corpus... | **A**: After cross-verifying the URLs, we were left with a set of 1.1 million web pages..
**B**: We then gathered the URLs satisfying our selection criteria and cross-verified them with the URLs in our existing corpus.
**C**: As a result, most organisations include a link to their privacy policy in the footer of thei... | BCA | CBA | CBA | CBA | Selection 3 |
Ensemble learning can be controlled in different ways. <|MaskedSetence|> This offers new possibilities for direct manipulation of both instances and features. <|MaskedSetence|> <|MaskedSetence|> These models produce predictions that can be stored again as new metadata. If visualized, this predictions’ space can be m... | **A**: Visualization also enhances the interaction with data preparation (Figure 1, upper red arrow) [25].
**B**: Starting from the data, visualization can be used to explore the data space (Figure 1, upper blue arrow) [47].
**C**: Data preprocessing and wrangling benefit from feedback provided by a VA system, for e...
<|MaskedSetence|> Firstly, the data quantity within the datasets used as ”tasks” varies across different applications, which can impact the effectiveness of MAML [Serban et al., 2015, Song et al., 2020]. Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, ... | **A**: This variation manifests both between training tasks and between training and testing tasks, similarly affecting the performance of MAML.
**B**:
When applying MAML to NLP, several factors can influence the training strategy and performance of the model.
**C**: For example, PAML [Madotto et al., 2019] regards ... | ABC | BCA | BCA | BCA | Selection 4 |
In this paper, we consider a dynamic mission-driven UAV network with UAV-to-UAV mmWave communications, wherein multiple transmitting UAVs (t-UAVs) simultaneously transmit to a receiving UAV (r-UAV). In such a scenario, we focus on inter-UAV communications in UAV networks, and the UAV-to-ground communications are not in... | **A**: In summary, the key contributions of this paper are listed as follows.
.
**B**: Based on the joint UAV position-attitude prediction, an efficient codeword selection scheme is further developed with tracking error (TE) awareness, which achieves fast subarray activation/partition and array weighting vector select... | BCA | ACB | BCA | BCA | Selection 4 |
Szepesvári, 2018; Dalal et al., 2018; Srikant and Ying, 2019) settings. See Dann et al. <|MaskedSetence|> Also, when the value function approximator is linear, Melo et al. <|MaskedSetence|> (2019); Chen et al. (2019b) study the convergence of Q-learning. When the value function approximator is nonlinear, TD possibly ... | **A**: (2014) for a detailed survey.
**B**: When the value function approximator is an overparameterized multi-layer neural network, Cai et al.
**C**: (2008); Zou et al.
| BAC | ACB | ACB | ACB | Selection 3 |
<|MaskedSetence|> (2018) suggest that skip connections are “shallow” themselves, and only fuse by simple, one-step operations, and therefore Yu et al. (2018) augment standard architectures with deeper aggregation to better fuse information across layers to improve recognition and resolution. Shen et al. (2018) propose... | **A**: (2018) simultaneously expose all layer representations with layer aggregation.
**B**: Yu et al.
**C**: Dou et al.
| BCA | BCA | BCA | BCA | Selection 4 |
<|MaskedSetence|> <|MaskedSetence|> These problems seriously limit the learning ability of neural networks and cause inferior distortion rectification results. To address the above problems, we propose a novel concept, i.e., ordinal distortion. <|MaskedSetence|> 2 illustrates the attributes of the proposed ord... | **A**: Fig.
**B**: However, due to the implicit and heterogeneous representation, the neural network suffers from the insufficient learning problem and the imbalanced regression problem.
**C**:
As mentioned above, most previous learning methods correct the distorted image based on the distortion parameters estimation.
| CBA | CBA | CBA | ABC | Selection 1 |
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> To our knowledge, radius minimization has not been previously considered in the two-stage stochastic paradigm. Most prior work in this setting has focused on Facility Location [23, 24, 21, 22, 11, 19, 25]. On similar lines, [1] studies a stochastic $k$-c... | **A**: The black-box model is motivated by data-driven applications where specific knowledge of the distribution is unknown but we have the ability to sample or simulate from the distribution.
**B**: Clustering is a fundamental task in unsupervised and self-supervised learning.
**C**: The stochastic setting models si... | BCA | BCA | BCA | CBA | Selection 3 |
Motivated by distributed statistical learning over uncertain communication networks, we study the distributed stochastic convex optimization by networked local optimizers to cooperatively minimize a sum of local convex cost functions. <|MaskedSetence|> <|MaskedSetence|> The local optimizers can only obtain measureme... | **A**: The local cost functions are not required to be differentiable, nor do their subgradients need to be bounded.
**B**: The network is modeled by a sequence of time-varying random digraphs which may be spatially and temporally dependent.
**C**: The main contributions of our paper are listed as follows..
| BAC | BAC | BAC | BAC | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> Consequently, the accuracy of query answering of MuCo is much better and more stable than that of Mondrian and Anatomy. <|MaskedSetence|> Therefore, differing from Mondrian and Anatomy, increasing the level of protection of MuCo has little influence on the query results. In conclu... | **A**: Besides, since the results of queries for MuCo are specific records rather than groups, the relative error rate of MuCo does not increase steadily with the growth of $\delta$ but fluctuates depending on specific query conditions.
**B**:
We observe that the results of MuCo are much better than that of ... | BCA | BCA | BCA | BCA | Selection 3 |
Due to limited mask representation of HTC, we move on to SOLOv2, which utilizes much larger mask to segment objects. It builds an efficient yet simple instance segmentation framework, outperforming other segmentation methods like TensorMask Chen et al. (2019c), CondInst Tian et al. <|MaskedSetence|> (2020) on COCO. ... | **A**: In SOLOv2, the unified mask feature branch is dynamically convoluted by learned kernels, and the adaptively generated mask for each location benefits from the whole image view instead of cropped region proposals like HTC.
**B**: (2020) and BlendMask Chen et al.
**C**: It’s worth noting that other attempts, inc... | BAC | ACB | BAC | BAC | Selection 3 |
However, all of the aforementioned empirical and theoretical works on RL with function approximation assume the environment is stationary, which is insufficient to model problems with time-varying dynamics. <|MaskedSetence|> The instantaneous reward is the payoff when viewers are redirected to an advertiser, and the ... | **A**: Then what is the maximum nonstationarity a learner can tolerate to adapt to the time-varying dynamics of an MDP with potentially infinite number of states? This paper addresses these two questions..
**B**: For example, consider online advertising.
**C**: Can one design a theoretically sound algorithm for large... | BCA | BCA | BCA | CAB | Selection 1 |
<|MaskedSetence|> RQ1: How much do people trust the media by which they obtain news? RQ2: Why do people share news and how do they do it? RQ3: How do people view the fake news phenomenon and what measures do they take against it? An online survey was employed for data collection in which the assessed media items inclu... | **A**:
In this study, we seek to answer these research questions.
**B**: Respondents were allowed to select multiple options for some question items while the branching questions served to direct them to different sections based on their answer.
**C**: The survey contained 19 question items, 2 branching questions, a... | BAC | ACB | ACB | ACB | Selection 4 |
<|MaskedSetence|> <|MaskedSetence|> Triplet-based KG embedding models like TransE [11] transform the embedding of each subject entity and its relation into a hidden vector, subsequently used to predict the central entity W3C of the triplets. This behavior resembles that of the Skip-gram model [9], where each word emb... | **A**: The existing methods for KG embedding and word embedding exhibit even more similarities.
**B**: The aggregation operation mirrors the CBOW model [9], except that CBOW does not involve self-embedding.
.
**C**: As shown in Figure 1, the KG comprises three triplets conveying similar information to the example sen... | ACB | ACB | CBA | ACB | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> As an alternative, we follow [11, 13] and use the extrinsic rewards given by the environment to measure performance. We highlight that the extrinsic rewards are only used for evaluation, not for training. We illustrate the evaluation curves of 18 common Atari games in Fig. ... | **A**: Since different methods utilize different intrinsic rewards, the intrinsic rewards are not applicable to measure the performance of the trained purely exploratory agents.
**B**: For each method, the solid line indicates the mean episodic reward of all five seeds, and the shadow area shows the confidence interva... | BCA | CAB | CAB | CAB | Selection 3 |
<|MaskedSetence|> <|MaskedSetence|> We do that by applying any of the above-mentioned VAEs (in this exposition we use unsupervised VAEs as our base models, but the framework also works with GAN-based or FLOW-based DGMs, supervised, semi-supervised or unsupervised). In the Appendix we present such implementation...
The model has two parts.
**B**: The goal of the second part of the model is to add the details while maintaining the semantic information retrieved in the first stage.
**C**: First, we apply a DGM to learn only the disentangled part, $C$, of the latent space.
| CBA | ACB | ACB | ACB | Selection 2 |
The structural computer used an inverted signal pair to implement the reversal of a signal (NOT operation) as a structural transformation, i.e. a twist, and four pins were used for AND and OR operations, as series and parallel connections were required. <|MaskedSetence|> In other words, operating a structural compute... | **A**: Let’s look at the role of the four pins that transmit signals in a 4-pin-based signal system.
**B**: However, one can ask whether the four-pin design is the minimum number of pins required by structural computers.
**C**: Four pins are paired into two pairs, each representing/delivering true and inver... | BAC | BAC | BAC | BAC | Selection 4 |
Forward selection is a simple, greedy feature selection algorithm (Guyon & Elisseeff, 2003). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Here we consider forward selection based on the Akaike Information Criterion (AIC). In order to impose nonnegativity of the coefficients, we will use a slight... | **A**: The basic strategy is to start with a model with no features, and then add the single feature to the model which is “best” according to some criterion.
**B**: It is a so-called wrapper method, which means it can be used in combination with any learner (Guyon & Elisseeff, 2003).
**C**: One then proc... | BAC | CAB | BAC | BAC | Selection 4 |
<|MaskedSetence|> Firstly, existing dependency-based methods represent only a fraction of a much larger potential combinations of supervised methods and scoring functions for dependency-based anomaly detection. There has been no work on summarizing the common procedure taken by the existing methods for establishing a ... | **A**:
Although research [7, 4] has shown the promise of dependency-based anomaly detection, there are still certain research gaps in this area that need attention.
**B**: Secondly, dependency-based methods are promising and practical, especially when off-the-shelf techniques are used.
**C**: However, there is a... | ABC | ABC | ABC | BAC | Selection 1 |
<|MaskedSetence|> <|MaskedSetence|> [2011]), which is in contrast to the use of an exploration bonus as seen in Faury et al. <|MaskedSetence|> [2010]. Optimistic parameter search provides a cleaner description of the learning strategy. In non-linear reward models, both approaches may not follow similar trajectory bu... | **A**: [2020], Filippi et al.
**B**: in Abbasi-Yadkori et al.
**C**:
CB-MNL enforces optimism via an optimistic parameter search (e.g.
| CBA | CBA | BAC | CBA | Selection 1 |
Self-stitching (Fig. 3 d)). <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> We simply fill in zeros in the gap to make the network learn to distinguish a long sequence from a stitched sequence by identifying the zeros. This turns out to be an effective approach.
. | **A**: Then we stitch the original short clip (Clip O) and the up-scaled clip (Clip U) into one single sequence.
**B**: To address this issue, we devise a simple strategy: inserting a gap between the two clips, as shown in Fig. 3 d).
**C**: If we directly concatenate the two clips side by side, one issue arises that ... | ACB | CAB | ACB | ACB | Selection 4 |
<|MaskedSetence|> These papers use bagging [Bre01] and boosting [CG16, FSA99, KMF∗17] techniques for ranking and identifying the best combination of models in different application scenarios. <|MaskedSetence|> On the one hand, we also enable the user to assess the various models and build his/her own ensemble of mode... | **A**: On the other hand, we support the process of generating new models through genetic algorithms and highlight the necessity for the best and most diverse models in the simplest possible voting ensemble.
**B**:
There are relevant works that involve the human in interpreting, debugging, refining, and comparing ens... | BCA | ACB | BCA | BCA | Selection 4 |
In contrast, HiPPI and our method require shape-to-universe representations. To obtain these, we use synchronisation to extract the shape-to-universe representation from the pairwise transformations. <|MaskedSetence|> <|MaskedSetence|> Throughout this section we also report results of the initialisation methods ZoomO... | **A**: Further details can be found in the supplementary material.
.
**B**: We refer to this method of synchronising the ZoomOut results as ZoomOut+Sync, which directly serves as initialisation for HiPPI and our method.
**C**: By doing so, we obtain the initial $U$ and $Q$.
| CBA | CBA | CBA | ACB | Selection 2 |
<|MaskedSetence|> This characterization decomposes the input graph G𝐺Gitalic_G by clique separators as in [18], then at the recursive step one has to find a proper vertex coloring of an antipodality graph satisfying some particular conditions; see Section 3, in particular Theorem 6. In a few words, an antipodality gr... | **A**: This order allows us to establish all the antipodality relations in a faster time.
**B**: Unfortunately, we cannot build all the antipodality graphs by brute force because checking all possible antipodal pairs requires too much time (more time than the overall complexity of algorithms in [3, 22]).
**C**:
The ... | CBA | ABC | CBA | CBA | Selection 1 |
<|MaskedSetence|> Some recent developments of SBM can be found in (abbe2017community) and references therein. Since in empirical network data sets the degree distributions are often highly inhomogeneous across nodes, a natural extension of SBM is proposed: the degree-corrected stochastic block model (DCSBM) (DCSBM,... | **A**: MMSB constructed a mixed membership stochastic blockmodel (MMSB) which is an extension of SBM by letting each node have different weights of membership in all communities.
**B**: However, in MMSB, nodes in the same communities still share the same degrees.
**C**: The stochastic blockmodel (SBM) (SBM, ) is on... | CAB | CAB | ABC | CAB | Selection 4 |
See, e.g., Welling and Teh (2011); Chen et al. (2014); Ma et al. (2015); Chen et al. (2015); Dubey et al. (2016); Vollmer et al. (2016); Chen et al. (2016); Dalalyan (2017); Chen et al. (2017); Raginsky et al. (2017); Brosse et al. (2018); Xu et al. <|MaskedSetence|> (2018); Wibisono (2018); Bernton (2018); Dalalyan a... | **A**: (2019); Wibisono (2019) and the references therein.
Among these works,.
**B**: (2019); Vempala and Wibisono (2019); Salim et al.
**C**: (2018); Cheng and Bartlett (2018); Chatterji et al.
| CBA | BCA | CBA | CBA | Selection 1 |
<|MaskedSetence|> Even if the agent performs the same action on the same observation at different timesteps, the agent may receive different rewards and observation transitions because of neighbor agents’ different actions. In this case, the received rewards and observation transitions of the current agent could not b... | **A**: In other words, the design of the decoders and intrinsic reward is similar to the law of contra-positive.
**B**: To avoid this situation, four decoders are introduced to predict the next observations and rewards without neighbor agents’ policies or with partially neighbor agents, respectively.
**C**: Secondly,... | CBA | CBA | CBA | CBA | Selection 1 |
Online bin packing has a long history of study. <|MaskedSetence|> FirstFit is another simple heuristic that places an item into the first bin of sufficient space and opens a new bin if required. <|MaskedSetence|> NextFit has a competitive ratio of 2, while both FirstFit and BestFit are 1.7-competitive (?, ?). <|Mas... | **A**: Improving upon this performance requires more sophisticated algorithms, and many have been proposed in the literature..
**B**: The simplest algorithm is NextFit, which places an item into its single open bin when possible; otherwise, it closes the bin (does not use it anymore) and opens a new bin for the item. ... | BCA | BAC | BCA | BCA | Selection 3 |
To address the problem mentioned above, most of the methods extend the Chamfer loss function of basic AtlasNet with additional terms. Bednarik et al. <|MaskedSetence|> <|MaskedSetence|> (2020b) introduced two additional terms to increase global consistency of the local mappings explicitly. <|MaskedSetence|> Another ... | **A**: Deng et al.
**B**: (2020) added terms to prevent patch collapse, reduce patch overlap and calculate the exact surface properties analytically rather than approximating them.
**C**: One of them exploits the surface normals and requires that they remain locally consistent when estimated within and across the ind... | BAC | BAC | BAC | BAC | Selection 2 |
Paper organization. <|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> In Section 4, we present the lower complexity bounds for saddle point problems without individual variables. Finally in Section 5, we show how the proposed algorithm can be applied to the problem computing Wasserstein barycenters .
. | **A**: Section 2 presents a saddle point problem of interest along with its decentralized reformulation.
**B**: This paper is organized as follows.
**C**: In Section 3, we provide the main algorithm of the paper to solve such kind of problems.
| ABC | BAC | BAC | BAC | Selection 3 |
<|MaskedSetence|> The minimum cycle basis (MCB) problem is the problem of finding a cycle basis such that the sum of the lengths (or edge weights) of its cycles is minimum. This problem was formulated by Stepanec [7] and Zykov [8] for general graphs and by Hubicka and Syslo [9] in the strictly fundamental class contex... | **A**: For example in [10] a remarkable reduction is constructed to prove that the MCB problem is NP-hard for the strictly fundamental class, while in [11] a polynomial time algorithm is given to solve the problem for the undirected class.
**B**: The length of a cycle is its number of edges.
**C**: The authors show ... | CBA | BCA | BCA | BCA | Selection 3 |
<|MaskedSetence|> The authors tested the same mathematical operations as in our system (i.e., addition, subtraction, multiplication, and division), but the generation was performed manually by the analysts. Also, the decision for this action was based solely upon the similarity in those features’ distributions [30].
I... | **A**: For the former, one of the most well-known approaches is Pearson’s correlation coefficient between features and with the target variable [32, 33].
**B**: For the latter, mutual information is used in our VA system (also used by May et al. [26], for instance).
**C**: A use case present in a visual diagnosis too... | CAB | CAB | CAB | CBA | Selection 2 |
<|MaskedSetence|> High-precision trajectories or set points can be generated prior to the actual machining process following various optimization methods, including MPC, feed-forward PID control strategies, or iterative-learning control [6, 7], where friction or vibration-induced disturbances can be corrected. In MPC,... | **A**: MPC accounts for the real behavior of the machine and the axis drive dynamics can be excited to compensate for the contour error to a big extent, even without including friction effects in the model [4, 5].
**B**: Instead of adapting the controller for the worst case scenarios, the prediction model can be selec... | ABC | ABC | BCA | ABC | Selection 1 |
In addition, we posit that the commonly used benchmarks are not challenging enough to test generalization to realistic scenarios. <|MaskedSetence|> It is unclear how well methods would fare in presence of multiple types of bias, e.g., position or co-occurring objects/patterns, which are commonly present in real-world ... | **A**: Annotating all such sources of bias is unrealistic.
**B**: Even when the bias variables are explicitly labeled, it is still unclear if the methods can remain robust to all of the bias sources, since this entails generalization to a large number of dataset groups e.g., hundred thousand groups for GQA-OOD [36].
.... | CAB | CAB | CAB | CAB | Selection 1 |
<|MaskedSetence|> It contains a total of 2,445,504 images from 1,474 participants. <|MaskedSetence|> Each participant is required to gaze at a circle shown on the devices without any constraint on their head movement. As a result, the GazeCapture dataset covers various lighting conditions and head motions. <|MaskedS... | **A**: GazeCapture [42] dataset is collected through crowdsourcing.
**B**: The GazeCapture dataset does not provide 3D coordinates of targets.
**C**: All images are collected using mobile phones or tablets.
| ACB | BCA | ACB | ACB | Selection 4 |
The efficiency of each pre-trained model depends on its architecture and the abstraction level of the extracted features. When dealing with real masked faces, VGG-16 has achieved the best recognition rate, while ResNet-50 outperformed both VGG-16 and AlexNet on the simulated masked faces. <|MaskedSetence|> <|MaskedS... | **A**: The achieved performance further confirms that the BoF paradigm is a slight representation that further reinforces the high discrimination power of the deep features to feed a machine learning-based classifier..
**B**: When dealing with other state-of-the-art recognizers, one of them applied the same pre-traine... | CBA | CBA | BAC | CBA | Selection 2 |
<|MaskedSetence|> Sized (co)inductive types [BFG+04, Bla04, Abe08, AP16] gave way to sized mixed inductive-coinductive types [Abe12, AP16]. <|MaskedSetence|> We present, to our knowledge, the first sized type system for a concurrent programming language as well as the first system to combine both features from above.... | **A**: As we mentioned in the introduction, we use unbounded quantification [Vez15] in lieu of transfinite sizes to represent (co)data of arbitrary height and depth.
**B**: In parallel, linear size arithmetic for sized inductive types [CK01, Xi01, BR06] was generalized to support coinductive types as well [Sac14].
**... | CBA | CBA | CBA | BAC | Selection 3 |
<|MaskedSetence|> <|MaskedSetence|> Section III describes the system model, threat model, and design goals. <|MaskedSetence|> The two schemes are constructed in Section V. The performance of the two schemes regarding the three problems is evaluated in Section VI followed by the efficiency analysis in Section VII. Th... | **A**: Subsequently, Section IV introduces the involved fundamental techniques.
**B**: The next section reviews the related work.
**C**: The rest of this paper is outlined below.
| CBA | CAB | CBA | CBA | Selection 4 |
<|MaskedSetence|> This allows them to explicitly encode high-order relationships between nodes in the embeddings. GNNs have shown great potential for modeling high-order feature interactions for click-through rate prediction. Fi-GNN Li et al. (2019) proposes to connect each pair of features and treat the multi-field f... | **A**: (2015) to model feature interactions on the graph.
**B**: At their core, GNNs learn node embeddings by iteratively aggregating features from the neighboring nodes, layer by layer.
**C**: (2023) proposes a directed acyclic graph based model, which can be aligned with the DP Dudzik and Veličković (2022) algorith... | BAC | BAC | CAB | BAC | Selection 2 |
<|MaskedSetence|> The original definition of self-concordance has been expanded and generalized since its inception, as many objective functions of interest have self-concordant-like properties without satisfying the strict definition of self-concordance. <|MaskedSetence|> This was also the case in Ostrovskii & Bach ... | **A**: [2015], in which more general properties of these
pseudo-self-concordant functions were established.
**B**: Self-concordant functions have received strong interest in recent years due to the attractive properties that they allow to prove for many statistical estimation settings [Marteau-Ferey et al., 2019, Ostr... | CBA | BCA | BCA | BCA | Selection 2 |
<|MaskedSetence|> The term Pass-Bundle refers to multiple passes during which those routines are executed. Precisely, the routines are: (1) extend structures along active paths (Extend-Active-Paths), (2) check for edge augmentations (Check-for-Edge-Augmentation), and (3) include (additional) unmatched edges to each st... | **A**: Our algorithm executes several methods (invoked within the loop starting at Algorithm 2 of Algorithm 2), and for most of them it makes a fresh pass over the edges.
**B**: In total, a Pass-Bundle requires 3 passes..
**C**: Each of these routines is performed in a separate pass over the edges.
| CAB | ACB | ACB | ACB | Selection 3 |
Setting. <|MaskedSetence|> This is one of the default learning settings. Based on these settings, we build our settings using the intuition of algorithms (for details about tuning and intuition of our Algorithms, see Section 5.2). <|MaskedSetence|> <|MaskedSetence|> For more details how to choose T and p... | **A**: That is why we need carefully choose T (the number of inner/local iterations in Algorithm 1) and p (probability in Algorithm 3).
**B**: In order for the comparison of Algorithm 1 and Algorithm 3 to be fair, it is necessary to balance two things: 1) the number of communications and local it... | CAB | CBA | CBA | CBA | Selection 2 |
<|MaskedSetence|> The first is Maximum Welfare Correlated Equilibrium (MWCE) which is defined as the CE that maximises the sum of all player’s payoffs. An MWCE can be obtained by solving a linear program, however the MWCE may not be unique and therefore does not fully solve the equilibrium selection problem (e.g. <|M... | **A**: There are two important solution concepts in the space of CEs.
**B**: The second such concept is Maximum Entropy Correlated Equilibrium (MECE) (Ortiz et al., 2007) which maximises Shannon’s entropy (Shannon, 1948) as an objective.
**C**: constant-sum game solutions all have equal payoff).
| ABC | ACB | ACB | ACB | Selection 2 |
<|MaskedSetence|> (2012); Bassily et al. <|MaskedSetence|> (2011)) proposes relaxed privacy definitions that leverage the natural noise introduced by dataset sampling to achieve more average-case notions of privacy. This builds on intuition that average-case privacy can be viewed from a Bayesian perspective, by restr... | **A**: Another line of work (e.g., Gehrke et al.
**B**: (2013); Bhaskar et al.
**C**: Triastcyn and Faltings (2020) propose the notion of Bayesian differential privacy which leverages the underlying distribution to improve generalization guarantees, but their results still scale with the range in the general case.
. ... | ABC | ABC | ABC | BCA | Selection 1 |
The remainder of the paper is organized as follows. <|MaskedSetence|> <|MaskedSetence|> In Section 5 we show how color coding can be used to find a large feedback vertex cut, if one exists. <|MaskedSetence|> Our main results are derived in Section 6, where we show how color coding can be used to efficiently find ant... | **A**: We also prove that, given a large feedback vertex cut, we can shrink it while preserving the antlers in the graph.
**B**: After presenting preliminaries on graphs and sets in Section 2, we prove the mentioned hardness results in Section 3.
**C**: We present structural properties of antlers and how they combine... | BCA | BCA | BCA | BCA | Selection 2 |
It is worth noting that in HCOCO and HFlickr, traditional color transfer methods may produce low-quality synthetic composite images. <|MaskedSetence|> [18] manually filter out the low-quality synthetic composite images. <|MaskedSetence|> <|MaskedSetence|> To address this issue, Niu et al. [113] proposed to transit ... | **A**: Thus, Cong et al.
**B**: SycoNet [112] learns a mapping from real images to filtered synthetic composite images, which can capture the human filtering knowledge and produce high-quality synthetic composite images.
**C**: Another issue is that traditional color transfer methods may not faithfully reflect the na... | ABC | ABC | BAC | ABC | Selection 2 |
Comprehensiveness: Fig. <|MaskedSetence|> <|MaskedSetence|> 1(b)) to to capture a wider range of urban phenomena. <|MaskedSetence|> These measurements are crucial in revealing the state of the transportation market and citizen activities.
. | **A**: For instance, we have transformed raw mobility data of taxi movements into region-based measurements such as taxi flows, pickups, and idle driving time.
**B**: Furthermore, we have processed the raw data into several sub-datasets (as shown in Fig.
**C**: 1(a), illustrates that CityNet comprises three types of ... | BCA | CBA | CBA | CBA | Selection 4 |
Most of the data sets were obtained from the UCI repository Dua2019 . Specific references are given in Table 2. This table also shows the number of data points and (used) features and the skewness and (Pearson) kurtosis of the response variable. <|MaskedSetence|> <|MaskedSetence|> This strongly improved the R2supersc... | **A**: All data sets were standardized (both features and target variables) before training.
**B**: The crime data set comes in two versions: the original data set consists of integer-valued data (count data), while the version used here was preprocessed using an unsupervised standardization algorithm redmond2002data ... | ACB | ACB | ACB | BAC | Selection 3 |
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> Default MIDI velocity values are associated with dynamic indications. Apple’s Logic Pro 9 user manual correlates traditional volume indicators (pp, p, mp, mf, f, ff and fff) with specific MIDI velocity values (16, 32, 48, 64, 80, 96, 112 and 127), respectively.12... | **A**: In the realm of MIDI, velocity is a parameter that scales the intensity or volume at which a sound sample is played back, with the value ranging from 0 to 127.
**B**: Dynamics is an important element in music, as they are often used by musicians to add excitement and emotion to songs.
**C**: Given that the to... | BCA | BCA | BCA | BCA | Selection 2 |
Recently, there are also investigations on semantic communications for other transmission contents, such as image and speech. A DL-enabled semantic communication system for image transmission, named JSCC, has been developed in[14]. <|MaskedSetence|> <|MaskedSetence|> Particularly, a joint image transmission-recognit... | **A**: A deep joint source-channel coding architecture, name DeepJSCC, has been investigated in[17] to process image with low computation complexity..
**B**: Based on JSCC, an image transmission system, integrating channel output feedback, can improve image reconstruction[15].
**C**: Similar to text transmission, IoT... | CBA | BCA | BCA | BCA | Selection 2 |
Two stage training versus joint training: Table I compares one-stage training with two-stage training performances trained with 10% and 1% labels. <|MaskedSetence|> However, when we jointly train CSFR and ISFR modules in one stage, we observe a performance drop, producing results lower than solely training one module... | **A**: For one-stage training, we perform experiments with only the CSFR module or ISFR module, each of the modules produces performance gain over the baseline method for both 10% and 1% label cases.
**B**: From the experiments, we argue that the training losses in the two modules may interfere with each other during ... | ABC | ABC | ABC | CAB | Selection 2 |
Setup. The KITTI dataset [11] provides widely used benchmarks for various visual tasks in the autonomous driving, including 2D Object detection, Average Orientation Similarity (AOS), Bird’s Eye View (BEV), and 3D Object Detection. The official data set contains 7481 training and 7518 test images with 2D and 3D bounding... | **A**: We report our results on the official settings of IoU ≥ 0.7 for cars.
.
**B**: Each class uses different IoU standards for further evaluations.
**C**: Moreover, we use 40 recall positions instead of 11 recall positions proposed in the original Pascal VOC benchmark, following [40].
| CBA | CBA | CBA | CBA | Selection 4 |
ICDAR2015 [44] includes multi-orientated and small-scale text instances. <|MaskedSetence|> <|MaskedSetence|>
MSRA-TD500 [45] is dedicated to detecting multi-oriented long non-Latin texts. <|MaskedSetence|> Here, we follow the previous methods [35, 8] and add 400 training images from TR400 [46] to extend this datase... | **A**: Its ground truth is annotated with word-level quadrangles.
**B**: It contains 1,000 training and 500 testing images.
**C**: It contains 300 training images and 200 testing images with word-level annotation.
| ABC | CAB | ABC | ABC | Selection 1 |
The hardware architecture of modern processors usually consists of more than two independent central processing units (CPUs) or graphics processing units (GPUs). <|MaskedSetence|> The Compute Unified Device Architecture (CUDA) is a parallel computing platform for general computing on GPUs. Most parallel sorting algor... | **A**: Parallel software platforms can be implemented using high-level programming frameworks for specific hardware architectures Chen2009SA .
**B**: The parallel computation of sorting algorithms is considered to be the most efficient way of sorting elements on parallel hardware architectures Singh2018GPU ..
**C**: ... | ACB | ACB | ACB | CAB | Selection 3 |
<|MaskedSetence|> In section 2, we briefly recall the classic saddle point problem and its Schur complement, and introduce the twofold saddle point problem and the form of Schur complement, we then construct and analyze the block-triangular and block-diagonal preconditioners based on Schur complement for twofold saddl... | **A**: Finally, concluding remarks are given in Section 7.
.
**B**: The outline of the remainder of this paper is as follows.
**C**: Generalizations to n-tuple cases are provided in Section 5.
| BCA | BCA | ACB | BCA | Selection 1 |
However, in cases when the labels are sensitive and sharing the labels for a sample ID across silos is not feasible, the label information for a sample ID may only be present in a client in one silo. <|MaskedSetence|> The client with the label information calculates the loss and the partial derivatives, which can then... | **A**: Hence, the convergence analysis given in Section 4 can be trivially extended to this case.
.
**B**: This modification would significantly increase the communication cost of the algorithm.
**C**: In this case, we could modify our algorithm in the following way, similar to (Liu et al., 2020a): the clients in all... | CBA | CBA | ABC | CBA | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> The properties of pseudospectra are also discussed, along with a characterization of the pseudospectra for normal matrices. Additionally, for diagonalizable but not necessarily normal matrices, the corresponding Bauer-Fike theorem is presented, which can be found in (trefethen2005s... | **A**: The pseudospectra of finite-dimensional matrices and their extension to linear operators in Banach space have been extensively investigated and summarized in the classical book by Trefethen and Embree trefethen2005spectra .
**B**: The book also covers various methods for computing matrix pseudospectra; for more... | BCA | ACB | ACB | ACB | Selection 2 |
On Two-stream Network Architecture. To further highlight the two-stream dual generation architecture, we compare it with a multi-task single-stream network, which is tailed by two branches to model the image structure and texture simultaneously. We enlarge its channels to make it have the same amount of parameters as t... | **A**: The Bi-GFF and CFA modules are embedded to refine generation as the proposed model.
**B**: Quantitative results in Table 2 also validate the advantages of texture and structure dual generation.
.
**C**: As shown in Figure 7 (c), the two-stream architecture exhibits superior performance with more visually reaso... | ACB | ACB | ACB | ACB | Selection 4 |
In Figure 1, we present the performance of Subgoal Search. <|MaskedSetence|> The success rate is measured on 1000 instances of a given problem (which results in confidence intervals within ±0.03). <|MaskedSetence|> <|MaskedSetence|> For Sokoban, we use Algorithm 9 to realiz... | **A**: For BF-kSubS the search budget is referred to as graph size and includes the number of nodes visited by Algorithm 1.
**B**: We measure the success rate as a function of the search budget.
**C**: For INT and Rubik’s Cube, we include both the subgoal generated by SUB_GENERATE and the nodes visited by GET_PATH (a... | BAC | BAC | BAC | BAC | Selection 3 |
<|MaskedSetence|> <|MaskedSetence|> We label the Named Entities in raw materials first and then create their substitution forms by using similar characters to randomly replace these in the original Named Entities. <|MaskedSetence|> This dataset consists of 15780 sentences in total and is going to test our method in ... | **A**: This specially designed dataset is collected from informal news reports and blogs.
**B**: In this case, the dataset is made of pairs of original entities and their character substitution forms.
**C**: In order to verify whether our method has the ability to cope with the character substitution problem, we also... | CAB | CAB | CAB | ACB | Selection 3 |
<|MaskedSetence|> <|MaskedSetence|> A total of 4,560 samples are collected by a template-based method. <|MaskedSetence|> For NLI and coreference resolution, three variations of each sentence are used to construct entailment pairs. For machine translation, sentences with two variations of third-person pronouns in Eng... | **A**: ABC (Gonzalez et al., 2020), the Anti-reflexive Bias Challenge, is a multi-task benchmark dataset designed for evaluating gender assumptions in NLP models.
**B**: ABC consists of 4 tasks, including language modeling, natural language inference (NLI), coreference resolution, and machine translation.
**C**: The ... | ABC | BCA | ABC | ABC | Selection 4 |
<|MaskedSetence|> Therefore, they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore®. <|MaskedSetence|> The structure of the LaTeXfiles, as designed, enable easy conversion to XML for the composition systems used by the IEEE’s outsource vendors. <|MaskedSetence|> Have you loo... | **A**: They will help to give the authors an approximation of the number of pages that will be in the final version.
**B**: The XML files are used to produce the final print/IEEEXplore® pdf and then converted to HTML for IEEEXplore®.
**C**: The templates are intended to approximate the final look and page length of t... | CAB | CAB | CAB | ACB | Selection 1 |
The treatment variation was implemented in the second part. In three baseline sessions, consisting of a total of 72 subjects in 18 groups, subjects were told that the second part of the experiment would be exactly the same as the first part, except that subject IDs would be randomly reassigned. In five treatment sessi... | **A**: However, in addition to reassigning ID’s, subjects were also told that they would be shown how much benefit they received in the previous round from each other subject in their group.
**B**: After the two main parts of the experiment were finished, subjects completed a series of questionnaires designed to elici... | ACB | ACB | ACB | ACB | Selection 1 |
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> This will affect the performance of the model in practical applications. However, it is undeniable that the emergence of these methods has enriched and promoted the development of SISR. According to different design targets, we divide these methods into three cat... | **A**: However, it is worth noting that most of these models use simulated datasets for testing and training, we call this method simulated SISR.
**B**: In recent years, the field of SISR has developed rapidly, and a large number of excellent models have emerged.
**C**: In other words, the low-resolution images used ... | BAC | BAC | ABC | BAC | Selection 4 |
<|MaskedSetence|> These approaches are fundamentally different as they attempt to create a wide generative model based on a large-scale dataset, while our approach focuses on data-agnostic internal learning tasks and uses a disparate architecture. Finally, Local Implicit Image Functions introduced in [17] are trained ... | **A**: There have been some works where coordinate-based networks are used as a core for a generative model using techniques such as a hypernetwork predicting the weights of a sample coordinate [11], or by modulating the weights of a base coordinate [12].
**B**: To the best of our knowledge, no attempt of introduci... | ACB | ACB | ACB | ACB | Selection 1 |
The present paper is the first work we aware of that specifically applies TS to apple tasting, but previous work has considered its use for logistic bandits. For logistic contextual bandits, the implementation of exact TS (i.e. the policy that draws its sample from the exact posterior) is infeasible due to the intract... | **A**: Dumitrascu et al., (2018) recently proposed an approximation based on Polya-Gamma augmentation (Polson et al.,, 2013; Windle et al.,, 2014) which has improved convergence properties over Laplace approximation originally used by Chapelle and Li, (2011).
**B**: The effect of approximation of the posterior on the ... | ABC | ABC | ABC | CAB | Selection 1 |
Many criticisms have recently been raised against the improper use of statistical significance as the only measure to evaluate results in scientific publications [65]. <|MaskedSetence|> <|MaskedSetence|> Yet, regarding performance in terms of correct explanations, reported in Table 4, we observe the following: MemDi... | **A**: However, we also perform the Wilcoxon paired test over the 10-fold cross-validation results, focusing on MemDistilBERT and MANN and the difference between weak and strong supervision.
**B**: Considering ToS-30, we concur that the results in Table 3, regarding classification performance, are not statistically si... | ABC | ABC | ACB | ABC | Selection 4 |
<|MaskedSetence|> (2019); Zhou et al. <|MaskedSetence|> <|MaskedSetence|> (2021a); Dai et al. (2021), has contributed to the improvement of coherent sentiment learning.
These studies explored the effectiveness of syntax information in ABSC, which mitigates issues related to sentiment coherency extraction.. | **A**: (2021); Li et al.
**B**: (2020); Tian et al.
**C**: However, the progress of sentiment dependency-based methods, such as the work by Zhang et al.
| CBA | CBA | CBA | CBA | Selection 1 |
Visualization of QNN extracted features. MNIST-2 classification result is determined by which feature is larger between the two: feature one is the sum of measurement outcomes of qubit 0 and 1; feature 2 is that of qubit 2 and 3. We visualize the two features obtained from experiments on Belem in a 2-D plane as in Figu... | **A**: With normalization (green), the distribution is significantly expanded, and the majority of ‘3’ is correctly classified.
**B**: The blue dash line is the classification boundary.
**C**: The circles/stars are samples of digit ‘3’ and ‘6’.
| BCA | BAC | BCA | BCA | Selection 1 |
<|MaskedSetence|> For conventional tracking methods, they are synchronized with the global camera shutter, and thus their speeds are evaluated by a synchronous criterion (e.g., 25 frames per second and above can be considered as real-time). <|MaskedSetence|> Instead, in event-based studies, the efficiency of event-ba... | **A**: The proposed EDA is evaluated on a PC with an Intel Core i7 CPU and an NVIDIA GTX 1080 GPU.
**B**: Since EDA works asynchronously, the synchronous criterion is not suitable for it.
**C**: EDA runs at the average speed of 56.33K/31.72K EPS on the test sequences with/without the GPU support.
| ABC | ABC | ABC | ABC | Selection 4 |
<|MaskedSetence|> <|MaskedSetence|> The teacher model is ResNet-50 pre-trained by MoCo.v2. † indicates using a momentum encoder as MoCo.v2. SSL denotes the InfoNCE loss. <|MaskedS... | **A**: TABLE IV: Unsupervised knowledge distillation.
**B**: Top-1 accuracy (%) under linear evaluation on STL-10.
**C**: KD denotes the knowledge distillation loss.
| ABC | ABC | ABC | ABC | Selection 4 |
<|MaskedSetence|> <|MaskedSetence|> We randomly sample 100 sub-networks satisfying the constraints to form the first generation of population. <|MaskedSetence|> Then we perform crossover to generate 50 new candidates and mutation to generate another 50, forming a new generation. The mutation rate is 0.1. We repeat t... | **A**: For each iteration, we only keep the top-20 candidates with the highest accuracy.
**B**: We use a population size of 100.
**C**: We used evolutionary search to find the best sub-network architecture under certain constraints.
| CAB | CBA | CBA | CBA | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> The assembly we use includes GIN, GCN and GAT. <|MaskedSetence|> For a further analysis, we list the RDMs correlation between pairs of GIN, GCN and GAT in Table 2 for reference. According to the figure and the table, the trend of three encoders’ contrastive losses is in accord wit... | **A**: In Figure 5, we notice that each graph encoder converges synchronously on the two datasets, which justifies our proposed collaborative learning framework.
**B**: In CGCL, multiple graph encoders compute their own contrastive losses based on representations learned by others, and optimize their losses collabora... | BCA | BCA | CBA | BCA | Selection 1 |
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> PLG/2019/012498. Our experiments were managed using https://neptune.ai. We would like to thank the Neptune team for providing us access to the team version and technical support.. | **A**: We gratefully acknowledge Polish high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH, PCSS) for providing computer facilities and support within computational grant no.
**B**: The work of Piotr Miłoś was supported by the Polish National Science Center grant UMO-2017/26/E/ST6/00622. ... | BCA | BCA | BAC | BCA | Selection 4 |
Learning CBFs: An open problem is how valid CBFs can be constructed. <|MaskedSetence|> For certain types of mechanical systems under input constraints, analytic CBFs can be constructed [30]. The construction of polynomial barrier functions towards certifying safety for polynomial systems by using sum-of-squares (SOS)... | **A**: The work in [35] considers the construction of higher order CBFs and their composition by, similarly to [32, 33], alternating-descent heuristics to solve the arising bilinear SOS program.
**B**: Indeed, the lack of systematic methods to construct valid CBFs is a main bottleneck.
**C**: Finding CBFs poses addit... | BCA | BAC | BCA | BCA | Selection 3 |
(b) To fit DCDFM, an efficient spectral clustering algorithm called nDFA is designed. We build theoretical framework on consistent estimation for the proposed algorithm under DCDFM. <|MaskedSetence|> Especially, when DCDFM reduces to DFM, our theoretical results are consistent with those under DFM. <|MaskedSetence|>... | **A**: When DCDFM degenerates to DCSBM, our results also match classical results under DCSBM.
**B**: Numerical results of both simulated and real-world networks show the advantage of introducing node heterogeneity to model weighted networks..
**C**: Benefited from the distribution-free property of DCDFM, our theoreti... | ACB | CAB | CAB | CAB | Selection 3 |