robench-2024b
Collection • 48 items
| text_with_holes (string, 220–2.18k chars) | text_candidates (string, 217–742 chars) | A (string, 6 values) | B (string, 6 values) | C (string, 6 values) | D (string, 6 values) | label (string, 4 values) |
|---|---|---|---|---|---|---|
One difficulty that hinders the development of efficient methods is the presence of high-contrast coefficients [MR3800035, MR2684351, MR2753343, MR3704855, MR3225627, MR2861254]. When LOD or VMS methods are considered, high-contrast coefficients might slow down the exponential decay of the solutions, making the metho... | **A**: We note the proposal in [CHUNG2018298] of generalized multiscale finite element methods based on eigenvalue problems inside the macro elements, with basis functions with support weakly dependent of the log of the contrast.
**B**: Additionally, the spectral techniques remove
macro-elements corner singularities t... | BAC | BAC | BAC | CAB | Selection 1 |
<|MaskedSetence|> For the credibility model we, therefore, leverage the signals derived from tweet contents. Related work often uses aggregated content [18, 20, 32], since individual tweets are often too short and contain slender context to draw a conclusion. However, content aggregation is problematic for hierarchica... | **A**: In this work, we overcome the restrictions (e.g., semantic sparsity) of traditional text representation methods (e.g., bag of words) in handling short text by learning low-dimensional tweet embeddings.
**B**: In this way, we achieve a rich hidden semantic representation for a more effective classification.
.... | CAB | CAB | CBA | CAB | Selection 2 |
<|MaskedSetence|> <|MaskedSetence|> <|MaskedSetence|> First we split the features in 7 catalogues as in Table 1: Tweet_Feature, User_Feature, Text_Feature, CreditScore, SpikeM Features, Epidemiological Features, CrowdWisdom and the BestSet. The BestSet is a combination of the top 9 most important features which is m... | **A**: 4.5.1.
**B**: Feature Analyzing Over Time
Here we present the performance of features over time.
**C**: We use the RF permutation-based (that account for possible feature correlations) for measuring feature importance.
| ABC | BCA | ABC | ABC | Selection 3 |
<|MaskedSetence|> <|MaskedSetence|> We adapted the L2R RankSVM [12]. <|MaskedSetence|> We modified the objective function of RankSVM following our global loss function, which takes into account the temporal feature specificities of event entities. The temporal and type-dependent ranking model is learned by minimizin... | **A**:
Multi-Criteria Learning.
**B**: The goal of RankSVM is learning a linear model that minimizes the number of discordant pairs in the training data.
**C**: Our task is to minimize the global relevance loss function, which evaluates the overall training error, instead of assuming the independent loss function,... | ACB | ACB | ACB | ABC | Selection 2 |
To restore the original image resolution, extracted features were processed by a series of convolutional and upsampling layers. <|MaskedSetence|> (2018); Liu and Han (2018), but we argue that a carefully chosen decoder architecture, similar to the model by Pan et al. (2017), results in better approximations. Here we ... | **A**: Previous work on saliency prediction has commonly utilized bilinear interpolation for that task Cornia et al.
**B**: (2016).
**C**: Our last network layer transformed activations into a continuous saliency distribution by applying a final 3×3 convolution.
| ABC | ABC | ABC | ACB | Selection 1 |
Our work advances the state-of-the-art in model-based reinforcement learning by introducing a system that, to our knowledge, is the first to successfully handle a variety of challenging games in the ALE benchmark. <|MaskedSetence|> We present an approach, called Simulated Policy Learning (SimPLe), that utilizes these ... | **A**: (2019); Kielak (2020) that Rainbow can be tuned to have better results in low data regime.
**B**: The results are on a par with SimPLe – both of the model-free methods are better in 13 games, while SimPLe is better in the other 13 out of the total 26 games tested (note that in Section 4.2 van Hasselt et al.
**... | CAB | CAB | CAB | CAB | Selection 1 |
Figure 11: The Cricket robot tackles a step of height 2h, beginning in rolling locomotion mode and transitioning to walking locomotion mode using the rear body climbing gait. The red line in the plot shows that the robot tackled the step in rolling locomotion mode until the online accumulated energy consumption of th... | **A**: The overlap between the red line (ongoing energy consumption of the robot) and the blue line (pre-studied energy consumption of step negotiation in rolling locomotion mode only) illustrates this.
**B**: Following the preparation phase, the robot switches to the rear body climbing gait.
**C**: After the mode tr... | ACB | ACB | ACB | ABC | Selection 1 |
<|MaskedSetence|> Advice bits, as all information, are prone to transmission errors. In addition, the known advice models often allow
information that one may arguably consider unrealistic, e.g., an encoding of some part of the offline optimal solution. <|MaskedSetence|> For a very simple example, consider the well-k... | **A**:
It should be fairly clear that such assumptions are very unrealistic or undesirable.
**B**: Last, and perhaps more significantly, a malicious entity that takes control of the advice oracle can have a catastrophic impact.
**C**: However, if this bit is wrong, then the online algorithm has unbounded competitiv... | ABC | BAC | ABC | ABC | Selection 4 |
We organize this paper as follows. In section II, we introduce the related works. In section III, we first introduce the UAV’s power control in the multi-channel communication and coverage problems, then form a system model in highly dynamic scenarios. Moreover, in section IV, we formulate our work as an aggregative... | **A**: Ultimately, section VII gives a conclusion of the whole study.
.
**B**: In section V, we propose the two algorithms for approaching the NE.
**C**: Section VI presents the simulation results and discussions.
| BCA | BCA | BCA | BCA | Selection 4 |
multiplication (e.g., e. <|MaskedSetence|> , $(\overline{a}\,\,\overline{b})_{i}=a_{i}b_{i}$... | **A**: g.
**B**: g.
**C**: , $\overline{\overline{C}}=\overline{\overline{A}}\,*\,\overline{\overline{B}}$)... | ABC | ACB | ABC | ABC | Selection 1 |
<|MaskedSetence|> Thus we ran ten consecutive learning trails and averaged them. We have evaluated Dropout-DQN algorithm on CARTPOLE problem from the Classic Control Environment. The game of CARTPOLE was selected due to its widespread use and the ease with which the DQN can achieve a steady state policy.
For the exper... | **A**: It was composed of two hidden layers of 128 neurons and two Dropout layers between the input layer and the first hidden layer and between the two hidden layers.
**B**: To minimize the
DQN loss, ADAM optimizer was used[25]..
**C**:
To evaluate the Dropout-DQN, we employ the standard reinforcement learning (RL... | CAB | CAB | CAB | ABC | Selection 1 |
3.5 Sequenced Models
The Recurrent Neural Network (RNN) was designed for handling sequences. <|MaskedSetence|> In the medical image analysis domain, RNNs have been used to model the temporal dependency in image sequences. Bai et al. (2018) proposed an image sequence segmentation algorithm by combining a fully convol... | **A**: Similarly, other works have also applied RNNs (LSTMs) (Alom et al., 2019; Chakravarty and Sivaswamy, 2018; Yang et al., 2017b; Zhao and Hamarneh, 2019a, b) to medical image segmentation.
.
**B**: The long short-term memory (LSTM) network is a type of RNN that introduces self-loops to enable the gradient flow fo... | BCA | CAB | BCA | BCA | Selection 1 |
<|MaskedSetence|> Zhang et al. (2017) demonstrate that deep neural networks are capable of fitting random labels and memorizing the training data. Bornschein et al. <|MaskedSetence|> (2018) evaluate the performance of modern neural networks using the same test strategy as Fernández-Delgado et al. <|MaskedSetence|> | **A**:
Neural networks are universal function approximators.
The generalization performance has been widely studied.
**B**: (2020) analyze the performance across different dataset sizes.
Olson et al.
**C**: (2014) and find that neural networks achieve good results but are not as strong as random forests.
.
| ABC | ABC | ABC | ABC | Selection 3 |
Broadly speaking, our work is related to a vast body of work on value-based reinforcement learning in tabular (Jaksch et al., 2010; Osband et al., 2014; Osband and Van Roy, 2016; Azar et al., 2017; Dann et al., 2017; Strehl et al., 2006; Jin et al., 2018) and linear settings (Yang and Wang, 2019b, a; Jin et al., 2019;... | **A**: It can be shown that the two settings are incomparable in the sense that one does not imply the other (Zhou et al., 2020).
**B**: Also, our setting is related to the low-Bellman-rank setting studied by Jiang et al.
**C**: (2020); Zhou et al.
| CBA | CAB | CAB | CAB | Selection 4 |
<|MaskedSetence|> <|MaskedSetence|> Isbell [52] proved that every metric space admits a smallest hyperconvex hull (cf. the definition of tight span below). Dress rediscovered this concept in [31] and subsequent work provided much development in the context of phylogenetics [77, 32]. <|MaskedSetence|> | **A**: These were studied by Aronszajn and Panitchpakdi [8] who showed that every hyperconvex space is an absolute 1-Lipschitz
retract.
**B**: More recently, in [53] Joharinad and Jost considered relaxations of hyperconvexity and related it to a certain notion of curvature applicable to general metric spaces..
**C**:... | BCA | CAB | CAB | CAB | Selection 2 |
She decides, then, to use t-SNE to explore the Breast Cancer Wisconsin data set which she downloaded from the UCI machine learning repository [58]. The data set contains measurements for 699 breast cancer cases, labeled into benign or malignant cancer. <|MaskedSetence|> However, she read on the Internet that t-SNE is ... | **A**: The nine dimensions included in this data set are cytological characteristics rated from 1 to 10 (higher means closer to malignant) when the instances were collected.
**B**: After finding that t-viSNE allows her to interpret and assess t-SNE’s results, she decides to use it.
Overall Accuracy
Anna loads the ... | ABC | BCA | ABC | ABC | Selection 1 |
<|MaskedSetence|> The most clear example can be found in the different animal species, which have developed over generations very specialized capabilities by evolutionary mechanisms. Indeed, evolution has allowed animals to adapt to harsh environments, foraging, very difficult tasks of orientation, and to resiliently ... | **A**: This family of optimization methods simulates biological processes such as natural evolution, where solutions are represented by individuals that reproduce and mutate to generate new, potentially improved candidate solutions for the problem at hand.
.
**B**:
In this context, complexity is not unusual in Natu... | BCA | BAC | BCA | BCA | Selection 3 |
<|MaskedSetence|> Since a large proportion of clustering methods are based on the graph, it is reasonable to consider how to employ GCN to promote the performance of graph-based clustering methods.
In this paper, we propose an Adaptive Graph Auto-Encoder (AdaGAE) to extend graph auto-encoder into common scenarios. The... | **A**: However, the existing methods are limited to graph type data while no graph is provided for general data clustering.
**B**: We analyze the degeneration theoretically and experimentally to understand the phenomenon.
**C**: Besides, it is insensitive to different initialization of parameters and needs no pretrai... | ABC | ABC | CAB | ABC | Selection 4 |
<|MaskedSetence|> The measurement community provided indispensable studies for assessing “spoofability” in the Internet, and has had success in detecting the ability to spoof in some individual networks using active measurements, e.g., via agents installed on those networks (Mauch, 2013; Lone et al., 2018), or by iden... | **A**:
Limitations of filtering studies.
**B**: The need to install agents on networks or the ability to obtain traces only from some networks limits the studies to non-uniform coverage of the Internet.
**C**: The extrapolation from the small set of networks to the entire Internet typically result in assessment that... | ABC | BCA | ABC | ABC | Selection 4 |
III-A Dataset description
Experiments in this paper used the gas sensor drift array dataset [7]. <|MaskedSetence|> Every batch contains between 161 to 3,600 samples, and each sample is represented by a 128-dimensional feature vector; 8 features each from 16 metal oxide-based gas sensors. ... | **A**: The dataset thus exemplifies sensor variance due to contamination and variable odor concentration in a controlled setting.
.
**B**: These features summarizing the time series sensor responses are the raw and normalized steady-state features and the exponential moving average of the increasing and decaying trans... | CBA | CBA | CBA | ABC | Selection 2 |
<|MaskedSetence|> This culminated in constructions to present free groups of arbitrary rank as automaton groups where the number of states coincides with the rank [18, 17]. <|MaskedSetence|> While it is known that the free semigroup of rank one is not an automaton semigroup [4, Proposition 4.3], the free semigroups o... | **A**: While these constructions and the involved proofs are generally deemed quite complicated, the situation for semigroups turns out to be much simpler.
**B**:
There is a quite interesting evolution of constructions to present free groups in a self-similar way or even as automaton groups (see [15] for an overview... | BAC | CBA | BAC | BAC | Selection 4 |
2.2.2 Enhancing Visual Sensitivities
Both Human Importance Aware Network Tuning (HINT) Selvaraju et al. (2019) and Self Critical Reasoning (SCR) Wu and Mooney (2019), train the network to be more sensitive towards salient image regions by improving the alignment between visual cues and gradient-based sensitivity sco... | **A**: In contrast, SCR does not require exact saliency ranks.
**B**: Instead, it penalizes the model if correct answers are more sensitive towards non-important regions as compared to important regions, and if incorrect answers are more sensitive to important regions than correct answers..
**C**: (2016) and the grad... | CAB | CAB | CAB | ACB | Selection 3 |
<|MaskedSetence|> Privacy policies in this corpus have a mean word length of about 1,871 words and range between a minimum of 143 words and a maximum of 16,980 words.
The corpus contains policies from over 800 different top level domains (TLDs). .com, .org, and .net make up a major share of the corpus covering 63%, 5%... | **A**: The distribution of popular TLDs (.com, .org, .net) roughly matches internet TLD trends suggesting that the corpus contains a random sample of internet web domains.
**B**: The PrivaSeer Corpus consists of 1,005,380 privacy policies from 995,475 different web domains.
**C**: Country-level domains like .uk, .au,... | BCA | BCA | ABC | BCA | Selection 1 |
Hence, we want to further investigate cases that cause problems (i.e., we have to look for large points). The parallel coordinates plot in Figure 3(b) is used to investigate the features of the data set in detail.
The Ca attribute, for example, has a range of 0–3, but by selection we can see five points with Ca value... | **A**: These values can be considered as unknown and should be further examined.
**B**: One of these points belongs to the healthy class (due to the olive color) but is very small in Figure 3(c.1)—meaning that it does not reduce the accuracy.
**C**: Four points are part of the diseased class.
| ABC | ABC | ABC | ACB | Selection 3 |
Other works use MAML for multi-domain and low-resource language generation, such as few-shot dialogue system [Mi et al., 2019, Madotto et al., 2019, Qian and Yu, 2019, Song et al., 2020] and low-resource machine translation [Gu et al., 2018].
When applying MAML to NLP, several factors can influence the training strat... | **A**: For example, PAML [Madotto et al., 2019] regards each person’s dialogues as a task for MAML and they have different personal profiles.
**B**: Secondly, while vanilla MAML assumes that the data distribution is the same across tasks, in real-world NLP tasks, the data distributions can differ significantly [Li et ... | ACB | BAC | BAC | BAC | Selection 4 |
Note that directly solving the above beam tracking problem is very challenging, especially in the considered highly dynamic UAV mmWave network. Therefore, developing new and efficient beam tracking solution for the CA-enabled UAV mmWave network is the major focus of our work. <|MaskedSetence|> <|MaskedSetence|> <|Ma... | **A**: Recall that several efficient codebook-based beam training and tracking schemes have been proposed for conventional mmWave network with uniform ULA and UPA [22, 23].
**B**: These prior works inspire us to propose a specialized new codebook design and the corresponding codeword selection/processing strategy that... | ABC | ABC | ABC | ABC | Selection 3 |
In contrast to Mei et al. <|MaskedSetence|> <|MaskedSetence|> We defer the detailed discussion on the approximation analysis to §B. <|MaskedSetence|> We are interested in the evolution of the feature representation
. | **A**: (2018, 2019), the PDE in (3.4) can not be cast as a gradient flow, since there does not exist a corresponding energy functional.
**B**: Proposition 3.1 allows us to convert the TD dynamics over the finite-dimensional parameter space to its counterpart over the infinite-dimensional Wasserstein space, where the i... | ACB | BAC | ACB | ACB | Selection 4 |
2.1. Depth-Wise LSTM
The computation of depth-wise LSTM is the same as the conventional LSTM except that depth-wise LSTM connects stacked Transformer layers instead of tokens in a token sequence as in conventional LSTMs. <|MaskedSetence|> In our work, we regard the outputs of stacked layers as a “vertical” sequenc... | **A**: The gate mechanisms in the original LSTM are to enhance its ability in capturing long-distance relations and to address the gradient vanishing/exploding issue in sequence modeling.
**B**: LSTMs are able to capture long-distance relationships: they can learn to selectively use the representations of distant toke... | ABC | ABC | BAC | ABC | Selection 2 |
<|MaskedSetence|> Apply [33, Corollary 5.14] to $A$ and $B$. Then $\widetilde{A}\models\varphi$ because $A\to\widetilde{A}$ and $\varphi$ is closed under homomorphisms. <|MaskedSetence... | **A**: Therefore $\widetilde{B}\models\varphi$ because $\widetilde{A}$ and $\widetilde{B}$ are $n$-elementary equivalent.
**B**: furthermore $B\to C$.
**C**: ... | BAC | BAC | CBA | BAC | Selection 4 |
Quantitative Evaluation:
To demonstrate a quantitative comparison with the state-of-the-art approaches, we evaluate the rectified images based on the PSNR (peak signal-to-noise ratio), SSIM (structural similarity index), and the proposed MDLD (mean distortion level deviation). All the comparison methods are used to con... | **A**: However, the generation-based methods [11][12] mainly focus on the pixel reconstruction of a rectified image and ignore the parameter estimation..
**B**: Besides the high quality of the rectified image, our approach can obtain the accurate distortion parameters of a distorted image, which is crucial for the sub... | CBA | CBA | CBA | BCA | Selection 2 |
<|MaskedSetence|> As usual in two-stage stochastic problems, this has three steps. First, we develop algorithms for the simpler polynomial-scenarios model. <|MaskedSetence|> <|MaskedSetence|> This overall methodology is called Sample Average Approximation (SAA).
. | **A**: Finally, we extrapolate the solution to the original black-box problem.
**B**:
1.2 Our Generalization Scheme and Comparison with Previous Results
Our main goal is to develop algorithms for the black-box setting.
**C**: Second, we sample a small number of scenarios from the black-box oracle and use our polyno... | BCA | BCA | BCA | CBA | Selection 2 |
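Every row above follows the same schema: a `text_with_holes` passage with `<|MaskedSetence|>` slots, a `text_candidates` cell listing the shuffled sentences, four ordering columns `A`–`D`, and a `label` of the form `Selection k`. Under the reading that `Selection k` names which of the columns `A`–`D` holds the gold ordering, a row can be scored with a few lines of Python. This is a minimal sketch of that reading, not an official loader; the example row and field values are illustrative, not taken from the dataset:

```python
# Minimal sketch: score a predicted sentence ordering against a row of the
# schema shown above. Assumption: "Selection k" points (1-based) at one of
# the columns A, B, C, D, and that column holds the gold permutation.

def correct_ordering(row: dict) -> str:
    """Return the permutation stored in the column that `label` selects."""
    k = int(row["label"].split()[-1])   # "Selection 2" -> 2
    column = "ABCD"[k - 1]              # 1-based index into columns A..D
    return row[column]

def is_correct(row: dict, predicted: str) -> bool:
    """True iff the predicted permutation matches the labeled one."""
    return predicted == correct_ordering(row)

# Illustrative row (not an actual dataset item):
row = {
    "text_with_holes": "<|MaskedSetence|> First step. <|MaskedSetence|>",
    "text_candidates": "**A**: ... **B**: ... **C**: ...",
    "A": "BAC", "B": "CAB", "C": "BAC", "D": "BAC",
    "label": "Selection 2",
}

print(correct_ordering(row))        # -> CAB
print(is_correct(row, "CAB"))       # -> True
```

The same helper works unchanged for any of the rows above, since each stores its gold ordering in whichever column the `Selection` label indexes.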