text (string, 9–1.99k chars) | image (width 384 px) |
|---|---|
Example spectrum (in black) of a sight line where the 5780.6 Å DIB is normal, but the 5797.1 Å DIB is much deeper than expected when compared to sight lines with similar dust column densities (dashed blue). The best fit for the group is shown in dashed red. | |
The main idea of PCL. Circles and triangles denote contrastive instances encoded by $f_{\theta}$ and $f_{\theta'}$, respectively. The red ones denote ineffective positives, blue ones denote effective positives, and gray ones denote negatives. (a) Conventional CL on ineffective positive instances causes shortcut learnin... | |
An overview of the proposed self-supervised methods: (a) switching utterance, (b) switching interlocutor, (c) inserting utterance, and (d) masking interlocutor. Each method is carried out separately. (Note that some methods can be combined beneficially.) | |
Illustration of our approach on the abstractive summarization task. First, we enhance the dialogue context understanding of BERT via the proposed self-supervised methods in (a). Then, we initialize the traditional encoder-decoder model with the enhanced BERT and fine-tune it on the abstractive summarization task. | |
Ablation results of the Switching Utterance method for combinations of the two probabilities, measured by average ROUGE scores. | |
An example of a common-sense road rule from the ego-vehicle perspective considered in this work: a highway with a barricaded right lane. | |
An example of a common-sense road rule from the ego-vehicle perspective considered in this work: a three-lane highway scenario. | |
A schematic of the simulation environment used for training. | |
Reward for the ego car based on traffic conditions; sub-goals are weighted equally when calculating the final reward. Reward for the desired relative distance. | |
Reward for the ego car based on traffic conditions; sub-goals are weighted equally when calculating the final reward. Reward for the desired lateral position. | |
Reward for the ego car based on traffic conditions; sub-goals are weighted equally when calculating the final reward. Reward for the desired ego speed. | |
Average learning curve with confidence bound, with and without the short-horizon safety check in Algorithm <ref>. | |
Average speed for the simple IDM controller, the IDM controller with lane changes, and the trained RL agent. | |
Mean learning curve with confidence bound for Algorithm <ref> and prioritized experience replay <cit.>. In this work, we used the PER implementation from <cit.>. | |
Comparison of the number of safety triggers after learning, with and without continuous adaptation. | |
DRL agent control architecture: SC is the short-horizon safety check and FBC is the low-level feedback controller. | |
Period–period-derivative diagram for the pulsars under analysis. Nearly orthogonal pulsars are shown by black circles, nearly aligned ones by white circles. Dots represent single classical pulsars from the ATNF catalogue. Constant levels of conventional estimators are also shown: spin-down age $\tau_\mathrm{sd} = P/2{\dot ... | |
Distribution of the ratio $\log I_\perp/I_\parallel$ for NS moments of inertia. Values of $I_\perp(M)$ and $I_\parallel(M)$ were calculated as the moments of inertia of two independent pulsars with randomly distributed masses, adopting one of the equations of state (see text for details). Generally, $\log I_\perp/I_\parall... | |
Black line: the observed cumulative distribution of the logarithmic difference $x_\mathrm{sd}$. | |
Distribution of KS-test p-values calculated for pairs of observed and theoretical distributions of $x_\mathrm{sd}$, assuming that magnetic field distributions of orthogonal and aligned pulsars are not the same but $\log(\mu_\perp/\mu_\parallel) \sim \mathrm{normal}(\langle \log(\mu_\perp/\mu_\parallel) \rangle,\sigma_\... | |
Grey lines: CDFs of $x_\mathrm{sd}(n)$ (Eq. <ref>) calculated over 1350 pairs of pulsars for $n = 0..5$. Black line: CDF of $x_\mathrm{sd}$ for 84 pairs of pulsars with close periods. These distributions are statistically equal for $n = 1..4$, which gives a simple constraint on the value of the braking index in the pulsar spin-d... | |
Overview of the proposed method for learning weights for sentence-level features to filter noisy parallel data and improve translation performance. | |
Improvement in BLEU scores of the final NMT system as data from additional `candidate` training runs is added to the tuning stage to learn weights. Training data was filtered using the learned weights. | |
Top: RGB measurements of the inner surface of three cylinder liners with a spatial range of $\SI[product-units=power]{4.2x4.2}{\mm}$, recorded by a handheld microscope. Bottom: Depth profile of the same cylinder with a spatial range of $\SI[product-units=power]{1.9x1.9}{\mm}$, measured with a confocal microscope. The p... | |
The first column visualizes the RGB samples and the second column the grayscale versions. The third column contains the gamma-corrected counterparts, where the contrast in lower gray levels is enhanced, for dark images in particular. The last column illustrates the application of the high-pass filter. | |
From left to right: Surface RGB input, ground truth and profiles predicted by our method, gcGAN and cycleGAN. | |
An instant 3D model generated by our proposed framework provides valuable information on the liner surface condition. | |
From left to right: Face RGB input, ground truth and profiles predicted by our method, gcGAN, cycleGAN and CUT. | |
An example of viewpoint augmentation using a 3D face model instantly generated by our proposed framework. | |
From left to right: Body RGB input, ground truth and profiles predicted by the proposed method, gcGAN, cycleGAN and CUT. | |
Left: RGB samples of the Bosphorus-3DFA <cit.>. Right: Samples of the CelebAMask-HQ <cit.>. | |
From left to right: RGB input, four snapshots of the synthesized 3D model generated by our method and four snapshots of the synthesized 3D model generated by Wu et al. <cit.>. | |
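The driving-reward captions above state that the ego car's final reward averages equally weighted sub-goals (desired relative distance, lateral position, and ego speed). A minimal Python sketch of that combination follows; the Gaussian sub-reward shape and all target and width values are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: equally weighted sub-goal rewards for an ego vehicle.
# The Gaussian shaping and the numeric targets/widths are assumptions.
import math

def gaussian_reward(value, target, width):
    """Sub-reward in (0, 1], peaking when `value` hits `target`."""
    return math.exp(-((value - target) ** 2) / (2.0 * width ** 2))

def ego_reward(rel_distance, lateral_pos, speed):
    """Final reward: equally weighted mean of the three sub-goal rewards."""
    sub_rewards = [
        gaussian_reward(rel_distance, target=30.0, width=10.0),  # desired relative distance (m)
        gaussian_reward(lateral_pos, target=0.0, width=0.5),     # desired lateral position (m)
        gaussian_reward(speed, target=25.0, width=5.0),          # desired ego speed (m/s)
    ]
    return sum(sub_rewards) / len(sub_rewards)

print(ego_reward(rel_distance=28.0, lateral_pos=0.1, speed=24.0))  # close to 1.0
```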
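One caption above walks through an image-preprocessing chain: grayscale conversion, gamma correction to lift contrast in dark regions, and a high-pass filter that isolates fine surface detail. Here is a self-contained NumPy sketch of those three steps under assumed parameter values (gamma = 0.5, a 15-pixel box blur); the actual parameters are not given in the captions.

```python
# Hedged sketch of the grayscale -> gamma correction -> high-pass chain.
import numpy as np

def to_grayscale(rgb):
    """Luminance-weighted grayscale; `rgb` is an HxWx3 array in [0, 1]."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def gamma_correct(gray, gamma=0.5):
    """gamma < 1 brightens dark pixels, enhancing low-gray-level contrast."""
    return np.clip(gray, 0.0, 1.0) ** gamma

def high_pass(gray, kernel=15):
    """Subtract a box-blurred copy, keeping only fine detail."""
    pad = kernel // 2
    padded = np.pad(gray, pad, mode="reflect")
    blurred = np.empty_like(gray)
    for i in range(gray.shape[0]):          # direct loop for clarity;
        for j in range(gray.shape[1]):      # a separable filter would be faster
            blurred[i, j] = padded[i:i + kernel, j:j + kernel].mean()
    return gray - blurred

rgb = np.random.rand(64, 64, 3)             # stand-in for an RGB surface image
filtered = high_pass(gamma_correct(to_grayscale(rgb)))
```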
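The moment-of-inertia caption above describes a Monte Carlo construction: draw two independent pulsar masses, evaluate $I_\perp(M)$ and $I_\parallel(M)$ under one equation of state, and accumulate $\log I_\perp/I_\parallel$. A sketch of that loop is below; the normal mass distribution and the toy $I(M)$ relation are placeholders for the tabulated equations of state the study actually adopts.

```python
# Hedged sketch of the Monte Carlo moment-of-inertia ratio distribution.
# The mass distribution and the power-law I(M) are assumptions.
import math
import random

def moment_of_inertia(mass_msun):
    """Toy I(M); a real study would interpolate a table per equation of state."""
    return 1.3 * mass_msun ** 1.5

def sample_log_ratio():
    m_perp = random.gauss(1.35, 0.15)   # mass of the "orthogonal" pulsar
    m_par = random.gauss(1.35, 0.15)    # mass of the "aligned" pulsar
    return math.log10(moment_of_inertia(m_perp) / moment_of_inertia(m_par))

samples = [sample_log_ratio() for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(f"mean log(I_perp/I_par) = {mean:.4f}")  # centred near 0 by symmetry
```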
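The parallel-data captions above summarize a method that scores sentence pairs with weighted sentence-level features and filters the noisy ones before NMT training. The sketch below uses two hypothetical features (a length ratio and a copy penalty) with hand-set weights and threshold; in the described method the weights are learned from additional `candidate` training runs rather than fixed.

```python
# Hedged sketch: score sentence pairs with weighted features, keep the best.
# The features, weights, and threshold are illustrative assumptions.
def features(src, tgt):
    len_ratio = min(len(src), len(tgt)) / max(len(src), len(tgt))
    src_words, tgt_words = set(src.split()), set(tgt.split())
    overlap = len(src_words & tgt_words) / max(len(src_words | tgt_words), 1)
    return [len_ratio, 1.0 - overlap]       # penalize copied (untranslated) pairs

def score(pair, weights=(0.6, 0.4)):
    return sum(w * f for w, f in zip(weights, features(*pair)))

corpus = [
    ("das ist ein test", "this is a test"),
    ("das ist ein test", "das ist ein test"),   # copy noise: scores low
]
kept = [pair for pair in corpus if score(pair) > 0.7]
print(kept)   # only the genuine translation survives
```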