Dataset Viewer
Auto-converted to Parquet
| Column | Type |
| --- | --- |
| Link/DOI | string |
| Publication Date | timestamp[ns] |
| Title | string |
| Authors | string |
| Abstract | string |
| Categories | string |
| label | int64 |
| source | string |
| Classification_embedding | list |
| Proximity_embedding | list |
| top_10_similar | string |
| max_similarity | float64 |
| avg_similarity | float64 |
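A minimal sketch of reading the rows with the Hugging Face `datasets` library; the repository id below is a placeholder, since the actual id is not shown in this preview.

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the dataset's actual id.
ds = load_dataset("<user>/<dataset>", split="train")

row = ds[0]
print(row["Title"])           # paper title (string)
print(row["max_similarity"])  # float64; null for rows with an empty top_10_similar
```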
Row 1
Link/DOI: http://arxiv.org/abs/1902.05605v4
Publication Date: 2019-02-14T00:00:00
Title: CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity
Authors: Aditya Bhatt; Daniel Palenicek; Boris Belousov; Max Argus; Artemij Amiranashvili; Thomas Brox; Jan Peters
Abstract: Sample efficiency is a crucial problem in deep reinforcement learning. Recent algorithms, such as REDQ and DroQ, found a way to improve the sample efficiency by increasing the update-to-data (UTD) ratio to 20 gradient update steps on the critic per environment sample. However, this comes at the expense of a greatly increased computational cost. To reduce this computational burden, we introduce CrossQ: A lightweight algorithm for continuous control tasks that makes careful use of Batch Normalization and removes target networks to surpass the current state-of-the-art in sample efficiency while maintaining a low UTD ratio of 1. Notably, CrossQ does not rely on advanced bias-reduction schemes used in current methods. CrossQ's contributions are threefold: (1) it matches or surpasses current state-of-the-art methods in terms of sample efficiency, (2) it substantially reduces the computational cost compared to REDQ and DroQ, (3) it is easy to implement, requiring just a few lines of code on top of SAC.
Categories: cs.LG; stat.ML
label: 1
source: ICLR-2024
Classification_embedding: [ -0.66291344165802, -0.9310970306396484, 0.3393835425376892, -0.39459407329559326, -0.15434421598911285, -0.09034492075443268, 0.3100298047065735, 0.26732906699180603, -0.5123820304870605, -0.21383555233478546, -0.4221090078353882, 1.0684516429901123, 0.8614672422409058, 0.4628746509552002,...
Proximity_embedding: [ -0.11044733226299286, 0.369069904088974, -0.395124226808548, 0.10284718871116638, -0.1350134015083313, 0.06156173720955849, 0.7323307394981384, -0.13476814329624176, -0.8261107802391052, 0.44935739040374756, -0.009715866297483444, 0.0547449067234993, 0.44324997067451477, -0.143585309386253...
top_10_similar: {"http://arxiv.org/abs/2304.10466v1": 0.954596757888794, "http://arxiv.org/abs/2205.15043v2": 0.9472078084945679, "http://arxiv.org/abs/2205.11027v3": 0.9450389742851257, "http://arxiv.org/abs/2302.10145v1": 0.9449824094772339, "http://arxiv.org/abs/2210.12566v2": 0.944953203201294, "http://arxiv.org/abs/2301.11490v3": 0.9447583556175232, "http://arxiv.org/abs/2210.01542v1": 0.942772626876831, "http://arxiv.org/abs/2302.11312v1": 0.9404850602149963, "http://arxiv.org/abs/2208.06193v3": 0.9399589896202087, "http://arxiv.org/abs/2306.13085v1": 0.9398437142372131}
max_similarity: 0.954597
avg_similarity: 0.94446
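In the populated sample rows, `max_similarity` and `avg_similarity` match the maximum and the mean of the ten scores in `top_10_similar` (0.954597 and 0.94446 for the CrossQ row above), so they appear to be derived fields. A minimal sketch, reusing `row` from the loading example above:

```python
import json

# top_10_similar is stored as a JSON string mapping arXiv URLs to scores.
top10 = json.loads(row["top_10_similar"])
scores = list(top10.values())

if scores:  # rows with an empty "{}" show null max/avg in the preview
    max_sim = max(scores)                # 0.954597 for the CrossQ row
    avg_sim = sum(scores) / len(scores)  # 0.94446 for the CrossQ row
```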
Row 2
Link/DOI: http://arxiv.org/abs/2003.13898v3
Publication Date: 2020-03-31T00:00:00
Title: Edge Guided GANs with Contrastive Learning for Semantic Image Synthesis
Authors: Hao Tang; Xiaojuan Qi; Guolei Sun; Dan Xu; Nicu Sebe; Radu Timofte; Luc Van Gool
Abstract: We propose a novel ECGAN for the challenging semantic image synthesis task. Although considerable improvement has been achieved, the quality of synthesized images is far from satisfactory due to three largely unresolved challenges. 1) The semantic labels do not provide detailed structural information, making it difficult to synthesize local details and structures. 2) The widely adopted CNN operations such as convolution, down-sampling, and normalization usually cause spatial resolution loss and thus cannot fully preserve the original semantic information, leading to semantically inconsistent results. 3) Existing semantic image synthesis methods focus on modeling local semantic information from a single input semantic layout. However, they ignore global semantic information of multiple input semantic layouts, i.e., semantic cross-relations between pixels across different input layouts. To tackle 1), we propose to use edge as an intermediate representation which is further adopted to guide image generation via a proposed attention guided edge transfer module. Edge information is produced by a convolutional generator and introduces detailed structure information. To tackle 2), we design an effective module to selectively highlight class-dependent feature maps according to the original semantic layout to preserve the semantic information. To tackle 3), inspired by current methods in contrastive learning, we propose a novel contrastive learning method, which aims to enforce pixel embeddings belonging to the same semantic class to generate more similar image content than those from different classes. Doing so can capture more semantic relations by explicitly exploring the structures of labeled pixels from multiple input semantic layouts. Experiments on three challenging datasets show that our ECGAN achieves significantly better results than state-of-the-art methods.
Categories: cs.CV; cs.LG; eess.IV
label: 1
source: ICLR-2023
Classification_embedding: [ -0.49626874923706055, -0.46649807691574097, -0.3833135962486267, -0.343882292509079, -0.4688773453235626, 0.43584397435188293, 0.252405047416687, -0.16754350066184998, 0.09305083006620407, -1.051220417022705, -0.3884459435939789, 1.9267717599868774, 0.7328929901123047, -0.15373994410037994...
Proximity_embedding: [ 0.3297836184501648, 0.399527907371521, -0.05010775476694107, 0.10227156430482864, -0.22314831614494324, 0.057503435760736465, 0.2564406991004944, -0.32262739539146423, 0.10919182002544403, -0.4649718999862671, 0.6851951479911804, 0.9265543222427368, 0.35406437516212463, -0.6678822040557861...
top_10_similar: {}
max_similarity: null
avg_similarity: null
Row 3
Link/DOI: http://arxiv.org/abs/2006.07796v4
Publication Date: 2020-06-14T00:00:00
Title: Structure by Architecture: Structured Representations without Regularization
Authors: Felix Leeb; Giulia Lanzillotta; Yashas Annadani; Michel Besserve; Stefan Bauer; Bernhard Schölkopf
Abstract: We study the problem of self-supervised structured representation learning using autoencoders for downstream tasks such as generative modeling. Unlike most methods which rely on matching an arbitrary, relatively unstructured, prior distribution for sampling, we propose a sampling technique that relies solely on the independence of latent variables, thereby avoiding the trade-off between reconstruction quality and generative performance typically observed in VAEs. We design a novel autoencoder architecture capable of learning a structured representation without the need for aggressive regularization. Our structural decoders learn a hierarchy of latent variables, thereby ordering the information without any additional regularization or supervision. We demonstrate how these models learn a representation that improves results in a variety of downstream tasks including generation, disentanglement, and extrapolation using several challenging and natural image datasets.
Categories: cs.LG; cs.CV; stat.ML
label: 1
source: ICLR-2023
Classification_embedding: [ -0.617473840713501, -0.3876475393772125, -0.34355098009109497, -0.1945188045501709, -0.5904003381729126, -0.10441194474697113, 0.49440860748291016, 0.12180665135383606, -0.16045475006103516, -0.9508771896362305, -0.14502418041229248, 1.3771533966064453, 0.7589877843856812, 0.40906929969787...
Proximity_embedding: [ 0.2846464514732361, 0.788372814655304, -0.23505939543247223, -0.06445222347974777, -0.30919349193573, -0.2559911608695984, 0.7198917269706726, -0.7416119575500488, 0.25550389289855957, -0.49763917922973633, 0.7615611553192139, 0.4657799005508423, 0.31415700912475586, -0.2725054621696472, ...
top_10_similar: {}
max_similarity: null
avg_similarity: null
Row 4
Link/DOI: http://arxiv.org/abs/2007.09890v3
Publication Date: 2020-07-20T00:00:00
Title: Learning the Positions in CountSketch
Authors: Simin Liu; Tianrui Liu; Ali Vakilian; Yulin Wan; David P. Woodruff
Abstract: We consider sketching algorithms which first quickly compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem, e.g., low rank approximation. In the learning-based sketching paradigm proposed by Indyk et al. [2019], the sketch matrix is found by choosing a random sparse matrix, e.g., the CountSketch, and then updating the values of the non-zero entries by running gradient descent on a training data set. Despite the growing body of work on this paradigm, a noticeable omission is that the locations of the non-zero entries of previous algorithms were fixed, and only their values were learned. In this work we propose the first learning algorithm that also optimizes the locations of the non-zero entries. We show this algorithm gives better accuracy for low rank approximation than previous work, and apply it to other problems such as $k$-means clustering for the first time. We show that our algorithm is provably better in the spiked covariance model and for Zipfian matrices. We also show the importance of the sketch monotonicity property for combining learned sketches. Our empirical results show the importance of optimizing not only the values of the non-zero entries but also their positions.
Categories: cs.LG; cs.DS; cs.NA; math.NA; stat.ML
label: 1
source: ICLR-2023
Classification_embedding: [ -1.6176917552947998, -0.7511022090911865, -0.17473077774047852, -0.6125714778900146, -1.694733738899231, -0.21196356415748596, 0.30018311738967896, 0.5417760610580444, -0.02536710351705551, -0.4176151752471924, 0.1254631131887436, 1.4425902366638184, -0.06326741725206375, 0.760006129741668...
Proximity_embedding: [ 0.12348458170890808, 0.5360383987426758, -0.9392450451850891, -0.02199743688106537, -0.4796910881996155, -0.037633974105119705, 0.8480938076972961, -0.261338472366333, -0.2165193110704422, -0.5568140745162964, 0.7258276343345642, 0.22235091030597687, -0.08329570293426514, -0.04748295247554...
top_10_similar: {}
max_similarity: null
avg_similarity: null
Row 5
Link/DOI: http://arxiv.org/abs/2008.03738v2
Publication Date: 2020-08-09T00:00:00
Title: Treatment Effects Estimation by Uniform Transformer
Authors: Ruoqi Yu; Shulei Wang
Abstract: In observational studies, balancing covariates in different treatment groups is essential to estimate treatment effects. One of the most commonly used methods for such purposes is weighting. The performance of this class of methods usually depends on strong regularity conditions for the underlying model, which might not hold in practice. In this paper, we investigate weighting methods from a functional estimation perspective and argue that the weights needed for covariate balancing could differ from those needed for treatment effects estimation under low regularity conditions. Motivated by this observation, we introduce a new framework of weighting that directly targets the treatment effects estimation. Unlike existing methods, the resulting estimator for a treatment effect under this new framework is a simple kernel-based $U$-statistic after applying a data-driven transformation to the observed covariates. We characterize the theoretical properties of the new estimators of treatment effects under a nonparametric setting and show that they are able to work robustly under low regularity conditions. The new framework is also applied to several numerical examples to demonstrate its practical merits.
Categories: stat.ME; math.ST; stat.TH
label: 1
source: ICLR-2024
Classification_embedding: [ -1.1140481233596802, -0.3708285391330719, -1.4242852926254272, -0.6285358667373657, -0.6922053098678589, 1.2139124870300293, 0.6684509515762329, 0.241927832365036, 0.16394862532615662, 1.6242477893829346, 0.02076822519302368, 1.1422077417373657, 0.16787250339984894, -0.45601192116737366, ...
Proximity_embedding: [ 0.87340247631073, 0.6607159376144409, -1.2003120183944702, -0.016772247850894928, -0.7433194518089294, 0.6987985968589783, -0.13270306587219238, -0.30221444368362427, -0.13744932413101196, 0.2862550616264343, 0.8816241025924683, 0.4016844630241394, -0.2706587016582489, -0.257060706615448, ...
top_10_similar: {"http://arxiv.org/abs/2306.06263v1": 0.922035813331604, "http://arxiv.org/abs/2208.08544v3": 0.921151340007782, "http://arxiv.org/abs/2210.00079v3": 0.9103657603263855, "http://arxiv.org/abs/2307.11503v1": 0.8976566791534424, "http://arxiv.org/abs/2206.02792v1": 0.896964430809021, "http://arxiv.org/abs/2201.12293v4": 0.8897626399993896, "http://arxiv.org/abs/2302.11337v3": 0.8893073797225952, "http://arxiv.org/abs/2307.08609v1": 0.8885442614555359, "http://arxiv.org/abs/2205.14977v3": 0.8884826898574829, "http://arxiv.org/abs/2309.14581v1": 0.8831429481506348}
max_similarity: 0.922036
avg_similarity: 0.898741
Row 6
Link/DOI: http://arxiv.org/abs/2102.09407v5
Publication Date: 2021-02-18T00:00:00
Title: Adaptive Rational Activations to Boost Deep Reinforcement Learning
Authors: Quentin Delfosse; Patrick Schramowski; Martin Mundt; Alejandro Molina; Kristian Kersting
Abstract: Latest insights from biology show that intelligence not only emerges from the connections between neurons but that individual neurons shoulder more computational responsibility than previously anticipated. This perspective should be critical in the context of constantly changing distinct reinforcement learning environments, yet current approaches still primarily employ static activation functions. In this work, we motivate why rationals are suitable for adaptable activation functions and why their inclusion into neural networks is crucial. Inspired by recurrence in residual networks, we derive a condition under which rational units are closed under residual connections and formulate a naturally regularised version: the recurrent-rational. We demonstrate that equipping popular algorithms with (recurrent-)rational activations leads to consistent improvements on Atari games, especially turning simple DQN into a solid approach, competitive to DDQN and Rainbow.
Categories: cs.LG
label: 1
source: ICLR-2024
Classification_embedding: [ -0.820600688457489, -0.66475909948349, -0.06603653728961945, -0.0813857689499855, -0.22895292937755585, 0.037353962659835815, -0.26541951298713684, 0.6578511595726013, -0.22478929162025452, 0.6004663705825806, -0.7870171070098877, 1.0944037437438965, 0.3034815788269043, 0.308514803647995, ...
Proximity_embedding: [ -0.23682548105716705, 0.9733121395111084, -0.37704765796661377, 0.420199453830719, 0.35452955961227417, 0.01584509387612343, 0.3571116030216217, -0.3924747407436371, -0.7421926856040955, 0.15809044241905212, 0.035127826035022736, 0.07422138750553131, 0.26842379570007324, -0.062350831925868...
top_10_similar: {"http://arxiv.org/abs/2210.01542v1": 0.9413725137710571, "http://arxiv.org/abs/2210.13435v1": 0.9281185269355774, "http://arxiv.org/abs/2303.11934v1": 0.9249167442321777, "http://arxiv.org/abs/2205.15043v2": 0.9246573448181152, "http://arxiv.org/abs/2210.02157v2": 0.9245254397392273, "http://arxiv.org/abs/2209.10634v2": 0.9243888258934021, "http://arxiv.org/abs/2303.00599v1": 0.9241913557052612, "http://arxiv.org/abs/2304.10466v1": 0.9224520921707153, "http://arxiv.org/abs/2301.11490v3": 0.9222976565361023, "http://arxiv.org/abs/2201.08115v2": 0.9220247864723206}
max_similarity: 0.941373
avg_similarity: 0.925895
Row 7
Link/DOI: http://arxiv.org/abs/2102.10882v3
Publication Date: 2021-02-22T00:00:00
Title: Conditional Positional Encodings for Vision Transformers
Authors: Xiangxiang Chu; Zhi Tian; Bo Zhang; Xinlong Wang; Chunhua Shen
Abstract: We propose a conditional positional encoding (CPE) scheme for vision Transformers. Unlike previous (...TRUNCATED)
Categories: cs.CV; cs.AI; cs.LG
label: 1
source: ICLR-2023
Classification_embedding: [-0.0836898684501648,-0.6185870170593262,0.19620980322360992,-0.5301298499107361,-0.0449054241180419(...TRUNCATED)
Proximity_embedding: [0.6201993227005005,0.22138410806655884,-0.05666289106011391,0.013244099915027618,-0.195098504424095(...TRUNCATED)
top_10_similar: {}
max_similarity: null
avg_similarity: null
Row 8
Link/DOI: http://arxiv.org/abs/2103.01403v3
Publication Date: 2021-03-02T00:00:00
Title: A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics
Authors: Qing Li; Siyuan Huang; Yining Hong; Yixin Zhu; Ying Nian Wu; Song-Chun Zhu
Abstract: Inspired by humans' exceptional ability to master arithmetic and generalize to new problems, we pre(...TRUNCATED)
Categories: cs.LG; cs.AI; cs.CV
label: 1
source: ICLR-2023
Classification_embedding: [-0.4402918815612793,-0.46931028366088867,0.8327910900115967,-0.6764956712722778,0.33663028478622437(...TRUNCATED)
Proximity_embedding: [0.1770109236240387,0.49474531412124634,-0.012065744958817959,-0.6595368981361389,0.2432122379541397(...TRUNCATED)
top_10_similar: {}
max_similarity: null
avg_similarity: null
Row 9
Link/DOI: http://arxiv.org/abs/2105.03692v4
Publication Date: 2021-05-08T00:00:00
Title: Incompatibility Clustering as a Defense Against Backdoor Poisoning Attacks
Authors: Charles Jin; Melinda Sun; Martin Rinard
Abstract: We propose a novel clustering mechanism based on an incompatibility property between subsets of dat(...TRUNCATED)
Categories: cs.LG; cs.CR; stat.ML
label: 1
source: ICLR-2023
Classification_embedding: [-0.37018436193466187,-1.4452083110809326,-0.31904152035713196,-0.45182371139526367,-0.1214764565229(...TRUNCATED)
Proximity_embedding: [-0.516154944896698,-0.31915998458862305,-0.5338889956474304,0.39147329330444336,-0.2601101398468017(...TRUNCATED)
top_10_similar: {}
max_similarity: null
avg_similarity: null
Row 10
Link/DOI: http://arxiv.org/abs/2105.14559v3
Publication Date: 2021-05-30T00:00:00
Title: Active Learning in Bayesian Neural Networks with Balanced Entropy Learning Principle
Authors: Jae Oh Woo
Abstract: Acquiring labeled data is challenging in many machine learning applications with limited budgets. A(...TRUNCATED)
Categories: cs.LG; stat.ML
label: 1
source: ICLR-2023
Classification_embedding: [-0.34924548864364624,-0.8688428997993469,0.09013675153255463,-0.8125857710838318,-0.209998384118080(...TRUNCATED)
Proximity_embedding: [0.19341912865638733,1.2231199741363525,-1.0490398406982422,-0.12455292791128159,-0.4252555370330810(...TRUNCATED)
top_10_similar: {}
max_similarity: null
avg_similarity: null
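The column names suggest the similarity scores are derived from `Proximity_embedding`, but the preview does not document the metric. Assuming cosine similarity, which is consistent with the score range in the populated rows but is an assumption for illustration only, a sketch of comparing two preview rows (reusing `ds` from the loading example above):

```python
import numpy as np

def cosine(a, b):
    # Assumed metric -- the preview does not state how top_10_similar was computed.
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(ds[0]["Proximity_embedding"], ds[1]["Proximity_embedding"])
```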
End of preview.
README.md exists but content is empty.
Downloads last month: 9