CHANNEL_NAME: stringclasses (1 value)
URL: stringlengths (43 to 43)
TITLE: stringlengths (12 to 100)
DESCRIPTION: stringlengths (66 to 5k)
TRANSCRIPTION: stringlengths (150 to 90.9k)
SEGMENTS: stringlengths (1.05k to 146k)
Yannic Kilcher
https://www.youtube.com/watch?v=_8KNb5iqblE
Longformer: The Long-Document Transformer
The Longformer extends the Transformer by introducing sliding window attention and sparse global attention. This allows for the processing of much longer documents than classic models like BERT. Paper: https://arxiv.org/abs/2004.05150 Code: https://github.com/allenai/longformer Abstract: Transformer-based models are ...
Hi there. Today we're looking at Longformer, the Long Document Transformer by Iz Beltagy, Matthew Peters and Arman Cohan of Allen AI. So the Longformer is a variant of the Transformer as you might have guessed. The Longformer is a Transformer that can deal with long documents. So it's aptly named. So I am going to discu...
[{"start": 0.0, "end": 5.64, "text": " Hi there. Today we're looking at Longformer, the Long Document Transformer by"}, {"start": 5.64, "end": 13.64, "text": " Iz Beltagy, Matthew Peters and Arman Cohan of Allen AI. So the Longformer is a"}, {"start": 13.64, "end": 19.36, "text": " variant of the Transformer as you might...
Yannic Kilcher
https://www.youtube.com/watch?v=a0f07M2uj_A
Backpropagation and the brain
Geoffrey Hinton and his co-authors describe a biologically plausible variant of backpropagation and report evidence that such an algorithm might be responsible for learning in the brain. https://www.nature.com/articles/s41583-020-0277-3 Abstract: During learning, the brain modifies synapses to improve behaviour. In t...
Hi there. Today we're looking at Backpropagation and the Brain by Timothy Lillicrap, Adam Santoro, Luke Marris, Colin Akerman and Geoffrey Hinton. So this is a bit of an unusual paper for the machine learning community but nevertheless it's interesting. And let's be honest, at least half of our interest comes from th...
[{"start": 0.0, "end": 12.0, "text": " Hi there. Today we're looking at backpropagation and the brain by Timothy Lillicrap, Adam Santoro, Luke Marris, Colin Akerman and Geoffrey Hinton."}, {"start": 12.0, "end": 19.0, "text": " So this is a bit of an unusual paper for the machine learning community but nevertheless it...
Yannic Kilcher
https://www.youtube.com/watch?v=D-eg7k8YSfs
Shortcut Learning in Deep Neural Networks
This paper establishes a framework for looking at out-of-distribution generalization failures of modern deep learning as the models learning false shortcuts that are present in the training data. The paper characterizes why and when shortcut learning can happen and gives recommendations for how to counter its effect. ...
Hi, today we're looking at shortcut learning in deep neural networks by a number of authors from the University of Tübingen, the Max Planck Research Center and the University of Toronto. So I'm not going to read all of them, but all of them are either joint first authors or joint senior authors. I just... What is this? ...
[{"start": 0.0, "end": 7.88, "text": " Hi, today we're looking at shortcut learning in deep neural networks by a number of authors"}, {"start": 7.88, "end": 15.72, "text": " from the University of T\u00fcbingen, the Max Planck Research Center and the University of Toronto."}, {"start": 15.72, "end": 20.48, "text": " So I...
Yannic Kilcher
https://www.youtube.com/watch?v=Ok44otx90D4
Feature Visualization & The OpenAI microscope
A closer look at the OpenAI microscope, a database of visualizations of the inner workings of ImageNet classifiers, along with an explanation of how to obtain these visualizations. https://distill.pub/2017/feature-visualization/ https://microscope.openai.com/models https://github.com/tensorflow/lucid Links: YouTube: ...
Hi there. Today we're going to take a look at the OpenAI Microscope and this article on Distill called Feature Visualization. So the Feature Visualization article is by Chris Olah, Alexander Mordvintsev and Ludwig Schubert of the Google Brain team, while the OpenAI Microscope is by OpenAI. So keep that in mind. These to...
[{"start": 0.0, "end": 6.88, "text": " Hi there. Today we're going to take a look at the OpenAI microscope and this"}, {"start": 6.88, "end": 12.36, "text": " article on Distill called feature visualization. So the feature visualization"}, {"start": 12.36, "end": 18.88, "text": " article is by Chris Olah, Alexander Mor...
Yannic Kilcher
https://www.youtube.com/watch?v=-h1KB8ps11A
Datasets for Data-Driven Reinforcement Learning
Offline Reinforcement Learning has come more and more into focus recently in domains where classic on-policy RL algorithms are infeasible to train, such as safety-critical tasks or learning from expert demonstrations. This paper presents an extensive benchmark for evaluating offline RL algorithms in a variety of settin...
Hi there. Today we're looking at Datasets for Data-Driven Reinforcement Learning by Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. So this is a, what you would call, a dataset paper or a benchmark paper. And the main point or the main area of the paper is what's called offline reinforcem...
[{"start": 0.0, "end": 6.44, "text": " Hi there. Today we're looking at data sets for data driven reinforcement learning by Justin Fu,"}, {"start": 6.44, "end": 15.120000000000001, "text": " Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. So this is a, what you would call,"}, {"start": 15.12000000000000...
Yannic Kilcher
https://www.youtube.com/watch?v=eYgPJ_7BkEw
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
FixMatch is a simple, yet surprisingly effective approach to semi-supervised learning. It combines two previous methods in a clever way and achieves state-of-the-art in regimes with few and very few labeled examples. Paper: https://arxiv.org/abs/2001.07685 Code: https://github.com/google-research/fixmatch Abstract: S...
Hi, today we're looking at FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence by Kihyuk Sohn, David Berthelot and others of Google Research. So this paper concerns semi-supervised learning. So what does semi-supervised learning mean? In semi-supervised learning, you have a data set of labele...
[{"start": 0.0, "end": 7.0, "text": " Hi, today we're looking at FixMatch, simplifying semi-supervised learning with consistency"}, {"start": 7.0, "end": 15.32, "text": " and confidence by Kihyuk Sohn, David Berthelot and others of Google Research."}, {"start": 15.32, "end": 19.32, "text": " So this paper concerns semi-...
Yannic Kilcher
https://www.youtube.com/watch?v=AU30czb4iQA
Imputer: Sequence Modelling via Imputation and Dynamic Programming
The imputer is a sequence-to-sequence model that strikes a balance between fully autoregressive models with long inference times and fully non-autoregressive models with fast inference. The imputer achieves constant decoding time independent of sequence length by exploiting dynamic programming. https://arxiv.org/abs/2...
Hi there. Today we're looking at the Imputer: Sequence Modeling via Imputation and Dynamic Programming by William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad Norouzi and Navdeep Jaitly. So this is a model to perform sequence to sequence tasks. Now sequence to sequence tasks are very very common in NLP but in this cas...
[{"start": 0.0, "end": 6.48, "text": " Hi there. Today we're looking at the Imputer, sequence modeling via imputation and"}, {"start": 6.48, "end": 12.8, "text": " dynamic programming by William Chan, Chitwan Saharia, Geoffrey Hinton, Mohammad"}, {"start": 12.8, "end": 19.48, "text": " Norouzi and Navdeep Jaitly. So this i...
Yannic Kilcher
https://www.youtube.com/watch?v=ZVVnvZdUMUk
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Stunning evidence for the hypothesis that neural networks work so well because their random initialization almost certainly contains a nearly optimal sub-network that is responsible for most of the final performance. https://arxiv.org/abs/1803.03635 Abstract: Neural network pruning techniques can reduce the parameter...
Hi there. Today we're looking at The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks by Jonathan Frankle and Michael Carbin. So this paper is sort of an empirical paper into what makes neural networks train successfully. And it comes out of the literature of pruning. So they say neural network pruni...
[{"start": 0.0, "end": 5.16, "text": " Hi there. Today we're looking at the lottery ticket hypothesis finding sparse"}, {"start": 5.16, "end": 12.52, "text": " trainable neural networks by Jonathan Frankle and Michael Carbin. So this paper is"}, {"start": 12.52, "end": 20.16, "text": " sort of an empirical paper into w...
Yannic Kilcher
https://www.youtube.com/watch?v=-0aM99dMu_4
Dynamical Distance Learning for Semi-Supervised and Unsupervised Skill Discovery
DDL is an auxiliary task for an agent to learn distances between states in episodes. This can then be used further to improve the agent's policy learning procedure. Paper: https://arxiv.org/abs/1907.08225 Blog: https://sites.google.com/view/dynamical-distance-learning/home Abstract: Reinforcement learning requires ma...
Hi there. If you look at this robot, this robot has learned to turn this valve by itself. Now, by itself isn't really correct, but it has learned it in a semi-supervised way, with only 10 human inputs along the entire learning trajectory. So only 10 times was there a true reward for this reinforcement learning procedu...
[{"start": 0.0, "end": 8.0, "text": " Hi there. If you look at this robot, this robot has learned to turn this valve by itself."}, {"start": 8.0, "end": 14.0, "text": " Now, by itself isn't really correct, but it has learned it in a semi-supervised way,"}, {"start": 14.0, "end": 22.0, "text": " with only 10 human input...
Yannic Kilcher
https://www.youtube.com/watch?v=hg2Q_O5b9w4
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
Contrastive Learning has been an established method in NLP and Image classification. The authors show that with relatively minor adjustments, CL can be used to augment and improve RL dramatically. Paper: https://arxiv.org/abs/2004.04136 Code: https://github.com/MishaLaskin/curl Abstract: We present CURL: Contrastive ...
Hi there. Today we're going to look at CURL: Contrastive Unsupervised Representations for Reinforcement Learning by Aravind Srinivas, Michael Laskin and Pieter Abbeel. So this is a general framework for unsupervised representation learning for RL. So let's untangle the title a little bit. It is for reinforcement learning...
[{"start": 0.0, "end": 7.08, "text": " Hi there. Today we're going to look at CURL, contrastive unsupervised representations for reinforcement"}, {"start": 7.08, "end": 15.4, "text": " learning by Aravind Srinivas, Michael Laskin and Pieter Abbeel. So this is a general framework"}, {"start": 15.4, "end": 23.28, "text": " ...
Yannic Kilcher
https://www.youtube.com/watch?v=gbG1X8Xq-T8
Enhanced POET: Open-Ended RL through Unbounded Invention of Learning Challenges and their Solutions
The enhanced POET makes some substantial and well-crafted improvements over the original POET algorithm and excels at open-ended learning like no system before. https://arxiv.org/abs/2003.08536 https://youtu.be/RX0sKDRq400 Abstract: Creating open-ended algorithms, which generate their own never-ending stream of novel...
There, before we jump into today's paper, I just want to give a shout out to Machine Learning Street Talk, where every week we talk about current or big trends or topics in machine learning. The first discussion that we launched is actually on today's paper, The Enhanced Poet. So if you like the following video, you m...
[{"start": 0.0, "end": 5.12, "text": " There, before we jump into today's paper, I just want to give a shout out to Machine Learning"}, {"start": 5.12, "end": 11.76, "text": " Street Talk, where every week we talk about current or big trends or topics in machine learning."}, {"start": 12.48, "end": 18.56, "text": " The...
Yannic Kilcher
https://www.youtube.com/watch?v=klPuEHCKG9M
Evolving Normalization-Activation Layers
Normalization and activation layers have seen a long history of hand-crafted variants with various results. This paper proposes an evolutionary search to determine the ultimate, final and best combined normalization-activation layer... in a very specific setting. https://arxiv.org/abs/2004.02967 Abstract: Normalizati...
Hi there. Today we're looking at Evolving Normalization-Activation Layers by Hanxiao Liu, Andrew Brock, Karen Simonyan and Quoc V. Le. These are people from Google Brain and Google DeepMind. The topic of this paper is, as you can see, it's about normalization activation layers and we want to evolve them. I think the pri...
[{"start": 0.0, "end": 5.9, "text": " Hi there. Today we're looking at evolving normalization activation layers by"}, {"start": 5.9, "end": 13.56, "text": " Hanxiao Liu, Andrew Brock, Karen Simonyan and Quoc V. Le. These are people from"}, {"start": 13.56, "end": 21.3, "text": " Google Brain and Google DeepMind. The topi...
Yannic Kilcher
https://www.youtube.com/watch?v=DRy_Mr732yA
[Drama] Who invented Contrast Sets?
Funny Twitter spat between researchers arguing who was the first to invent an idea that has probably been around since 1990 :D References: https://arxiv.org/abs/2004.02709 https://twitter.com/nlpmattg/status/1247326213296672768 https://arxiv.org/abs/1909.12434 https://twitter.com/zacharylipton/status/12473578104107622...
I love me some good Twitter drama. Look at this, this is awesome. So after this contrast set paper appeared, and I've done a video on that, the author of it tweeted it out with one of these long Twitter threads with screenshots and all. This seems to be the new marketing tool of academics, and as you know I'm not a fan o...
[{"start": 0.0, "end": 7.36, "text": " I love me some good Twitter drama look at this this is awesome so after this"}, {"start": 7.36, "end": 13.52, "text": " contrast set paper appeared and I've done a video on that the author of it"}, {"start": 13.52, "end": 19.68, "text": " tweeted it out with one of these long ...
Yannic Kilcher
https://www.youtube.com/watch?v=qeEO2GECQk0
Evaluating NLP Models via Contrast Sets
Current NLP models are often "cheating" on supervised learning tasks by exploiting correlations that arise from the particularities of the dataset. Therefore they often fail to learn the original intent of the dataset creators. This paper argues that NLP models should be evaluated on Contrast Sets, which are hand-craft...
Hi there, today we're looking at evaluating NLP models via contrast sets and these are too many authors from too many places for me to read out so we'll just jump right into the problem. So what is the problem or let's jump into the solution? Here you see a visual question answering task, visual question answering in ...
[{"start": 0.0, "end": 6.04, "text": " Hi there, today we're looking at evaluating NLP models via contrast sets and"}, {"start": 6.04, "end": 13.56, "text": " these are too many authors from too many places for me to read out so we'll"}, {"start": 13.56, "end": 22.64, "text": " just jump right into the problem. So what...
Yannic Kilcher
https://www.youtube.com/watch?v=8wkgDnNxiVs
POET: Endlessly Generating Increasingly Complex and Diverse Learning Environments and Solutions
From the makers of Go-Explore, POET is a mixture of ideas from novelty search, evolutionary methods, open-ended learning and curriculum learning. https://arxiv.org/abs/1901.01753 Abstract: While the history of machine learning so far largely encompasses a series of problems posed by researchers and algorithms that le...
Alright, so what you're seeing here are solutions found to this bipedal walker problem by a new algorithm called Poet. So as you might guess, the challenge is to keep this little thing here walking to the right as far as you can while it encounters various obstacles. And it is and remains a challenging reinforcement l...
[{"start": 0.0, "end": 9.0, "text": " Alright, so what you're seeing here are solutions found to this bipedal walker problem by a new algorithm called Poet."}, {"start": 9.0, "end": 21.0, "text": " So as you might guess, the challenge is to keep this little thing here walking to the right as far as you can while it enc...
Yannic Kilcher
https://www.youtube.com/watch?v=awyuuJoHawo
Dream to Control: Learning Behaviors by Latent Imagination
Dreamer is a new RL agent by DeepMind that learns a continuous control task through forward-imagination in latent space. https://arxiv.org/abs/1912.01603 Videos: https://dreamrl.github.io/ Abstract: Learned world models summarize an agent's experience to facilitate learning complex behaviors. While learning world mod...
Hi there. Today we're looking at Dream to Control: Learning Behaviors by Latent Imagination by Danijar Hafner, Timothy Lillicrap, Timmy, sorry, Jimmy Ba and Mohammad Norouzi. This is a reinforcement learning paper that iterates on a series of previous papers where the goal is to learn a policy. In this case they want...
[{"start": 0.0, "end": 6.0600000000000005, "text": " Hi there. Today we're looking at Dream to Control Learning Behaviors by latent"}, {"start": 6.0600000000000005, "end": 13.24, "text": " imagination by Danijar Hafner, Timothy Lillicrap, Timmy, sorry, Jimmy Ba and"}, {"start": 13.24, "end": 22.240000000000002, "tex...
Yannic Kilcher
https://www.youtube.com/watch?v=XdpF9ZixIbI
Can we Contain Covid-19 without Locking-down the Economy?
My thoughts on the let-the-young-get-infected argument. https://medium.com/amnon-shashua/can-we-contain-covid-19-without-locking-down-the-economy-2a134a71873f Abstract: In this article, we present an analysis of a risk-based selective quarantine model where the population is divided into low and high-risk groups. The...
Can we contain COVID-19 without locking down the economy? This is a question and I do care about this article because Shai Shalev-Shwartz is one of the bigger names in machine learning theory. So it was interesting for me to see what he and his collaborator here had to say about the kind of outbreak and the strategy ...
[{"start": 0.0, "end": 5.76, "text": " Can we contain COVID-19 without locking down the economy?"}, {"start": 5.76, "end": 16.6, "text": " This is a question and I do care about this article because Shai Shalev-Shwartz is one of the bigger names in machine learning theory."}, {"start": 16.6, "end": 27.8, "text": " So ...
Yannic Kilcher
https://www.youtube.com/watch?v=lqtlua-Ylts
State-of-Art-Reviewing: A Radical Proposal to Improve Scientific Publication
Peer Review is outdated and ineffective. SOAR is a new and revolutionary way to distribute scientific reviewing and scale to the new age of faster, better and more significant research. https://arxiv.org/abs/2003.14415 Abstract: Peer review forms the backbone of modern scientific manuscript evaluation. But after two ...
Alright, hi everyone. Today we're looking at state-of-the-art reviewing a radical proposal to improve scientific publication. So this has been on my mind for a while. The review process for modern science, especially machine learning, is just broken. I've spoken numerous times about the fact that we need to replace it...
[{"start": 0.0, "end": 6.0, "text": " Alright, hi everyone. Today we're looking at state-of-the-art reviewing a radical proposal"}, {"start": 6.0, "end": 13.0, "text": " to improve scientific publication. So this has been on my mind for a while. The review"}, {"start": 13.0, "end": 20.0, "text": " process for modern sc...
Yannic Kilcher
https://www.youtube.com/watch?v=U3zmekzQ8WQ
Agent57: Outperforming the Atari Human Benchmark
DeepMind's Agent57 is the first RL agent to outperform humans in all 57 Atari benchmark games. It extends previous algorithms like Never Give Up and R2D2 by meta-learning the exploration-exploitation tradeoff controls. https://arxiv.org/abs/2003.13350 https://deepmind.com/blog/article/Agent57-Outperforming-the-human-A...
Hi there, you're looking at Solaris, which is a game in the Atari benchmark, and it has been one of the hardest games for reinforcement learning agents to solve. What you're seeing is Agent 57, which is a new agent by DeepMind, that is the first one to beat all of the 57 games in the Atari suite to a superhuman perfor...
[{"start": 0.0, "end": 9.24, "text": " Hi there, you're looking at Solaris, which is a game in the Atari benchmark, and it has"}, {"start": 9.24, "end": 14.84, "text": " been one of the hardest games for reinforcement learning agents to solve."}, {"start": 14.84, "end": 21.52, "text": " What you're seeing is Agent 57, ...
Yannic Kilcher
https://www.youtube.com/watch?v=lmAj0SU_bW0
Axial Attention & MetNet: A Neural Weather Model for Precipitation Forecasting
MetNet is a predictive neural network model for weather prediction. It uses axial attention to capture long-range dependencies. Axial attention decomposes attention layers over images into row-attention and column-attention in order to save memory and computation. https://ai.googleblog.com/2020/03/a-neural-weather-mod...
Hi there. So what you're looking at here is a weather forecast model. Specifically, the very top row is a new weather forecast model called MetNet by Google Research. So the goal of weather prediction is pretty simple. You want to know what the weather is going to be in the future. Specifically here, you want to know ...
[{"start": 0.0, "end": 8.6, "text": " Hi there. So what you're looking at here is a weather forecast model. Specifically, the"}, {"start": 8.6, "end": 15.44, "text": " very top row is a new weather forecast model called MetNet by Google Research. So the"}, {"start": 15.44, "end": 19.92, "text": " goal of weather predic...
Yannic Kilcher
https://www.youtube.com/watch?v=wAgO2WZzjn4
[Rant] coronavirus
A rant about toilet paper and lockdowns. Links: YouTube: https://www.youtube.com/c/yannickilcher Twitter: https://twitter.com/ykilcher BitChute: https://www.bitchute.com/channel/yannic-kilcher Minds: https://www.minds.com/ykilcher
This video is going to be a rant. There is not really a script and I have not really thought this through, but I would like to talk about some things that are on my mind and that I don't see discussed very often with respect to coronavirus. I am not a medical expert, I don't play it on the internet and there absolutel...
[{"start": 0.0, "end": 18.0, "text": " This video is going to be a rant. There is not really a script and I have not really thought this through, but I would like to talk about some things that are on my mind and that I don't see discussed very often with respect to coronavirus."}, {"start": 18.0, "end": 30.0, "text": ...
Yannic Kilcher
https://www.youtube.com/watch?v=H3Bhlan0mE0
Online Education - How I Make My Videos
Just a short overview of tools I use to make my videos. OneNote - https://www.onenote.com iSpring Free Cam - https://www.ispringsolutions.com/ispring-cam Shotcut - https://shotcut.org Slack - https://slack.com RocketChat - https://rocket.chat Zoom - https://zoom.us Jitsi - https://jitsi.org GDocs - https://www.google....
Hi there. So a lot of people have been asking me how I make these videos and this is of course relevant now that everyone's work from home and all the schools are converted into online schools. All of a sudden a lot of people have to make these online education happen and I think this style of video lends itself to on...
[{"start": 0.0, "end": 8.52, "text": " Hi there. So a lot of people have been asking me how I make these videos and this is of course relevant now that everyone's"}, {"start": 8.8, "end": 16.84, "text": " work from home and all the schools are converted into online schools. All of a sudden a lot of people have to make ...
Yannic Kilcher
https://www.youtube.com/watch?v=p3sAF3gVMMA
Deep Learning for Symbolic Mathematics
This model solves integrals and ODEs by doing seq2seq! https://arxiv.org/abs/1912.01412 https://ai.facebook.com/blog/using-neural-networks-to-solve-advanced-mathematics-equations/ Abstract: Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculation...
Hi there. Can you solve this? Well neither can I, but Wolfram Alpha can. So this is the thing that probably I have most to thank for passing university, especially the math classes in it. If you don't know Wolfram Alpha, it is an engine. It's from the creators of Mathematica, but it is online. And it can do symbolic m...
[{"start": 0.0, "end": 16.0, "text": " Hi there. Can you solve this? Well neither can I, but Wolfram Alpha can. So this is the thing that probably I have most to thank for passing university, especially the math classes in it."}, {"start": 16.0, "end": 33.0, "text": " If you don't know Wolfram Alpha, it is an engine. I...
Yannic Kilcher
https://www.youtube.com/watch?v=JPX_jSZtszY
NeurIPS 2020 Changes to Paper Submission Process
My thoughts on the changes to the paper submission process for NeurIPS 2020. The main new changes are: 1. ACs can desk reject papers 2. All authors have to be able to review if asked 3. Resubmissions from other conferences must be marked and a summary of changes since the last submission must be provided 4. Broader so...
Hi there. So I just wanted to give a few quick thoughts about the changes to the NeurIPS submission process. This year as opposed to last year, they've announced this on the website, on Twitter, with the video and so on and I thought I might share some thoughts on that and maybe some of you haven't heard yet in case y...
[{"start": 0.0, "end": 9.72, "text": " Hi there. So I just wanted to give a few quick thoughts about the changes to the"}, {"start": 9.72, "end": 15.24, "text": " NeurIPS submission process. This year as opposed to last year, they've announced"}, {"start": 15.24, "end": 20.64, "text": " this on the website, on Twitter,...
Yannic Kilcher
https://www.youtube.com/watch?v=9Kec_7WFyp0
Growing Neural Cellular Automata
The Game of Life on steroids! This model learns to grow complex patterns in an entirely local way. Each cell is trained to listen to its neighbors and update itself in a way such that, collectively, an overall goal is reached. Fascinating and interactive! https://distill.pub/2020/growing-ca/ https://en.wikipedia.org/w...
Hi there. Today I thought we would be looking at Growing Neural Cellular Automata, which is an article on distill.pub, which I found pretty neat. So this is kind of an interactive article. If you don't know distill.pub, check it out. It is a cool new concept as an alternative to the classical journals or the confe...
[{"start": 0.0, "end": 8.24, "text": " Hi there. Today I thought we would be looking at growing neural cellular automata, which"}, {"start": 8.24, "end": 16.16, "text": " is an article on distill.pub, which I found pretty neat. So this is kind of an interactive"}, {"start": 16.16, "end": 23.04, "text": " article. If ...
Yannic Kilcher
https://www.youtube.com/watch?v=tC01FRB0M7w
Turing-NLG, DeepSpeed and the ZeRO optimizer
Microsoft has trained a 17-billion parameter language model that achieves state-of-the-art perplexity. This video takes a look at the ZeRO optimizer that enabled this breakthrough. ZeRO allows you to do model- and data-parallelism without having huge cuts in training speed. https://www.microsoft.com/en-us/research/blo...
Hi everyone, today we're going to look at Turing-NLG, a 17 billion parameter language model by Microsoft, the latest and greatest of language modeling by Microsoft. What is this? It is a language model. A language model is basically a model that learns to produce language given language. So if you start a sentence, it's...
[{"start": 0.0, "end": 7.4, "text": " Hi everyone, today we're going to look at Turing-NLG, a 17 billion parameter language model"}, {"start": 7.4, "end": 15.56, "text": " by Microsoft, the latest and greatest of language modeling by Microsoft."}, {"start": 15.56, "end": 16.56, "text": " What is this?"}, {"start": 16.56,...
Yannic Kilcher
https://www.youtube.com/watch?v=vB_hQ5NmtPs
[Interview] Mark Ledwich - Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization
Interview with one of the authors of a widely reported study on YouTube's recommendation engine and where it leads its users. https://arxiv.org/abs/1912.11211 https://www.recfluence.net/ https://github.com/markledwich2/Recfluence https://www.patreon.com/ledwich Abstract: The role that YouTube and its behind-the-scene...
All right, I'm very pleased to have Mark Ledwich here today. He's one of the authors of this paper that's called Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization. So I've done a video about a topic like this before, actually several, and this is basically one in a line of research that examines ...
[{"start": 0.0, "end": 3.0, "text": " All right, I'm very pleased to have Mark Ledwich here today."}, {"start": 4.0, "end": 8.0, "text": " He's one of the authors of this paper that's called"}, {"start": 8.0, "end": 11.0, "text": " Algorithmic Extremism Examining YouTube's"}, {"start": 11.0, "end": 13.0, "text": " Rh...
Yannic Kilcher
https://www.youtube.com/watch?v=i4H0kjxrias
Reformer: The Efficient Transformer
The Transformer for the masses! Reformer solves the biggest problem with the famous Transformer model: Its huge resource requirements. By cleverly combining Locality Sensitive Hashing and ideas from Reversible Networks, the classically huge footprint of the Transformer is drastically reduced. Not only does that mean th...
Hi there. Today we'll look at Reformer, the Efficient Transformer by Nikita Kitaev, Łukasz Kaiser and Anselm Levskaya. This is a paper that tries to reduce the extreme resource requirements of the Transformer model. Now if you haven't seen the Transformer model before, that's this thing. I suggest you go watch, for ex...
[{"start": 0.0, "end": 5.98, "text": " Hi there. Today we'll look at Reformer, the efficient transformer by Nikita"}, {"start": 5.98, "end": 13.280000000000001, "text": " Kitaev, \u0141ukasz Kaiser and Anselm Levskaya. This is a paper that tries to reduce"}, {"start": 13.280000000000001, "end": 18.7, "text": " the extreme r...
Yannic Kilcher
https://www.youtube.com/watch?v=EbFosdOi5SY
Go-Explore: a New Approach for Hard-Exploration Problems
This algorithm solves the hardest games in the Atari suite and makes it look so easy! This modern version of Dijkstra's shortest path algorithm is outperforming everything else by orders of magnitude, and all based on random exploration. https://arxiv.org/abs/1901.10995 https://eng.uber.com/go-explore/ https://github....
Hi there, what you're seeing here is the game Montezuma's Revenge and it has been a problem for a long time for reinforcement learning algorithms. What you can see is this little person that has to kind of jump around, collect keys, collect these coins, kind of get over enemies and so on. And all of this is super hard...
[{"start": 0.0, "end": 7.88, "text": " Hi there, what you're seeing here is the game Montezuma's Revenge and it has been a problem"}, {"start": 7.88, "end": 11.200000000000001, "text": " for a long time for reinforcement learning algorithms."}, {"start": 11.200000000000001, "end": 17.8, "text": " What you can see is th...
Yannic Kilcher
https://www.youtube.com/watch?v=waK7AD-AEyc
NeurIPS 19 Poster Session
I'm at the poster session and the amount of people here is just crazy
Hi there, we are here at the NeurIPS 2019 poster session, one of the poster sessions specifically. There are two poster sessions a day, three days, so this is day two, the first poster session. It's technically lunchtime so most people are out, but you can see there are still so many people here. There are about 250 poste...
[{"start": 0.0, "end": 7.640000000000001, "text": " Hi there, we are here at the NERV 2019 poster session, one of the poster sessions"}, {"start": 7.640000000000001, "end": 12.96, "text": " specifically. There are two poster sessions a day, three days, so this is day two"}, {"start": 12.96, "end": 16.72, "text": " the ...
Yannic Kilcher
https://www.youtube.com/watch?v=RrvC8YW0pT0
Reinforcement Learning Upside Down: Don't Predict Rewards -- Just Map Them to Actions
Schmidhuber thinking outside the box! Upside-Down RL turns RL on its head and constructs a behavior function that uses the desired reward as an input. The new paradigm shows surprising performance compared to classic RL algorithms. Abstract: We transform reinforcement learning (RL) into a form of supervised learning (...
He did it. Crazy son of a bitch. Did it again. What am I talking about? Jürgen Schmidhuber, reinforcement learning upside down. New paper just dropped on the verge of the NeurIPS conference, being presented at a workshop here. Presenting upside down reinforcement learning. I am pumped for this one. Can you tell? So it ...
[{"start": 0.0, "end": 3.8000000000000003, "text": " He did it."}, {"start": 3.8000000000000003, "end": 5.32, "text": " Crazy son of a bitch."}, {"start": 5.32, "end": 6.88, "text": " Did it again."}, {"start": 6.88, "end": 8.8, "text": " What am I talking about?"}, {"start": 8.8, "end": 13.52, "text": " J\u00fcrgen Sc...
Yannic Kilcher
https://www.youtube.com/watch?v=Z6ea_AbnnCc
NeurIPS 2019
I'm at the 2019 conference on Neural Information Processing Systems in Vancouver, trying to register, but the line was just so long that I decided to bail :D
Good morning learners, we are here in beautiful Vancouver in Canada and attending the NeurIPS conference 2019. Of course one of the largest conferences in machine learning of the year. There's actually been a lottery system for the tickets because so many people wanted to register. There are over 8,000 people att...
[{"start": 0.0, "end": 6.8, "text": " Good morning learners, we are here in beautiful Vancouver in Canada and"}, {"start": 6.8, "end": 13.76, "text": " attending the NeurIPS conference 2019. Of course one of the largest conferences in"}, {"start": 13.76, "end": 20.28, "text": " machine learning of the year. There's actual...
Yannic Kilcher
https://www.youtube.com/watch?v=We20YSAJZSE
MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
MuZero harnesses the power of AlphaZero, but without relying on an accurate environment model. This opens up planning-based reinforcement learning to entirely new domains, where such environment models aren't available. The difference to previous work is that, instead of learning a model predicting future observations,...
Hi there. Today we're looking at mastering Atari, Go, chess, and Shogi by planning with a Learned Model by Julian Schrittwieser and people generally from DeepMind. So this paper is an extension to AlphaZero, the kind of famous algorithm that learned to play Go and chess simply by playing itself, and the kind of cool thing abo...
[{"start": 0.0, "end": 6.88, "text": " Hi there. Today we're looking at mastering Atari, Go, chess, and Shogi by planning with"}, {"start": 6.88, "end": 16.72, "text": " a Learned Model by Julian Schrittwieser and people generally from DeepMind. So this paper"}, {"start": 16.72, "end": 25.44, "text": " is an extension to...
Yannic Kilcher
https://www.youtube.com/watch?v=KXEEqcwXn8w
A neurally plausible model learns successor representations in partially observable environments
Successor representations are a mid-point between model-based and model-free reinforcement learning. This paper learns successor representation in environments where only incomplete information is available. Abstract: Animals need to devise strategies to maximize returns while interacting with their environment based ...
Alright, hi there. Today we're looking at a neurally plausible model learns successor representations in partially observable environments by Eszter Vértes and Maneesh Sahani. This paper is a paper on a topic that has been interesting for a while and that's successor representations. So we'll dive into all of this. The ...
[{"start": 0.0, "end": 4.92, "text": " Alright, hi there. Today we're looking at a neurally plausible model"}, {"start": 4.92, "end": 10.0, "text": " learns successor representations in partially observable environments by Eszter"}, {"start": 10.0, "end": 18.36, "text": " V\u00e9rtes and Maneesh Sahani. This paper is a paper ...
Yannic Kilcher
https://www.youtube.com/watch?v=Xc9Rkbg6IZA
SinGAN: Learning a Generative Model from a Single Natural Image
With just a single image as an input, this algorithm learns a generative model that matches the input image's patch distribution at multiple scales and resolutions. This enables sampling of extremely realistic looking variations on the original image and much more. Abstract: We introduce SinGAN, an unconditional gener...
Hi there, today we'll look at SinGAN, learning a generative model from a single natural image by Tamar Rott Shaham, Tali Dekel and Tomer Michaeli. So this paper, as it says, it's dealing with learning a generative model from just one image. And this kind of needs to be stressed because most generative models, even if t...
[{"start": 0.0, "end": 7.2, "text": " Hi there, today we'll look at SinGAN, learning a generative model from a single natural image"}, {"start": 7.2, "end": 12.88, "text": " by Tamar Rott Shaham, Tali Dekel and Tomer Michaeli."}, {"start": 12.88, "end": 18.56, "text": " So this paper, as it says, it's dealing with lear...
Yannic Kilcher
https://www.youtube.com/watch?v=BTLCdge7uSQ
AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning
DeepMind's new agent to tackle yet another Esport: Starcraft II. This agent uses deep reinforcement learning with a new technique, called League Training, to catapult itself to Grandmaster-level skill at playing this game. Abstract: Many real-world applications require artificial agents to compete and coordinate with ...
Alright, let's talk about AlphaStar, Grandmaster level in StarCraft II using multi-agent reinforcement learning. The corresponding paper looks like this and is by Oriol Vinyals et al. from DeepMind and has been published in the journal Nature recently. Now let me say this first: stop publishing in Nature. This is ...
[{"start": 0.0, "end": 7.4, "text": " Alright, let's talk about AlphaStar, Grandmaster level in StarCraft II using multi-agent reinforcement"}, {"start": 7.4, "end": 8.4, "text": " learning."}, {"start": 8.4, "end": 15.32, "text": " The corresponding paper looks like this and is by Oriol Vinyals et al. from DeepMind a...
Yannic Kilcher
https://www.youtube.com/watch?v=kOy49NqZeqI
IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures
Policy Gradient RL on a massively distributed scale with theoretical guarantees! Abstract: In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters. A key challenge is to handle the increased amount of data and extended training time. We have ...
Hi there! Today we're looking at IMPALA, scalable distributed deep RL with importance-weighted actor-learner architectures by Lasse Espeholt, Hubert Soyer, Rémi Munos et al. So this paper deals with a new architecture for deep reinforcement learning, specifically distributed deep reinforcement learning. So that means ...
[{"start": 0.0, "end": 6.0, "text": " Hi there! Today we're looking at IMPALA, scalable distributed deep RL with"}, {"start": 6.0, "end": 12.040000000000001, "text": " importance-weighted actor-learner architectures by Lasse Espeholt, Hubert Soyer,"}, {"start": 12.040000000000001, "end": 18.6, "text": " R\u00e9mi ...
Yannic Kilcher
https://www.youtube.com/watch?v=ctCv_NRpqvM
The Visual Task Adaptation Benchmark
This paper presents a new benchmark for Visual Task Adaptation (i.e. BERT for images) and investigates several baseline methods for doing so. Abstract: Representation learning promises to unlock deep learning for the long tail of vision tasks without expansive labelled datasets. Yet, the absence of a unified yardstick...
Hi there. Today we're looking at the Visual Task Adaptation Benchmark by a list of authors that's way too long to read out all from Google Brain. So what is this paper? This paper cares about a new benchmark that is abbreviated VTAB. And VTAB is a benchmark for a task called Visual Task Adaptation. So a benchmark, the...
[{"start": 0.0, "end": 8.0, "text": " Hi there. Today we're looking at the Visual Task Adaptation Benchmark by a list of authors"}, {"start": 8.0, "end": 15.8, "text": " that's way too long to read out all from Google Brain. So what is this paper? This paper"}, {"start": 15.8, "end": 24.96, "text": " cares about a new ...
Yannic Kilcher
https://www.youtube.com/watch?v=69IjNZaoeao
LeDeepChef 👨‍🍳 Deep Reinforcement Learning Agent for Families of Text-Based Games
The AI cook is here! This agent learns to play a text-based game where the goal is to prepare a meal according to a recipe. Challenges? Many! The number of possible actions is huge, ingredients change and can include ones never seen before, you need to navigate rooms, use tools, manage an inventory and sequence everyth...
Hi there! Today we're looking at LeDeepChef, deep reinforcement learning agent for families of text-based games by Leonard Adolphs and Thomas Hofmann. So this is a paper about engineering an agent for a particular family of tasks. This is different from reinforcement learning agents that, for example, are just good at ...
[{"start": 0.0, "end": 6.32, "text": " Hi there! Today we're looking at LeDeepChef, deep reinforcement learning agent for families"}, {"start": 6.32, "end": 14.84, "text": " of text-based games by Leonard Adolphs and Thomas Hofmann. So this is a paper about engineering"}, {"start": 14.84, "end": 20.92, "text": " an agen...
Yannic Kilcher
https://www.youtube.com/watch?v=BK3rv0MQMwY
[News] The Siraj Raval Controversy
Popular ML YouTuber Siraj Raval is in the middle of not just one, but two controversies: First, a lot of students of his 200$ online-course have accused him of breaking major promises he made when advertising the course and denying them refunds. Second, his paper on "The Neural Qubit" appears to be plagiarized almost v...
There is a massive controversy going on right now and in the middle is Siraj Raval, a prominent YouTuber. So today I'll just be actually shortly reporting on this, not giving too much opinion, just kind of stating what's up in a very high level overview. Because if you haven't heard of this, I think it's important tha...
[{"start": 0.0, "end": 6.84, "text": " There is a massive controversy going on right now and in the middle is Siraj Raval,"}, {"start": 6.84, "end": 9.36, "text": " a prominent YouTuber."}, {"start": 9.36, "end": 14.96, "text": " So today I'll just be actually shortly reporting on this, not giving too much opinion,"}, ...
Yannic Kilcher
https://www.youtube.com/watch?v=rvr143crpuU
Accelerating Deep Learning by Focusing on the Biggest Losers
What if you could reduce the time your network trains by only training on the hard examples? This paper proposes to select samples with high loss and only train on those in order to speed up training. Abstract: This paper introduces Selective-Backprop, a technique that accelerates the training of deep neural networks ...
Hi there, today we're looking at accelerating deep learning by focusing on the biggest losers by Angela Jiang et al. This paper is pretty simple, pretty short in idea and is pretty much an engineering paper. So we'll go over this idea and give it a good look and discuss advantages, disadvantages, and so on. So what's...
[{"start": 0.0, "end": 5.5600000000000005, "text": " Hi there, today we're looking at accelerating deep learning by focusing on the biggest"}, {"start": 5.5600000000000005, "end": 15.4, "text": " losers by Angela Jiang et al. This paper is pretty simple, pretty short in idea and is"}, {"start": 15.4, "end": 20.96, "text...
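The Selective-Backprop idea this record describes, backpropagating only on the samples with the highest loss, can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation; `keep_fraction` is an illustrative knob I'm introducing here:

```python
def select_high_loss(losses, keep_fraction=0.5):
    """Return the indices of the highest-loss samples in a batch.

    Sketch of the Selective-Backprop selection step: only the
    selected samples would get a backward pass this training step;
    the low-loss rest are skipped. `keep_fraction` is illustrative,
    not a value taken from the paper.
    """
    k = max(1, int(len(losses) * keep_fraction))
    # Rank indices by loss, descending, and keep the top k.
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return sorted(ranked[:k])

batch_losses = [0.1, 2.3, 0.05, 1.7, 0.4, 0.9]
print(select_high_loss(batch_losses))  # -> [1, 3, 5]: the biggest losers
```

In a real training loop the forward pass would still run on the full batch to obtain the losses; only the backward pass is restricted to the selected indices, which is where the speedup comes from.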
Yannic Kilcher
https://www.youtube.com/watch?v=MIEA8azwu1k
DEEP LEARNING MEME REVIEW - Episode 1
The wait is finally over! Antonio and I discuss the best, funniest and dankest memes of the machine learning world. Join us for a laugh!
What are you done means before? No. Don't you have this show on YouTube when you review memes? No. You have this? I think that's an entirely new concept. We're just gonna steal this concept from PewDiePie. Okay. But first actual meme review, deep learning themes. Welcome. I'm joined by Antonio, who is a bit of a meme ...
[{"start": 0.0, "end": 2.0, "text": " What are you done means before?"}, {"start": 2.0, "end": 3.0, "text": " No."}, {"start": 3.0, "end": 5.0, "text": " Don't you have this show on YouTube when you review memes?"}, {"start": 5.0, "end": 6.0, "text": " No."}, {"start": 6.0, "end": 7.0, "text": " You have this?"}, {"sta...
Yannic Kilcher
https://www.youtube.com/watch?v=nXGHJTtFYRU
Dynamic Routing Between Capsules
Geoff Hinton's next big idea! Capsule Networks are an alternative way of implementing neural networks by dividing each layer into capsules. Each capsule is responsible for detecting the presence and properties of one particular entity in the input sample. This information is then allocated dynamically to higher-level c...
Hi there. Today we're looking at dynamic routing between capsules by Sara Sabour, Nicholas Frosst and Geoffrey Hinton of Google Brain. This paper is a bit older but it's made quite the impact at the time and so we'll go through it. I find it's a pretty hard paper to read and kind of understand because a lot of things are v...
[{"start": 0.0, "end": 5.5200000000000005, "text": " Hi there. Today we're looking at dynamic routing between capsules by Sara"}, {"start": 5.5200000000000005, "end": 11.64, "text": " Sabour, Nicholas Frosst and Geoffrey Hinton of Google Brain. This paper is a bit"}, {"start": 11.64, "end": 18.12, "text": " older but it's m...
Yannic Kilcher
https://www.youtube.com/watch?v=-MCYbmU9kfg
RoBERTa: A Robustly Optimized BERT Pretraining Approach
This paper shows that the original BERT model, if trained correctly, can outperform all of the improvements that have been proposed lately, raising questions about the necessity and reasoning behind these. Abstract: Language model pretraining has led to significant performance gains but careful comparison between diff...
Hello everyone, today we're looking at RoBERTa, a robustly optimized BERT pre-training approach by Yinhan Liu et al., mainly of Facebook research. So this paper is a pretty short, pretty simple paper and the main premise is we've seen a number of improvements over the initial BERT paper where different pre-trai...
[{"start": 0.0, "end": 6.32, "text": " Hello everyone, today we're looking at RoBERTa, a robustly optimized BERT pre-training"}, {"start": 6.32, "end": 12.08, "text": " approach by Yinhan Liu et al., mainly of Facebook research."}, {"start": 12.08, "end": 18.88, "text": " So this paper is a pretty short, pretty ...
Yannic Kilcher
https://www.youtube.com/watch?v=AR3W-nfcDe4
Auditing Radicalization Pathways on YouTube
This paper claims that there is a radicalization pipeline on YouTube pushing people towards the Alt-Right, backing up their claims with empirical analysis of channel recommendations and commenting behavior. I suggest that there is a much simpler explanation of this data: A basic diffusion process. Abstract: Non-profit...
Hi there! Today we're going to look at auditing radicalization pathways on YouTube by Manoel Horta Ribeiro et al. So this paper is a bit different from the ones we're usually looking at, but since I'm a YouTuber and this is in the kind of a data science realm, I thought it fits neatly. So, yeah, we'll have a look and t...
[{"start": 0.0, "end": 7.32, "text": " Hi there! Today we're going to look at auditing radicalization pathways on YouTube by Manoel"}, {"start": 7.32, "end": 14.200000000000001, "text": " Horta Ribeiro et al. So this paper is a bit different from the ones we're usually looking"}, {"start": 14.200000000000001, "end": 22....
Yannic Kilcher
https://www.youtube.com/watch?v=wZWn7Hm8osA
Gauge Equivariant Convolutional Networks and the Icosahedral CNN
Ever wanted to do a convolution on a Klein Bottle? This paper defines CNNs over manifolds such that they are independent of which coordinate frame you choose. Amazingly, this then results in an efficient practical method to achieve state-of-the-art in several tasks! https://arxiv.org/abs/1902.04615 Abstract: The prin...
What you're looking at here are manifolds. Specifically, you're looking at 2D manifolds embedded in a 3D space. So, naturally, these are some kind of bodies that have a surface. And one of the things you might want to do with a manifold like this is to define a convolutional neural network to work on this surface. So,...
[{"start": 0.0, "end": 4.0, "text": " What you're looking at here are manifolds."}, {"start": 4.0, "end": 9.0, "text": " Specifically, you're looking at 2D manifolds embedded in a 3D space."}, {"start": 9.0, "end": 15.0, "text": " So, naturally, these are some kind of bodies that have a surface."}, {"start": 15.0, "end...
Yannic Kilcher
https://www.youtube.com/watch?v=H6Qiegq_36c
Processing Megapixel Images with Deep Attention-Sampling Models
Current CNNs have to downsample large images before processing them, which can lose a lot of detail information. This paper proposes attention sampling, which learns to selectively process parts of any large image in full resolution, while discarding uninteresting bits. This leads to enormous gains in speed and memory ...
Hi there. Today we're looking at processing megapixel images with deep attention sampling models by Angelos Katharopoulos and François Fleuret. So this is another paper whose talk I saw at ICML, and it's a pretty cool idea, it's pretty simple and apparently it works very well. So consider the following image her...
[{"start": 0.0, "end": 5.0200000000000005, "text": " Hi there. Today we're looking at processing megapixel images with deep"}, {"start": 5.0200000000000005, "end": 13.02, "text": " attention sampling models by Angelos Katharopoulos and Fran\u00e7ois Fleuret. So this is"}, {"start": 13.02, "end": 21.36, "text": " another pap...
Yannic Kilcher
https://www.youtube.com/watch?v=1L83tM8nwHU
Manifold Mixup: Better Representations by Interpolating Hidden States
Standard neural networks suffer from problems such as un-smooth classification boundaries and overconfidence. Manifold Mixup is an easy regularization technique that rectifies these problems. It works by interpolating hidden representations of different data points and then train them to predict equally interpolated la...
Hi there. Today we're looking at manifold mixup, better representations by interpolating hidden states, by Vikas Verma et al. A number of big names on this paper as you can see and I also saw this at ICML so I was intrigued by it. They propose manifold mixup which is sort of a regularizer of neural networks, specific...
[{"start": 0.0, "end": 5.64, "text": " Hi there. Today we're looking at manifold mixup, better representations by"}, {"start": 5.64, "end": 11.88, "text": " interpolating hidden states, by Vikas Verma et al. A number of big names on this"}, {"start": 11.88, "end": 19.04, "text": " paper as you can see and I also saw t...
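The interpolation of hidden states and labels that this record describes can be sketched directly. A hedged illustration only: in the paper, `lam` is drawn from a Beta(alpha, alpha) distribution and the mixing happens at a randomly chosen hidden layer during training; here one interpolation is shown with a fixed `lam`:

```python
def manifold_mixup(h_a, h_b, y_a, y_b, lam):
    """Mix two hidden representations and their (one-hot) labels.

    Sketch of the Manifold Mixup step: both the hidden states and the
    label targets are interpolated with the same coefficient lam, and
    the network is then trained to predict the mixed label from the
    mixed hidden state.
    """
    h_mix = [lam * a + (1 - lam) * b for a, b in zip(h_a, h_b)]
    y_mix = [lam * a + (1 - lam) * b for a, b in zip(y_a, y_b)]
    return h_mix, y_mix

# Two hidden states with their one-hot labels, mixed with lam = 0.7:
h_mix, y_mix = manifold_mixup([1.0, 0.0], [0.0, 1.0],
                              [1.0, 0.0], [0.0, 1.0], lam=0.7)
print(h_mix, y_mix)  # both are 0.7 / 0.3 blends of the two inputs
```

Using the same `lam` for states and labels is what makes the classifier's output vary linearly between classes along the interpolation, which is the smoothness property the paper is after.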
Yannic Kilcher
https://www.youtube.com/watch?v=Qk4lJdp7ZAs
Learning World Graphs to Accelerate Hierarchical Reinforcement Learning
The goal of hierarchical reinforcement learning is to divide a task into different levels of coarseness with the top-level agent planning only over a high-level view of the world and each subsequent layer having a more detailed view. This paper proposes to learn a set of important states as well as their connections to...
Hi there. Today we're looking at learning world graphs to accelerate hierarchical reinforcement learning by Wenling Shang et al. from Salesforce Research. This work is based in the world of reinforcement learning and especially hierarchical reinforcement learning. So in hierarchical reinforcement learning, the idea i...
[{"start": 0.0, "end": 6.2, "text": " Hi there. Today we're looking at learning world graphs to accelerate hierarchical reinforcement"}, {"start": 6.2, "end": 15.56, "text": " learning by Wenling Shang et al. from Salesforce Research. This work is based in the world of"}, {"start": 15.56, "end": 21.44, "text": " reinf...
Yannic Kilcher
https://www.youtube.com/watch?v=ZAW9EyNo2fw
Reconciling modern machine learning and the bias-variance trade-off
It turns out that the classic view of generalization and overfitting is incomplete! If you add parameters beyond the number of points in your dataset, generalization performance might increase again due to the increased smoothness of overparameterized functions. Abstract: The question of generalization in machine lear...
Hi there. Today we're looking at reconciling modern machine learning and the bias-variance trade-off by Mikhail Belkin et al. So this paper struck me as interesting at ICML when I heard a talk by Mikhail Belkin, and the paper is very interesting in terms of what it proposes about modern machine learni...
[{"start": 0.0, "end": 5.2, "text": " Hi there. Today we're looking at reconciling modern machine learning and the"}, {"start": 5.2, "end": 11.72, "text": " bias-variance trade-off by Mikhail Belkin et al. So this paper struck me as"}, {"start": 11.72, "end": 19.5, "text": " interesting at ICML when I heard a talk by ...
Yannic Kilcher
https://www.youtube.com/watch?v=l8JeokY5NsU
Conversation about Population-Based Methods (Re-upload)
Being interviewed by Connor Shorten of Henry AI Labs (https://www.youtube.com/channel/UCHB9VepY6kYvZjj0Bgxnpbw) on the topic of population-based methods and open-ended learning. Tutorial: https://www.facebook.com/icml.imls/videos/481758745967365/ Book: https://www.amazon.com/dp/B00X57B4JG/
Hi there. I've recently been interviewed by the YouTube channel Henry AI Labs by Connor Shorten. And what follows is the resulting conversation we had about population-based methods and open-ended learning, things like that, basically topics of the ICML tutorial that we both saw. It's important to note that none of us...
[{"start": 0.0, "end": 8.0, "text": " Hi there. I've recently been interviewed by the YouTube channel Henry AI Labs by Connor Shorten."}, {"start": 8.0, "end": 17.0, "text": " And what follows is the resulting conversation we had about population-based methods and open-ended learning,"}, {"start": 17.0, "end": 23.0, "t...
Yannic Kilcher
https://www.youtube.com/watch?v=H5vpBCLo74U
XLNet: Generalized Autoregressive Pretraining for Language Understanding
Abstract: With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positio...
Hi there. Today we're looking at XLNet, generalized auto-regressive pre-training for language understanding by Zhilin Yang and other people from Carnegie Mellon University as well as Google Brain. So this is kind of the elephant in the room currently as XLNet is the first model to beat BERT, which was the previous s...
[{"start": 0.0, "end": 6.84, "text": " Hi there. Today we're looking at XLNet, generalized auto-regressive pre-training for language"}, {"start": 6.84, "end": 13.24, "text": " understanding by Zhilin Yang and other people from Carnegie Mellon University as well as"}, {"start": 13.24, "end": 20.240000000000002, "text"...
Yannic Kilcher
https://www.youtube.com/watch?v=hkw-WDBipgo
Talking to companies at ICML19
A short rant on sponsor companies at ICML and how to talk to them.
Alright, I quickly want to talk about the interaction with corporation company reps at these conferences. Because to me it's still a bit of a secret or a bit of a not really clear of what to do. There's very different kinds of companies at these conferences. So some companies I feel are there to basically show off the...
[{"start": 0.0, "end": 13.0, "text": " Alright, I quickly want to talk about the interaction with corporation company reps at these conferences."}, {"start": 13.0, "end": 20.0, "text": " Because to me it's still a bit of a secret or a bit of a not really clear of what to do."}, {"start": 20.0, "end": 25.0, "text": " Th...
Yannic Kilcher
https://www.youtube.com/watch?v=TFiZYA_JfJs
Population-Based Search and Open-Ended Algorithms
Comments on the ICML2019 tutorial on population-based search and open-ended learning. Talk: https://www.facebook.com/icml.imls/videos/481758745967365/ Slides: http://www.cs.uwyo.edu/~jeffclune/share/2019_06_10_ICML_Tutorial.pdf Book: https://www.amazon.com/dp/B00X57B4JG/ Event: https://icml.cc/Conferences/2019/Schedul...
This is huge. This is just one hall and most people I guess are still waiting for registration. Yeah, but definitely the size of these things is ginormous. The tutorials have just started. I'll be going to find a place. Hi, so I just wanted to give a little update on a tutorial that I liked which was the population-ba...
[{"start": 0.0, "end": 7.32, "text": " This is huge. This is just one hall and most people I guess are still waiting for"}, {"start": 7.32, "end": 14.94, "text": " registration. Yeah, but definitely the size of these things is ginormous. The"}, {"start": 14.94, "end": 20.02, "text": " tutorials have just started. I'll ...
Yannic Kilcher
https://www.youtube.com/watch?v=EA96xh9qog0
I'm at ICML19 :)
Short intro to the International Conference on Machine Learning in Long Beach, CA. I'll be making some updates from the conference.
Hi there, it's day one of ICML and we'll be attending the conference here and just quickly pre-video to let everyone know I'll be trying to report from here kind of what papers are cool what I liked, what are kind of the trends and so hopefully get this conference out to a broader community so everyone's conglomeratin...
[{"start": 0.0, "end": 10.96, "text": " Hi there, it's day one of ICML and we'll be attending the conference here and"}, {"start": 10.96, "end": 18.16, "text": " just quickly pre-video to let everyone know I'll be trying to report from here"}, {"start": 18.16, "end": 25.16, "text": " kind of what papers are cool what I...
Yannic Kilcher
https://www.youtube.com/watch?v=hMO6rbMAPew
Adversarial Examples Are Not Bugs, They Are Features
Abstract: Adversarial examples have attracted significant attention in machine learning, but the reasons for their existence and pervasiveness remain unclear. We demonstrate that adversarial examples can be directly attributed to the presence of non-robust features: features derived from patterns in the data distributi...
Hi there. Today we're looking at adversarial examples aren't bugs. They are features by Andrew Ilyas et al. So this paper is pretty interesting and has a catchy title and we try to kind of dissect what it says. So first of all, in the abstract, they say adversarial examples have attracted significant attention, but the rea...
[{"start": 0.0, "end": 7.08, "text": " Hi there. Today we're looking at adversarial examples aren't bugs. They are features by Andrew"}, {"start": 7.08, "end": 17.240000000000002, "text": " Ilyas et al. So this paper is pretty interesting and has a catchy title and we try to kind of dissect"}, {"start": 17.240000000000002, ...
Yannic Kilcher
https://www.youtube.com/watch?v=_N_nFzMtWkA
Reinforcement Learning, Fast and Slow
Abstract: Deep reinforcement learning (RL) methods have driven impressive advances in artificial intelligence in recent years, exceeding human performance in domains ranging from Atari to Go to no-limit poker. This progress has drawn the attention of cognitive scientists interested in understanding human learning. Howe...
Hi there. Today we're looking at reinforcement learning fast and slow by Matthew Botvinick, Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell and Demis Hassabis. These people are from Google DeepMind and this is a review of developments in reinforcement learning, especially as it pertains to kind of ho...
[{"start": 0.0, "end": 7.84, "text": " Hi there. Today we're looking at reinforcement learning fast and slow by Matthew Botvinick,"}, {"start": 7.84, "end": 17.48, "text": " Sam Ritter, Jane X. Wang, Zeb Kurth-Nelson, Charles Blundell and Demis Hassabis. These people"}, {"start": 17.48, "end": 26.080000000000002, "text": ...
Yannic Kilcher
https://www.youtube.com/watch?v=F5mxzvgl_oU
S.H.E. - Search. Human. Equalizer.
Short opinion on Pantene's tool to de-bias Google search results. https://www.apnews.com/Business%20Wire/c53a0e8f5fe04bf68e8311f214c806cf https://shetransforms.us/
Hi everyone, just a quick, more of a news update in the AI world, which is the following: Pantene launches SHE, the Search Human Equalizer, to shine a light on bias in search. So Pantene, the kind of cosmetics corporation, launches this thing, which is supposed to correct your search. And it's introduced here in this You...
[{"start": 0.0, "end": 8.48, "text": " Hi everyone, just a quick, more of a news update in the AI world, which is the following"}, {"start": 8.48, "end": 16.8, "text": " Pantene launches SHE, the Search Human Equalizer, to shine a light on bias in search."}, {"start": 16.8, "end": 24.32, "text": " So Pantene, the kind ...
Yannic Kilcher
https://www.youtube.com/watch?v=3Tqp_B2G6u0
Blockwise Parallel Decoding for Deep Autoregressive Models
https://arxiv.org/abs/1811.03115 Abstract: Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amoun...
Hi there, today we'll look at blockwise parallel decoding for deep autoregressive models by Mitchell Stern, Noam Shazeer and Jakob Uszkoreit of UC Berkeley and Google Brain. So this is a bit more of an engineering paper than usual, which I find cool. It's basically an engineering trick to get these autoregressi...
[{"start": 0.0, "end": 5.0600000000000005, "text": " Hi there, today we'll look at blockwise parallel decoding for deep"}, {"start": 5.0600000000000005, "end": 12.4, "text": " autoregressive models by Mitchell Stern, Noam Shazeer and Jakob Uszkoreit of UC"}, {"start": 12.4, "end": 17.44, "text": " Berkeley and Google Bra...
Yannic Kilcher
https://www.youtube.com/watch?v=pPBqM4CKjUU
Discriminating Systems - Gender, Race, and Power in AI
TL;DR: - There exists both an unequal representation of people in the AI workforce as well as examples of societal bias in AI systems. - The authors claim that the former causally leads to the latter and vice versa. - To me, the report does not manage to make a strong enough argument for that claim. - I find the statem...
Hi there. Today we're looking at discriminating systems, gender, race and power in AI by Sarah Myers West, Meredith Whittaker and Kate Crawford of the AI Now Institute, which is a part of New York University or associated with it. This is not as much a paper as it is a report kind of summarizing current literature and al...
[{"start": 0.0, "end": 7.68, "text": " Hi there. Today we're looking at discriminating systems, gender, race and power in AI by Sarah"}, {"start": 7.68, "end": 14.52, "text": " Myers West, Meredith Whittaker and Kate Crawford of the AI Now Institute, which is a part"}, {"start": 14.52, "end": 22.56, "text": " of New York ...
Yannic Kilcher
https://www.youtube.com/watch?v=sbKaUc0tPaY
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
https://arxiv.org/abs/1902.04818 Abstract: We investigate conditions under which test statistics exist that can reliably detect examples, which have been adversarially manipulated in a white-box attack. These statistics can be easily computed and calibrated by randomly corrupting inputs. They exploit certain anomalies...
Hello and welcome. Today we're looking at the odds are odd, a statistical test for detecting adversarial examples. So shameless self-promotion here, since this is me. So this is on arXiv. And basically what we do is we're detecting adversarial examples. For those who don't know what an adversarial example is, is basi...
[{"start": 0.0, "end": 8.0, "text": " Hello and welcome. Today we're looking at the odds are odd, a statistical test for detecting adversarial examples."}, {"start": 8.0, "end": 15.0, "text": " So shameless self-promotion here since this is me."}, {"start": 15.0, "end": 22.0, "text": " So this is on arXiv. And basica...
Yannic Kilcher
https://www.youtube.com/watch?v=jltgNGt8Lpg
Neural Ordinary Differential Equations
https://arxiv.org/abs/1806.07366 Abstract: We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver...
Hello and welcome. Today we're going to look at neural ordinary differential equations by Ricky Chen, Yulia Rubanova, Jesse Bettencourt and David Duvenaud. So this has been quite an interesting kind of paper to see because it's a bit special. We're going to go over parts of it, not the full paper, just kind of the import...
[{"start": 0.0, "end": 6.0, "text": " Hello and welcome. Today we're going to look at neural ordinary differential equations"}, {"start": 6.0, "end": 14.36, "text": " by Ricky Chen, Yulia Rubanova, Jesse Bettencourt and David Duvenaud. So this has been quite an"}, {"start": 14.36, "end": 19.84, "text": " interesting kind ...
Yannic Kilcher
https://www.youtube.com/watch?v=u1_qMdb0kYU
GPT-2: Language Models are Unsupervised Multitask Learners
A look at OpenAI's new GPT-2 model and the surrounding controversy. https://blog.openai.com/better-language-models/ Abstract: Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on taskspecific ...
Hi, today we're looking at language models are unsupervised multitask learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from OpenAI. This paper has generated a bit of hype in the last few days, so I wanted to go over it, basically take a look at it and take a look at the surroun...
[{"start": 0.0, "end": 6.22, "text": " Hi, today we're looking at language models are unsupervised multitask learners by"}, {"start": 6.22, "end": 12.56, "text": " Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever"}, {"start": 12.56, "end": 19.62, "text": " from OpenAI. This paper ha...
Yannic Kilcher
https://www.youtube.com/watch?v=OioFONrSETc
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
https://arxiv.org/abs/1502.03167 Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9% top-5 validation error (and 4.8% test error), exceeding the accuracy of human raters. Authors: Sergey Ioffe, Christian Szegedy
Hi, today we're looking at batch normalization, accelerating deep network training by reducing internal covariate shift by Sergey Ioffe and Christian Szegedy, yeah, not the best pronouncer, but close enough. Alright, so this is a bit of an older paper and I think it's still good to look at it...
[{"start": 0.0, "end": 5.38, "text": " Hi, today we're looking at batch normalization, accelerating deep network"}, {"start": 5.38, "end": 13.68, "text": " training by reducing internal covariate shift by Sergey Ioffe and Christian Szegedy"}, {"start": 13.68, "end": 22.92, "text": " yeah, not the...
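The normalization the abstract describes can be sketched in a few lines. This is a minimal NumPy version of the training-time transform only; the function and variable names are mine, and it omits the running statistics a real implementation keeps for inference:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize activations over the mini-batch, then scale and shift.

    x: (batch, features); gamma, beta: learned per-feature parameters.
    """
    mu = x.mean(axis=0)                    # per-feature mini-batch mean
    var = x.var(axis=0)                    # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # whitened activations
    return gamma * x_hat + beta            # learned scale and shift

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 8))  # shifted, scaled inputs
y = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
```

Because mean and variance are computed per mini-batch, each output feature is approximately zero-mean and unit-variance before the learned scale `gamma` and shift `beta` are applied, which is what lets later layers see a stable input distribution.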
Yannic Kilcher
https://www.youtube.com/watch?v=-9evrZnBorM
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
https://arxiv.org/abs/1810.04805 Abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%. Authors: Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Hello everyone. Today we're looking at BERT, pre-training of deep bidirectional transformers for language understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. These are people from Google AI Language, so you're about to see the most hyped model currently. So basically BERT is a model that t...
[{"start": 0.0, "end": 5.12, "text": " Hello everyone. Today we're looking at BERT, pre-training of deep bidirectional"}, {"start": 5.12, "end": 10.96, "text": " transformers for language understanding by Jacob Devlin, Ming-Wei Chang,"}, {"start": 10.96, "end": 17.88, "text": " Kenton Lee, Kristina Toutanova. These ...
Yannic Kilcher
https://www.youtube.com/watch?v=nPB0ppcnzZA
What’s in a name? The need to nip NIPS
http://tensorlab.cms.caltech.edu/users/anima/pubs/NIPS_Name_Debate.pdf Abstract: There has been substantial recent controversy surrounding the use of the acronym "NIPS" for the Neural Information Processing Systems conference, stemming from the fact that the word "nips" is common slang for nipples, and has historicall...
Hello and welcome. Today we're going to look at what's in a name, the need to nip NIPS, by Daniela Witten, Elana Fertig, Animashree Anandkumar and Jeff Dean. This is a bit of a special paper as it's not an academic topic. The paper in fact is about the change of name, or rather change of acronym, for the conference neural in...
[{"start": 0.0, "end": 6.12, "text": " Hello and welcome. Today we're going to look at what's in a name, the need to nip NIPS by Daniela"}, {"start": 6.12, "end": 13.08, "text": " Witten, Elana Fertig, Animashree Anandkumar and Jeff Dean. This is a bit of a special paper"}, {"start": 13.08, "end": 20.16, "text": " as it's...
Yannic Kilcher
https://www.youtube.com/watch?v=_PyusGsbBPY
Stochastic RNNs without Teacher-Forcing
We present a stochastic non-autoregressive RNN that does not require teacher-forcing for training. The content is based on our 2018 NeurIPS paper: Deep State Space Models for Unconditional Word Generation https://arxiv.org/abs/1806.04550
Hi everybody, my name is Florian and Yannic was nice enough to host me here as a guest to talk about stochastic RNNs without teacher forcing. This is based on recent work, deep state space models for unconditional word generation, which we presented at this year's NeurIPS. If you feel like any more details, please ch...
[{"start": 0.0, "end": 6.12, "text": " Hi everybody, my name is Florian and Yannic was nice enough to host me here as a guest to talk about"}, {"start": 6.92, "end": 8.92, "text": " stochastic RNNs without teacher forcing"}, {"start": 9.48, "end": 15.08, "text": " This is based on recent work deep state space models fo...
Yannic Kilcher
https://www.youtube.com/watch?v=WYrvh50yu6s
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
https://arxiv.org/abs/1811.12359 Abstract: In recent years, the interest in unsupervised learning of disentangled representations has significantly increased. The key assumption is that real-world data is generated by a few explanatory factors of variation and that these factors can be recovered by unsupervised learni...
All right, hello everyone. Today we're going to look at this paper, challenging common assumptions in the unsupervised learning of disentangled representations by Francesco Locatello and a bunch of other people at Google AI, ETH Zurich and MPI. Full disclaimer, I know these people, and I've talked to them about th...
[{"start": 0.0, "end": 2.0, "text": " All right, hello everyone."}, {"start": 2.0, "end": 5.2, "text": " Today we're going to look at this paper,"}, {"start": 5.2, "end": 8.34, "text": " challenging common assumptions in the unsupervised learning"}, {"start": 8.34, "end": 12.4, "text": " of disentangled representati...
Yannic Kilcher
https://www.youtube.com/watch?v=dPsXxLyqpfs
World Models
Authors: David Ha, Jürgen Schmidhuber Abstract: We explore building generative neural network models of popular reinforcement learning environments. Our world model can be trained quickly in an unsupervised manner to learn a compressed spatial and temporal representation of the environment. By using features extracted...
Hi, today we're looking at world models by David Ha and Jürgen Schmidhuber. This is a paper that's concerned with reinforcement learning and especially with the problem of, say, you have an environment that you interact with and you need to learn to act in it, but it could be, for example, very expensive to alway...
[{"start": 0.0, "end": 7.12, "text": " Hi, today we're looking at world models by David Ha and J\u00fcrgen Schmidhuber."}, {"start": 7.12, "end": 12.76, "text": " This is a paper that's concerned with reinforcement learning and especially with the problem"}, {"start": 12.76, "end": 20.080000000000002, "text": " of, say,...
Yannic Kilcher
https://www.youtube.com/watch?v=_Z9ZP1eiKsI
Curiosity-driven Exploration by Self-supervised Prediction
https://arxiv.org/abs/1705.05363 Authors: Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell Abstract: In many real-world scenarios, rewards extrinsic to the agent are extremely sparse, or absent altogether. In such cases, curiosity can serve as an intrinsic reward signal to enable the agent to explore it...
Hi there. Today we're going to look at this paper, Curiosity-Driven Exploration by Self-Supervised Prediction. It's a relatively short idea, so it shouldn't take too long. So the fundamental idea of the paper is to tackle the reward sparseness problem in reinforcement learning. For example, if you have a Super Mario gam...
[{"start": 0.0, "end": 2.0, "text": " Hi there."}, {"start": 2.0, "end": 9.0, "text": " Today we're going to look at this paper, Curiosity-Driven Exploration, by self-supervised prediction."}, {"start": 9.0, "end": 13.0, "text": " It's a relatively short idea, so it shouldn't take too long."}, {"start": 13.0, "end": 21...
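A toy sketch of the intrinsic-reward idea from the transcript: the agent receives a curiosity bonus equal to its forward model's prediction error. This is not the paper's actual ICM architecture; the fixed linear "forward model" here is a made-up stand-in for a learned network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "forward model": predicts the next state from (state, action).
# In the paper this is a learned network; a fixed linear map suffices here.
W = rng.normal(size=(4, 4))

def predict_next(state, action):
    return W @ state + 0.1 * action

def intrinsic_reward(state, action, next_state):
    """Curiosity bonus: the forward model's squared prediction error.

    Transitions the model predicts poorly (novel ones) yield a large bonus,
    so the agent is pushed to explore even with zero extrinsic reward.
    """
    pred = predict_next(state, action)
    return float(np.sum((pred - next_state) ** 2))

s, a = rng.normal(size=4), rng.normal(size=4)
r_known = intrinsic_reward(s, a, predict_next(s, a))        # perfectly predicted: 0.0
r_novel = intrinsic_reward(s, a, predict_next(s, a) + 1.0)  # surprising transition: 4.0
```

In a sparse-reward game like the Super Mario example, this bonus is what keeps the agent moving into unfamiliar states between the rare extrinsic rewards.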
Yannic Kilcher
https://www.youtube.com/watch?v=BBp0tHcirtQ
git for research basics: fundamentals, commits, branches, merging
Don't watch this if you already know how to solve a merge conflict :)
Hi there. Today we're taking a look at Git, especially Git as it is used maybe in research collaborations. So Git is a tool to collaborate, but when you do research, like when you work on a paper together with other people, you won't use a lot of the features that Git offers and that are usually described by Git. So in...
[{"start": 0.0, "end": 8.0, "text": " Hi there. Today we're taking a look at Git, especially Git as it is used maybe in research"}, {"start": 8.0, "end": 16.0, "text": " collaborations. So Git is like a tool to collaborate but when you research like when you work"}, {"start": 16.0, "end": 22.5, "text": " on a paper tog...
Yannic Kilcher
https://www.youtube.com/watch?v=iDulhoQ2pro
Attention Is All You Need
https://arxiv.org/abs/1706.03762 Abstract: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network archi...
Hi there. Today we're looking at Attention Is All You Need by Google. Just to declare: I don't work for Google, we're just looking at Google papers lately. But it's just an interesting paper and we're going to see what's the deal with it. So basically what the authors are saying is we should kind of get away fro...
[{"start": 0.0, "end": 7.58, "text": " Hi there. Today we're looking at attention is all you need by Google. Just to declare I"}, {"start": 7.58, "end": 13.02, "text": " don't work for Google just because we're looking at Google papers lately. But it's just an"}, {"start": 13.02, "end": 19.46, "text": " interesting pap...
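The attention mechanism the paper builds everything on can be sketched in a few lines of NumPy. This is a minimal single-head version of scaled dot-product attention, without the masking and multi-head projections of the full model:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_q, d_k) queries; K: (n_k, d_k) keys; V: (n_k, d_v) values.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                              # weighted average of values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(5, 8)), rng.normal(size=(5, 16))
out = attention(Q, K, V)  # one output vector per query, shape (3, 16)
```

Each output row is a convex combination of the value rows, weighted by how similar the query is to each key; that is the whole sequence-mixing operation that replaces recurrence and convolution in this architecture.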
Yannic Kilcher
https://www.youtube.com/watch?v=-YiMVR3HEuY
Reinforcement Learning with Unsupervised Auxiliary Tasks
https://arxiv.org/abs/1611.05397 Abstract: Deep reinforcement learning agents have achieved state-of-the-art results by directly maximising cumulative reward. However, environments contain a much wider variety of possible training signals. In this paper, we introduce an agent that also maximises many other pseudo-reward functions simultaneously by reinforcement learning. All of these tasks share a common representation that, like unsupervised learning, continues to develop in the absence of extrinsic rewards. We also introduce a novel mechanism for focusing this representation upon extrinsic rewards, so that learning can rapidly adapt to the most relevant aspects of the actual task. Our agent significantly outperforms the previous state-of-the-art on Atari, averaging 880% expert human performance, and a challenging suite of first-person, three-dimensional Labyrinth tasks leading to a mean speedup in learning of 10× and averaging 87% expert human performance on Labyrinth. Authors: Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, Koray Kavukcuoglu
Hi there. Today we're looking at reinforcement learning with unsupervised auxiliary tasks by Google. So in this paper, the authors consider a reinforcement learning task and I can show you what it looks like. It looks like this kind of a maze or this is an example that they give where you have to navigate the maze. It...
[{"start": 0.0, "end": 6.48, "text": " Hi there. Today we're looking at reinforcement learning with unsupervised auxiliary tasks"}, {"start": 6.48, "end": 13.56, "text": " by Google. So in this paper, the authors consider a reinforcement learning task and I can"}, {"start": 13.56, "end": 20.64, "text": " show you what ...
Yannic Kilcher
https://www.youtube.com/watch?v=56GW1IlWgMg
Learning model-based planning from scratch
https://arxiv.org/abs/1707.06170 Abstract: Conventional wisdom holds that model-based planning is a powerful approach to sequential decision-making. It is often very challenging in practice, however, because while a model can be used to evaluate a plan, it does not prescribe how to construct a plan. Here we introduce ...
Hi there. Today we're taking a look at learning model-based planning from scratch by DeepMind. So as a recap, what is model-based planning? Basically, a model, also called an environment model, is just kind of a black box thing, you can imagine, where you have a state of your current environment. You put it in there and ...
[{"start": 0.0, "end": 7.0, "text": " Hi there. Today we're taking a look at learning model-based planning from scratch by DeepMind."}, {"start": 7.0, "end": 16.0, "text": " So as recap, what is model-based planning? Basically, a model, also called an environment model,"}, {"start": 16.0, "end": 24.0, "text": " is just...
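The abstract's point that a model can evaluate a plan but does not by itself prescribe how to construct one can be illustrated with a toy sketch. The linear environment model and the random-shooting "planner" below are my stand-ins, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in environment model: the black box (state, action) -> (next_state, reward).
# In practice this would be learned; a fixed linear map suffices for illustration.
A = 0.5 * rng.normal(size=(4, 4))

def model(state, action):
    next_state = A @ state + action
    reward = -float(np.sum(next_state ** 2))  # toy objective: stay near the origin
    return next_state, reward

def evaluate_plan(state, actions):
    """Roll an action sequence through the model and sum the predicted rewards.

    This is the sense in which a model *evaluates* a plan; it says nothing
    about how to *construct* a good one, which is what the paper tackles.
    """
    total = 0.0
    for a in actions:
        state, r = model(state, a)
        total += r
    return total

s0 = rng.normal(size=4)
plans = [rng.normal(size=(3, 4)) for _ in range(10)]   # ten random 3-step plans
best = max(plans, key=lambda p: evaluate_plan(s0, p))  # naive random-shooting "planner"
```

Even this crude scheme shows the division of labor: the model only scores candidate action sequences, and some outer procedure (here, sampling and picking the max) has to propose them.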
Yannic Kilcher
https://www.youtube.com/watch?v=agXIYMCICcc
Imagination-Augmented Agents for Deep Reinforcement Learning
Commentary of https://arxiv.org/abs/1707.06203 Abstract We introduce Imagination-Augmented Agents (I2As), a novel architecture for deep reinforcement learning combining model-free and model-based aspects. In contrast to most existing model-based reinforcement learning and planning methods, which prescribe how a model ...
Hi, today we're taking a look at imagination augmented agents for deep reinforcement learning. This is a paper by DeepMind and has been in the news a bit recently so we're going to have a look at what it's all about. Basically they claim that agents who have a model of the world perform better usually than agents who ...
[{"start": 0.0, "end": 10.92, "text": " Hi, today we're taking a look at imagination augmented agents for deep reinforcement learning."}, {"start": 10.92, "end": 16.36, "text": " This is a paper by DeepMind and has been in the news a bit recently so we're going to"}, {"start": 16.36, "end": 20.92, "text": " have a look...