Columns: CHANNEL_NAME (string, 2 classes), URL (string, length 43), TITLE (string, 18–100 chars), DESCRIPTION (string, 621–5k chars), TRANSCRIPTION (string, 958–84.8k chars), SEGMENTS (string, 1.51k–143k chars).

| CHANNEL_NAME | URL | TITLE | DESCRIPTION | TRANSCRIPTION | SEGMENTS |
|---|---|---|---|---|---|
Yannic Kilchner | https://www.youtube.com/watch?v=xrYhDMqaa4U | I went to an AI Art Festival in Geneva (AiiA Festival Trip Report) | #aiia #ai #art
A trip report from the AiiA Festival in Geneva organized by the ImpactAI foundation.
OUTLINE:
0:00 - Intro
1:50 - Laura Tocmacov: The Festival
4:10 - Timothy O'Hear: The Tech
6:50 - Jonathan O'Hear: The Robot
11:50 - Cléa Chopard: The Artist
17:45 - Final Words
Website: https://aiiafestival.org/en/
L... | Hello and welcome to beautiful Geneva. It's such a shame this city speaks French. I'm here at the AIA festival, a crossover between AI and arts and creativity. And yeah, it's cool to attend in person events again. And it's especially cool that they are inside the borders of the country I happen to be in. Even if it's ... | [{"start": 0.0, "end": 14.98, "text": " Hello and welcome to beautiful Geneva."}, {"start": 14.98, "end": 17.28, "text": " It's such a shame this city speaks French."}, {"start": 17.28, "end": 24.900000000000002, "text": " I'm here at the AIA festival, a crossover between AI and arts and creativity."}, {"start": 24.900... |
Yannic Kilchner | https://www.youtube.com/watch?v=kP-dXK9JEhY | Symbolic Knowledge Distillation: from General Language Models to Commonsense Models (Explained) | #gpt3 #knowledge #symbolic
Symbolic knowledge models are usually trained on human-generated corpora that are cumbersome and expensive to create. Such corpora consist of structured triples of symbolic knowledge. This paper takes a different approach and attempts to generate such a corpus by prompting GPT-3. Results sho... | Hi there. Today we'll look at Symbolic Knowledge Distillation from General Language Models to Common Sense Models by Peter West and others of the University of Washington and the Allen Institute for Artificial Intelligence. On a high level, this paper takes a new approach to symbolic knowledge generation. So to automa... | [{"start": 0.0, "end": 4.96, "text": " Hi there. Today we'll look at Symbolic Knowledge Distillation from"}, {"start": 4.96, "end": 8.76, "text": " General Language Models to Common Sense Models by Peter West and"}, {"start": 8.76, "end": 11.24, "text": " others of the University of Washington and"}, {"start": 11.24, "... |
Yannic Kilchner | https://www.youtube.com/watch?v=vxdcX0JTEr0 | I took a Swiss train and it was awesome! Train Seat Review - SBB InterCity 1 - Geneva to St. Gallen | #sbb #seatreview #travel
A friendly parody of Travel Vloggers and Airplane Seat Reviews :)
No, SBB did not pay me for this (but they should ;) )
Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: http... | Watch this. Foldable armrest. The interior of the car is very nice. This is a comprehensive review of the SBB Intercity One train seat. Yes, I have seen so many flight seat review videos that I've decided to make one about a train. I'm actually alone right here, so otherwise I wouldn't dare make this video. Let's firs... | [{"start": 0.0, "end": 2.0, "text": " Watch this."}, {"start": 8.0, "end": 10.0, "text": " Foldable armrest."}, {"start": 10.0, "end": 14.0, "text": " The interior of the car is very nice."}, {"start": 26.0, "end": 30.0, "text": " This is a comprehensive review of the SBB Intercity One train seat."}, {"start": 30.0, "e... |
Yannic Kilchner | https://www.youtube.com/watch?v=K3cmxn5znyU | [ML News] Microsoft trains 530B model | ConvMixer model fits into single tweet | DeepMind profitable | #mlnews #turingnlg #convmixer
Your latest updates on what's happening in the Machine Learning world.
OUTLINE:
0:00 - Intro
0:16 - Weights & Biases raises at $1B valuation (sponsored)
2:30 - Microsoft trains 530 billion parameter model
5:15 - StyleGAN v3 released
6:45 - A few more examples may be worth billions of param... | Microsoft trains a model that's three times as large as GPT three, NVIDIA releases the third iteration of their style gun model and DeepMind goes hard on ml for biology. Welcome to ML news. You might have already heard this but weights and biases has just raised a series C round at the valuation of 1 billion US dollar... | [{"start": 0.0, "end": 5.5200000000000005, "text": " Microsoft trains a model that's three times as large as GPT three, NVIDIA releases the third"}, {"start": 5.5200000000000005, "end": 12.16, "text": " iteration of their style gun model and DeepMind goes hard on ml for biology. Welcome to ML news."}, {"start": 16.8, "... |
Yannic Kilchner | https://www.youtube.com/watch?v=NEkriziVYXo | [ML News] DeepMind does Nowcasting | The Guardian's shady reporting | AI finishes Beethoven's 10th | #deepmind #nowcasting #machinelearning
Your holy update on what's new in the Machine Learning world.
OUTLINE:
0:00 - Intro
0:30 - DeepMind tackles Nowcasting
3:30 - The Guardian's shady reporting on TruthfulQA
6:15 - Stochastic training not necessary for generalization
7:35 - Google AI's efficient partitioning of roa... | cut my hair, but not the beard. I have a giant cold sore here. It just looks weird without the beard. I was just gonna wait. Well, we'll Yeah, intro. DeepMind can predict rain better than anyone else. The Guardian is not so really truthful about truthful language models. And an AI finishes Beethoven's 10th Symphony. W... | [{"start": 0.0, "end": 5.88, "text": " cut my hair, but not the beard. I have a giant cold sore here. It just looks weird without"}, {"start": 5.88, "end": 13.72, "text": " the beard. I was just gonna wait. Well, we'll Yeah, intro. DeepMind can predict rain better"}, {"start": 13.72, "end": 21.48, "text": " than anyone... |
Yannic Kilchner | https://www.youtube.com/watch?v=dND-7llwrpw | Grokking: Generalization beyond Overfitting on small algorithmic datasets (Paper Explained) | #grokking #openai #deeplearning
Grokking is a phenomenon when a neural network suddenly learns a pattern in the dataset and jumps from random chance generalization to perfect generalization very suddenly. This paper demonstrates grokking on small algorithmic datasets where a network has to fill in binary tables. Inter... | Hi there, today we'll look at grokking, generalization beyond overfitting on small algorithmic datasets by Alethea Power, Yuri Burda, Harry Edwards, Igor Babushkin and Vedant Misra of OpenAI. On a high level, this paper presents a phenomenon that the researchers call grokking, where a neural network will generalize al... | [{"start": 0.0, "end": 7.5600000000000005, "text": " Hi there, today we'll look at grokking, generalization beyond overfitting on small algorithmic datasets"}, {"start": 7.5600000000000005, "end": 15.16, "text": " by Alethea Power, Yuri Burda, Harry Edwards, Igor Babushkin and Vedant Misra of OpenAI."}, {"start": 15.16... |
Yannic Kilchner | https://www.youtube.com/watch?v=wTzvKB6D_34 | How far can we scale up? Deep Learning's Diminishing Returns (Article Review) | #deeplearning #co2 #cost
Deep Learning has achieved impressive results in the last years, not least due to the massive increases in computational power and data that has gone into these models. Scaling up currently promises to be a reliable way to create more performant systems, but how far can we go? This article exp... | Hi there, I saw this article in IEEE spectrum called deep learnings diminishing returns, the cost of improvement is becoming unsustainable. This is by Nielse Thompson, Christian Greenwald, Qihong Li and Gabriel F. Manso. And I thought it was an interesting read, because it talks about the computational limits that we'... | [{"start": 0.0, "end": 6.72, "text": " Hi there, I saw this article in IEEE spectrum called deep learnings diminishing returns,"}, {"start": 6.72, "end": 13.68, "text": " the cost of improvement is becoming unsustainable. This is by Nielse Thompson, Christian Greenwald,"}, {"start": 13.68, "end": 20.400000000000002, "t... |
Yannic Kilchner | https://www.youtube.com/watch?v=tX1OolVxDzs | [ML News] Plagiarism Case w/ Plot Twist | CLIP for video surveillance | OpenAI summarizes books | #plagiarism #surveillance #schmidhuber
Your Mondaily updates of what's going on in the world of Machine Learning.
OUTLINE:
0:00 - Intro
0:20 - New plagiarism case has plot twist
7:25 - CLIP for video surveillance
9:40 - DARPA SubTerranean Challenge
11:00 - Schmidhuber criticizing Turing Lecture
15:00 - OpenAI summarizes... | A plagiarism story has an unexpected plot twist clip can be used for video surveillance and Schmidt Hooper goes on another rant on his blog about citing his works. Welcome to ML news. Hello friends of the Monday it is ML news and our first story is convoluted. No pun intended. So it starts out with a Reddit post by us... | [{"start": 0.0, "end": 6.8, "text": " A plagiarism story has an unexpected plot twist clip can be used for video surveillance"}, {"start": 6.8, "end": 11.48, "text": " and Schmidt Hooper goes on another rant on his blog about citing his works. Welcome to"}, {"start": 11.48, "end": 23.7, "text": " ML news. Hello friends... |
Yannic Kilchner | https://www.youtube.com/watch?v=19Q-vMd9bYg | Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment (Paper Explained) | #neurips #peerreview #nips
The peer-review system at Machine Learning conferences has come under much criticism over the last years. One major driver was the infamous 2014 NeurIPS experiment, where a subset of papers were given to two different sets of reviewers. This experiment showed that only about half of all acce... | Hi there! Today we'll look at inconsistency in conference peer review revisiting the 2014 NeurIPS experiment by Corinna Cortez and Neil D. Lawrence which were actually the chairs of the 2014 NeurIPS conference. So they are going to have access to some data that the rest of us sadly don't have access to but also it all... | [{"start": 0.0, "end": 4.9, "text": " Hi there! Today we'll look at inconsistency in conference peer review"}, {"start": 4.9, "end": 11.26, "text": " revisiting the 2014 NeurIPS experiment by Corinna Cortez and Neil D. Lawrence"}, {"start": 11.26, "end": 16.92, "text": " which were actually the chairs of the 2014 NeurI... |
Yannic Kilchner | https://www.youtube.com/watch?v=DkojaN7_f4E | [ML News] New ImageNet SOTA | Uber's H3 hexagonal coordinate system | New text-image-pair dataset | #truthfulqa #efficientnet #laion400M
Your regularly irregular updates on what's happening in the Machine Learning world.
OUTLINE:
0:00 - Intro
0:20 - TruthfulQA benchmark shines new light on GPT-3
2:00 - LAION-400M image-text-pair dataset
4:10 - GoogleAI's EfficientNetV2 and CoAtNet
6:15 - Uber's H3: A hexagonal coor... | A new benchmark makes GPT three look like a conspiracy theorist, a nonprofit builds a giant data set of text and image pairs and Jürgen Schmidt Huber claims that Turing is massively oversold. Welcome to ML news. Hello, hello, everyone. Welcome to ML news. Let's dive into our first story. Truthful QA is a new benchmark... | [{"start": 0.0, "end": 6.0, "text": " A new benchmark makes GPT three look like a conspiracy theorist, a nonprofit builds a giant"}, {"start": 6.0, "end": 12.64, "text": " data set of text and image pairs and J\u00fcrgen Schmidt Huber claims that Turing is massively oversold."}, {"start": 12.64, "end": 23.84, "text": "... |
Yannic Kilchner | https://www.youtube.com/watch?v=aX8phGhG8VQ | Does GPT-3 lie? - Misinformation and fear-mongering around the TruthfulQA dataset | #gpt-3 #truth #conspiracy
A new benchmark paper has created quite an uproar in the community. TruthfulQA is a dataset of 817 questions probing for imitative falsehoods where language models become less truthful, the larger they get. This surprising counter-intuitive finding validates many people's criticisms of large ... | GPT-3 is a liar. It fails. It learns common misconceptions. It is a conspiracy theorist. It is horrible. At least that's the impression you get from a new paper. The paper is called truthful QA measuring how models mimic human falsehoods by Stephanie Lynn, Jacob Hilton and Wayne Evans. Now here is the Twitter announce... | [{"start": 0.0, "end": 14.0, "text": " GPT-3 is a liar. It fails. It learns common misconceptions. It is a conspiracy theorist."}, {"start": 14.0, "end": 19.88, "text": " It is horrible. At least that's the impression you get from a new paper. The paper is called"}, {"start": 19.88, "end": 25.36, "text": " truthful QA ... |
Yannic Kilchner | https://www.youtube.com/watch?v=pBau7umFhjQ | Topographic VAEs learn Equivariant Capsules (Machine Learning Research Paper Explained) | #tvae #topographic #equivariant
Variational Autoencoders model the latent space as a set of independent Gaussian random variables, which the decoder maps to a data distribution. However, this independence is not always desired, for example when dealing with video sequences, we know that successive frames are heavily c... | Hello there, today we'll look at topographic VAE's learn equivariant capsules by T. Anderson Keller and Max Welling. On a high level, this paper proposes a new type of variational autoencoder, where the latent variables aren't independent, but are organized in a topographic way. Now what that means, we're going to loo... | [{"start": 0.0, "end": 7.44, "text": " Hello there, today we'll look at topographic VAE's learn equivariant capsules by T. Anderson"}, {"start": 7.44, "end": 14.08, "text": " Keller and Max Welling. On a high level, this paper proposes a new type of variational autoencoder,"}, {"start": 14.08, "end": 21.6, "text": " wh... |
Yannic Kilchner | https://www.youtube.com/watch?v=-sNJd7bANTI | [ML News] Roomba Avoids Poop | Textless NLP | TikTok Algorithm Secrets | New Schmidhuber Blog | #schmidhuber #tiktok #roomba
Your regularly irregular update on what's happening in the world of Machine Learning.
OUTLINE:
0:00 - Intro
0:15 - Sponsor: Weights & Biases
1:55 - ML YouTuber reaches 100k subscribers
2:40 - Facebook AI pushes Textless NLP
5:30 - Schmidhuber blog post: I invented everything
7:55 - TikTok... | Facebook releases textless NLP. Roomba learns to avoid poop. And Jürgen Schmidhuber invented every single thing there is. Welcome to ML News. It's a great Monday. All right, let me show you something. Come here. Watch this. See, this is 1234 boxes by Kevin. What do these boxes contain? Check it out. It says Kevin note... | [{"start": 0.0, "end": 7.2, "text": " Facebook releases textless NLP. Roomba learns to avoid poop. And J\u00fcrgen Schmidhuber invented"}, {"start": 7.2, "end": 11.84, "text": " every single thing there is. Welcome to ML News. It's a great Monday."}, {"start": 16.4, "end": 26.48, "text": " All right, let me show you so... |
Yannic Kilchner | https://www.youtube.com/watch?v=ifBI2jTaAEo | Celebrating 100k Subscribers! (w/ Channel Statistics) | #yannickilcher #machinelearning #100k
OUTLINE:
0:00 - 100k!
1:00 - Announcements & Thanks
3:55 - Channel Statistics
Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChu... | 100k. Nice. Big celebration. We have just reached 100,000 subscribers. Now truth be told as of recording of this videos, we actually don't have 100,000 subscribers yet there's like 156 missing. So all I have to do is not get cancelled in the next two days or so. And this is harder than it seems, but I've managed so fa... | [{"start": 0.0, "end": 12.48, "text": " 100k. Nice. Big celebration. We have just reached 100,000 subscribers. Now truth be"}, {"start": 12.48, "end": 18.02, "text": " told as of recording of this videos, we actually don't have 100,000 subscribers yet there's"}, {"start": 18.02, "end": 25.78, "text": " like 156 missing... |
Yannic Kilchner | https://www.youtube.com/watch?v=eROy3BrqEVk | [ML News] AI predicts race from X-Ray | Google kills HealthStreams | Boosting Search with MuZero | #mlnews #schmidhuber #muzero
Your regular updates on what's happening in the ML world!
OUTLINE:
0:00 - Intro
0:15 - Sponsor: Weights & Biases
1:45 - Google shuts down health streams
4:25 - AI predicts race from blurry X-Rays
7:35 - Facebook labels black men as primates
11:05 - Distill papers on Graph Neural Networks
... | Google decommissions DeepMind's health app. Jürgen Schmidhuber leads an AI initiative in Saudi Arabia, and I have a new paper. Welcome to ML News. Hey, hey, you. Yes, you. Do you run experiments? Machine learning experiments? Yes. How do you track them? What? That's not a good way to track them. Here's what you should... | [{"start": 0.0, "end": 5.5600000000000005, "text": " Google decommissions DeepMind's health app. J\u00fcrgen Schmidhuber leads an AI initiative"}, {"start": 5.5600000000000005, "end": 12.64, "text": " in Saudi Arabia, and I have a new paper. Welcome to ML News."}, {"start": 12.64, "end": 23.64, "text": " Hey, hey, you.... |
Yannic Kilchner | https://www.youtube.com/watch?v=0JlB9gufTw8 | ∞-former: Infinite Memory Transformer (aka Infty-Former / Infinity-Former, Research Paper Explained) | #inftyformer #infinityformer #transformer
Vanilla Transformers are excellent sequence models, but suffer from very harsh constraints on the length of the sequences they can process. Several attempts have been made to extend the Transformer's sequence length, but few have successfully gone beyond a constant factor imp... | Hello there, today we'll look at Infinityformer infinite memory transformer by Pedro Enrique Martins, Zita Marino and Andre F. T. Martins. On a high level, this paper proposes a transformer that can attend to unbounded memory in the past. It does so by building up what he calls a long term memory, which is a continuou... | [{"start": 0.8, "end": 7.84, "text": " Hello there, today we'll look at Infinityformer infinite memory transformer by Pedro Enrique"}, {"start": 7.84, "end": 16.080000000000002, "text": " Martins, Zita Marino and Andre F. T. Martins. On a high level, this paper proposes a transformer"}, {"start": 16.080000000000002, "e... |
Yannic Kilchner | https://www.youtube.com/watch?v=PFMtdR56Q4U | [ML News] Blind Chess AI Competition | Graph NNs for traffic | AI gift suggestions | #mlnews #chess #neurips
OUTLINE:
0:00 - Intro
0:30 - Reconnaissance Blind Chess NeurIPS 2021 Competition
3:40 - Colab Pro no longer top priority for GPUs
4:45 - DeepMind uses Graph NNs to do traffic prediction
6:00 - Helpful Libraries: Isaac Gym, Differentiable Human, LVIS, BEHAVIOR
10:25 - Cerebras Wafer Scale Engine... | We play some blind chess graph neural networks are used in Google Maps to predict traffic and AI makes for thoughtful gifts. Welcome to ML news. It's Monday. Hello and welcome friends of the Monday. Welcome to ML news. Now to be honest with you, not a lot of stuff happened this week. I guess that's what they call a sl... | [{"start": 0.16, "end": 4.88, "text": " We play some blind chess graph neural networks are used in Google Maps to predict traffic"}, {"start": 4.88, "end": 10.24, "text": " and AI makes for thoughtful gifts. Welcome to ML news. It's Monday."}, {"start": 14.88, "end": 21.12, "text": " Hello and welcome friends of the Mo... |
Yannic Kilchner | https://www.youtube.com/watch?v=-Kgxv64aG3o | ALiBi - Train Short, Test Long: Attention with linear biases enables input length extrapolation | #alibi #transformers #attention
Transformers are essentially set models that need additional inputs to make sense of sequence data. The most widespread additional inputs are position encodings or position embeddings, which add sequence index information in various forms. However, this has put a limit on the resulting ... | Hello there. Today we'll look at train short, test long, attention with linear biases enables input length extrapolation, also called Alibi by Ophir Press, Noah A. Smith, and Mike Lewis. So on a high level, this paper replaces the position encodings or position embeddings of transformers by a new very simple system th... | [{"start": 0.0, "end": 4.5600000000000005, "text": " Hello there. Today we'll look at train short, test long,"}, {"start": 4.5600000000000005, "end": 8.94, "text": " attention with linear biases enables input length extrapolation,"}, {"start": 8.94, "end": 12.200000000000001, "text": " also called Alibi by Ophir Press,... |
Yannic Kilchner | https://www.youtube.com/watch?v=tunf2OunOKg | [ML News] Stanford HAI coins Foundation Models & High-profile case of plagiarism uncovered | #plagiarism #foundationmodels #tesla
The best place to keep up to date with the latest and greatest from the ML world!
OUTLINE:
0:00 - Intro & Sponsor
3:15 - A high-profile case of plagiarism shocks the ML world
11:55 - Stanford AI releases paper on "Foundation Models"
19:45 - Updates on Apple's NeuralHash
20:45 - RL... | High profile case of plagiarism shocks the machine learning world. Tesla has an AI day extravaganza and all of Stanford writes a single paper. Welcome to ML news. Stop! Before the rest of the video, this video is sponsored by Weights and Biases. Weights and Biases builds developer tools for machine learning for resear... | [{"start": 0.0, "end": 5.36, "text": " High profile case of plagiarism shocks the machine learning world. Tesla has an AI day"}, {"start": 5.36, "end": 13.64, "text": " extravaganza and all of Stanford writes a single paper. Welcome to ML news."}, {"start": 13.64, "end": 21.14, "text": " Stop! Before the rest of the vi... |
Yannic Kilchner | https://www.youtube.com/watch?v=qgUegkefocg | Fastformer: Additive Attention Can Be All You Need (Machine Learning Research Paper Explained) | #attention #transformer #fastformer
Transformers have become the dominant model class in the last few years for large data, but their quadratic complexity in terms of sequence length has plagued them until now. Fastformer claims to be the fastest and most performant linear attention variant, able to consume long conte... | Hello there, today we'll look at fast former additive attention can be all you need by Chu Wanwu, Fang Zhaowu, Tao Qi and Yongfen Huang. So this paper definitely wins out in the category of most innovative paper titles of the last few months. As apparently, we've gone from is all you need to can be all you need. So a ... | [{"start": 0.0, "end": 6.24, "text": " Hello there, today we'll look at fast former additive attention can be all you need by"}, {"start": 6.24, "end": 13.92, "text": " Chu Wanwu, Fang Zhaowu, Tao Qi and Yongfen Huang. So this paper definitely wins out in the category"}, {"start": 13.92, "end": 22.88, "text": " of most... |
Yannic Kilchner | https://www.youtube.com/watch?v=nQDZmf2Yb9k | PonderNet: Learning to Ponder (Machine Learning Research Paper Explained) | #pondernet #deepmind #machinelearning
Humans don't spend the same amount of mental effort on all problems equally. Instead, we respond quickly to easy tasks, and we take our time to deliberate hard tasks. DeepMind's PonderNet attempts to achieve the same by dynamically deciding how many computation steps to allocate t... | Hello there, today we'll look at PonderNet learning to ponder by Andrea Bonino, Jan Balaguer and Charles Blondel. This paper on a high level introduces a recurrent architecture, or a principle of recurrent computation for deep networks, that essentially says the network recurrently computes its output at each step. An... | [{"start": 0.64, "end": 6.72, "text": " Hello there, today we'll look at PonderNet learning to ponder by Andrea Bonino, Jan Balaguer"}, {"start": 6.72, "end": 13.84, "text": " and Charles Blondel. This paper on a high level introduces a recurrent architecture,"}, {"start": 13.84, "end": 21.12, "text": " or a principle ... |
Yannic Kilchner | https://www.youtube.com/watch?v=6MUpWGeGMxs | NeuralHash is BROKEN - How to evade Apple's detection & craft hash collisions (w/ Open Source Code) | #apple #icloud #neuralhash
Send your Apple fanboy friends to prison with this one simple trick ;) We break Apple's NeuralHash algorithm used to detect CSAM for iCloud photos. I show how it's possible to craft arbitrary hash collisions from any source / target image pair using an adversarial example attack. This can be... | So, I've made multiple videos about this already. ML News reported, Apple is releasing their new system to detect child abuse material, which includes running code on the device of the actual users before they upload images to iCloud. I've also made a video about the technical summary that Apple released, where they d... | [{"start": 0.0, "end": 3.9, "text": " So, I've made multiple videos about this already."}, {"start": 3.9, "end": 10.4, "text": " ML News reported, Apple is releasing their new system to detect child abuse material,"}, {"start": 10.4, "end": 16.86, "text": " which includes running code on the device of the actual users ... |
Yannic Kilchner | https://www.youtube.com/watch?v=gu5UM99qaVc | [ML News] Nvidia renders CEO | Jurassic-1 larger than GPT-3 | Tortured Phrases reveal Plagiarism | #mlnews #nvidia #openai
An in-depth look at what's going on in the world of Machine Learning and Artificial Intelligence. Subscribe now and make Monday the best day of the week!
OUTLINE:
0:00 - Intro
0:20 - Sponsor: Weights & Biases
3:00 - Nvidia's CEO was rendered during Keynote
5:00 - AI21 Labs releases Jurassic-... | NVIDIA blows everyone's mind by having a rendered CEO give their keynote speech. AI 21 labs releases a model that's just a tiny bit bigger than GPT-3. And we win a t shirt in the open AI codex challenge. Welcome to ML news. It's Monday. Before we dive into the news, this is sponsored by weights and biases. How are you... | [{"start": 0.0, "end": 4.72, "text": " NVIDIA blows everyone's mind by having a rendered CEO give their keynote speech."}, {"start": 4.72, "end": 11.200000000000001, "text": " AI 21 labs releases a model that's just a tiny bit bigger than GPT-3. And we win a t shirt in"}, {"start": 11.200000000000001, "end": 21.76, "te... |
Yannic Kilchner | https://www.youtube.com/watch?v=z15JLtAuwVI | How Apple scans your phone (and how to evade it) - NeuralHash CSAM Detection Algorithm Explained | #apple #icloud #privacy
Apple recently announced scanning all images uploaded to iCloud for CSAM (child abuse material), and that this scan would happen locally on users' phones. We take a look at the technical report and explore how the system works in detail, how it is designed to preserve user privacy, and what wea... | Hello there, today we're going to look at CSAM detection, the technical summary of Apple system in order to detect child abuse material of users before they upload it to iCloud. So I recently reported on this in ML News. And this story, of course, not my story, but the general story has sparked a lot of controversy ar... | [{"start": 0.88, "end": 8.0, "text": " Hello there, today we're going to look at CSAM detection, the technical summary of Apple system"}, {"start": 8.0, "end": 15.44, "text": " in order to detect child abuse material of users before they upload it to iCloud."}, {"start": 15.44, "end": 22.48, "text": " So I recently rep... |
Yannic Kilchner | https://www.youtube.com/watch?v=gFkBqD2hbnU | [ML NEWS] Apple scans your phone | Master Faces beat face recognition | WALL-E is real | #mlnews #apple #nolamarck
Your update on the latest news in the AI and Machine Learning world.
OUTLINE:
0:00 - Intro
0:15 - Sponsor: Weights & Biases
3:30 - Apple to scan iDevices for illegal content
14:10 - EU approves chatcontrol
15:20 - Machine Learning FAQ book
17:40 - TimeDial & Disfl-QA Conversation Datasets
20... | Apple scans your phone for illegal content, master faces are able to bypass almost any facial recognition software and Wally is real. Welcome to ML news. It's Monday. All right, before we get into things, this video is sponsored by weights and biases, weights and biases is of course the one stop shop for any machine l... | [{"start": 0.0, "end": 5.44, "text": " Apple scans your phone for illegal content, master faces are able to bypass almost any"}, {"start": 5.44, "end": 11.36, "text": " facial recognition software and Wally is real. Welcome to ML news. It's Monday."}, {"start": 16.72, "end": 21.2, "text": " All right, before we get int... |
Yannic Kilchner | https://www.youtube.com/watch?v=SPOqoI0zOPQ | [ML News] AI-generated patent approved | Germany gets an analog to OpenAI | ML cheats video games | #mlnews #dabus #alephalpha
OUTLINE:
0:00 - Intro
0:20 - Sponsor: Weights & Biases
3:45 - AI legally recognized as patent inventor
8:35 - Aleph Alpha raises USD 27Mio to build European OpenAI
10:20 - AMP advances AI aided recycling
11:20 - DeepMind builds XLand RL environment
13:15 - Cognitive Behavioral Therapy as an ... | And AI is now officially listed as the inventor in a patent, Aleph Alpha raises $27 million to build Europe's open AI and an open source replication of Dali is released. Welcome to ML News. All right, before we get into all this stuff, this video is sponsored by Weights and Biases. Weights and Biases is a one stop sho... | [{"start": 0.0, "end": 7.28, "text": " And AI is now officially listed as the inventor in a patent, Aleph Alpha raises $27 million to"}, {"start": 7.28, "end": 13.84, "text": " build Europe's open AI and an open source replication of Dali is released. Welcome to ML News."}, {"start": 20.080000000000002, "end": 24.08000... |
Yannic Kilchner | https://www.youtube.com/watch?v=4xklF7PZ-BY | [ML News] MMO Game destroys GPUs | OpenAI quits Robotics | Today w/ guest host Sanyam Bhutani | #chai #mlnews #nvidia
Follow Sanyam here:
YouTube: https://www.youtube.com/c/ChaiTimeDataScience
Twitter: https://twitter.com/bhutanisanyam1
Apple Podcasts: https://podcasts.apple.com/us/podcast/chai-time-data-science/id1473685440?uo=4
LinkedIn: https://www.linkedin.com/in/sanyambhutani/
Spotify: https://open.spotify.... | Once upon a time during his vacation, Yannick Light Speed Culture found chai. He had so much of chai and he liked it so much that he turned into the host of chai time data science. That's why I'm hosting machine learning news. Hi everyone, I'm Syyam. I host the chai time data science podcast on YouTube channel and I'm... | [{"start": 0.0, "end": 5.44, "text": " Once upon a time during his vacation, Yannick Light Speed Culture found chai."}, {"start": 5.44, "end": 10.4, "text": " He had so much of chai and he liked it so much that he turned into the host of chai"}, {"start": 10.4, "end": 11.4, "text": " time data science."}, {"start": 11.... |
Yannic Kilchner | https://www.youtube.com/watch?v=-cT-2xvaeks | [ML News] Facebook AI adapting robots | Baidu autonomous excavators | Happy Birthday EleutherAI | A look into the happenings of the Machine Learning world.
OUTLINE:
0:00 - Intro
0:25 - Facebook AI trains rapidly adapting robots
3:05 - Baidu presents autonomous excavator system
4:45 - EleutherAI turns 1
6:05 - Elon Musk says FSD harder than expected
8:10 - AI interview tools still fall short
11:10 - RunwayML AI-pow... | Facebook AI builds crazy walking robots by do builds automatic excavators and Luther AI turns one. Welcome to ML news. Hello and welcome to ML news, your moderately regular update of what's going on in the machine learning world. Let's dive in. Facebook AI blog writes AI now enables robots to adapt rapidly to changing... | [{"start": 0.0, "end": 6.5200000000000005, "text": " Facebook AI builds crazy walking robots by do builds automatic excavators and Luther"}, {"start": 6.5200000000000005, "end": 18.18, "text": " AI turns one. Welcome to ML news. Hello and welcome to ML news, your moderately regular"}, {"start": 18.18, "end": 25.04, "te... |
Yannic Kilchner | https://www.youtube.com/watch?v=PuOASKpiThY | I'm taking a break | I'll be back, don't worry :)
Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilche... | I'll go on a bit of a summer break. You might have noticed that the frequency of videos, especially paper discussion videos has been going down a little bit. That's because I've been preparing to summer up a bit. And we're really close to 100k subscribers. Thank you everyone who's already here. If you're not subscribe... | [{"start": 0.0, "end": 5.16, "text": " I'll go on a bit of a summer break. You might have noticed that the frequency of videos,"}, {"start": 5.16, "end": 9.32, "text": " especially paper discussion videos has been going down a little bit. That's because I've"}, {"start": 9.32, "end": 17.28, "text": " been preparing to ... |
Yannic Kilchner | https://www.youtube.com/watch?v=TrLrBL1U8z0 | [ML News] GitHub Copilot - Copyright, GPL, Patents & more | Brickit LEGO app | Distill goes on break | #copilot #copyright #gpl
GitHub and OpenAI release Copilot, an AI-powered code autocomplete system that can generate entire functions, classes, and modules from mere definitions and docstrings. Copilot was trained on all public GitHub repositories, and this has a lot of people upset about questions on copyright, code ... | An open door. An open window. An open bottle. OpenAI and GitHub invent copilot and everyone freaks out about copyright. Welcome to ML News. Greg Brockman writes an AI pair programmer in your editor. It's powered by OpenAI Codecs, a new AI system which can convert from natural language to code with increasing reliabili... | [{"start": 0.0, "end": 14.1, "text": " An open door. An open window. An open bottle. OpenAI and GitHub invent copilot and everyone"}, {"start": 14.1, "end": 21.86, "text": " freaks out about copyright. Welcome to ML News."}, {"start": 21.86, "end": 27.64, "text": " Greg Brockman writes an AI pair programmer in your edi... |
Yannic Kilchner | https://www.youtube.com/watch?v=9MJTeOaSMTk | Self-driving from VISION ONLY - Tesla's self-driving progress by Andrej Karpathy (Talk Analysis) | #tesla #selfdriving #karpathy
Tesla is pushing the state-of-the-art in full self-driving, and interestingly, they explicitly switch from having multiple different sensors to a vision-only system. We discuss the highlights of Andrej Karpathy's talk about Tesla's FSD system, how to label petabytes of data, how to sample... | All right, hello, everyone. Today we're going to look at Andrej Karpathy's CVPR talk about full self driving mode in Tesla and what Tesla has been doing to push that beyond its current state. So let's just say that autonomous driving is a hard problem. You have to control a car and pretty much anything could happen. H... | [{"start": 0.0, "end": 6.48, "text": " All right, hello, everyone. Today we're going to look at Andrej Karpathy's CVPR talk about"}, {"start": 6.48, "end": 12.52, "text": " full self driving mode in Tesla and what Tesla has been doing to push that beyond its current"}, {"start": 12.52, "end": 17.76, "text": " state. So... |
Yannic Kilchner | https://www.youtube.com/watch?v=tDk10VTHwNo | [ML News] CVPR bans social media paper promotion | AI restores Rembrandt | GPU prices down | #cvpr #socialmedia #machinelearning
In this week's ML news we look at CVPR's controversial action to ban paper promotions on social media during the review phase, among other things!
OUTLINE:
0:00 - Intro & Overview
0:25 - CVPR bans social media paper discussions
5:10 - WalMart uses AI to suggest substitutions
6:05 -... | CVPR forbids tweeting about papers, AI is used to restore Rembrandt, and a potential deepfake has big consequences in the country of Myanmar. Welcome to this week's ML news. Hello and welcome to ML news, your absolutely regular every week on Monday update on what's going on in the machine learning world. The first one... | [{"start": 0.0, "end": 5.68, "text": " CVPR forbids tweeting about papers, AI is used to restore Rembrandt, and a potential"}, {"start": 5.68, "end": 10.1, "text": " deepfake has big consequences in the country of Myanmar."}, {"start": 10.1, "end": 16.92, "text": " Welcome to this week's ML news."}, {"start": 16.92, "e... |
Yannic Kilchner | https://www.youtube.com/watch?v=k_hUdZJNzkU | The Dimpled Manifold Model of Adversarial Examples in Machine Learning (Research Paper Explained) | #adversarialexamples #dimpledmanifold #security
Adversarial Examples have long been a fascinating topic for many Machine Learning researchers. How can a tiny perturbation cause the neural network to change its output by so much? While many explanations have been proposed over the years, they all appear to fall short. ... | Hello there, today we're going to look at the dimpled manifold model of adversarial examples in machine learning by Adi Shamir, Odelia Melamed and Oriol Ben Shmuel. This paper on a high level proposes a new way of looking at the phenomenon of adversarial examples in machine learning, specifically in deep learning. And... | [{"start": 0.64, "end": 5.76, "text": " Hello there, today we're going to look at the dimpled manifold model of adversarial examples"}, {"start": 5.76, "end": 13.280000000000001, "text": " in machine learning by Adi Shamir, Odelia Melamed and Oriol Ben Shmuel. This paper on a high level"}, {"start": 13.280000000000001,... |
Yannic Kilchner | https://www.youtube.com/watch?v=6_q9DbX35kk | [ML News] Hugging Face course | GAN Theft Auto | AI Programming Puzzles | PyTorch 1.9 Released | #mlnews #gta #weather
In this week's ML News, we look at the latest developments in the Machine Learning and AI world with updates from research, industry, and society at large.
OUTLINE:
0:00 - Intro
0:20 - Hugging Face launches free course
1:30 - Sentdex releases GAN Theft Auto
2:25 - Facebook uses AI to help modera... | Huggingface releases a course you can now play GTA inside of an AI's mind and spot turns one. Welcome to ML news. Good evening. Huggingface the famous NLP startup releases a course that teaches you how to use their models, libraries and other code they release. This goes from introduction of how to use transformers an... | [{"start": 0.0, "end": 7.140000000000001, "text": " Huggingface releases a course you can now play GTA inside of an AI's mind and spot turns"}, {"start": 7.140000000000001, "end": 26.8, "text": " one. Welcome to ML news. Good evening. Huggingface the famous NLP startup releases a course that"}, {"start": 26.8, "end": 3... |
Yannic Kilchner | https://www.youtube.com/watch?v=g08NkNWmZTA | XCiT: Cross-Covariance Image Transformers (Facebook AI Machine Learning Research Paper Explained) | #xcit #transformer #attentionmechanism
After dominating Natural Language Processing, Transformers have taken over Computer Vision recently with the advent of Vision Transformers. However, the attention mechanism's quadratic complexity in the number of tokens means that Transformers do not scale well to high-resolution... | Hello there, today we'll look at excite cross covariance image transformers by Facebook AI, Indria and Sobon University. So in this paper, the authors propose a kind of a transpose of an attention mechanism. So instead of the attention working across tokens and tokens attending to other tokens, now the it is the featu... | [{"start": 0.0, "end": 7.6000000000000005, "text": " Hello there, today we'll look at excite cross covariance image transformers by Facebook AI,"}, {"start": 7.6000000000000005, "end": 15.84, "text": " Indria and Sobon University. So in this paper, the authors propose a kind of a transpose of an"}, {"start": 15.84, "en... |
Yannic Kilchner | https://www.youtube.com/watch?v=P38FZrbNHV4 | AMP: Adversarial Motion Priors for Stylized Physics-Based Character Control (Paper Explained) | #reinforcementlearning #gan #imitationlearning
Learning from demonstrations is a fascinating topic, but what if the demonstrations are not exactly the behaviors we want to learn? Can we adhere to a dataset of demonstrations and still achieve a specified goal? This paper uses GANs to combine goal-achieving reinforcement... | Hey, yo, where's my money? Well, give me my money. All right, we're going to get into this video in a second. Today we're going to look at AMP adversarial motion priors for stylized physics based character control by Xuebin Peng, Cema, Pieter Abbeel, Sergei Levine and Anju Kanazawa. And this paper is in the domain of ... | [{"start": 0.0, "end": 9.28, "text": " Hey, yo, where's my money? Well, give me my money. All right, we're going to get into"}, {"start": 9.28, "end": 15.88, "text": " this video in a second. Today we're going to look at AMP adversarial motion priors for"}, {"start": 15.88, "end": 23.44, "text": " stylized physics base... |
Yannic Kilchner | https://www.youtube.com/watch?v=Ihg4XDWOy68 | [ML News] De-Biasing GPT-3 | RL cracks chip design | NetHack challenge | Open-Source GPT-J | OUTLINE:
0:00 - Intro
0:30 - Google RL creates next-gen TPUs
2:15 - Facebook launches NetHack challenge
3:50 - OpenAI mitigates bias by fine-tuning
9:05 - Google AI releases browseable reconstruction of human cortex
9:50 - GPT-J 6B Transformer in JAX
12:00 - Tensorflow launches Forum
13:50 - Text style transfer from a ... | Summer has arrived. It's way too warm. My brain just shuts down when it gets warm like this. Hello Hello, my name is Janek and you're watching ML news, the completely irregular update on what's going on in the ML world. Right, let me take a moment to greet our regular viewers of ML news. I'm just kidding. There's no r... | [{"start": 0.0, "end": 7.28, "text": " Summer has arrived. It's way too warm. My brain just shuts down when it gets warm like this."}, {"start": 7.28, "end": 14.48, "text": " Hello Hello, my name is Janek and you're watching ML news, the completely irregular update on what's"}, {"start": 14.48, "end": 24.08000000000000... |
Yannic Kilchner | https://www.youtube.com/watch?v=8Oy7o3Yu-Xo | Efficient and Modular Implicit Differentiation (Machine Learning Research Paper Explained) | #implicitfunction #jax #autodiff
Many problems in Machine Learning involve loops of inner and outer optimization. Finding update steps for the outer loop is usually difficult, because of the need to differentiate through the inner loop's procedure over multiple steps. Such loop unrolling is very limited and constraine...
Yannic Kilchner | https://www.youtube.com/watch?v=bw1kiLMQFKU | [ML News] EU regulates AI, China trains 1.75T model, Google's oopsie, Everybody cheers for fraud. | #mlnews #wudao #academicfraud
OUTLINE:
0:00 - Intro
0:25 - EU seeks to regulate AI
2:45 - AI COVID detection systems are all flawed
5:05 - Chinese lab trains model 10x GPT-3 size
6:55 - Google error identifies "ugliest" language
9:45 - McDonald's learns about AI buzzwords
11:25 - AI predicts cryptocurrency prices
12:0... | The European Union seeks to regulate AI. Chinese researchers train a model 10 times as large as GPT3, Google makes an oopsie and Jacob Buckman appeals to the community to please commit more academic fraud. This and much more in today's ML news. Have fun. So lawfare rights, the European Union unveils its proposals for ... | [{"start": 0.4, "end": 8.08, "text": " The European Union seeks to regulate AI. Chinese researchers train a model 10 times as large as GPT3,"}, {"start": 8.08, "end": 13.36, "text": " Google makes an oopsie and Jacob Buckman appeals to the community to please commit"}, {"start": 13.36, "end": 19.92, "text": " more acad... |
Yannic Kilchner | https://www.youtube.com/watch?v=RZ7JiAk9azY | My GitHub (Trash code I wrote during PhD) | #phdlife #github #researchcode
A brief browse through my public GitHub and musings about my old code.
Link: https://github.com/yk
Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/... | Hey, what's going on? So I've recently graduated the PhD and during that time, I've written a lot of code, which is mostly garbage, but I thought we go through my GitHub, and I'll show you the most exciting and useless things I've ever written. So if you're on my GitHub, you're going to find a bunch of things includin... | [{"start": 0.0, "end": 6.36, "text": " Hey, what's going on? So I've recently graduated the PhD and during that time, I've written"}, {"start": 6.36, "end": 13.8, "text": " a lot of code, which is mostly garbage, but I thought we go through my GitHub, and I'll"}, {"start": 13.8, "end": 20.3, "text": " show you the most... |
Yannic Kilchner | https://www.youtube.com/watch?v=-buULmf7dec | Decision Transformer: Reinforcement Learning via Sequence Modeling (Research Paper Explained) | #decisiontransformer #reinforcementlearning #transformer
Proper credit assignment over long timespans is a fundamental problem in reinforcement learning. Even methods designed to combat this problem, such as TD-learning, quickly reach their limits when rewards are sparse or noisy. This paper reframes offline reinforce... | Hello there, today we're going to look at decision transformer reinforcement learning via sequence modeling by Lily Chen, Kevin Lu, and others of UC Berkeley, Facebook AI research and Google Brain. On a high level this paper ditches pretty much anything and everything of reinforcement learning in an offline RL setting... | [{"start": 0.64, "end": 6.08, "text": " Hello there, today we're going to look at decision transformer reinforcement learning"}, {"start": 6.08, "end": 13.120000000000001, "text": " via sequence modeling by Lily Chen, Kevin Lu, and others of UC Berkeley, Facebook AI research"}, {"start": 13.120000000000001, "end": 19.6... |
Yannic Kilchner | https://www.youtube.com/watch?v=oxsdp--ULRo | [ML News] Anthropic raises $124M, ML execs clueless, collusion rings, ELIZA source discovered & more | #mlnews #anthropic #eliza
Anthropic raises $124M for steerable AI, peer review is threatened by collusion rings, and the original ELIZA source code was discovered.
OUTLINE:
0:00 - Intro
0:40 - Anthropic raises $124M
3:25 - 65% of execs can't explain AI predictions
4:25 - DeepMind releases AndroidEnv
6:10 - Collusion ... | Anthropic raises 124 million for steerable AI. Peer review is threatened by collusion rings and the original Eliza source code was discovered. This and much more in ML news. Hello and welcome to ML news, your absolutely irregular update of what happens in the ML world. I thought I'd try something new. And if you like ... | [{"start": 0.0, "end": 7.2, "text": " Anthropic raises 124 million for steerable AI. Peer review is threatened by collusion rings"}, {"start": 7.2, "end": 12.96, "text": " and the original Eliza source code was discovered. This and much more in ML news."}, {"start": 17.92, "end": 24.64, "text": " Hello and welcome to M... |
Yannic Kilchner | https://www.youtube.com/watch?v=dmH1ZpcROMk | Reward Is Enough (Machine Learning Research Paper Explained) | #reinforcementlearning #deepmind #agi
What's the most promising path to creating Artificial General Intelligence (AGI)? This paper makes the bold claim that a learning agent maximizing its reward in a sufficiently complex environment will necessarily develop intelligence as a by-product, and that Reward Maximization i... | From the makers of is all you need and do we really need and is it even useful now comes enough. So today we're going to look at reward is enough by David silver Satinder Singh, Doina Preckup and Richard S Sutton. This paper is a more philosophical paper I feel though it presents itself as having practical advice in i... | [{"start": 0.0, "end": 9.72, "text": " From the makers of is all you need and do we really need and is it even useful now comes"}, {"start": 9.72, "end": 11.64, "text": " enough."}, {"start": 11.64, "end": 18.36, "text": " So today we're going to look at reward is enough by David silver Satinder Singh, Doina"}, {"start... |
Yannic Kilchner | https://www.youtube.com/watch?v=zWFkUGXjbdo | [Rant] Can AI read your emotions? (No, but ...) | #facerecognition #emotiondetection #mindreading
Face recognition has a bad rep in the ML community. While the technology continuously advances, so does the resistance against its applications, with good reasons: AI emotion analysis hints at a dystopian future where our lives are completely governed by algorithms. Howe... | We need to talk about your face or face recognition in general. A tweet has been making the rounds saying facial recognition is able to analyze in real time the emotions and feelings. Just that. And it showed a video of a apparent real time system looking at people's faces and determining what their emotions are. Now ... | [{"start": 0.0, "end": 10.24, "text": " We need to talk about your face or face recognition in general. A tweet has been"}, {"start": 10.24, "end": 15.16, "text": " making the rounds saying facial recognition is able to analyze in real"}, {"start": 15.16, "end": 26.0, "text": " time the emotions and feelings. Just that... |
Yannic Kilchner | https://www.youtube.com/watch?v=kU-tWy_wr78 | Fast and Slow Learning of Recurrent Independent Mechanisms (Machine Learning Paper Explained) | #metarim #deeprl #catastrophicforgetting
Reinforcement Learning is very tricky in environments where the objective shifts over time. This paper explores agents in multi-task environments that are usually subject to catastrophic forgetting. Building on the concept of Recurrent Independent Mechanisms (RIM), the authors ... | Hi there, today we're looking at fast and slow learning of recurrent independent mechanisms by Kanika Madan, Rosemary Nankö, Anirudh Goyal, Bernard Schilkopf and Josha Benjo. So this paper on a high level proposes an update to a previous paper which was about recurrent independent mechanisms. And the update it propose... | [{"start": 0.0, "end": 7.04, "text": " Hi there, today we're looking at fast and slow learning of recurrent independent mechanisms"}, {"start": 7.04, "end": 15.16, "text": " by Kanika Madan, Rosemary Nank\u00f6, Anirudh Goyal, Bernard Schilkopf and Josha Benjo."}, {"start": 15.16, "end": 22.82, "text": " So this paper ... |
Yannic Kilchner | https://www.youtube.com/watch?v=dWGjoInRaAs | [ML News] DeepMind fails to get independence from Google | #deepmind #google #mlnews
DeepMind has reportedly failed to negotiate for greater independence from Google/Alphabet. While DeepMind wanted to set up a non-profit-like structure, Google seems to go for the opposite approach and seek tight integration. How is AI best served?
Original Article: https://www.wsj.com/articl... | Hello, everyone. Today we're going to look at some news in the machine learning world. The Wall Street Journal here writes Google unit DeepMind tried and failed to win AI autonomy from parent. So apparently DeepMind has sought to become more independent of Google in the past. And here they write that it's been founded... | [{"start": 0.0, "end": 9.040000000000001, "text": " Hello, everyone. Today we're going to look at some news in the machine learning world."}, {"start": 9.040000000000001, "end": 15.9, "text": " The Wall Street Journal here writes Google unit DeepMind tried and failed to win AI autonomy"}, {"start": 15.9, "end": 23.0800... |
Yannic Kilchner | https://www.youtube.com/watch?v=2PYLNHqxd5A | Expire-Span: Not All Memories are Created Equal: Learning to Forget by Expiring (Paper Explained) | #expirespan #nlp #facebookai
Facebook AI (FAIR) researchers present Expire-Span, a variant of Transformer XL that dynamically assigns expiration dates to previously encountered signals. Because of this, Expire-Span can handle sequences of many thousand tokens, while keeping the memory and compute requirements at a man... | Hello there. Today we're going to look at Not All Memories Are Created Equal, Learning to Forget by Expiring and the system also known as ExpireSpan. It's by Sanbayar Sukbatar, Da Ju, Spencer Poff, Stefan Roller, Arthur Slum, Jason Weston and Angela Funn of Facebook AI Research and Luria. In this paper on a high level... | [{"start": 0.0, "end": 6.24, "text": " Hello there. Today we're going to look at Not All Memories Are Created Equal, Learning"}, {"start": 6.24, "end": 14.24, "text": " to Forget by Expiring and the system also known as ExpireSpan. It's by Sanbayar Sukbatar,"}, {"start": 14.24, "end": 21.76, "text": " Da Ju, Spencer Po... |
Yannic Kilchner | https://www.youtube.com/watch?v=JJR3pBl78zw | FNet: Mixing Tokens with Fourier Transforms (Machine Learning Research Paper Explained) | #fnet #attention #fourier
Do we even need Attention? FNets completely drop the Attention mechanism in favor of a simple Fourier transform. They perform almost as well as Transformers, while drastically reducing parameter count, as well as compute and memory requirements. This highlights that a good token mixing heuris... | Hello there, today we're looking at fnet mixing tokens with Fourier transforms by James Lee Thorpe, Joshua Ainsley, Ilya Eckstein and Santiago Antagnon of Google research. I know I'm a bit late with this one. But it's sort of a not only this paper, but it's a really interesting direction that's happening right now in ... | [{"start": 0.64, "end": 7.84, "text": " Hello there, today we're looking at fnet mixing tokens with Fourier transforms by James Lee Thorpe,"}, {"start": 7.84, "end": 16.16, "text": " Joshua Ainsley, Ilya Eckstein and Santiago Antagnon of Google research. I know I'm a bit late"}, {"start": 16.16, "end": 22.0, "text": " ... |
Yannic Kilchner | https://www.youtube.com/watch?v=rR5_emVeyBk | AI made this music video | What happens when OpenAI's CLIP meets BigGAN? | #artificialintelligence #musicvideo #clip
I used OpenAI's CLIP model and BigGAN to create a music video that goes along with the lyrics of a song that I wrote. The song lyrics are made from ImageNet class labels, and the song itself is performed by me on a looper.
OUTLINE:
0:00 - Intro
1:00 - AI-generated music video... | I wrote a song with lyrics made from ImageNet class labels and then I used OpenAI's clip model together with a big GAN and a backpropagation procedure to generate a music video that fits the lyrics of the song. The song is performed on a live looper and the lyrics mean absolutely nothing. I hope you think this is as c... | [{"start": 0.0, "end": 19.84, "text": " I wrote a song with lyrics made from ImageNet class labels and then I used OpenAI's clip"}, {"start": 19.84, "end": 28.04, "text": " model together with a big GAN and a backpropagation procedure to generate a music video that fits"}, {"start": 28.04, "end": 33.44, "text": " the l... |
Yannic Kilchner | https://www.youtube.com/watch?v=W-O7AZNzbzQ | DDPM - Diffusion Models Beat GANs on Image Synthesis (Machine Learning Research Paper Explained) | #ddpm #diffusionmodels #openai
GANs have dominated the image generation space for the majority of the last decade. This paper shows for the first time, how a non-GAN model, a DDPM, can be improved to overtake GANs at standard evaluation metrics for image generation. The produced samples look amazing and other than GAN... | Hello, these are generated images from a new model, actually a new class of model. It's been around for a while, but for the first time, this new class of model has been pushed to the point where the images they produce are not only look really nice and look like something you can't we've come to expect from the lates... | [{"start": 0.0, "end": 29.0, "text": " Hello, these are generated images from a new model, actually a new class of model. It's been around for a while, but for the first time, this new class of model has been pushed to the point where the images they produce are not only look really nice and look like something you can... |
Yannic Kilchner | https://www.youtube.com/watch?v=WknN4E-y44E | Research Conference ICML drops their acceptance rate | Area Chairs instructed to be more picky | #icml #machinelearning #conference
In a controversial move, ICML Area Chairs were instructed to raise the bar on acceptance to drop the acceptance rate by 10% from the previous trajectory. This raises a lot of questions about the pains of an academic peer review system under the load of an exponentially increasing fie... | Good morning, I hope you had a good night's sleep. It's just another day where the review system in machine learning is completely and utterly broken this time courtesy of the ICML chairs, apparently notifying the senior area chairs to reduce the number of accepted submissions by about 10%. According to current meta r... | [{"start": 0.16, "end": 5.2, "text": " Good morning, I hope you had a good night's sleep. It's just another day where the review system in"}, {"start": 5.2, "end": 13.36, "text": " machine learning is completely and utterly broken this time courtesy of the ICML chairs, apparently"}, {"start": 13.36, "end": 22.16, "text... |
Yannic Kilchner | https://www.youtube.com/watch?v=pH2jZun8MoY | Involution: Inverting the Inherence of Convolution for Visual Recognition (Research Paper Explained) | #involution #computervision #attention
Convolutional Neural Networks (CNNs) have dominated computer vision for almost a decade by applying two fundamental principles: Spatial agnosticism and channel-specific computations. Involution aims to invert these principles and presents a spatial-specific computation, which is ... | Hello there. Today we're looking at involution, inverting the inheritance of convolution for visual recognition by a number of researchers of the Hong Kong University of Science and Technology, ByteDance AI Lab, and Peking University. In this paper on a high level, the researchers tried to replace the good old convolu... | [{"start": 0.0, "end": 2.84, "text": " Hello there. Today we're looking at"}, {"start": 2.84, "end": 5.84, "text": " involution, inverting the inheritance of"}, {"start": 5.84, "end": 8.24, "text": " convolution for visual recognition by"}, {"start": 8.24, "end": 9.52, "text": " a number of researchers of"}, {"start": ... |
Yannic Kilchner | https://www.youtube.com/watch?v=7K4Z8RqjWIk | MLP-Mixer: An all-MLP Architecture for Vision (Machine Learning Research Paper Explained) | #mixer #google #imagenet
Convolutional Neural Networks have dominated computer vision for nearly 10 years, and that might finally come to an end. First, Vision Transformers (ViT) have shown remarkable performance, and now even simple MLP-based models reach competitive accuracy, as long as sufficient data is used for p... | Hi there, I'm sure you've seen this paper make the rounds. It's called MLP Mixer and All MLP Architecture for Vision. It's by Ilya Tolstikin, Neil Halsby, Alexander Kolesnikov and Lucas Beyer of Google Research. This is not going to be a long video because the concept is pretty simple. These people, did I say others o... | [{"start": 0.0, "end": 6.24, "text": " Hi there, I'm sure you've seen this paper make the rounds. It's called MLP Mixer and"}, {"start": 6.24, "end": 12.64, "text": " All MLP Architecture for Vision. It's by Ilya Tolstikin, Neil Halsby, Alexander Kolesnikov"}, {"start": 12.64, "end": 19.080000000000002, "text": " and L... |
Yannic Kilchner | https://www.youtube.com/watch?v=hsOMCwvFv80 | I'm out of Academia | #machinelearning #ai #phd
Done with my PhD in Machine Learning at ETH Zurich.
On to new lands!
Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitch... | Howdy diddly doo. Hi everyone. If you're wondering what the ridiculous thing on my head is, then that is my official graduation slash successful defense hat. I'm not yet allowed to technically use the title doctor but let's be honest who gives a crap anyway titles. Um, I'm a huge fan of this hat. My lab mates made thi... | [{"start": 0.0, "end": 8.4, "text": " Howdy diddly doo. Hi everyone. If you're wondering what the ridiculous thing on my head is, then"}, {"start": 8.4, "end": 16.42, "text": " that is my official graduation slash successful defense hat. I'm not yet allowed to technically"}, {"start": 16.42, "end": 24.04, "text": " use... |
Yannic Kilchner | https://www.youtube.com/watch?v=h3ij3F3cPIk | DINO: Emerging Properties in Self-Supervised Vision Transformers (Facebook AI Research Explained) | #dino #facebook #selfsupervised
Self-Supervised Learning is the final frontier in Representation Learning: Getting useful features without any labels. Facebook AI's new system, DINO, combines advances in Self-Supervised Learning for Computer Vision with the new Vision Transformer (ViT) architecture and achieves impres... | Hello there, I hope you have all seen this. This is a new system by Facebook AI. And what you're seeing here is a visualization of the attention maps of that neural network. In the middle is a supervised baseline. And on the right is this new system called dyno. It's not as much a system as it is a methodology for uns... | [{"start": 0.96, "end": 8.4, "text": " Hello there, I hope you have all seen this. This is a new system by Facebook AI. And what you're"}, {"start": 8.4, "end": 15.280000000000001, "text": " seeing here is a visualization of the attention maps of that neural network. In the middle is a"}, {"start": 15.280000000000001, ... |
Yannic Kilchner | https://www.youtube.com/watch?v=uwfVxckuq50 | Why AI is Harder Than We Think (Machine Learning Research Paper Explained) | #aiwinter #agi #embodiedcognition
The AI community has gone through regular cycles of AI Springs, where rapid progress gave rise to massive overconfidence, high funding, and overpromise, followed by these promises being unfulfilled, subsequently diving into periods of disenfranchisement and underfunding, called AI Win... | Hello there, welcome back. Today we're going to look at why AI is harder than we think by Melanie Mitchell of the Santa Fe Institute. This paper argues that the cycles of AI spring and AI winter come about by people making too overconfident of predictions, and then everything breaks down. And Mitchell here goes into w... | [{"start": 0.0, "end": 7.36, "text": " Hello there, welcome back. Today we're going to look at why AI is harder than we think"}, {"start": 7.36, "end": 15.6, "text": " by Melanie Mitchell of the Santa Fe Institute. This paper argues that the cycles of AI spring"}, {"start": 15.6, "end": 21.8, "text": " and AI winter co... |
Yannic Kilchner | https://www.youtube.com/watch?v=hIoCn_9QTVU | I COOKED A RECIPE MADE BY A.I. | Cooking with GPT-3 (Don't try this at home) | #gpt3 #airecipe #cooking
We went to the store and bought a set of completely random ingredients and had OpenAI's GPT-3 come up with a recipe, which we then cooked and ate.
Our Rules:
1. All Vegan
2. Follow the recipe as closely as possible
3. We must finish our plates
The Recipe:
1. Boil the potatoes and carrots.
2.... | Jonas is just looking up adjectives for bad food. I think I'm gonna need them. Look at this stuff. We're gonna go to the store, buy some random stuff, put it all into an AI that generates recipes and we're committing right now to cook... Can you just move your hands in a kind of random manner? And eat whatever it outp... | [{"start": 0.0, "end": 3.36, "text": " Jonas is just looking up adjectives for bad food."}, {"start": 5.28, "end": 7.92, "text": " I think I'm gonna need them. Look at this stuff."}, {"start": 7.92, "end": 12.48, "text": " We're gonna go to the store, buy some random stuff, put it all into an AI that generates"}, {"sta... |
Yannic Kilchner | https://www.youtube.com/watch?v=CRlN-cYFxTk | NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (ML Research Paper Explained) | #nerf #neuralrendering #deeplearning
View Synthesis is a tricky problem, especially when only given a sparse set of images as an input. NeRF embeds an entire scene into the weights of a feedforward neural network, trained by backpropagation through a differential volume rendering procedure, and achieves state-of-the-a... | Hello there, look at these objects right here. What if I told you that I'm going to give you a bunch of pictures of these objects from different sides. And what you have to do is you have to come up with a system that generates me the picture as if the object was viewed from any direction. So something like this, righ... | [{"start": 0.88, "end": 7.28, "text": " Hello there, look at these objects right here. What if I told you that I'm going to give you a"}, {"start": 7.28, "end": 12.96, "text": " bunch of pictures of these objects from different sides. And what you have to do is you have to come"}, {"start": 12.96, "end": 19.52, "text":... |
Yannic Kilchner | https://www.youtube.com/watch?v=7OdhtAiPfWY | I BUILT A NEURAL NETWORK IN MINECRAFT | Analog Redstone Network w/ Backprop & Optimizer (NO MODS) | #minecraft #neuralnetwork #backpropagation
I built an analog neural network in vanilla Minecraft without any mods or command blocks. The network uses Redstone wire power strengths to carry the signal through one hidden layer, including nonlinearities, and then performs automatic backpropagation and even weight updates.
OUT... | I built a fully functional trainable analog neural network in Minecraft with no command blocks and no mods. Check this out. Hello. Hello. Hi. I'm trying to build a neural network. Can you please... I don't want to buy your stuff. I... like... no, I don't want a bucket of... no, I don't want a bucket of puffer fish. Wh... | [{"start": 0.0, "end": 7.5200000000000005, "text": " I built a fully functional trainable analog neural network in Minecraft with no command blocks and no mods. Check this out."}, {"start": 19.84, "end": 28.8, "text": " Hello. Hello. Hi. I'm trying to build a neural network."}, {"start": 28.8, "end": 37.52, "text": " C... |
Yannic Kilchner | https://www.youtube.com/watch?v=qtu0aSTDE2I | DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning | #dreamcoder #programsynthesis #symbolicreasoning
Classic Machine Learning struggles with few-shot generalization for tasks where humans can easily generalize from just a handful of examples, for example sorting a list of numbers. Humans do this by coming up with a short program, or algorithm, that explains the few dat... | Hi there, I have a little challenge for you right here, look at these numbers and see if you can figure out what comes where the question mark is. Now, if you look at it a little bit, you'll recognize that this is the sorting algorithm. So you're supposed to sort these numbers in ascending order. And that's going to b... | [{"start": 0.96, "end": 6.72, "text": " Hi there, I have a little challenge for you right here, look at these numbers and see if you can"}, {"start": 6.72, "end": 13.120000000000001, "text": " figure out what comes where the question mark is. Now, if you look at it a little bit, you'll"}, {"start": 13.120000000000001, ... |
Yannic Kilchner | https://www.youtube.com/watch?v=M2-BE5JotjA | PAIR AI Explorables | Is the problem in the data? Examples on Fairness, Diversity, and Bias. | In the recurring debate about bias in Machine Learning models, there is a growing argument saying that "the problem is not in the data", often citing the influence of various choices like loss functions or network architecture. In this video, we take a look at PAIR's AI Explorables through the lens of whether or not th... | Hello everyone, so maybe you've seen my last video about this topic, but every few months the debate about bias in machine learning models is resurfacing. And this time a tweet by Karim Karr is sort of in the middle of it. And he says four things to know about race and gender bias in algorithms. First, the bias starts... | [{"start": 0.0, "end": 5.44, "text": " Hello everyone, so maybe you've seen my last video about this topic, but every few months"}, {"start": 5.44, "end": 12.64, "text": " the debate about bias in machine learning models is resurfacing. And this time a tweet"}, {"start": 12.64, "end": 18.36, "text": " by Karim Karr is ... |
Yannic Kilchner | https://www.youtube.com/watch?v=rHQPBqMULXo | Machine Learning PhD Survival Guide 2021 | Advice on Topic Selection, Papers, Conferences & more! | #machinelearning #phd #howto
This video is advice for new PhD students in the field of Machine Learning in 2021 and after. The field has shifted dramatically in the last few years and navigating grad school can be very hard, especially when you're as clueless as I was when I started. The video is a personal recount of... | on how to do a PhD. So mainly that you don't repeat my mistakes. Train. We've made it into a PhD program. Congratulations, you made it. So today we're going to have a look at what to do during a PhD, how to succeed at publishing papers, how to deal with reviews, what to do at conferences and many other things. So I ho... | [{"start": 0.0, "end": 4.0, "text": " on how to do a PhD. So mainly that you don't repeat my mistakes."}, {"start": 7.28, "end": 7.76, "text": " Train."}, {"start": 12.88, "end": 19.6, "text": " We've made it into a PhD program. Congratulations, you made it. So today we're going to have a look"}, {"start": 19.6, "end":... |
Yannic Kilchner | https://www.youtube.com/watch?v=J7CrtblmMnU | Is Google Translate Sexist? Gender Stereotypes in Statistical Machine Translation | #genderbias #algorithmicfairness #debiasing
A brief look into gender stereotypes in Google Translate. The origin is a Tweet containing a Hungarian text. Hungarian is a gender-neutral language, so translating gender pronouns is ambiguous. Turns out that Google Translate assigns very stereotypical pronouns. In this vide... | So, you might have seen this tweet. Hungarian is a gender neutral language. It has no gender pronouns. So Google Translate automatically chooses the gender for you. Here is how everyday sexism is consistently encoded in 2021. F you Google. On the left hand side is a Hungarian sentence. Google Translate then translates... | [{"start": 0.0, "end": 3.84, "text": " So, you might have seen this tweet."}, {"start": 3.84, "end": 6.88, "text": " Hungarian is a gender neutral language."}, {"start": 6.88, "end": 8.44, "text": " It has no gender pronouns."}, {"start": 8.44, "end": 12.5, "text": " So Google Translate automatically chooses the gender... |
Yannic Kilchner | https://www.youtube.com/watch?v=P_xeshTnPZg | Perceiver: General Perception with Iterative Attention (Google DeepMind Research Paper Explained) | #perceiver #deepmind #transformer
Inspired by the fact that biological creatures attend to multiple modalities at the same time, DeepMind releases its new Perceiver model. Based on the Transformer architecture, the Perceiver makes no assumptions on the modality of the input data and also solves the long-standing quadr... | Hi there, how is everyone doing? Today we'll look at the Perceiver general perception with iterative attention by Andrew Yegal, Felix Gimino, Andrew Brock, Andrew Sisserman, Oriol Vinyols and Jao Carrera of DeepMind. This paper on a high level describes a model called the Perceiver. And what this model does is it inte... | [{"start": 0.64, "end": 7.44, "text": " Hi there, how is everyone doing? Today we'll look at the Perceiver general perception with iterative"}, {"start": 7.44, "end": 14.72, "text": " attention by Andrew Yegal, Felix Gimino, Andrew Brock, Andrew Sisserman, Oriol Vinyols and Jao"}, {"start": 14.72, "end": 23.76, "text":... |
Yannic Kilchner | https://www.youtube.com/watch?v=Elxn8rS88bI | Pretrained Transformers as Universal Computation Engines (Machine Learning Research Paper Explained) | #universalcomputation #pretrainedtransformers #finetuning
Large-scale pre-training and subsequent fine-tuning is a common recipe for success with transformer models in machine learning. However, most such transfer learning is done when a model is pre-trained on the same or a very similar modality to the final task to ... | Hi there, today we're looking at pre trained transformers as universal computation engines by Kevin Lu, Adita Grover, Pieter Abbeel and Igor Mordach. On a high level, this paper argues that pre trained transformers, specifically transformers pre trained on language modeling, are doing something called universal comput... | [{"start": 0.88, "end": 7.12, "text": " Hi there, today we're looking at pre trained transformers as universal computation engines"}, {"start": 7.12, "end": 14.56, "text": " by Kevin Lu, Adita Grover, Pieter Abbeel and Igor Mordach. On a high level, this paper argues that"}, {"start": 14.56, "end": 21.2, "text": " pre ... |
Yannic Kilchner | https://www.youtube.com/watch?v=Ag1bw8MfHGQ | Yann LeCun - Self-Supervised Learning: The Dark Matter of Intelligence (FAIR Blog Post Explained) | #selfsupervisedlearning #yannlecun #facebookai
Deep Learning systems can achieve remarkable, even super-human performance through supervised learning on large, labeled datasets. However, there are two problems: First, collecting ever more labeled data is expensive in both time and money. Second, these deep neural netw... | Hello there, today we're looking at self supervised learning, the dark matter of intelligence. This was written by Jan LeCun and Ishan Misra of Facebook AI research. And it is not a paper, it is more a blog post shared on the Facebook AI blog. And it outlines the current state of self supervised learning, what it is a... | [{"start": 0.96, "end": 7.28, "text": " Hello there, today we're looking at self supervised learning, the dark matter of intelligence. This"}, {"start": 7.28, "end": 14.88, "text": " was written by Jan LeCun and Ishan Misra of Facebook AI research. And it is not a paper,"}, {"start": 14.88, "end": 22.48, "text": " it i... |
Yannic Kilchner | https://www.youtube.com/watch?v=Z_kWZpgEZ7w | Multimodal Neurons in Artificial Neural Networks (w/ OpenAI Microscope, Research Paper Explained) | #openai #clip #microscope
OpenAI does a huge investigation into the inner workings of their recent CLIP model via faceted feature visualization and finds amazing things: Some neurons in the last layer respond to distinct concepts across multiple modalities, meaning they fire for photographs, drawings, and signs depict... | Hi there and welcome back my dear fellow scholars. Today we're going to look at multimodal neurons in artificial neural networks by Gabriel Goh, Nick Camerata, Chelsea Voss, Shan Carter, Michael Petroff, Ludwig Schubert, Alec Radford and Chris Ola that has appeared in this DistillPub journal which I think is a pretty ... | [{"start": 0.0, "end": 6.24, "text": " Hi there and welcome back my dear fellow scholars. Today we're going to look at"}, {"start": 6.24, "end": 12.08, "text": " multimodal neurons in artificial neural networks by Gabriel Goh, Nick Camerata,"}, {"start": 12.08, "end": 17.92, "text": " Chelsea Voss, Shan Carter, Michael... |
Yannic Kilchner | https://www.youtube.com/watch?v=cllFzkvrYmE | GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained) | #glom #hinton #capsules
Geoffrey Hinton describes GLOM, a Computer Vision model that combines transformers, neural fields, contrastive learning, capsule networks, denoising autoencoders and RNNs. GLOM decomposes an image into a parse tree of objects and their parts. However, unlike previous systems, the parse tree is ... | Hi, there. Today, we'll look at how to represent part-whole hierarchies in a neural network by the legend himself, Jeffrey Hinton. He describes a system also known as GLOM, that is a new approach to processing visual information using neural networks. And interestingly, the paper starts off by saying, this paper does ... | [{"start": 0.0, "end": 6.4, "text": " Hi, there. Today, we'll look at how to represent part-whole hierarchies in a neural network"}, {"start": 6.4, "end": 13.56, "text": " by the legend himself, Jeffrey Hinton. He describes a system also known as GLOM, that"}, {"start": 13.56, "end": 22.14, "text": " is a new approach ... |
Yannic Kilchner | https://www.youtube.com/watch?v=RSSVWpBak6s | Linear Transformers Are Secretly Fast Weight Memory Systems (Machine Learning Paper Explained) | #fastweights #deeplearning #transformers
Transformers are dominating Deep Learning, but their quadratic memory and compute requirements make them expensive to train and hard to use. Many papers have attempted to linearize the core module: the attention mechanism, using kernels - for example, the Performer. However, su... | Hi there, today we'll look at linear transformers are secretly fast-weight memory systems by Immanuel Schlag, Kazuki Airy and Jürgen Schmiduba. On a high level, this paper makes a connection between linear transformers, which are transformers that linearize the attention mechanism, such as the performer, and fast-weig... | [{"start": 0.0, "end": 7.46, "text": " Hi there, today we'll look at linear transformers are secretly fast-weight memory systems by"}, {"start": 7.46, "end": 11.620000000000001, "text": " Immanuel Schlag, Kazuki Airy and J\u00fcrgen Schmiduba."}, {"start": 11.620000000000001, "end": 18.12, "text": " On a high level, th... |
Yannic Kilchner | https://www.youtube.com/watch?v=_c6A33Fg5Ns | DeBERTa: Decoding-enhanced BERT with Disentangled Attention (Machine Learning Paper Explained) | #deberta #bert #huggingface
DeBERTa by Microsoft is the next iteration of BERT-style Self-Attention Transformer models, surpassing RoBERTa in State-of-the-art in multiple NLP tasks. DeBERTa brings two key improvements: First, they treat content and position information separately in a new form of disentangled attentio... | Hi there, today we'll look at Diberta decoding enhanced BERT with disentangled attention by Peng Cheng He, Xiaolong Liu, Zhang Fenggao and Waiju Chen of Microsoft. This paper is an improvement on BERT the language model and the Roberta variant of it. Specifically it suggests two improvements namely first is this disen... | [{"start": 0.0, "end": 7.48, "text": " Hi there, today we'll look at Diberta decoding enhanced BERT with disentangled attention"}, {"start": 7.48, "end": 14.42, "text": " by Peng Cheng He, Xiaolong Liu, Zhang Fenggao and Waiju Chen of Microsoft."}, {"start": 14.42, "end": 21.14, "text": " This paper is an improvement o... |
Yannic Kilchner | https://www.youtube.com/watch?v=o75ybZ-6Uu8 | Dreamer v2: Mastering Atari with Discrete World Models (Machine Learning Research Paper Explained) | #dreamer #deeprl #reinforcementlearning
Model-Based Reinforcement Learning has been lagging behind Model-Free RL on Atari, especially among single-GPU algorithms. This collaboration between Google AI, DeepMind, and the University of Toronto (UofT) pushes world models to the next level. The main contribution is a learn... | Hi there, what you're seeing here are predictions by a world model learned for Atari reinforcement learning. On the top you see what really happened during an episode of play. And on the bottom, you see the predictions of this world model, the world model just gets five frames at the beginning, which you don't even se... | [{"start": 0.0, "end": 6.8, "text": " Hi there, what you're seeing here are predictions by a world model learned for Atari reinforcement"}, {"start": 6.8, "end": 7.88, "text": " learning."}, {"start": 7.88, "end": 11.28, "text": " On the top you see what really happened during an episode of play."}, {"start": 11.28, "e... |
Yannic Kilchner | https://www.youtube.com/watch?v=R5DiLFOMZrc | TransGAN: Two Transformers Can Make One Strong GAN (Machine Learning Research Paper Explained) | #transformer #gan #machinelearning
Generative Adversarial Networks (GANs) hold the state-of-the-art when it comes to image generation. However, while the rest of computer vision is slowly taken over by transformers or other attention-based architectures, all working GANs to date contain some form of convolutional laye... | Hi there, today we'll look at TransGAN, two transformers can make one strong GAN, by Yifan Qian, Xu Yucheng, and Cheng Yangwang. So in this paper, the authors attempt to make a generative adversarial network, a GAN, out of only transformers. So far, attention or transformer-like things have been used in GANs, but they... | [{"start": 0.0, "end": 7.6000000000000005, "text": " Hi there, today we'll look at TransGAN, two transformers can make one strong GAN, by Yifan"}, {"start": 7.6000000000000005, "end": 11.68, "text": " Qian, Xu Yucheng, and Cheng Yangwang."}, {"start": 11.68, "end": 17.84, "text": " So in this paper, the authors attempt... |
Yannic Kilchner | https://www.youtube.com/watch?v=rNkHjZtH0RQ | NFNets: High-Performance Large-Scale Image Recognition Without Normalization (ML Paper Explained) | #nfnets #deepmind #machinelearning
Batch Normalization is a core component of modern deep learning. It enables training at higher batch sizes, prevents mean shift, provides implicit regularization, and allows networks to reach higher performance than without. However, BatchNorm also has disadvantages, such as its depe... | Hi there, today we're looking at high performance large scale image recognition without normalization by Andrew Brock, Soham Dey, Samuel L. Smith, and Karen Simonian of DeepMind. This is otherwise known as NF nets, normalizer free networks. So the point of this paper is to build networks, in this case, specifically co... | [{"start": 0.64, "end": 5.84, "text": " Hi there, today we're looking at high performance large scale image recognition without"}, {"start": 5.84, "end": 13.76, "text": " normalization by Andrew Brock, Soham Dey, Samuel L. Smith, and Karen Simonian of DeepMind. This"}, {"start": 13.76, "end": 20.72, "text": " is otherw... |
Yannic Kilchner | https://www.youtube.com/watch?v=m-zrcmRd7E4 | Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention (AI Paper Explained) | #transformer #nystromer #nystromformer
The Nyströmformer (or Nystromformer, Nyströmer, Nystromer), is a new drop-in replacement for approximating the Self-Attention matrix in Transformers with linear memory and time requirements. Most importantly, it uses the Nystrom-Method to subselect (or segment mean) queries and k... | Hi there, today we're talking about a nice term former a nice term based algorithm for approximating self attention by Jung Young, Hyeong, Chan Peng Chang, Rudra Z's Chakra Bharti, Ming Xing Tan, Glenn Fung, Yin Li and Vikas Singh. So this paper, yet another paper that proposes a approximation to the self attention me... | [{"start": 0.0, "end": 9.1, "text": " Hi there, today we're talking about a nice term former a nice term based algorithm for approximating self attention by Jung Young,"}, {"start": 9.1, "end": 22.56, "text": " Hyeong, Chan Peng Chang, Rudra Z's Chakra Bharti, Ming Xing Tan, Glenn Fung, Yin Li and Vikas Singh. So this ... |
Yannic Kilchner | https://www.youtube.com/watch?v=ahRPdiCop3E | Deep Networks Are Kernel Machines (Paper Explained) | #deeplearning #kernels #neuralnetworks
Full Title: Every Model Learned by Gradient Descent Is Approximately a Kernel Machine
Deep Neural Networks are often said to discover useful representations of the data. However, this paper challenges this prevailing view and suggests that rather than representing the data, deep ... | Hi there. Today we're looking at Every Model Learned by Gradient Descent is Approximately a Kernel Machine by Pedro Domingos. This paper on a high level establishes a theoretical connection between gradient descent learned models such as deep neural networks and kernel machines as you might know them from topics such ... | [{"start": 0.0, "end": 5.44, "text": " Hi there. Today we're looking at Every Model Learned by Gradient Descent is"}, {"start": 5.44, "end": 11.040000000000001, "text": " Approximately a Kernel Machine by Pedro Domingos. This paper on a high level"}, {"start": 11.040000000000001, "end": 16.8, "text": " establishes a th...
Yannic Kilchner | https://www.youtube.com/watch?v=zdb8MM94A5c | Feedback Transformers: Addressing Some Limitations of Transformers with Feedback Memory (Explained) | #ai #science #transformers
Autoregressive Transformers have taken over the world of Language Modeling (GPT-3). However, in order to train them, people use causal masking and sample parallelism, which means computation only happens in a feedforward manner. This results in higher layer information, which would be availa... | Hi there, today we're looking at addressing some limitations of transformers with feedback memory, also known as feedback transformers by Angela Phan, Thibaut Lavril, Edouard Grave, Armand Joullin and Sanbayar Sogbatar of Facebook AI Research and Loria. On a high level, this paper, as it says in the title, it addresse... | [{"start": 0.0, "end": 6.04, "text": " Hi there, today we're looking at addressing some limitations of transformers with feedback"}, {"start": 6.04, "end": 13.4, "text": " memory, also known as feedback transformers by Angela Phan, Thibaut Lavril, Edouard Grave,"}, {"start": 13.4, "end": 19.36, "text": " Armand Joullin... |
Yannic Kilchner | https://www.youtube.com/watch?v=yFAuXmcGk2Y | SingularityNET - A Decentralized, Open Market and Network for AIs (Whitepaper Explained) | #ai #research #blockchain
Big Tech is currently dominating the pursuit of ever more capable AI. This happens behind closed doors and results in a monopoly of power. SingularityNET is an open, decentralized network where anyone can offer and consume AI services, and where AI agents can interlink with each other to prov... | Hi there. Today we'll look at Singularity Net, the global AI marketplace as it is advertised on their website. Specifically, we're going to look at the Singularity Net White Paper 2.0 as it appeared in 2019. So it's version version two, version one, I think appeared in 2017. So Singularity Net is a, as it says, a glob... | [{"start": 0.0, "end": 8.0, "text": " Hi there. Today we'll look at Singularity Net, the global AI marketplace as it is advertised on their website."}, {"start": 8.0, "end": 16.0, "text": " Specifically, we're going to look at the Singularity Net White Paper 2.0 as it appeared in 2019."}, {"start": 16.0, "end": 20.0, "... |
Yannic Kilchner | https://www.youtube.com/watch?v=iAR8LkkMMIM | Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity | #ai #technology #switchtransformer
Scale is the next frontier for AI. Google Brain uses sparsity and hard routing to massively increase a model's parameters, while keeping the FLOPs per forward pass constant. The Switch Transformer compares favorably to its dense counterparts in terms of speed and sample efficiency an... | Hi there, today we'll talk about switch transformers scaling to trillion parameter models with simple and efficient sparsity by William fetus Barrett is off and no one should see her of Google Brain. So as you can see right off the title, we're going towards trillions of parameters GPT three had 175 billion parameters... | [{"start": 0.88, "end": 5.5200000000000005, "text": " Hi there, today we'll talk about switch transformers scaling to trillion parameter"}, {"start": 5.5200000000000005, "end": 11.44, "text": " models with simple and efficient sparsity by William fetus Barrett is off and no one"}, {"start": 11.44, "end": 17.12, "text":... |
Yannic Kilchner | https://www.youtube.com/watch?v=hHZSA9z_abE | STOCHASTIC MEME DESCENT - Deep Learning Meme Review - Episode 2 (Part 2 of 2) | #memes #science #ai
Part 2 of Antonio and me examining the latest and greatest of deep learning memes.
Music:
Sunshower - LATASHÁ
Papov - Yung Logos
Sunny Days - Anno Domini Beats
Trinity - Jeremy Blake
More memes:
facebook.com/convolutionalmemes
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: htt... | At some point I will be able to code you, Janek. You will be able to? To code you. To code me? Yes, so that finally you will release videos in time. Random guessing, my classifier. 47% accuracy. Nice. Yes. Yes. If you change the seed you can get 48. Ha ha, you'll never reach me. Yes, I will. Wow, by coming up with a b... | [{"start": 0.0, "end": 3.0, "text": " At some point I will be able to code you, Janek."}, {"start": 3.0, "end": 4.0, "text": " You will be able to?"}, {"start": 4.0, "end": 5.0, "text": " To code you."}, {"start": 5.0, "end": 6.0, "text": " To code me?"}, {"start": 6.0, "end": 9.0, "text": " Yes, so that finally you wi... |
Yannic Kilchner | https://www.youtube.com/watch?v=T9XSU0pKX2E | OpenAI CLIP: ConnectingText and Images (Paper Explained) | #ai #openai #technology
Paper Title: Learning Transferable Visual Models From Natural Language Supervision
CLIP trains on 400 million images scraped from the web, along with text descriptions to learn a model that can connect the two modalities. The core idea is a contrastive objective combined with a large batch size... | So here you see a classifier that takes a look at this image and assigns one of many many labels actually one of a hundred and one labels as you can see here and one of the labels is a photo of guacamole a type of food and it assigns a really high probability to that as opposed to like the the second prediction which ... | [{"start": 0.0, "end": 7.76, "text": " So here you see a classifier that takes a look at this image and assigns one of"}, {"start": 7.76, "end": 12.200000000000001, "text": " many many labels actually one of a hundred and one labels as you can see"}, {"start": 12.200000000000001, "end": 19.84, "text": " here and one of... |
Yannic Kilchner | https://www.youtube.com/watch?v=j4xgkjWlfL4 | OpenAI DALL·E: Creating Images from Text (Blog Post Explained) | #openai #science #gpt3
OpenAI's newest model, DALL·E, shows absolutely amazing abilities in generating high-quality images from arbitrary text descriptions. Like GPT-3, the range of applications and the diversity of outputs is astonishing, given that this is a single model, trained on a purely autoregressive task. Thi... | A sphere made of Swiss cheese, a sphere with a texture of Swiss cheese. And there you have it. Beautiful, very appetizing Swiss cheese balls. My Swiss heart had just just skipped a beat out of this monstrosity. What's even cooler than a sphere made of Swiss cheese is a Taurus made of denim. These images are so cool, a... | [{"start": 0.0, "end": 9.120000000000001, "text": " A sphere made of Swiss cheese, a sphere with a texture of Swiss cheese."}, {"start": 9.120000000000001, "end": 10.68, "text": " And there you have it."}, {"start": 10.68, "end": 14.700000000000001, "text": " Beautiful, very appetizing Swiss cheese balls."}, {"start": ... |
Yannic Kilchner | https://www.youtube.com/watch?v=plK2WVdLTOY | Extracting Training Data from Large Language Models (Paper Explained) | #ai #privacy #tech
This paper demonstrates a method to extract verbatim pieces of the training data from a trained language model. Moreover, some of the extracted pieces only appear a handful of times in the dataset. This points to serious security and privacy implications for models like GPT-3. The authors discuss th... | Hi there. Today, we're looking at extracting training data from large language models by what appears to be a big collaboration between corporations and academic institutions. There are almost as many affiliations here as their authors. So this is joint work between, you know, as you can see, many, many sort of instit... | [{"start": 0.0, "end": 7.48, "text": " Hi there. Today, we're looking at extracting training data from large language models by what"}, {"start": 7.48, "end": 14.200000000000001, "text": " appears to be a big collaboration between corporations and academic institutions. There"}, {"start": 14.200000000000001, "end": 20.... |
Yannic Kilchner | https://www.youtube.com/watch?v=7DGlElSVYGo | MEMES IS ALL YOU NEED - Deep Learning Meme Review - Episode 2 (Part 1 of 2) | #memes #science #ai
Antonio and I critique the creme de la creme of Deep Learning memes.
Music:
Sunshower - LATASHÁ
Papov - Yung Logos
Sunny Days - Anno Domini Beats
Trinity - Jeremy Blake
More memes:
facebook.com/convolutionalmemes
Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.co... | Yanni just kidnapped me and now I'm and he told me okay Antonio just pretend everything is fine Just tell about the papers tell about the memes What's going on Yannick? We're gonna look at pictures and go home All right, we're back Antonio's back. Welcome back to meme review Antonio never left Oh, I'm going the channe... | [{"start": 0.0, "end": 6.92, "text": " Yanni just kidnapped me and now I'm and he told me okay Antonio just pretend everything is fine"}, {"start": 7.28, "end": 10.76, "text": " Just tell about the papers tell about the memes"}, {"start": 11.64, "end": 14.48, "text": " What's going on Yannick? We're gonna look at pictu... |
Yannic Kilchner | https://www.youtube.com/watch?v=BhUWvQmLzSk | ReBeL - Combining Deep Reinforcement Learning and Search for Imperfect-Information Games (Explained) | #ai #technology #poker
This paper does for Poker what AlphaZero has done for Chess & Go. The combination of Self-Play Reinforcement Learning and Tree Search has had tremendous success in perfect-information games, but transferring such techniques to imperfect information games is a hard problem. Not only does ReBeL so... | Hi there, take a look at this variant of the game Rock Paper Scissors. It's like usual Rock Paper Scissors, except with the added complexity that when either player chooses scissors, then the rewards and the losses are doubled. So for example, you see right here, player one chooses rock, and player two chooses scissor... | [{"start": 0.96, "end": 8.4, "text": " Hi there, take a look at this variant of the game Rock Paper Scissors. It's like usual Rock Paper"}, {"start": 8.4, "end": 16.080000000000002, "text": " Scissors, except with the added complexity that when either player chooses scissors, then the"}, {"start": 16.080000000000002, "... |
Yannic Kilchner | https://www.youtube.com/watch?v=R07CVhWbAXc | 2M All-In into $5 Pot! WWYD? Daniel Negreanu's No-Limit Hold'em Challenge! (Poker Hand Analysis) | #ai #technology #poker
Daniel Negreanu posted a set of very interesting No-Limit Hold'em situations on Twitter. I try to analyze them from the perspective of a poker bot. See how such bots think about the game and approximate Nash equilibria.
https://twitter.com/RealKidPoker/status/1337887509397741568
https://twitter... | Hi there, today I want to bring to you a little bit of a different video. The video right now is supposed to be sort of a motivational lead up to the next video I want to release. And the next video is going to be about Facebook's new rebel algorithm, which is an algorithm that solves two player zero sum imperfect inf... | [{"start": 0.56, "end": 6.72, "text": " Hi there, today I want to bring to you a little bit of a different video. The video right now is"}, {"start": 6.72, "end": 11.84, "text": " supposed to be sort of a motivational lead up to the next video I want to release. And the next"}, {"start": 11.84, "end": 18.72, "text": " ... |
Yannic Kilchner | https://www.youtube.com/watch?v=B9PL__gVxLI | DeepMind's AlphaFold 2 Explained! AI Breakthrough in Protein Folding! What we know (& what we don't) | #deepmind #biology #ai
This is Biology's AlexNet moment! DeepMind solves a 50-year old problem in Protein Folding Prediction. AlphaFold 2 improves over DeepMind's 2018 AlphaFold system with a new architecture and massively outperforms all competition. In this Video, we take a look at how AlphaFold 1 works and what we ... | It will change everything. DeepMind solves 50 year old grand challenge. The game has changed. DeepMind's latest AI breakthrough achieves historic new milestone, helps solve how diseases invade cells, improve protein folding prediction. AI breakthrough it also wipes your butt automatically. It is the newest DeepMind bi... | [{"start": 0.0, "end": 10.0, "text": " It will change everything. DeepMind solves 50 year old grand challenge. The game has changed."}, {"start": 10.0, "end": 19.8, "text": " DeepMind's latest AI breakthrough achieves historic new milestone, helps solve how diseases invade"}, {"start": 19.8, "end": 27.52, "text": " cel... |
Yannic Kilchner | https://www.youtube.com/watch?v=LB4B5FYvtdI | Predictive Coding Approximates Backprop along Arbitrary Computation Graphs (Paper Explained) | #ai #biology #neuroscience
Backpropagation is the workhorse of modern deep learning and a core component of most frameworks, but it has long been known that it is not biologically plausible, driving a divide between neuroscience and machine learning. This paper shows that Predictive Coding, a much more biologically pl... | Hi there, this is an LSTM cell or the computation graph of an LSTM cell. It is pretty hideous as you can see, but what I'm about to show you is even more hideous. This is the computation graph of the LSTM cell augmented with error units, evincing the connectivity scheme of the predictive coding algorithm. So you may s... | [{"start": 0.0, "end": 7.84, "text": " Hi there, this is an LSTM cell or the computation graph of an LSTM cell. It is pretty hideous as you"}, {"start": 7.84, "end": 15.92, "text": " can see, but what I'm about to show you is even more hideous. This is the computation graph of the"}, {"start": 16.64, "end": 25.36, "tex... |
Yannic Kilchner | https://www.youtube.com/watch?v=IaS72aHrJKE | Fourier Neural Operator for Parametric Partial Differential Equations (Paper Explained) | #ai #research #engineering
Numerical solvers for Partial Differential Equations are notoriously slow. They need to evolve their state by tiny steps in order to stay accurate, and they need to repeat this for each new problem. Neural Fourier Operators, the architecture proposed in this paper, can evolve a PDE in time b... | AI has cracked a key mathematical puzzle for understanding our world. This just in from MIT technology review and look at this puzzle right here. It's got the bumps, it's got the valleys, the surfaces, it's got the braille, it's got the bits, the ones and the zeros, not only going up and down like in the matrix, but g... | [{"start": 0.0, "end": 9.24, "text": " AI has cracked a key mathematical puzzle for understanding our world. This just in from MIT"}, {"start": 9.24, "end": 16.240000000000002, "text": " technology review and look at this puzzle right here. It's got the bumps, it's got the valleys,"}, {"start": 16.240000000000002, "end... |
Yannic Kilchner | https://www.youtube.com/watch?v=i_p5wLoCCiw | [News] Soccer AI FAILS and mixes up ball and referee's bald head. | #ai #tech #news
This soccer camera is operated by an AI to track the ball. However, the AI has an interesting failure mode and repeatedly mixes up the ball with the bald head of a referee. This raises some interesting questions about the role of ethics in AI research.
Footage from SPFL Championship : ICTFC 1 v 1 AYR ... | So, there is this recording of the soccer match which is quite interesting because the camera of the match is AI controlled which just means that it's programmed to track the ball. Now it tracks the ball by visual features and what's funny about this particular one is that the AI switches constantly between the ball a... | [{"start": 0.0, "end": 5.6000000000000005, "text": " So, there is this recording of the soccer match which is quite interesting because"}, {"start": 5.6000000000000005, "end": 12.0, "text": " the camera of the match is AI controlled which just means that it's programmed to"}, {"start": 12.0, "end": 17.400000000000002, ... |
Yannic Kilchner | https://www.youtube.com/watch?v=gch94ttuy5s | Underspecification Presents Challenges for Credibility in Modern Machine Learning (Paper Explained) | #ai #research #machinelearning
Deep Learning models are often overparameterized and have many degrees of freedom, which leads to many local minima that all perform equally well on the test set. But it turns out that even though they all generalize in-distribution, the performance of these models can be drastically dif... | Hi there, today we'll look at under specification presents challenges for credibility in modern machine learning by Alexander Damour, Catherine Heller, Dan Moldovan, and literally all of Google. All of Google is on this paper, including some others, including MIT and Google with a white space. But there is a lot of au... | [{"start": 0.0, "end": 6.48, "text": " Hi there, today we'll look at under specification presents challenges for credibility in modern"}, {"start": 6.48, "end": 12.44, "text": " machine learning by Alexander Damour, Catherine Heller, Dan Moldovan, and literally all of"}, {"start": 12.44, "end": 13.44, "text": " Google.... |
Yannic Kilchner | https://www.youtube.com/watch?v=NAJOZTNkhlI | Language Models are Open Knowledge Graphs (Paper Explained) | #ai #research #nlp
Knowledge Graphs are structured databases that capture real-world entities and their relations to each other. KGs are usually built by human experts, which costs considerable amounts of time and money. This paper hypothesizes that language models, which have increased their performance dramatically ... | Hi there, today we'll look at language models or open knowledge graphs by Cheng Wang Wang, Xiao Liu and Don Song. This paper on a high level proposes to construct knowledge graphs, which is a structured object that's usually built by human by experts, either fully manually or semi manually with heavy human involvement... | [{"start": 0.0, "end": 6.5200000000000005, "text": " Hi there, today we'll look at language models or open knowledge graphs by Cheng Wang Wang,"}, {"start": 6.5200000000000005, "end": 9.4, "text": " Xiao Liu and Don Song."}, {"start": 9.4, "end": 15.92, "text": " This paper on a high level proposes to construct knowled... |
Yannic Kilchner | https://www.youtube.com/watch?v=xJrKIPwVwGM | Rethinking Attention with Performers (Paper Explained) | #ai #research #attention
Transformers have huge memory and compute requirements because they construct an Attention matrix, which grows quadratically in the size of the input. The Performer is a model that uses random positive orthogonal features to construct an unbiased estimator to the Attention matrix and obtains a... | Hi there, today we'll look at rethinking attention with performers by researchers of Google, the University of Cambridge, DeepMind and the Alan Turing Institute. This paper is yet another paper in the quest to make transformers more performant and what better name to give to a technique than the performer. So the perf... | [{"start": 0.0, "end": 7.74, "text": " Hi there, today we'll look at rethinking attention with performers by researchers of Google,"}, {"start": 7.74, "end": 12.36, "text": " the University of Cambridge, DeepMind and the Alan Turing Institute."}, {"start": 12.36, "end": 18.88, "text": " This paper is yet another paper ... |
Yannic Kilchner | https://www.youtube.com/watch?v=3qxJ2WD8p4w | LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained) | #ai #research #attention
Transformers, having already captured NLP, have recently started to take over the field of Computer Vision. So far, the size of images as input has been challenging, as the Transformers' Attention Mechanism's memory requirements grows quadratic in its input size. LambdaNetworks offer a way aro... | Another day, another state of the art result in machine learning land on ImageNet. This time coming from a thing called lambda resnets. As you can see here, it outperforms efficient nets and resnets right here, not only in terms of top one accuracy, but also in terms of the trade off between accuracy and training time... | [{"start": 0.88, "end": 7.92, "text": " Another day, another state of the art result in machine learning land on ImageNet. This time"}, {"start": 7.92, "end": 15.52, "text": " coming from a thing called lambda resnets. As you can see here, it outperforms efficient nets and"}, {"start": 15.52, "end": 22.8, "text": " res... |
Yannic Kilchner | https://www.youtube.com/watch?v=DiNzQP7kK-s | Descending through a Crowded Valley -- Benchmarking Deep Learning Optimizers (Paper Explained) | #ai #research #optimization
Deep Learning famously gives rise to very complex, non-linear optimization problems that cannot be solved analytically. Therefore, the choice of a suitable optimization algorithm can often make or break the training of a Deep Neural Network. Yet, the literature is full with hundreds of diff... | Hi there, today we'll look at descending through a crowded valley benchmarking deep learning optimizers by Robin Schmidt, Frank Schneider and Philip Henning of the University of Tübingen. So this paper is an empirical investigation a benchmark into optimization algorithms for deep learning. The short story of the pape... | [{"start": 0.0, "end": 5.28, "text": " Hi there, today we'll look at descending through a crowded valley benchmarking deep learning"}, {"start": 5.28, "end": 11.84, "text": " optimizers by Robin Schmidt, Frank Schneider and Philip Henning of the University of T\u00fcbingen."}, {"start": 11.84, "end": 17.740000000000002... |
Yannic Kilchner | https://www.youtube.com/watch?v=TrdevFK_am4 | An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained) | #ai #research #transformers
Transformers are Ruining Convolutions. This paper, under review at ICLR, shows that given enough data, a standard Transformer can outperform Convolutional Neural Networks in image recognition tasks, which are classically tasks where CNNs excel. In this Video, I explain the architecture of t... | Hi there, today we'll look at an image is worth 16 by 16 words transformers for image recognition at scale. So this paper is a bit special. Andrej Karpathy tweeted this out, and I'm going to guess many of you have seen it already. It's a paper that's under review at iClear. iClear, of course, uses open review. So all ... | [{"start": 0.0, "end": 8.0, "text": " Hi there, today we'll look at an image is worth 16 by 16 words transformers for image recognition at scale."}, {"start": 8.0, "end": 16.0, "text": " So this paper is a bit special. Andrej Karpathy tweeted this out, and I'm going to guess many of you have seen it already."}, {"start... |
Yannic Kilchner | https://www.youtube.com/watch?v=3baFTP0uYOc | Training more effective learned optimizers, and using them to train themselves (Paper Explained) | #ai #research #optimization
Optimization is still the domain of hand-crafted, simple algorithms. An ML engineer not only has to pick a suitable one for their problem but also often do grid-search over various hyper-parameters. This paper proposes to learn a single, unified optimization algorithm, given not by an equat... | Hi there, today we'll look at tasks, stability, architecture and compute, training more effective learned optimizers and using them to train themselves by Luke Metz, Nero Mahesvaranathan, C. Daniel Friedman, Ben Poole and Jascha Sol Dikstein. So on a high level, this paper deals with sort of a meta problem. It deals w... | [{"start": 0.64, "end": 5.2, "text": " Hi there, today we'll look at tasks, stability, architecture and compute,"}, {"start": 5.2, "end": 11.76, "text": " training more effective learned optimizers and using them to train themselves by Luke Metz,"}, {"start": 11.76, "end": 19.84, "text": " Nero Mahesvaranathan, C. Dani... |
Yannic Kilchner | https://www.youtube.com/watch?v=MQ89be_685o | The Hardware Lottery (Paper Explained) | #ai #research #hardware
We like to think that ideas in research succeed because of their merit, but this story is likely incomplete. The term "hardware lottery" describes the fact that certain algorithmic ideas are successful because they happen to be suited well to the prevalent hardware, whereas other ideas, which w... | Hi there, are you interested in winning the lottery? Then let me tell you this video is not for you. This video is not about winning the lottery. Okay, I've done enough videos with lottery in the title, only for people to be mad at me for not telling them how to win the lottery. This is about computer science research... | [{"start": 0.8, "end": 6.8, "text": " Hi there, are you interested in winning the lottery? Then let me tell you this video is"}, {"start": 6.8, "end": 14.08, "text": " not for you. This video is not about winning the lottery. Okay, I've done enough videos with lottery"}, {"start": 14.08, "end": 19.92, "text": " in the ... |
Yannic Kilchner | https://www.youtube.com/watch?v=O1b0cbgpRBw | Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained) | #ai #chess #alphazero
Chess is a very old game and both its rules and theory have evolved over thousands of years in the collective effort of millions of humans. Therefore, it is almost impossible to predict the effect of even minor changes to the game rules, because this collective process cannot be easily replicated... | Hi there! If you play chess you'll probably recognize the following moves as illegal. In the top row pawns move two squares at a time while they are not on their home row. In the bottom row you'll see a pawn moving backwards and another one moving sidewards even. So in classical chess these moves are illegal but there... | [{"start": 0.0, "end": 7.0, "text": " Hi there! If you play chess you'll probably recognize the following moves as illegal."}, {"start": 7.0, "end": 12.0, "text": " In the top row pawns move two squares at a time while they are not on their home row."}, {"start": 12.0, "end": 18.0, "text": " In the bottom row you'll se... |
Yannic Kilchner | https://www.youtube.com/watch?v=vLTmnaMpQCs | Learning to summarize from human feedback (Paper Explained) | #summarization #gpt3 #openai
Text Summarization is a hard task, both in training and evaluation. Training is usually done maximizing the log-likelihood of a human-generated reference summary, while evaluation is performed using overlap-based metrics like ROUGE. Both significantly undervalue the breadth and intricacies... | Hi Reddit, my boyfriend and I have been dating for a year and it has been great. Except for one thing, Dota. The other day on a Saturday I was over and he was playing a game. I thought it would just be one but instead he proceeded to play for three hours as I just sat there. What can I do? So this as you can see it is... | [{"start": 0.0, "end": 5.76, "text": " Hi Reddit, my boyfriend and I have been dating for a year and it has been great."}, {"start": 5.76, "end": 14.72, "text": " Except for one thing, Dota. The other day on a Saturday I was over and he was"}, {"start": 14.72, "end": 19.240000000000002, "text": " playing a game. I thou... |
Yannic Kilchner | https://www.youtube.com/watch?v=EbHUU-gLyRA | Self-classifying MNIST Digits (Paper Explained) | #ai #biology #machinelearning
Neural Cellular Automata are models for how living creatures can use local message passing to reach global consensus without a central authority. This paper teaches pixels of an image to communicate with each other and figure out as a group which digit they represent. On the way, the auth... | Check this out. So what you're seeing here is neurocellular automata that are learned to communicate with each other what digit they compose. So every pixel you see is like a little cell and it communicates with its neighbors and only its immediate neighbors about kind of its surroundings. And by doing that, all these... | [{"start": 0.0, "end": 16.0, "text": " Check this out. So what you're seeing here is neurocellular automata that are learned to communicate with each other what digit they compose."}, {"start": 16.0, "end": 27.0, "text": " So every pixel you see is like a little cell and it communicates with its neighbors and only its ... |