How Does Deep Learning Work? | Two Minute Papers #24


Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A neural network is a very loose model of the human brain that we can program in a computer; or, it is perhaps more appropriate to say that it is inspired by our knowledge of the inner workings of the human brain. Note that artificial neural networks have been studied for decades by experts, and the goal here is not to cover all aspects, but one intuitive, graphical aspect that is really cool and easy to understand.

Take a look at these curves on a plane. These curves are collections of points, and you can imagine these points as images, sounds, or any other kind of input data that we try to learn. The red and the blue curves represent two different classes: the red can mean images of trains, and the blue, for instance, images of bunnies. Now, after we have trained the network on this limited data, which is basically a bunch of images of trains and bunnies, we will get new points on this plane, new images, and we would like to know whether a new image looks like a train or a bunny. This is what the algorithm has to find out, and we call this a classification problem.
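If you want to play with this at home, here is a minimal sketch of such a two-class dataset. The video's curves are hand-drawn; as a stand-in, scikit-learn's make_moons produces two interleaving curve-shaped classes (the library and parameter choices are assumptions of mine, not something from the episode):

```python
import numpy as np
from sklearn.datasets import make_moons

# Two interleaving half-circles: a stand-in for the red "train" curve
# and the blue "bunny" curve on the plane.
X, y = make_moons(n_samples=400, noise=0.05, random_state=0)

print(X.shape)         # (400, 2): 400 points on the 2D plane
print(np.bincount(y))  # 200 points per class
```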
A simple and bad solution would be to cut the plane in half with a straight line: images belonging to the red region are classified as the red class, and those in the blue region as the blue class. Now, as you can see, the red region cuts into the blue curve, which means that some bunnies would be misclassified as trains. It seems that if we look at the problem from this angle, we cannot separate the two classes perfectly with a straight line.
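As a quick sanity check, a purely linear model really cannot get such a dataset completely right. A minimal sketch, with logistic regression as my choice of straight-line classifier (the video does not name one):

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression

X, y = make_moons(n_samples=400, noise=0.05, random_state=0)

# A linear model can only cut the plane in half with a straight line,
# so some points inevitably end up on the wrong side.
linear = LogisticRegression().fit(X, y)
print("linear accuracy:", linear.score(X, y))  # noticeably below 1.0
```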
However, if we use a simple neural network, it will give us this result. Hey! But that's cheating, we were talking about straight lines. This is anything but a straight line. A key concept of neural networks is that they create an inner representation of the data and try to solve the problem in that space. What this intuitively means is that the algorithm will start transforming and warping these curves, changing their shapes, and it finds that if we do this warping step well, we can actually draw a straight line that separates the two classes. After we undo the warping and transform the line back to the original problem, it will look like a curve. Really cool, isn't it? So these are lines, only in a different representation of the problem. Who said that the original representation is the best way to solve a problem?
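Here is a minimal sketch of that idea: train a tiny network with one hidden layer, then look at the data in the hidden layer's coordinate system, where a straight line suffices. Extracting the hidden activations by hand from the fitted model is my own illustration, not something shown in the episode:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=400, noise=0.05, random_state=0)

# One hidden layer of tanh units: this is the "warping" step.
net = MLPClassifier(hidden_layer_sizes=(4,), activation="tanh",
                    max_iter=5000, random_state=0).fit(X, y)

# Map each 2D point into the network's inner representation.
hidden = np.tanh(X @ net.coefs_[0] + net.intercepts_[0])

# A straight line (a linear model) fails on the raw data
# but succeeds on the warped data.
print("line on raw data:   ", LogisticRegression().fit(X, y).score(X, y))
print("line on warped data:", LogisticRegression().fit(hidden, y).score(hidden, y))
```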
problem? Take a look at this example with these entangled
spirals. Can we separate these with a line? Not a chance. But the answer is – not a chance
with this representation. But if one starts warping them correctly, there will be states
where they can easily be separated. However, there are rules in this game – for
instance, one cannot just rip out one of the spirals here and put it somewhere else. These
transformations have to be homeomorphisms, which is a term that mathematicians like to
use – it intuitively means that that the warpings are not too crazy – meaning that we don’t
tear apart important structures, and as they remain intact, the warped solution is still
meaningful with respect to the original problem. Now comes the deep learning part. Deep learning
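For reference, the entangled spirals are easy to generate. A minimal sketch; the exact parametrization is my assumption, and any pair of interleaved spirals behaves the same way:

```python
import numpy as np

def two_spirals(n=200, noise=0.2, seed=0):
    """Two interleaved spirals; the second is the first rotated by 180 degrees."""
    rng = np.random.default_rng(seed)
    theta = np.sqrt(rng.uniform(0.0, 1.0, n)) * 3.0 * np.pi  # angle along the arm
    arm = np.stack([theta * np.cos(theta), theta * np.sin(theta)], axis=1)
    X = np.concatenate([arm, -arm]) + rng.normal(0.0, noise, (2 * n, 2))
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

X, y = two_spirals()
```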
Now comes the deep learning part. Deep learning means that the neural network has multiple hidden layers and can therefore create much more effective inner representations of the data. In an earlier episode on an image recognition task, we have seen that as we go further and further into the layers, first we see an edge detector, then, as a combination of edges, object parts emerge, and in the later layers a combination of object parts creates object models.
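Mechanically, "deep" just means a longer chain of warpings. A minimal sketch on the spirals from above; the layer sizes are arbitrary choices of mine:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Rebuild the two entangled spirals from the earlier sketch.
rng = np.random.default_rng(0)
theta = np.sqrt(rng.uniform(0.0, 1.0, 400)) * 3.0 * np.pi
arm = np.stack([theta * np.cos(theta), theta * np.sin(theta)], axis=1)
X = np.concatenate([arm, -arm]) + rng.normal(0.0, 0.2, (800, 2))
y = np.concatenate([np.zeros(400), np.ones(400)])

# Three hidden layers: each one warps the representation produced by the
# previous one, so the final straight-line cut becomes much easier to find.
deep = MLPClassifier(hidden_layer_sizes=(32, 32, 32), activation="tanh",
                     max_iter=5000, random_state=0).fit(X, y)
print("deep network accuracy:", deep.score(X, y))
```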
Let's take a look at this example. We have a bullseye here, if you will, and you can see that the network is trying to warp it so that the classes can be separated with a line, but in vain. However, if we have a deep neural network, we have more degrees of freedom, more directions and possibilities to warp this data. And thinking intuitively: if this were a piece of paper, you could put your finger behind the red zone and push it in, making it possible to separate the two regions with a line. Let's take a look at a one-dimensional example to see better what's going on. This line is the 1D equivalent of the original problem, and you can see that the problem becomes quite trivial if we have the freedom to do this transformation.
We can easily encounter cases where the data is so severely tangled that we don't know how good our best solution can be. There is a heavily academic subfield of mathematics called knot theory, which is the study of tangling and untangling objects. It is subject to a lot of snarky comments for not being, well, too exciting or useful. What is really mind-blowing is that knot theory can actually help us study these kinds of problems, and it may ultimately end up being useful for recognizing traffic signs and designing self-driving cars.
Now it's time to get our hands dirty! Let's run a neural network on this dataset. If we use a low number of neurons and one layer, you can see that it is trying ferociously, but we know that it is going to be a fruitless endeavor. Upon increasing the number of neurons, magic happens. And we now know exactly why! Yeah!
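A minimal sketch of that last experiment, using scikit-learn's make_circles as the bullseye-like dataset (the neuron counts below are illustrative guesses, not the ones in the video's demo):

```python
from sklearn.datasets import make_circles
from sklearn.neural_network import MLPClassifier

# A ring of one class around a blob of the other: the "bullseye".
X, y = make_circles(n_samples=400, noise=0.05, factor=0.5, random_state=0)

# Too few neurons and the warping is too stiff; a few more and it succeeds.
for width in (1, 2, 8):
    net = MLPClassifier(hidden_layer_sizes=(width,), activation="tanh",
                        max_iter=5000, random_state=0).fit(X, y)
    print(f"{width} hidden neuron(s): accuracy = {net.score(X, y):.2f}")
```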
Thanks so much for watching and for your generous support. I feel really privileged to have supporters like you, Fellow Scholars. Thank you, and I'll see you next time!

77 Replies to “How Does Deep Learning Work? | Two Minute Papers #24”

  1. This episode has some pretty cool figures and animations (thanks to Christopher Olah!), but since this wasn't about pure computer graphics, the visuals are not as spectacular as in the fluid simulation papers. Did you still find it as interesting? Please let me know here in the comments section. 🙂

  2. This is absolutely fascinating. Everyone just normally shows the diagram of the neural networks and I never really understood what was going on, but the way you showed how the graph gets manipulated with subsequent layers made things much clearer. Thanks!

  3. I thought it was very interesting and the animations really helped with understanding! Also, loved the sound of your mind being blown. Pretty much the same sound I make.

  4. Great to see this 🙂
    By the way, on Olah's blog even the comments occasionally hold a gem worth checking out. Some people link interesting papers down there.
    That being said, those papers probably aren't typically well suited for this channel. They are often abstract and not very visual. Still, if you are interested in digging deeper, this is highly recommended. And maybe there are even one or two papers hidden in there that actually are visual enough for this channel.

  5. Great video, thanks. I would be really thankful if you could share your opinion on something:
    Is this at all aimed at representing the inner workings of the human brain?
    I have an example: I have a 2-year-old, and it is enough to show her an image of a crocodile, or any new object, let's say a crane, ONCE.
    She then basically recognizes ALL crocodiles and cranes when she sees them.
    So it is not about comparing lots of images and having a sharper line between objects; it feels like understanding the basic concept of a crocodile.
    It may sound a bit platonic, but hopefully you get what I mean. She doesn't need a lot of data to draw a line between the two objects. In extreme cases she needs only one example to recognize that type of object.
    I am thinking about this a lot, but I may totally miss the point as a layperson. Köszönöm!

  6. Wow, your videos are really great and professional! I thought this was gonna be all stuff I already knew since it was so short and just an introduction, but you managed to stick some parts in there which I had not previously seen, like those visualizations and that thing about knot theory. Great work!

  7. Great indeed, thanks! I subscribed!
    Just one passage I am not clear on: why must it draw a straight line? Can't it draw a curved line in the first place, or a circle in the bullseye example?

  8. Very, very well done! That is the way students can grasp difficult material in minutes. Thanks for this …

  9. One minor thing: maybe a (better) motivation should be given for why a straight "line" is necessary in the first place.

  10. I have been looking for easy ways to understand deep learning for a class project, since we are only working on multi-layer neural networks, and I find your video easy to understand. But now I am thirsty for more.
    Please advise with resources or links I can use to gain deeper knowledge on this topic.

    Thanks in advance

  11. Hey, really interesting video. Could you recommend any resources for learning more about the mathematics surrounding neural networks?

  12. I have to say, this was one of the best, clear and easy to understand videos I've ever watched about this subject. Exactly what I searched for, very well done, thanks a ton!

  13. But how does it decide from all of the edges that a combination of edges makes a certain face? Even if you let it learn for 10000000 years, it won't know that that's me unless I told it at least once, am I right?
    What I mean is, the output is already known, we just want to filter to the output!
    Correct me if I am wrong, please.

  14. I've just discovered this channel and I have to say a big thank you for your work. Man, you are doing amazing stuff. I cannot imagine how much effort you have put into each video, but the result is 10000000x better.

  15. It's pretty good in that it avoids getting lost in too much depth (pun intended) but it severely lacks in that the only explanation for deep learning is that it involves neural nets with a number of hidden layers. Which doesn't differ from basic feed forward supervised learning neural nets.

  16. Awesome visualisations. Really helped a lot in understanding the NNs. Can you do some more videos explaining what kinds of models a NN can take care of, and what that depends on?

  17. Honestly, I still don't understand diddly squat. But I am glad that there are people out there who do. Just make sure that SkyNet has a sense of humor before flipping the switch, please.

  18. In decades of teaching and research, I have found the intuitive explanation in this episode to be the clearest demonstration of a NN as a classifier.

  19. Great video. Can you please clarify what you mean at 5:00 by 'increasing the number of neurons'? I understand that if you increase the dimensions, you will be able to warp and manipulate your graph to find a suitable place for your straight line, and then scale back down to the original dimension, but what do you mean by increasing the neurons? I'm also assuming that by layers you mean dimensions.

  20. How could I have missed this awesome video for so long?

    As a mathematician, I really liked the explanation: "… homeomorphisms, which is a term that mathematicians like to use." 😉

  21. In terms of the classification problem, is there an advantage of using this over the K nearest neighbors algorithm?

  22. Amazing how the data, when images are represented as points in a higher-dimensional space, can look like a knot. Awesome.

  23. You know your stuff when you can explain a complex subject in the simplest form, for anyone, anywhere, to understand AND appreciate. Kudos.

  24. Hats off, you have done a great job. My question is: do deep neural networks transform space, or transform the data? From what I gather, they transform space.

  25. This is so concerning, how artificial intelligence, deep learning, etc. can start to mimic humans :/
    The danger to the human species from A.I. depicted in Hollywood movies is not too far off.

  26. Thanks a lot for this video, you are the only one who actually provides interesting presentations with clear explanations

  27. Simply the best simple explanation for neural networks and deep learning. Thank you very much for this very nice and outstanding piece of work!

  28. One of the most difficult tasks is to take a complex subject and simplify it to where anyone can understand. And to do it in 2 min?
    Your work is absolutely amazing, thank you! I can only imagine the effort put into each video. And to think you present papers from so many different fields, you are truly an inspiration.

  29. Really the best and simplest explanation, sir.
    I am wondering, can we apply deep learning (the concept of warping dimensions) to problems in string theory, which mainly deals with multidimensional objects like tori and knots?

  30. You managed to capture one of the most breathtakingly beautiful ideas behind neural networks in a five minute video, and conveyed it with simplicity and elegance.

    Very well done.

  31. As a non-scientific person, I found this video very clear and interesting. Do you know where I can find other 3D representations of data used for data sorting? Like the ones we see at the end (the intertwined torus and the spiral-like donut).
    Thanks!

  32. Good stuff, liked and subbed. I was heavily into NNs, GAs, classifiers, etc. many years ago; I think it's time to jump back in 🙂

  33. SOMEONE HELP!!!! 😭 … I'm the type of person who (literally) struggles 4 attempts per math course!

    I'm trying to make sense of NNs & deep learning algorithms so I can survive the A.I. era … HELP!!!!!! 😭😭

  34. Awesome explanation – all the vector and linear algebra makes sense now – thank you 😉
