GLOM: How to represent part-whole hierarchies in a neural network (Geoff Hinton's Paper Explained)
2021-02-27 15:47:03
https://youtu.be/cllFzkvrYmE
and he discusses this at the end of the paper: it's not really biologically plausible, but there's an ensemble effect, which we won't go into. All of these blue arrows are always the same for each time step, but not necessarily the same between different layers, so this f might be different from that f down there. However, the function passing information from layer l to layer l plus one is the same in every single column across the image. It's a bit like a convolutional network in terms of weight sharing; you can imagine it as a one-by-one convolutional network in that sense, except that the information does not only go up the layers, it also goes down the layers over time. As I said, this is an iterative procedure: it goes up, down, and laterally.
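To make the weight-sharing point concrete, here is a minimal PyTorch sketch (my own illustration, not code from the paper; the two-layer MLPs, dimensions, and names are placeholder choices). The same per-level functions are applied at every column, exactly like a one-by-one convolution, and they produce both an upward and a downward prediction:

```python
import torch
import torch.nn as nn

class Level(nn.Module):
    """One level of the hierarchy. Its bottom-up and top-down nets are shared
    by every column (location), which is what makes this like a 1x1 convolution."""
    def __init__(self, dim):
        super().__init__()
        self.bottom_up = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        self.top_down  = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

# one embedding vector per column (location) and per level
num_columns, num_levels, dim = 64, 5, 128
x = torch.randn(num_columns, num_levels, dim)
levels = nn.ModuleList([Level(dim) for _ in range(num_levels)])

# one time step: every column applies the SAME per-level functions,
# sending predictions up (l -> l+1) and down (l -> l-1)
up_predictions   = [levels[l].bottom_up(x[:, l]) for l in range(num_levels - 1)]
down_predictions = [levels[l].top_down(x[:, l])  for l in range(1, num_levels)]
```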
The second thing is that you might now ask: well, if every single column has the same weights, how can you localize any information at all? The answer is that you have a side input, like in a neural field: a side input annotating each location, basically a positional encoding. So in addition to what the image patch looks like, you also get either your x-y coordinates or your coordinates relative to some other coordinate frame. That way the network knows where it is.
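The side input can be as simple as appending each location's normalized coordinates to its features. A sketch, with the caveat that the exact encoding (absolute x-y here) is just one of the options the paper leaves open:

```python
import torch

def add_side_input(patch_feats, height, width):
    """Append a positional side input (normalized x-y coordinates) to each
    column's patch features, so identical weights can still localize information."""
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, height), torch.linspace(0, 1, width), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (H*W, 2)
    return torch.cat([patch_feats, coords], dim=-1)         # (H*W, D+2)

feats = torch.randn(8 * 8, 128)                # one feature vector per location
feats_with_pos = add_side_input(feats, 8, 8)   # now every column "knows where it is"
```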
And that's going to be important, because what Hinton wants to build are these islands. Hinton's picture is that this is somewhere in between, say after time step 10, when you want to run it for 100, and he imagines that what will emerge are these sorts of islands. So imagine the image is now a 1D vector down here, or you can imagine the columns in 2D, whatever fits your brain better, but say the image is simply a 1D line right here. He imagines that the bottom vectors will just happily describe whatever is there at the very bottom level. But at the next level, once it goes to lower resolution and higher abstraction, there must necessarily be vectors that are the same. If the system works, look at these two vectors, and these two vectors: they are the same, because they now describe objects that are larger than one location; the cat's head is larger than a single location. Therefore, at the level that represents the cat's head, you expect, because all the up and down functions in the same layer have the same weights, that the embedding of the cat's head is the same in the different columns. If the system works, this must be the case. And as you go up, you expect more and more of these, what Hinton calls islands, to emerge; they agree.
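Numerically, an island would just mean a run of columns whose vectors at some level have become nearly identical. A toy way to check for that, purely my own illustration and not anything from the paper, is to threshold the cosine similarity between neighbouring columns:

```python
import torch
import torch.nn.functional as F

def islands_1d(level_embeddings, threshold=0.95):
    """Group a 1D row of columns into islands of near-identical vectors.
    level_embeddings: (num_columns, dim) at one level. Returns one island id per column."""
    sims = F.cosine_similarity(level_embeddings[:-1], level_embeddings[1:], dim=-1)
    ids, current = [0], 0
    for s in sims:                 # start a new island where neighbours disagree
        if s < threshold:
            current += 1
        ids.append(current)
    return ids

a, b = torch.randn(1, 16), torch.randn(1, 16)
emb = torch.cat([a.repeat(5, 1), b.repeat(3, 1)])   # two perfect islands
print(islands_1d(emb))                              # e.g. [0, 0, 0, 0, 0, 1, 1, 1]
```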
And the idea behind all of this message passing is that over time, all of these things reinforce each other. So we looked at a column before, and we said: okay, this vector down here gets information from the top saying, hey, there's a cat here, so you might be a cat ear or a cat eye or something like this. Then it gets information from the bottom saying, well, there's a bit of fur here and some cartilage showing, and so on. And it has already sort of figured out on its own that it might be an ear. These pieces of information now reinforce each other: you're saying I'm part of a head, you're saying there's a bit of fur and cartilage, and I already kind of noticed that I'm a bit like an ear, so I'm probably an ear. So the idea is that over time you have this consensus algorithm.
There's one thing missing, though, and that is: how do the different columns communicate with each other? I said there are different parts, and the one that's missing, I'll just call it a, is going to be an attention mechanism across all the other columns at the same layer. So if we look here, this cell receives information from above, from below, and from itself, and also, via an attention mechanism, it receives information from all of the other embeddings at the same layer. You can see that it takes in everything we've got here. Now, the attention, he says, is simpler than usual.
So these are the four parts right here. At each discrete time, and in each column separately, the embedding at a level is updated to be the weighted average of four contributions: the prediction produced by the bottom-up neural net acting on the embedding at the level below at the previous time; the prediction produced by the top-down neural net acting on the embedding at the level above at the previous time; the embedding vector at the previous time step (these three we already have); and then the attention-weighted average of the embeddings at the same level in nearby columns at the previous time.
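Written out for a single column and level, that weighted average might look roughly like the following sketch; the mixing weights w and the nets f_bottom_up and f_top_down are placeholders of my own, since the paper does not pin them down:

```python
import torch

def glom_update(x_below, x_same, x_above, x_neighbors,
                f_bottom_up, f_top_down, w=(0.25, 0.25, 0.25, 0.25)):
    """One GLOM-style update for one column and level: a weighted average of
    four contributions, all taken from the previous time step."""
    bottom_up = f_bottom_up(x_below)        # 1. prediction from the level below
    top_down  = f_top_down(x_above)         # 2. prediction from the level above
    previous  = x_same                      # 3. this level's own previous embedding
    # 4. attention-weighted average over the same level in nearby columns,
    #    with queries = keys = values = the embeddings themselves (sketched fully below)
    attended  = torch.softmax(x_neighbors @ x_same, dim=0) @ x_neighbors
    return w[0] * bottom_up + w[1] * top_down + w[2] * previous + w[3] * attended
```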
So, nearby: sorry, he later backpedals a bit, I think, on "nearby" and what nearby exactly means in some parts, so this idea is, I think, still up for debate, and this is, I think, where I can help. But what he wants to do is aggregate via attention, and he wants to simplify attention. Usually we produce queries, keys, and values, which are all different functions of our input, and then we compute query times key transposed, take the softmax of that, and multiply by the values. That is the attention mechanism that allows arbitrary information to be routed around and so on.
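For reference, here is a bare-bones single-head version of that standard form (with the usual scaling by the square root of the dimension, which the explanation above glosses over):

```python
import torch
import torch.nn as nn

class StandardAttention(nn.Module):
    """Usual attention: queries, keys and values are different learned
    functions of the input, so arbitrary information can be routed around."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x):                                  # x: (num_tokens, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        weights = torch.softmax(q @ k.T / x.shape[-1] ** 0.5, dim=-1)
        return weights @ v
```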
Hinton says: no, what I want is simply that the queries, the keys, and the values are all just equal to the embeddings themselves. So the attention mechanism works out to be the softmax of x times x transposed, times x. What that does is this: if you yourself are the query, and every vector is also itself the key, what do you attend to? You attend to vectors that are very similar to yourself.
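The simplified mechanism is then literally softmax(x x^T) x, with no trainable parameters at all. A sketch:

```python
import torch

def consensus_attention(x):
    """GLOM's simplified attention at one level: queries = keys = values = x.
    x: (num_columns, dim). Each column averages over columns similar to itself."""
    weights = torch.softmax(x @ x.T, dim=-1)
    return weights @ x
```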
You can see that in Hinton's diagram: the one we circled in dark blue, what would it attend to? Well, it would probably attend to its left-hand neighbor, the one circled here. It will probably attend a lot to this one, not so much to that one, and to the ones over here it might not attend at all. What does this give us? Especially since the values are also these vectors, this is a consensus algorithm. It is not meant as a way to pass information around, and it is not meant, like in a transformer, as a way to do computation, because there are no trainable weights in this process. It is simply meant as a consensus algorithm. So he imagines that by doing this, by attending to things that are similar to you and then integrating their values, these islands will form. And that's what you see right here: you can imagine that if two vectors are already close at the same layer, this mechanism will make them even closer. So this is a sort of clustering algorithm.
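You can watch the clustering behaviour by iterating that consensus step on toy vectors: columns that start out similar collapse onto a shared vector, an island. This is again just an illustration, with no claim about the paper's actual dynamics:

```python
import torch

x = torch.randn(8, 16)
x[0:4] += 3.0 * torch.randn(1, 16)   # make the first four columns roughly similar
x[4:8] += 3.0 * torch.randn(1, 16)   # ...and the last four similar to each other

for _ in range(10):                  # repeated consensus attention, no learned weights
    x = torch.softmax(x @ x.T, dim=-1) @ x

# within an island the vectors are now (nearly) identical; across islands they are not
print(torch.cosine_similarity(x[0], x[1], dim=0))   # ~1.0
print(torch.cosine_similarity(x[0], x[7], dim=0))   # usually much lower
```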
And so my question is about these drawings: when you look at them, they are very specifically constructed, constructed such that a parse tree emerges. When you look at this (I can probably move all of that clutter out of the way), you can see the parse tree, right? The black thing is going to be the top node right here; let's leave the scene-level embedding aside for now. Then it has two child nodes, this one and this one, and then it has four, since every one of those has two child nodes; it doesn't have to be that way, but it is in this case. And every one of them, you know, the black ones, is individual. So this is dynamically constructing a parse tree, and the parse tree here is something like this, and so on. This is pretty cool, but it is also drawn deliberately such that a core problem does not arise. The core problem would be something like: well, what if this vector here were actually also pointing like this? It is not in the same area of the parse tree; if you go down the parse tree, it is actually over here. Now, if we do what Hinton says, and for this vector here we do this aggregation via attention on the same layer, what we will attend to is this vector over here. That is probably not intended, because even though this vector over here can represent the same thing, you can see it's not on the same path of the parse tree. He mentions this a little bit throughout, but not very clearly, and the drawing makes it seem like there's no problem. But I hope you can see how this is a problem: the attention would pull in information from over here. However,