NICK CHENEY: I'm Nick Cheney. I'm finishing my PhD in computational biology at Cornell University. Gabriel Kreiman and I are interested in seeing how deep networks respond to neuroplasticity.

In the brain, we know that things are constantly in flux: neurons are growing and dying, and weights are changing in response to stimuli. But most of the time in machine learning, what we do is pre-train a network on some training set, and then, when we want to use it for real, we freeze it and keep it in some static form. There's been a lot more emphasis lately on online learning, where you learn as you're working on a data set. In those kinds of environments, we think the network will be changing quite a bit. So we're looking at how a network can be robust to those kinds of changes, much like the brain is to the everyday stimuli and actions it sees.

To start out, we're doing a very simple test looking at perturbations of the network: throwing random changes at the weights that make up the network and seeing how that affects its ability to classify images. After that, we're looking at how different parts of the network respond differently to these kinds of changes. And then, ideally, we'd like to have some kind of rule that doesn't affect the performance of the network very much, so that it's able to maintain its ability to classify while seeing a number of stimuli.

We know that the brain has certain learning rules, like Hebb's rule, in which neurons that fire one after another end up strengthening their connections, or conversely weakening them. Soon we're going to see whether rules like that provide stable perturbations, where the network can easily recover and maintain what it's doing, or unstable ones, where performance goes off track. We also know that deep networks act similarly to how the brain works. Jim DiCarlo gave a great talk about how the features we see in deep networks are similar to the features of the brain.
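[Editor's note: a minimal sketch of the perturbation test and Hebbian-style update described above, not the speaker's actual setup. The toy softmax classifier, synthetic data, noise scales, and learning rate are all illustrative assumptions.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a "pretrained" network: a single softmax-style layer
# whose weights generate the labels, so the unperturbed model classifies well.
n_features, n_classes, n_samples = 20, 5, 2000
true_W = rng.normal(size=(n_features, n_classes))
X = rng.normal(size=(n_samples, n_features))
y = np.argmax(X @ true_W, axis=1)

def accuracy(W, X, y):
    """Fraction of samples whose argmax logit matches the label."""
    return float(np.mean(np.argmax(X @ W, axis=1) == y))

W = true_W.copy()
print(f"baseline accuracy: {accuracy(W, X, y):.3f}")

# Random perturbation test: add zero-mean Gaussian noise to every weight
# and see how classification accuracy degrades as the noise scale grows.
for sigma in [0.01, 0.1, 0.5, 1.0]:
    W_perturbed = W + rng.normal(scale=sigma, size=W.shape)
    print(f"sigma={sigma:<4}  accuracy after perturbation: "
          f"{accuracy(W_perturbed, X, y):.3f}")

# Hebbian-style update (illustrative): strengthen weights between active
# inputs and the output unit that fires for them, dW = eta * X^T * Y_onehot.
eta = 1e-3
y_onehot = np.eye(n_classes)[np.argmax(X @ W, axis=1)]
W_hebb = W + eta * X.T @ y_onehot
print(f"accuracy after one Hebbian-style step: {accuracy(W_hebb, X, y):.3f}")
```

In a real experiment of the kind described, the toy classifier would be replaced by an actual pretrained image classifier and accuracy would be measured on held-out images before and after each perturbation or plasticity step.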
The brain, we know, is constantly undergoing these kinds of changes. So we're scientifically curious to see how these computer models respond, and to understand how these two systems are the same or different. But also, from an engineering standpoint, online learning, where the network is changing while it's learning, is going to be, I think, a much larger part of the use of machine learning going forward. So understanding how stable these systems are to constantly changing parameters will, I think, be quite informative for those kinds of studies.

Being able to explore new types of material and learn a lot about both computer vision and neuroscience has been a lot of fun, and certainly informative. Deep learning is a very hot topic right now, so being able to dive in hands-on a little bit and get some experience working with these models and some of the latest software packages, I think, will be useful going forward too.