[MUSIC PLAYING]

ISHITA DASGUPTA: I'm Ishita Dasgupta. I'm going into the third year of my PhD at Harvard in computational cognitive science.

DAVID ROLNICK: I'm David Rolnick. I'm just getting into the fourth year of my PhD at MIT, in the applied math department.

ISHITA DASGUPTA: We're working with Hopfield networks, a model in which many simple neurons are all connected together, and the way they update each other determines the state the network ends up in. Hopfield networks have been used in the past to model memories: given the way the neurons are connected, there are certain states that the network prefers to be in, and you can make it go into one of those states by initializing it at a different point. So they've been used to store memories before, but these are static memories. Once you're in one of those memories, you just stay there.
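The attractor behavior described here can be sketched in a small simulation. This is an illustrative example, not the speakers' own code: the network size, the patterns, and the noise level are arbitrary choices. Patterns stored with Hebbian weights become fixed points, and a corrupted cue settles back onto the stored memory:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two random binary (+/-1) patterns to store as memories.
n = 64
patterns = rng.choice([-1, 1], size=(2, n))

# Hebbian weights: each stored pattern becomes an attractor.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def recall(state, sweeps=20):
    """Asynchronous updates: each neuron aligns with its input field."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Initialize near pattern 0 (corrupt 10 of 64 bits) and let it settle.
noisy = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
noisy[flip] *= -1
recovered = recall(noisy)

# Fraction of bits matching the stored memory (1.0 means perfect recall).
print((recovered == patterns[0]).mean())
```

Initializing "at a different point" in the speakers' sense corresponds to the corrupted cue: the dynamics pull the state back into the nearest stored memory, where it then stays.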
So we were working with this kind of model, making changes so that you can go from one such memory to another, and decide the probability with which you move to one memory or another; basically, adding stochastic dynamics to a Hopfield network.

DAVID ROLNICK: The idea is that there are many situations in which the living brain is faced with the task of reconstructing or simulating a stochastic sequence of events. For instance, if you were simulating an event without knowing quite what the probabilities were that something would happen, you can imagine playing it out in your mind and imagining one way of realizing it, with each state in your mental sequence determined by the previous state. If something is falling, then its state while falling is determined by its state when it was upright.
And if we can understand how memory could be used to generate these sequences of patterns, determined by stochastic rules, then we would get a better sense of what kinds of connections between imagination and memory are possible even in a very simple model of the brain. We're working with about the simplest model of memory there is, but it still turns out to be extremely powerful in its ability to create these stochastic sequences of patterns: Markov chains.

ISHITA DASGUPTA: So far, we've just been modeling it computationally: modeling what we think should happen. For us, there's a bit of theory work to figure out what kinds of connections we should put in so that it should work, and then we actually set up those connections and see if it does work. We're hoping at some point to tie this back to real-world situations in which this kind of stochastic sequence of events actually happens in the brain, but that is in the future. Right now, we're just making sure that we can model this kind of behavior in a computer.
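One way to picture a Markov chain over memories is the toy simulation below. It is a deliberate simplification: the speakers build the stochasticity into the network's own connections, whereas here the transition rule is applied externally and the Hopfield network is only used to clean up each noisy cue. The transition matrix P and all sizes are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(2)

# Three stored memories with Hebbian weights, as in a standard Hopfield net.
n = 64
patterns = rng.choice([-1, 1], size=(3, n))
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

def settle(state, sweeps=3):
    """Deterministic asynchronous updates: fall into the nearest attractor."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# A target Markov chain over the three memories (rows sum to 1).
# These probabilities are arbitrary, chosen only for illustration.
P = np.array([[0.1, 0.8, 0.1],
              [0.3, 0.1, 0.6],
              [0.5, 0.4, 0.1]])

current = 0
sequence = [current]
for _ in range(4000):
    nxt = rng.choice(3, p=P[current])           # stochastic rule
    noisy = patterns[nxt].copy()
    flip = rng.choice(n, size=6, replace=False)
    noisy[flip] *= -1                           # corrupt the cue a little
    state = settle(noisy)                       # the memory cleans it up
    current = int(np.argmax(patterns @ state))  # which memory did we land in?
    sequence.append(current)

# Empirical transition frequencies between memories.
counts = np.zeros((3, 3))
for a, b in zip(sequence[:-1], sequence[1:]):
    counts[a, b] += 1
est = counts / counts.sum(axis=1, keepdims=True)
print(np.round(est, 2))
```

With enough samples, the empirical frequencies `est` approach the chosen matrix P: the sequence of settled states realizes a Markov chain over the attractors, which is the behavior the project aims to produce from within the network's dynamics.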
DAVID ROLNICK: In some sense, it's an engineering task, or a theoretical task followed by an engineering task: understanding what can be done in a system like this, and then actually building it. We built it, and now we have to--

ISHITA DASGUPTA: Yes.

DAVID ROLNICK: --see how it works in practice.

ISHITA DASGUPTA: It becomes kind of like an experimental science: we're changing parameters and seeing how things change, because these systems are not entirely clear and predictable. You can't just say that because you built it, you should know how it works. There are too many degrees of freedom, so there are a lot of things to be tested to see how well it performs under different conditions.

[MUSIC PLAYING]