- Apology for the reading
Justin apologizes that the reading was difficult.
- Theory of Meaning
A review of the theory of meaning introduced in the last class. How exactly do we go about ascribing meaning to a word? Everyone has a different meaning for a word, based on their own experience of it. Would aliens be able to enjoy our music, or is it particular to our culture? Hofstadter argues that there is something fundamental in the patterns of music that causes it to be enjoyable.
- Universal Information
Would aliens enjoy our music? Hofstadter argues they would, because the patterns of information encoded are universal and inherently beautiful.
- Information and Entropy
- Number Theory
Outlining typographical number theory, or TNT.
- Context Free Grammar
Graphical demonstration of context free grammars, using the open source computer program "Context Free".
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation, or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
JUSTIN CURRY: All right, hello. Welcome back. Today's I think the fifth lecture. So I want to start off actually with some apologies. And it's going to be a bit of a sobering note. I want to quickly recap some of the things we talked about last lecture, and kind of bring forth some words of caution.
But before I do that, I want to also apologize for this chapter. How many of you were kind of out of your chair excited in reading this past chapter? All right. No. In fact, I see the opposite of shooting your hand up. So I take it that most people didn't enjoy this last chapter.
Yeah, I mean, neither did I. And really the only reason I assigned it as reading is so that you guys could get kind of a fundamental idea of how, like MIU and the PQ system, we have a completely formal typographical system for encoding statements, and we can play in a very mechanical way with these statements. Of course, unlike MIU-- and PQ to an extent-- TNT, typographical number theory, has the advantage of encoding something which we think we know things about: numbers, the properties of natural numbers-- 0, 1, 2, 3, 4, and so on-- and truths about them.
So I wanted you mainly to become familiar with the notation, and become familiar with the idea that you can start with a kind of couple of seeds of axioms. And just by applying these kind of recursive rules of development, you can produce kind of a whole web of new strings, which happen to, when interpreted, provide truths, or things which we think we know about numbers. And that was really the main point of this chapter.
There are going to be a few other things I'm going to highlight. But we're going to, once again, do a mishmash of topics, like we seem to do every lecture. And I think we'll have something exciting to show you.
So first off, recap of-- not yesterday, but yesterday in terms of learning-- last week, where we were interested in kind of fundamentally a theory of meaning. And this is already something I want to kind of offer my apologies for is, in many ways, I think we provide a pretty convincing idea of what we think a theory of meaning is, even though it's not at all agreed upon in the academic linguistic philosophical community, what a theory of meaning actually is.
What we kind of advertised is that the meaning of kind of "snow is white." And what we take meaning to mean is that there's some sort of complex. And in the words of Douglas Hofstadter, an exotic isomorphism. So I could have exotic, or E for exotic, for the activity which goes on in your brain when you, as your visual perceptual devices hone in on the chalkboard, and we see these broken bits of limestone against this shale, or whatever the chalkboard's made out of. And as our brain undergoes kind of a complex set of edge detection algorithms, we hint this light and dark here and here and here and here.
And then, as this comes into focus, we then recognize we have kind of global objects. I don't know if you care about the quotations, but you probably noticed this group here, and here and here. And then once you have these broken down, you then say, OK, I recognize these. You then perform edge detection on each of the letters. And you say OK, S-N-O-W spells "snow."
Snow then feeds to this whole kind of long and complicated conceptual semantic network of the memory of when you first saw snow falling. Let's see. You might just associate it with the sensory feeling of cold. You might also associate snow with going skiing or snowboarding, and all sorts of things. And this continuing to branch out, and it becoming a very individual thing.
And I think that's a very important point to notice: really, what "snow is white" means to me is, in some ways, fundamentally different from what it means for you. And that's a dangerous statement. It means that a theory of meaning lacks the single formal, mathematical interpretation we intend it to have.
What philosophers and linguists always wanted is we want "snow" to point somehow to this exterior world-- the world which we live in-- and refer it to the actual crystallized structure of water we know as snow. And that was the upstanding citizen meaning of snow.
And there was nothing else-- none of this personal understanding of, well, there was that one time where we got caught in a blizzard in Kansas, or whatever. Like we wanted to get rid of all kind of individual interpretations. So I advertised this isomorphism between what the perceptual input of "snow is white" does, and what that lights up in your brain.
But then we kind of carried forth to another idea. And we brought out this very, very, very connected notion of information.
So if you will cast your mind backwards, Hofstadter talks about this record which we strap onto a space shuttle or something and send it rocketing across the universe. And he kind of poses the problem: suppose some alien civilization stumbled upon this record and then said, OK, what is this? And there is this kind of argument of to what extent the record would mean anything when it was completely out of context of this human cultural, sociological entity which we support here on Earth. I mean, what does a record mean to someone living on Alpha Centauri? I don't know.
And one of the big arguments right away was about the structure of a record, with these very neat, concentric grooves-- or actually spiraling, to be a little more appropriate. And somehow these aliens busted out their microscopes and started analyzing little sections of this strange artifact. And they would then notice these kind of regularities in the grooves. And then they would ask, well, that's odd. I wonder if that means anything? And then there's this idea of them somehow, from the record, being able to reverse engineer what a record player might be, or look like, and then actually play the music.
So even assuming if they had somehow figured out this incredible string of ideas, even if they were to play the music, would the music mean anything to them? Would it just sound like a bunch of garbled mishmash? Latif, you have a comment?
AUDIENCE: [INAUDIBLE] noise.
JUSTIN CURRY: So Latif argues that music would just be noise. How many of you agree with Latif? You think Latif's right? Does anyone have the cojones to stand up to Latif?
JUSTIN CURRY: So Max is saying we're assuming that they can hear at all. OK. So let's go ahead and grant them that.
And if we're going to kind of engage in this philosophical dialogue, let's grant them the ability to hear, and let's even assume that they can hear in a similar frequency range that we do. Let's say if in many ways somehow there is this universal form known as a human, and for planets with similar gravitational pulls and things like that, and just the fact that carbon happens to be a really stable molecule, that evolution carried out in almost exact sense on this other planet. Would the music still mean the same thing? Would it have meaning? Yes?
AUDIENCE: Well, the sounds themselves would still pretty much sound the same. But if there was somebody singing, that probably wouldn't make any sense to them. It would be like listening to any foreign music in a language that you don't understand.
JUSTIN CURRY: OK. So let's say it was a piece of Bach. No vocals. Just kind of triumphant sounds of glory or whatever. Felix.
JUSTIN CURRY: All right. So it'd probably [? wouldn't ?] know what piano is. But do you think they would still have this kind of trigger of beauty, or even kind of awe or desire over the music, even if they didn't know it, just from the sound?
AUDIENCE: Where do those desires come from? What triggers those desires?
JUSTIN CURRY: So let's-- I should be careful-- not desires, but just a sense of beauty. How fundamental do you think a sense of beauty and organization is in this music?
AUDIENCE: That's based on your interpretation. If somebody else [INAUDIBLE] says that OK, I can associate this with that. This sounds more like this. Or this makes me feel a certain way. And that's what makes me feel that it is beautiful [INAUDIBLE].
JUSTIN CURRY: So even for individuals on this earth, what Bach's music means is different for everyone. OK.
AUDIENCE: But there's still sensation there. And I think that's one of the coolest things that you can have. When you listen to foreign music, you can in many ways decipher despair, and all these different feelings. So maybe it really depends on if these people get the same sort of sensations [INAUDIBLE] from those different frequencies.
JUSTIN CURRY: Exactly. But even putting aside the very complex semantic networks, which everybody has internally when they trigger with this music, and assuming that these aliens don't have it, I think in many ways you can make a good case that there's something about pattern which is fundamentally beautiful in music. And that in many ways this is mathematically describable.
And that's kind of a careful note to think about, just the idea that pattern is itself beauty, and it's something which is universal, that pattern is detectable by anyone, and that it is, in some ways, the only thing which there is to meaning, the idea of pattern.
So last time we really connected pattern-- or I tried to connect-- to information. Now once again, before we go running off in all sorts of tangents, I wanted to provide a word of caution. Because what this means, what information-- and in particular I at least mentioned the idea of this kind of entropy version of information.
And I'm not really going to explain this formula that well. Sorry, there's the sum over x-- and I forgot the minus sign last time: H = -Σ p(x) log p(x).
But this is just this idea of the probability assigned to something, and in some ways the less probable something is, the more information, or surprise, there is to it.
But either way, the main thing I want to point out is that there are these mathematical ideas of how to measure information in entropy form. And what this fundamentally boils down to is if you were to describe by either this record, or, as we talked about previously, a picture or a piece of music, and just feed it kind of as a string of 0s and 1s, we could actually assign certain values of entropy, and view this as a measure of information.
And it's interesting that there are certain patterns that all languages and certain symbols share. And some of these are closely related to notions of entropy. And you can actually produce an information entropic measure of the letters in the alphabet. And that's just based on how commonly they occur in language.
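For the curious, here is a minimal sketch of that entropy measure applied to a string of symbols. This is my illustration in Python, not anything shown in class; the frequencies are estimated straight from the string.

```python
import math
from collections import Counter

def shannon_entropy(text):
    """H = -sum over symbols of p(x) * log2 p(x), with p estimated from counts."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A string that uses its symbols evenly carries more entropy per symbol
# than one dominated by a single common symbol.
print(shannon_entropy("abcdefgh"))  # 3.0 bits per symbol
print(shannon_entropy("aaaaaaab"))  # about 0.54 bits per symbol
```

Running the same estimate over the letter counts of English text is exactly how you get the entropic measure of the alphabet mentioned above.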
But there's yes-- sorry-- go ahead Latif.
AUDIENCE: [INAUDIBLE] because that's the only sort of thing the human mind can handle. So it creates all of these languages that have the same structure.
JUSTIN CURRY: So Latif argues that these patterns and languages emerge because that's all the human mind can handle. But I would argue no. Because they've actually done the same analysis on DNA. And DNA, as far as a language, presents the same patterns. So unless you think the human mind created DNA, then I would argue no.
And once again, so then I talked about kind of a related phenomenon. And these are things for you to Google and pursue in your own time-- Zipf's law. It's this idea that if you were to take any language, and just rank the symbols based on frequency of occurrence, there's actually a very nice power law behavior. The second most common one occurs roughly half as often as the first, and the third most common roughly a third as often as the first. There's this power law behavior which comes out of languages if you just sort them by frequently appearing symbols.
And this is kind of an interesting argument. Zipf was, I believe, a linguist at Harvard in the '40s, or '50s-- I'm not sure. And he noticed this behavior. And it's interesting, because then you can take something and ask, is this a fake? Is this an actual language? I have no idea what it means. But I just noticed these certain re-appearing symbols like that.
And I also notice that, and then start ranking them. And there's this idea that you actually discern intelligible languages based completely on the frequency of occurrence of these symbols. Well, all this has a rigorous footing.
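As an illustration of the kind of check this suggests-- the sketch and its toy corpus are mine, not something from class-- you can rank words by frequency and see whether rank times count stays roughly constant:

```python
from collections import Counter

def rank_frequency(words):
    """Return (rank, count) pairs, most frequent first. Zipf's law predicts
    the k-th ranked item occurs about 1/k as often as the top item,
    so rank * count should be roughly constant for natural language."""
    counts = Counter(words)
    return [(rank, n) for rank, (_, n) in enumerate(counts.most_common(), start=1)]

# Toy corpus whose counts (6, 3, 2) follow a perfect 1/rank law.
corpus = ("the " * 6 + "snow " * 3 + "white " * 2).split()
print(rank_frequency(corpus))  # [(1, 6), (2, 3), (3, 2)]
```

Run on an undeciphered text, a roughly flat rank-times-count curve is weak evidence you are looking at a real language rather than noise.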
We talked about, then, a related notion. Because fundamentally, I said, this picture doesn't capture what we mean by information. Because if you take a picture-- where is my eraser-- if you take a picture, and you kind of have just a bunch of seething dog barf. And you convert it into these 0s and 1s, and then do this kind of information entropy analysis on it, it might not differ all that much from if you take a picture which has something like the Sierpinski gasket drawn on it, which has obvious regularity, obvious pattern, and, it could be argued, obvious meaning, and so on ad infinitum.
So this measure, this kind of Shannon entropy of information-- it's all part of a whole field of information theory, which, if you guys want to, you can come to MIT and audit a class in. This kind of analysis on the conversion to 0s and 1s of this versus the seething dog barf would be not all that distinguishable.
So then we presented this other idea. And we really got to play around with this, is that, well, this obviously has meaning, because I can write a short little program with some syntax and et cetera, blah, and then run it. And it's very short. And it presents this entire picture.
This sequence of 0s and 1s that goes on for a million, billion, whatever digits, that's needed to encode the visual picture of a Sierpinski gasket, is actually an excessively long description. It's kind of like taking a pendulum, and then someone asking you to describe it, and you going, well, it rocks there, and back, and there, and back, and there, and back. And you just keep saying that, for however long a period of time you want to describe the pendulum. You record your voice saying that. It's a very inefficient way to encode the regularities of the phenomenon.
There's the idea that this can actually be expressed mathematically using just kind of a idea of iteration of symbols. And we had this Lindenmayer system. I have no idea if I'm spelling this correctly. L-I-- linden. But there has to be a mayer.
JUSTIN CURRY: Oh. So there's an N-M. N-M. I never won the spelling bee. And was F minus R minus or plus plus R minus F. And we could encode these, and then we had this very nice program for expanding this out in a recursive manner. And it just took a few lines of code, which takes much less information to encode the lines of code than an actual picture of what's going on. So this leads to a very important idea of kind of algorithmic information.
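The exact rewrite rule from class is garbled in the transcript, so as a stand-in here is the textbook Sierpinski arrowhead L-system-- the rules and the 60-degree turn angle are the standard published version, not necessarily what was on the board:

```python
def expand(axiom, rules, steps):
    """Rewrite every symbol in parallel, once per step (a Lindenmayer system).
    Symbols with no rule (the turtle turns + and -) are copied unchanged."""
    for _ in range(steps):
        axiom = "".join(rules.get(c, c) for c in axiom)
    return axiom

# Standard Sierpinski arrowhead rules: A and B both mean "draw forward,"
# + and - mean "turn 60 degrees left/right."
rules = {"A": "B-A-B", "B": "A+B+A"}
print(expand("A", rules, 2))  # A+B+A-B-A-B-A+B+A
```

The rule table plus a few expansion steps is a far shorter description of the gasket than the million-digit bitmap, which is the whole point about algorithmic information.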
And we then really, really started playing around with the idea of, well, all science ever does is reverse engineering. We take phenomena, like the Sierpinski gasket, or the regularities of a pendulum swinging back and forth with respect to some angle theta, and we describe them very simply. So we make our algorithmic content for the pendulum as low as possible by using these symbols to encode the regularities of the dynamics here. And I really apologize to those of you who haven't had calculus. But these double dots indicate a rate of change of the rate of change of the angle theta.
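To make that concrete-- this sketch and its constants are mine, assuming the simple pendulum equation theta-double-dot = -(g/L) sin(theta)-- a couple of lines of Euler integration unpack those few symbols into the entire swinging behavior:

```python
import math

# theta'' = -(g / L) * sin(theta), stepped forward in time with Euler's method.
g, L, dt = 9.8, 1.0, 0.001      # gravity, pendulum length, time step (assumed values)
theta, omega = 0.3, 0.0         # initial angle (radians) and angular velocity
for _ in range(2000):           # simulate two seconds of swinging
    omega += -(g / L) * math.sin(theta) * dt
    theta += omega * dt
print(theta)                    # the angle after two seconds of rocking back and forth
```

A dozen symbols of dynamics generate the unbounded "there, and back, and there, and back" recording.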
So we then started playing around with looking at cellular automaton, which is essentially this grid. And we divide it up. And the value of this grid, whether it's black or white, is somehow governed by the neighbors.
And by just playing around with those rules, we suddenly stumbled upon an entire wealth of behavior. By changing the number of colors it could have, and how quickly it changed, we could actually emulate a wave equation. We could make splashes in a puddle using this kind of very simple finite and deterministic rule.
We modeled voting patterns by just saying, well, I'm going to make my colors somehow dependent on what my neighbors are doing, or what my neighbors are thinking, kind of indicating the dangers of groupthink. If you always let yourself be affected by what other people are thinking, we just become kind of a homogeneous culture. So somehow, just by these very, very simple rules, which Curran gracefully coded up in just a few lines, we were suddenly describing huge realms of complex behavior in the social, physical, and cultural worlds.
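A minimal one-dimensional version of that voting rule-- my own sketch, not Curran's code from class-- fits in a few lines:

```python
def vote_step(cells):
    """One synchronous update of a majority-vote rule: each cell adopts the
    majority opinion among itself and its two neighbors (wrapping at edges)."""
    n = len(cells)
    return [
        1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

# Mixed opinions quickly coarsen into homogeneous blocks of agreement.
row = [1, 0, 1, 1, 0, 0, 0, 1]
for _ in range(3):
    row = vote_step(row)
print(row)  # [1, 1, 1, 1, 0, 0, 0, 1]
```

Even this toy shows the groupthink point: isolated dissenters get absorbed by their neighbors within a step or two.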
And that kind of presented some interesting ideas. We started asking, well, what about this and that? And I remember one line we ended up stumbling upon was, well, we see this self-similarity of behavior: the same voting rule which describes how humans act in groups also describes water droplet condensation. If you spray a bunch of water on your shower curtain, the reason why it beads up into droplets is because each molecule of water is seeking to reduce its overall energy, and it can do that by grouping up-- just like the voters. And suddenly you've got this clustering, this coarsening property, which we saw in water droplets.
And then we started asking, oh, does that mean that the universe is fractal, and that there are these concepts which apply on all levels? And I kind of felt us getting caught up in the moment, and maybe leading you guys down paths which, although exciting, aren't exactly well founded. If you went to the scientific community and asked, is the universe a giant fractal, a lot of people would probably say no, obviously not.
And there would be some various reasons for this. Because if you're talking about self-similarity along scales, this view of, well, we have the solar system, the sun, and these tiny planets going around, et cetera. And like, a-ha, but it's the same model that the atom is.
Wrong. One of the fundamental problems in physics nowadays is patching up the disagreement in mathematical description and behavior of things at the atomic realm, which is not at all like this. In fact, if you guys have done chemistry, and looked through the books, you see these weird drawings of probability clouds, which are governed, fundamentally, by quantum mechanics. And it doesn't at all match the classical mechanical description of the solar system. So in that sense: no, not at all. The universe is obviously not a fractal, because the rules don't apply the same on all levels.
And then I did this very exciting demonstration-- or at least I kind of thought it was exciting at the time-- where I took a piece of paper, and I crumpled it up, and then I unfolded it. And I said, well, the kind of topology and morphology of the piece of paper is almost identical to the kind of thing we see in mountains and valleys and rivers. On this kind of pattern formation along scales, the fact that what I do here on a very local level is what we see on a very large level, I suggested this idea of the universe conceptually being a fractal.
But what a mathematician or a physicist might actually tell you is that my equations for describing the phenomenon are scale-free. And there's this idea of being scale-free.
And we do this all the time in fluid dynamics. I can drag my finger through the water, and see little vortices form behind my fingers. And then we've actually got exact pictures of mountains which pierce the clouds. And the clouds are flowing around this mountain tip. And you get these von Kármán vortex streets, where these vortices split off. I think it's von Kármán.
But there are so many beautiful pictures you can see in fluid dynamics and things like that. So this being the mountain. Or I could equally erase this and say finger. And really what I should be trying to do here is advocating the beauty and elegance in our formalism with mathematical equations, and how we can make these scale-free, and describe phenomena at all sorts of levels. So I just wanted to be cautious, and not say things which will later get me in trouble.
But aside from that, I encourage you all to get carried away and explore new ideas. And then the reason why Curran and I fundamentally believe in computer science and mathematics as a framework for thinking is that you can be as lofty and philosophical and out there as possible. But at the end of the day, you have to either be able to make it work rigorously with proof, or by execution on a computer, which for most intents and purposes, is just as good as proof. Because either it works or it doesn't. And you're thinking about the universe either works or it doesn't. And fundamentally you need falsifiability. So there's a chance that your thinking might not work.
And it's fun to be metaphysical and talk about things higher than the universe and space and time, and what the structure of these things are. But at the end of the day we want to be able to test our observations, or prove them mathematically, or using computers. So that's kind of my sobering note of caution. And I'm sorry for those of you that had to go through it and didn't want to hear it.
Now on to slightly more fun things. First, let me check the time. So number theory. I really want to just draw your attention to the idea of what we can take as to be an axiom, and why we can't prove these things.
And if you'll cast your eyes onto page 221, we saw kind of this pyramid of statements: (0+0)=0, (0+S0)=S0, (0+SS0)=SS0, (0+SSS0)=SSS0, and so on.
And, well, first of all I want to highlight some things, some elegance. And this is really what makes logicians the most anal creatures on the planet: we don't have numbers here. We just have one number, and that number is 0. And then every other number is just the successor of 0, or the successor of the successor of the successor of 0, or-- for what we like to call 1,729 in our metalanguage-- 1,729 S's and then a 0, that string representing the number 1,729.
But there's some also elegance in this, the fact that you only need one concept-- well, two concepts. You need a 0, and you need the concept of a successor. Then, suddenly you get the whole thing for free. Which is pretty.
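That unary scheme is trivial to mechanize; a two-line sketch of my own, just to illustrate the notation:

```python
def to_tnt(n):
    """Write a natural number as a TNT numeral: n S's followed by a 0."""
    return "S" * n + "0"

print(to_tnt(0))     # 0
print(to_tnt(3))     # SSS0
print(to_tnt(1729))  # 1,729 S's followed by a 0
```

Zero plus one successor function really is the whole numeral system.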
But what was highlighted by this example was that if we didn't assume it, we couldn't prove this statement, which says that for every a, where a is some variable, 0 plus a is a. What a dull and obvious thing.
But all we have at our disposal are kind of these things. And in fact, we get this whole mountain-- this pyramid of true statements. And we want to leap to the generality that, well, this is obviously the case. But fundamentally our desire to leap to this conclusion comes from our understanding, our mental models of how numbers, and how specifically integers, behave.
And that's an important point. This idea of really all we ever have at our disposal are our mental models, and by making them rigorous, and trying to make them rigorous through formal systems, we really kind of see whether or not our definitions encompass as much as we want. So we could never actually prove this statement with the way that TNT was set up without this axiom. So we eventually had to assume it.
And Hofstadter calls this-- just to write it here-- omega incomplete, where omega is kind of the thing we used to refer to all the integers at once. And it's just that idea that even though we have this infinite stack of true things, we don't have the thing which describes them all as a truth-- namely this.
But you know, this notion of mental model I think is really important. Because I remember first seeing this proof in high school. And I remember just being kind of utterly shocked and in some ways horrified by it.
So, how many people believe the following? 0.9999 repeating is equal to 1? [? Ders ?] believes it. [? Devin ?] believes it. Can anyone come up and prove it? Felix, do you believe it? Can you prove it?
AUDIENCE: I'd have to think about it first.
JUSTIN CURRY: Think about it first.
JUSTIN CURRY: Deep down. Does anyone want to show off, and leap to the chalkboard like a young Gauss, and just go ahead and show this to me? [? Ders? ?]
AUDIENCE: Is there something like multiplying by 10?
JUSTIN CURRY: So I'm going to go ahead and take your suggestion, and kind of lead you down the proof. So we're going to call this thing x, and we'll temporarily forget that it's 1. We're going to call x this 0.9 repeating. So [? Ders ?] recommends we consider the quantity 10x, which I think is 9.9 repeating. A bunch of hands.
AUDIENCE: [INAUDIBLE] that infinity will be 1, because it's approaching a number [INAUDIBLE].
JUSTIN CURRY: Well, without going to the idea of a limit, we can try to prove it more fundamentally. I don't really know. Felix, do you know?
JUSTIN CURRY: So Felix says we should subtract x. So suddenly we get 9.0. Oh, sorry. I mean x itself is 0.9 repeating. Dot, dot, dot. So then we perform the subtraction, we get 9. So we have now this new truth that 9x equals 9. And the only number which satisfies that is x equals 1.
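Written out, the steps of that argument are:

```latex
\begin{aligned}
x &= 0.999\ldots\\
10x &= 9.999\ldots\\
10x - x &= 9.999\ldots - 0.999\ldots = 9\\
9x &= 9 \quad\Longrightarrow\quad x = 1
\end{aligned}
```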
And I actually will never forget the girl sitting next to me in pre-calculus when my teacher first did this. And she goes no! It can't be! But it's not 1! Look! Because if it were 1, I would have written 1.
But somehow I wrote down a completely different number, which was the same thing. And this kind of shows the really important thing which mental models have to tell us. We can have what appears to be a correct understanding of an object-- namely, real numbers. But then we're continually surprised when they do things like this to us, even at the most basic levels. So these are just kind of meta statements about what mathematics does, and what we do in mathematics.
And in particular what we're doing in TNT is we have this idea that number theory should encompass- or at least TNT should encompass all of our thoughts of number theory. And this goes back to what Euclid said about his axioms of geometry. He said, we know these things to be fundamentally true, or at least we believe they are true, and that they encompass all of our knowledge of geometry, and that everything they produce as a result of those assumptions should be true and certain.
But then we got the whole case with [? Sakura ?] and Gauss in hiding-- because of course Gauss had discovered it 50 years before everyone else, and then showed his notebooks. He goes, oh, sorry, I did that theorem in between my first cup of coffee and my second cup of coffee. But, well, good, I'm glad you discovered it, too.
And just this concept of non-Euclidean geometry, where you break that fifth postulate-- that if you have a line, and a point not on it, there exists a unique line-- unique, exclamation point-- through that point which does not intersect the first. But if you're working on the surface of a sphere, for example, and you define your lines to be great circles, then, in fact, for any possible line you draw, where "line" means great circle, you actually have two intersection points. Something like that, although that's a terrible picture. Sorry.
So Hilbert, this very guy who is trying to advocate all this stuff involving number theory, said, well, we just have these fundamental axiomatic notions of a point and line. And the most we can get from them are their logical relationships with each other. The second we try to interpret these things, we then get ourselves into trouble. When we interpret the statement line, and we mean the straight thing on this flat board, and not the curvy thing on the surface of the Earth, we get ourselves into trouble, because we're providing an interpretation, and not sticking directly to the formalism.
It shows you also one of the dangers of getting carried away with your interpretations of what your formalisms tell you. And I'm kind of proud of Hofstadter for doing that, and cautioning us against interpretation. Because it gets you in trouble.
First I'm going to field some questions about this chapter before I kind of introduce what Curran's going to do. And then move on to newer and more exciting things.
TNT. I mean, I remember sitting in an undergraduate seminar on Godel, Escher, Bach, and someone going, what was he talking about with these supernatural numbers, and omega inconsistency? Because, I mean, these aren't trivial concepts. I mean, entire fields of mathematics have been devoted to them. So don't worry if you guys aren't feeling like, ahem, I understood this in my sleep. Yeah, Sandra.
AUDIENCE: I was confused with how many different rules TNT had, because it talks about seven categories. [INAUDIBLE]
JUSTIN CURRY: Right. So he also talked propositional calculus, and how he assumed that into TNT. Yeah, and I'm sorry, but I didn't assign that really as reading. So that was kind of out of context.
But did any of you guys try the little exercises in the book itself? There's one section I want to highlight. And I give brownie points to everybody who gives me a right answer. Because I think I have my own view of answers. But I'm not sure if they're right. Let's also test you guys' knowledge of the notation.
So we have tilde, upside down A, c, colon, backwards E, b, colon, parentheses, SS0, dot b, equals sign, c-- that is, ~∀c:∃b:(SS0·b)=c. First of all, can someone tell me what this means by giving it an interpretation, and then tell me whether it's true or false. Latif.
AUDIENCE: There exists-- there is no C-- or no, for all of C, there's not [INAUDIBLE].
JUSTIN CURRY: Well, here, before we do the tilde, let's do this. Say again.
AUDIENCE: For all C--
JUSTIN CURRY: For all C--
AUDIENCE: There exists no b.
JUSTIN CURRY: There exists a b or no b?
AUDIENCE: No, there exists a b.
JUSTIN CURRY: There exists a b.
AUDIENCE: Such that the successor of the successor of 0 times b is C.
JUSTIN CURRY: OK. So this without the tilde, is it true or false?
JUSTIN CURRY: Yes. It does. With the tilde it's true. And why?
AUDIENCE: Because two times anything is not [INAUDIBLE].
JUSTIN CURRY: So fundamentally, what this statement is saying is that for all numbers-- natural numbers-- that number is even, because it's divisible by 2, or it's a multiple of 2-- Which is clearly not the case.
And using our rule of substitution, or specialization, we could have picked, well, let's say 3, or, I'm sorry. 3 doesn't exist in our system. It's SSS0. And through our interpretation, we can see that, well, there is no number b. There does not exist a b such that 2 times that number is 3, namely because 3 is odd.
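If you want to see the interpreted claim fail concretely, here is a little Python sketch of my own (not part of the course) that checks it over a finite range of naturals, with a bounded search standing in for TNT's existential quantifier:

```python
def exists_b(c, bound=100):
    """A finite stand-in for TNT's 'there exists b': is 2*b = c solvable?"""
    return any(2 * b == c for b in range(bound))

# Without the tilde: "for all c there exists b with 2*b = c" -- false,
# and every odd number is a counterexample.
print(all(exists_b(c) for c in range(10)))        # False
print([c for c in range(10) if not exists_b(c)])  # [1, 3, 5, 7, 9]
```

So the untilded statement is false, and prefixing the tilde makes it true, exactly as in the discussion above.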
So this was round one: with the tilde, which means not, a false statement is made true. Let's try a slightly harder one real quick.
Round two: upside down A, c, colon, tilde, backwards E, b, colon, SS0, dot b, equals c-- that is, ∀c:~∃b:(SS0·b)=c. Anyone who is not Latif can answer. I'm going to successively knock you guys out. So the last man standing, actually, gets the hardest question. So first, anybody willing to field an interpretation who's not Latif? Max.
AUDIENCE: For all C there doesn't exist b such that--
JUSTIN CURRY: [INAUDIBLE] and call it 2.
AUDIENCE: All right. 2 times b is C.
JUSTIN CURRY: So for all C there does not exist a b such that two times b is C. You have a 50/50 shot on this.
AUDIENCE: Oh, it's false.
JUSTIN CURRY: Yeah, there you go. Exactly. It's false. Because we specify: let C be 2. And clearly there exists a b-- not "not exists"-- there exists a b, namely 1, such that 2 times 1 is 2. So we now have two people eliminated. Three.
So now let's try this. For every C there exists a b such that-- dammit-- now a tilde, I mean-- not SS0 dot b equals C. [? Ders. ?]
AUDIENCE: For every C there exists a b that is not 2 times b equals C.
JUSTIN CURRY: And is that true or false?
JUSTIN CURRY: So say again. We have a--
AUDIENCE: There's not a C.
JUSTIN CURRY: For every C--
AUDIENCE: For every C--
JUSTIN CURRY: --there exists a b.
AUDIENCE: --there exists a b that is not--
JUSTIN CURRY: --such that this--
AUDIENCE: --such that 2 times b equals C.
JUSTIN CURRY: Right. So actually, so specify. So what do you say this is? True or false?
JUSTIN CURRY: Think it's actually true. Because this statement says that for every number there exists another number such that this doesn't work. So if you have, say, 4, you could pick anything-- like 3. And 2 times 3 is not equal to 4, which is exactly what it says.
So instead of the tildes, which I find really confusing, because it's out front, you can just consider it to be a not equals here, which I think is easier. So this is true, I believe.
Fourth round. Sorry. Even as I'm saying these things, I'm getting tripped up on myself. The notation's kind of cumbersome. Not: there exists a b such that, for all C, the successor of the successor of 0 times b equals C.
All right. So anyone who is neither Max nor Latif nor [? Ders. ?] Sandra.
AUDIENCE: All right. There does not exist a b such that all of C, all members, such that [INAUDIBLE]. So that 2 times b equals C.
JUSTIN CURRY: Yeah, exactly. First, it's easiest to consider without the tilde, and just consider there exists a b for all of C such that this is the case. So with the tilde, just the way it's written, is this true or false?
AUDIENCE: I think it's false.
JUSTIN CURRY: So, wait. Hold on. Did I write down the right thing?
So first, let's consider the case without the tilde. So we have this magical b, such that any number we put in, that number is the product of 2 and b. So is there any one number such that any number is just equal to 2 times that?
So as an example, we could specify C to be 4, or 3-- yeah, exactly-- 3, and then b to be 1. And clearly we have found a C, namely 3, such that 2 times 1 is not that. So this is false. With the tilde it's true. Excellent. Good work.
JUSTIN CURRY: Say again?
AUDIENCE: You can't use minus 1? [INAUDIBLE]
JUSTIN CURRY: No, no, no. Exactly. Well, you would express 1 in this notation as just the successor of 0. So it's not that you don't have access to 1. It's just that in our notation this is how we would write 1, basically.
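Since TNT numerals came up-- 1 is S0, 3 is SSS0, and so on-- the encoding can be sketched in a couple of lines of Python (the helper names here are mine, not part of TNT):

```python
# A small sketch of TNT numerals: the natural number n is written as
# n copies of S applied to 0. Helper names are my own, not TNT's.
def to_tnt(n):
    return "S" * n + "0"

def from_tnt(numeral):
    # Read a well-formed numeral back by counting the S's.
    return numeral.count("S")

print(to_tnt(3))         # SSS0
print(from_tnt("S0"))    # 1
```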
All right. Well, good. I just want people to try these.
So two more to go, I believe. So anyone who is not any of the other four people who have gone, riddle me this, Batman. There exists a b such that not for all C, SS0 times b equals C. Anybody. [? Navine. ?]
AUDIENCE: [INAUDIBLE] such that for [INAUDIBLE] such that 2 times b equals C.
JUSTIN CURRY: OK. Can you give me a truth evaluation?
JUSTIN CURRY: So we have this b. And we're saying that for this specified value of b, regardless of what we put in here, this will hold. Or to say it a little more clearly-- and this involves some of the symbol shunting you do in the chapter-- there exists a b where if you put in any C, it's not going to equal.
So you're saying false? Or is it true? There exists b such that not for every C. 2 times b is whatever.
So let's fix a b. And I mean, yeah, this is actually, I'm pretty sure, true. Because the second you fix b-- let it be 4-- it's clear that not for every number, 4 times 2 is that number. So in particular, for 6, this is not the case. I can't put in any number and get it. Are there still questions here?
AUDIENCE: [INAUDIBLE] last one.
JUSTIN CURRY: Do this one again?
AUDIENCE: Do the next one.
JUSTIN CURRY: All right. Do the next one. So we're OK with this being true? OK.
AUDIENCE: Yeah, [INAUDIBLE].
JUSTIN CURRY: Yeah. And these are really weird, complicated logical relations. That's why most of the time mathematicians don't use these things-- because they get tripped up on the symbols-- whereas they already know up here that what they're doing is right.
So since we're running out of space, I'm going to put the sixth round up here. And let's try to do this quickly, so we can pass off the show. So there exists a b-- and I could do the interpretation. Backwards E, b, colon, upside down A, C, colon, tilde, parentheses, SS0, dot b, parallel lines, C. So anyone who is none of the previous five people-- Mia.
AUDIENCE: Do you want me to read it?
JUSTIN CURRY: Yeah, give me interpretation, and truth value if you're brave.
AUDIENCE: There exists b for every C so that there is not 2 times b equals C.
JUSTIN CURRY: Right. So I could replace the tilde with what other symbol? A slash through the equals sign-- a not-equals. So I think that's easier. So there exists a b such that for all C, 2 times b is not equal to C. So is that true or false?
AUDIENCE: It's true.
JUSTIN CURRY: OK. Think about this. So there exists a b such that for all C-- so regardless of what you put in here--
JUSTIN CURRY: OK. So then it's false. This flipping back and forth between existential and universal quantifiers, and things like [INAUDIBLE], can really make this confusing, because what you have here is that there's a b such that regardless of what you put in here, this equality never holds.
But that's not true, because you can take any C. Eventually it's going to work, because if we specified b to be 2, that's the thing which exists. Then we can find a C. So it's not true that for all C. And particularly that C value being 4. So 2 times 2 is equal to 4, even though this thing claims that this value of b which we picked, would never be equal, regardless of what you put in here.
So I think that's my answer key. True, false, true, true, true, false. And it fits the second hint, where he says either there are four true and two false, or four false and two true. And that's related to how you shunt these tildes.
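The answer key can be spot-checked by brute force. Quantifiers over all the naturals can't be checked mechanically, so this is a finite-range sanity check rather than a proof. One subtlety: the c-range has to extend past 2 times the largest b, or the last statement picks up a spurious witness at the edge of the range.

```python
# Spot-check the six TNT sentences over finite ranges of naturals.
B = range(50)     # candidate values for b
C = range(200)    # candidate values for c; must exceed 2 * max(B)

def eq(b, c):
    return 2 * b == c     # interpretation of SS0 . b = c

s1 = not all(any(eq(b, c) for b in B) for c in C)   # ~Ac:Eb:(SS0.b=c)
s2 = all(not any(eq(b, c) for b in B) for c in C)   # Ac:~Eb:(SS0.b=c)
s3 = all(any(not eq(b, c) for b in B) for c in C)   # Ac:Eb:~(SS0.b=c)
s4 = not any(all(eq(b, c) for c in C) for b in B)   # ~Eb:Ac:(SS0.b=c)
s5 = any(not all(eq(b, c) for c in C) for b in B)   # Eb:~Ac:(SS0.b=c)
s6 = any(all(not eq(b, c) for c in C) for b in B)   # Eb:Ac:~(SS0.b=c)

print([s1, s2, s3, s4, s5, s6])   # [True, False, True, True, True, False]
```

Four true and two false, which fits Hofstadter's hint.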
But once again, unless you're really planning on a life as a logician, you're not going to have to spend a lot of time manipulating formal systems. Still, it's good to get practice with this, because the difficulty you have in taking in these new symbols-- kind of creating more space in your neural network for fitting these pieces in-- is, I think, an important exercise.
And that really goes back to that idea of a theory of meaning. And it's one of the last things I want to apologize for before I hand off the lecture to Curran: when I say things like recursion, formal systems, isomorphisms, algorithmic information, Shannon entropy, neural nets, et cetera-- these terms don't really mean the same thing for you guys that they do to a professor who's spent years working long nights torturing himself over these problems. And that process of thinking again and again, and making mistakes, and then refining your understanding of something, forces certain parts of your brain to meet here and here and here.
And just by talking at you guys, what I'm really trying to do is inspire an interest in what I'm saying. It's not like I can condense seven years of undergraduate and postgraduate work into a two-hour lecture, go into your brain, put this neuron here and there and there, and suddenly you have the same depth of understanding about these subjects that Seth Lloyd or anyone else who is a specialist in these fields has. I can't endow that in a simple two-hour lecture, unless I assign pages and pages of problems and you're working 100 hours a week, which I'm not going to do, because I'm not evil.
But aside from that, the fundamental thing to pull out here-- and I've noticed that I keep saying fundamental-- the important idea is that you can start with a basic set of statements which you think capture something true, and apply a recursive rule, a recursive algorithm, for producing new strings. So as we go into the next part of today's lecture, I want you to kind of think of this truth tree, which we try to grow.
And it all starts kind of from Peano's axioms, number theory. And we just apply these rules, and create different statements. It's just like the MIU tree that we made in the first lecture, too. And I notice no one's challenged me on my $20.
AUDIENCE: [INAUDIBLE] all these sums are true, and they fall into the [INAUDIBLE].
JUSTIN CURRY: Right. So what happens is we start with those five basic axioms, which Hofstadter outlines. Let's see. 0 is not the successor of any number. You have to assume that 0 plus a number is just that number itself.
And here you go. Yes, he actually states them in this form, and then you go ahead and give it an interpretation. Genie is a djinn-- Genie is interpreted as 0, and djinn as number. Every djinn has a meta, which is also a djinn. So every number has a successor, which is also a number. Genie is not the meta of any djinn. So 0 is not the successor of any number. Different djinns-- sorry, this is page 216-- have different metas. So if two numbers are not equal, their successors are also not going to be equal.
And then finally, if Genie has X, and each djinn relays X to its meta, then all djinns get X. And that's the principle of induction: if 0 has a property P, and any number relays that property P to its successor, then all numbers have it-- because 0 gives it to 1, then 1 gives it to its successor, which is 2, and so on. Every number ends up with that property P. And that's what mathematical induction relies on.
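The induction principle Justin describes has the same shape as recursion on numbers: handle 0, then say how each step passes to the successor. A minimal Python sketch (the encoding-- plain ints plus a successor function-- is my own illustration, not Hofstadter's notation):

```python
def S(n):
    return n + 1                  # every number has a successor

def add(a, b):
    # Addition defined by recursion on the second argument, mirroring
    # the axioms:  a + 0 = a  ("0 plus the number is just that number"),
    # and a + S(b) = S(a + b).
    if b == 0:
        return a
    return S(add(a, b - 1))

print(add(2, 3))   # 5
```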
And from these basic things, you can actually derive most of number theory. You just apply these rules of inference. You start with this trunk, and you get a new theorem-- well, since 1 and 2 aren't equal, then 2 and 3 aren't equal. So you just kind of add things to your tree based on these rules.
And it's completely local. It's completely based on what you have at your given point, and what rule you're willing to apply. And then it's amazing the kind of emergent patterns which you can get. And we happen to call that number theory. Today, you'll shortly see some things which are a little more interesting.
But on that note I think we're going to take a two minute break to de-stress, and then get things set up for Curran to take over. Thank you.
CURRAN KELLEHER: OK. So what he was actually talking about can also be called a context-free grammar. A context-free grammar is something that has symbols and production rules. The symbols here in this case are S and 0, and the various mathematical operators. But they're just symbols in terms of the grammar. And the production rules are the rules of inference, where you can take one string and perform some manipulation on a part of it to get a new string.
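That definition-- symbols plus production rules-- can be made concrete with a toy grammar. The grammar below (S → aSb | ab, generating strings like aabb) is my own illustrative example, not the TNT rules:

```python
import random

# A toy context-free grammar: each nonterminal maps to a list of
# productions. A derivation repeatedly rewrites nonterminals until
# only terminal symbols remain.
GRAMMAR = {"S": [["a", "S", "b"], ["a", "b"]]}  # S -> aSb | ab

def derive(rng, max_depth=10):
    def expand(symbol, depth):
        if symbol not in GRAMMAR:
            return [symbol]                  # terminal: keep as-is
        if depth >= max_depth:
            production = ["a", "b"]          # cap depth: force the terminal production
        else:
            production = rng.choice(GRAMMAR[symbol])
        out = []
        for s in production:
            out.extend(expand(s, depth + 1))
        return out
    return "".join(expand("S", 0))

word = derive(random.Random(1))
print(word)   # some string of the form a^n b^n
```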
So you can use a similar system to define moving around circles on the screen. So I'm going to explain what's going on here. This is a program called Context Free. It's an open source project that you can download and play with yourself.
So what's going on here: the startshape line is just the entry point. It's not going to change. And we define a rule called SPIRAL. And inside this rule we have CIRCLE, which draws a circle on the screen.
And then we call SPIRAL again. And y space 2 means that we increment the y position by 2 units. And size space 0.9 means that we multiply the size of this by 0.9 every time we go up.
So this rule defines this image. And what this program does is, when the thing gets too small to see, it just stops doing that. So this is actually an infinite recursion. But it stops at some point, because it gets so small.
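A rough way to see why the recursion halts: each recursive call scales the size by 0.9, so it drops below any visibility threshold after finitely many steps. A minimal Python sketch of that behavior-- this is my approximation of what the renderer does, not Context Free's actual code, and the threshold value is an assumption:

```python
# Each step draws a circle, moves up, and shrinks by 0.9. The loop
# stops once the size falls below a "too small to see" threshold,
# which is why the infinite recursion terminates in practice.
def spiral(y=0.0, size=1.0, threshold=0.001):
    circles = []
    while size >= threshold:
        circles.append((y, size))   # CIRCLE at the current position/scale
        y += 2 * size               # move up 2 units in the current frame
        size *= 0.9                 # shrink for the next call
    return circles

print(len(spiral()))
```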
So this is our framework. And by changing this code little by little, we're going to get some amazing pictures. So I'm going to just do these changes, and explain them as I go.
So I just had the spacing be 2, so it's clear that we're just drawing circles. I'm going to decrease the spacing to 0.4, and rerender, so it looks like that. So I'm going to define another rule, also called SPIRAL. And as I go, I'm going to explain the features of this language.
When you define a rule that has the same name twice, what happens is whenever you call this rule, it calls one or the other with equal probability. So what I'm going to do here is say flip 90, which means flip our sort of frame of reference by 90 degrees.
Actually, first of all, before I do this-- sorry-- I'm going to add a rotation. So rotate 1. Rotate 1 degree each time. So it rotates a little bit. You see that? 1 degree each time.
So if we decrease the size by 0.99 every time, it's going to get smaller a little bit slower. So we see this spiral happen. So if we do 0.9999, it will be even more spirally. So that's what we get. I don't know why it's going off the screen. There we go.
So now I'm going to define another rule called SPIRAL, and flip by 90 degrees. So if I render this, what do you guys think is going to happen? What do you think?
AUDIENCE: It's going to flip [INAUDIBLE].
CURRAN KELLEHER: It's going to flip? So maybe I wasn't clear about what it means to call that flip thing.
So when we say rotate 1, we're rotating 1 degree this way. So after we call flip, when we say rotate 1, it's going to rotate the opposite direction. So say we call the first rule five times, it draws five of these circles. Then we call the second rule once, it flips. And then we call the first rule five more times, it's going to just turn the other way a little bit.
And so initially, they're going to be called with equal probability. So if I render it, this is what we get-- this sort of meandering thing. So what's happening is half the time it's calling the first rule, which moves the circle up a little bit. It rotates it, and decreases its size, and draws the circle. And half the time we're calling this other rule, which flips the direction in which we're rotating. So this is what we get.
Another feature of the language is we can change the probabilities at which these things are called. So at, say, 0.1, I put 0.1 right next to the second SPIRAL rule, the flipping rule. If I don't specify a number, it gets 1. So that means that the first rule is going to get called with a probability of 1. And the second rule is going to be called with a probability of 0.1. And this is whenever--
AUDIENCE: Maybe 1 minus that.
CURRAN KELLEHER: 1 minus that?
AUDIENCE: No, in the first rule they call the probability of 1 minus 0.1, because you're rolling a die. You can never go beyond a probability of 1.
CURRAN KELLEHER: Right. So talking about probability, what it actually does, I think, is compute the sum, and then take each rule's weight as a fraction of that sum. Either way, this one now happens with probability less than 50%. So we can see that it goes for slightly longer periods of iteration without any flippage. So if we decrease it to 0.01, it'll flip even less-- flips with even less frequency.
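Curran's description of the weighting-- each rule's weight divided by the sum of the weights-- can be sketched like this. The function names are mine, and this is a guess at the behavior, not Context Free's actual implementation:

```python
import random

# Pick among same-named rules with probability proportional to weight.
def pick_rule(rules, rng):
    # rules: list of (name, weight) pairs
    total = sum(w for _, w in rules)
    r = rng.uniform(0, total)
    for name, w in rules:
        if r < w:
            return name
        r -= w
    return rules[-1][0]   # guard against floating-point edge cases

# Weights 1 and 0.1, as in the example: the flip rule should fire
# about 0.1 / 1.1, roughly 9% of the time.
rules = [("go", 1.0), ("flip", 0.1)]
counts = {"go": 0, "flip": 0}
rng = random.Random(42)
for _ in range(10000):
    counts[pick_rule(rules, rng)] += 1
print(counts)
```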
So if we make it 0.001 it flips a lot less. So already we're seeing these really cool pictures being generated by such simple rules, simple context-free grammars. Any questions so far?
So what happens if I add inside this rule another instance of spiral without the flip? What does this mean?
AUDIENCE: [INAUDIBLE] First it flips, and then it just goes on like that for awhile.
CURRAN KELLEHER: Yeah. So more or less. You said first it flips, and then it goes on without flipping. So yeah, with this rule, instead of just flipping, what it's going to do is start this one off on its tangent, and also just keep going its original way without flipping.
So we'll render this and see what it does. So it does exactly that. See? Whenever it branches, it also keeps going its original direction.
So we can change some parameters around, and we're going to get some organic-looking forms. So if we decrease the probability-- no, increase the probability of branching, we get this. It just goes out of control. So maybe that's not what we want to do. So now I put it back.
If we multiply the size by 0.99-- yeah, there we go. Now we increase the probability of branching. And we get trees. Look at that. It looks like a tree. That's really wild.
So if we increase the probability of branching even more, we get thicker trees, because they branch more. It's pretty wild, isn't it? Isn't that cool?
So it makes you wonder, does nature use these sort of context-free grammars in the way it grows plants, or is it like a Lindenmayer system? Is it fixed, these global rules that get applied at smaller and smaller scales? I think in nature, what we see is sort of a mixture of both, because some plants have very regular features, and some plants don't.
So it's still a mystery, plants. But it looks pretty organic. So I'm just amazed by this sort of thing. So we can play with the parameters, and get really cool growths.
So let's think about this as a model of plant development. We're modeling that the size of the plant is just constantly shrinking. And when it branches, its mass sort of doubles, which is sort of a bad model. Think about a tree. Think about a tree just going up. And then, when it branches, it still keeps going up, but a branch goes off to the side. So we can change our grammar to do this. And we'll get some tree-like things.
So this is the rule. The rule on top is just the rule that it does when it's going. So I'm going to change this to just increment y. And I'm going to change this.
So one of these is just going to go straight. And the other one is going to branch and get smaller. So size 0.9, but just for going straight.
And what flip does in this context-- well, I'll explain that later. I'll say rotate 45 degrees. And size is 0.3. So what we get is this very sparse structure. So we can increase the probability of branching to 0.5 or something. Maybe 0.9. Maybe 0.2. So we get these things that sort of look like trees.
So I'll just explain the rules, in case you didn't follow. So the first spiral in this rule, this part of the rule encodes for the fact that whenever it branches, the size gets a little smaller by a factor of 0.9. And you form this branch.
And the second rule here says size 0.3. That means that the size of this branch is 0.3 times the size of the original trunk. And the flip 90 in this rule means that next time it branches, it's going to branch in the other direction. We can actually take it out. And if we take out the flip, the branches will all just go in the same direction all the time. So if we take it out-- see there? They only go in one direction.
So we can sort of play with the parameters here. Let's say, 0.95. Let's just see what happens.
Yeah, we got some pretty interesting things. If we make the branches a little bit thicker-- maybe 0.4. See? Look at that. It's like a tree or something. Any questions so far, and comments?
So if we add a little bit of rotation to the main rule-- rotate 1 degree-- yeah, Latif.
CURRAN KELLEHER: Say again?
AUDIENCE: [INAUDIBLE] does it call itself the thing that it's calling [INAUDIBLE].
CURRAN KELLEHER: Ah, good question. Yes. When it calls itself, like inside, like this one, or this one, or this one--
AUDIENCE: Which one does it call, the [INAUDIBLE] one or the [INAUDIBLE]?
CURRAN KELLEHER: Right. So which one does it call when you call SPIRAL? So this is where the probability comes in. Anytime you call it, there are these three instances where SPIRAL is called inside of SPIRAL.
CURRAN KELLEHER: So it calls one or the other with certain probabilities. The probability of the first one is 1 out of 1.2-- so a very high percentage-- and the probability of the second one being called is 0.2 out of 1.2. So every time you call it from within anything, it goes into this program and says, tell me, which one should I call? And the program picks which one it is based on these probabilities.
AUDIENCE: So it doesn't matter if you put it [INAUDIBLE] it will still [INAUDIBLE].
CURRAN KELLEHER: So you're saying maybe we could put it outside?
AUDIENCE: [INAUDIBLE] put it inside the function is so that the function-- I mean, the thing itself could be called.
CURRAN KELLEHER: The reason why I put it inside the function is so that it can call itself. So like--
JUSTIN CURRY: What happens if you take, say, SPIRAL flip 90, and you put it up in the top row for SPIRAL? [INAUDIBLE].
CURRAN KELLEHER: So you're saying what if I take this out of here, and put it up here?
JUSTIN CURRY: Sure.
CURRAN KELLEHER: Yeah. I mean, I don't know.
I don't know. But it has to be inside a function.
JUSTIN CURRY: So I think the idea is that you have kind of two options, whether or not you're just doing the simple SPIRAL circle routine, or if you're then doing this other spiral routine, where they rotate 45 degrees and change the size by 0.4, instead of this spiral where you flip 90, and you change the size by 0.5.
And see, this was what I thought. I'm not exactly sure how the algorithm's implemented. But I think by having 0.2 next to the bottom spiral, that means the top spiral is called only with a probability of 0.8. But that's obviously much higher than 0.2. So every time the computer rolls its die, it's trying to decide, do I either execute the top spiral, or do I execute the bottom spiral. And then the content of each of those spirals then governs the behavior you see here.
CURRAN KELLEHER: So what we did, when we moved this up: whenever we call SPIRAL, it branches. We have two spirals now. And the probability of that one is really high. So that means pretty much every time, we're branching. So we get this mass of branches. And it'll never finish.
So yeah, I mean there are all kinds of really interesting modes that we can come to with this. So I'll take it out, and put it back where it was. It's still computing. OK. There we go. It stopped. Ah, yes.
So it looks crazy, doesn't it? So if I take out that rotate, it's all very regular. It's sort of a regular structure. And we can change the angle maybe 60 degrees-- I don't know-- or 90 even. And it looks sort of like roads, maybe. I don't know.
AUDIENCE: It's like a tree near a railroad. You can see that [INAUDIBLE].
CURRAN KELLEHER: Yeah. It's like you're saying maybe it's like a river with these little sub-rivers going off of it. Yeah, I mean this kind of structure is found everywhere in nature. It's really amazing.
So if we add a little bit of rotation to the main going-forward rule, or rotate 1, we get this sort of veiny structure.
AUDIENCE: Maybe that's when the wind is going.
CURRAN KELLEHER: Yeah, when the wind is blowing on the tree. It looks a lot like vines. When a vine is crawling up a wall. Or a root, a root of a plant.
AUDIENCE: [INAUDIBLE] underground root. Underground.
CURRAN KELLEHER: Yeah, underground, a root sort of looks like this.
AUDIENCE: [INAUDIBLE] for the special tree if you tried [INAUDIBLE].
CURRAN KELLEHER: So can I make the Koch snowflake, or the branching tree, with this sort of thing? I haven't tried. Maybe I could do it somehow. But I don't think so. I don't think I can, because how this program acts is completely random. It's stochastic. It chooses which rule to do on the fly.
AUDIENCE: [INAUDIBLE] It wouldn't be recursive.
CURRAN KELLEHER: Well, it is still recursive. It's totally recursive. But it's not deterministic. Being deterministic is what gives something the absolute rigid regularity that we find in the Koch snowflake, or that branching tree that I showed earlier. But here the rules are executed randomly. So you get these irregular fractals.
JUSTIN CURRY: But the Sierpinski gasket, you can also generate it stochastically, using just a random die roll. You pick your three points, and then you just throw a random dart. And then, what, you then take the distance to the closest edge, and you fill that point in. You can do that. I forget. The chaos game.
CURRAN KELLEHER: The chaos game with the Sierpinski triangle, what you do is you have these three dots. I did explain this once before. But I'll just do it again quickly.
You have these three dots, and you start at a certain point, like say here. And you take where you are, and you choose at random one of the three dots, and go half way from where you are to that dot. So let's say we choose this dot, we go halfway there. Say we choose it again, we go halfway. And then say we choose this dot, we go halfway from here to here.
So after doing this 1,000 times, we get all these points that in the limit, if you were to do it in an infinite number of times, it would be the Sierpinski gasket. And so it starts looking like the Sierpinski gasket after a little while.
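The chaos game Curran describes is easy to simulate: fix three corners, then repeatedly jump halfway toward a randomly chosen corner and record each point. A short sketch (the corner coordinates, starting point, and iteration count are arbitrary choices of mine):

```python
import random

# The chaos game: the visited points fill in the Sierpinski gasket
# in the limit of infinitely many iterations.
corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
rng = random.Random(0)

x, y = 0.2, 0.2           # arbitrary starting point
points = []
for _ in range(1000):
    cx, cy = rng.choice(corners)
    x, y = (x + cx) / 2, (y + cy) / 2   # go halfway to the chosen corner
    points.append((x, y))

print(len(points))
```

Plotting `points` (e.g. with matplotlib) shows the triangle-within-triangle pattern emerging after a few hundred iterations.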
So that's stochastic. I mean, I don't know, maybe I could code that in this language. Maybe it's possible. I haven't tried.
So there are some examples that I have prepared. And one of the things I want to talk about is these regions of stability and instability in this system. It's a very dynamic system. And you get these different situations.
Yeah. Look at this. Oh, wait. That's a PNG. That's not the actual thing. Oh, well. Just consider this. I'll open the actual one. Here it is.
So what's going on here is I have this one rule. First of all, this rule, these two rules are similar to the ones that-- I'll just get rid of this one for the sake of explanation.
So this is pretty cool. It's a tree. So I'll explain the rules. The first rule goes forward by a unit of 1, and the circles are at size 1, so we can actually see all the little circles. It goes forward by 1, and decreases in size-- the size is multiplied by 0.99. So that's the main rule.
And then we have this other rule, which has a probability of 0.02 over 1.02, the sum of them. And that's the branching rule. So it just calls tree again, with a rotation of 20 degrees or minus 20 degrees. So this is our rule. And it generates this tree.
And then we add this other rule. So this is a rule that multiplies the size of the thing by 5, which is pretty extreme. But it happens with a very small probability. So I'll decrease the probability even more-- 0.0003. So maybe 0.0001. OK. It just happened a few times.
So we can see that usually it doesn't happen. All these branching and all these iterations, it didn't happen. But on this one here, the size of it multiplied by five. So we had this bigger circle. And then it propagated more. And then it happened again on this tip, this very end branch of it. And it happened again.
So I mean--
JUSTIN CURRY: [? It's an evil ?] tree.
CURRAN KELLEHER: This pattern appears in evolution, actually, which is really fascinating. So in evolution, you have these species and whatnot, or different strains of genome, which sort of diverge and branch out. And say at this point in time-- and the point in time being the number of iterations overall-- at this point in time, say, some huge catastrophic event happened on the earth. And this one organism, or small set of organisms is the only one that survived.
So their weight suddenly increased. And then they propagated and spread themselves. And this is what we get, this new branching of evolution. And then the same thing happened here. Bam. And then it branched again, and it branched again.
So it's a very unstable, unpredictable system, because you could have these little events that just completely change the face of the system. Without this new rule, it's a stable system. We know the size is always going to get smaller and smaller and smaller until it disappears. That's guaranteed in the limit. But now we've introduced a rule which goes backwards, so it makes the system unstable, and much more unpredictable.
So if we increase the probability of this rule to 0.001, the system goes out of control really quickly. And at 0.01, it just gets completely out of control. It just gets bigger and bigger and bigger.
See this message here? "A shape got too big"? So this is an error that we get when the thing just goes out of control.
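A back-of-the-envelope way to see where stability ends: look at the expected change in log(size) per step when each step multiplies the size by 0.99, except with probability p it multiplies by 5 instead. This toy model is my own, and it ignores branching-- every branch rolls the die independently, so real runs can blow up even when the single-path drift is negative:

```python
import math

# Expected drift of log(size) per step: negative means sizes shrink
# on average and the picture terminates; positive means shapes grow
# without bound, producing the "shape got too big" error.
def log_drift(p, shrink=0.99, grow=5.0):
    return (1 - p) * math.log(shrink) + p * math.log(grow)

for p in (0.0001, 0.001, 0.01):
    print(p, log_drift(p))
```

With these numbers the drift changes sign a little above p = 0.006, so at p = 0.01 a single path already grows on average, matching the run that got completely out of control.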
JUSTIN CURRY: So this could also correspond to meteorites being frequently thrown in the face of the earth, and only very few strands of species surviving at a time.
CURRAN KELLEHER: Well, yeah, I guess this would correspond to that.
JUSTIN CURRY: Versus destroying genetic diversity at more frequent intervals.
CURRAN KELLEHER: Yeah. To take the metaphor to evolution again-- I don't know, I guess it would correspond to huge catastrophic events happening all the time, and miraculously every time one species surviving. It's a sort of extreme metaphor. It doesn't really hold. But it's interesting, these regions of stability and instability in this space of parameters to the system. So yeah, it's pretty fascinating.
So I'll just show some more examples, and then I'll be done. This thick tree. Yeah. When we have certain parameter sets, we get these really organic-looking forms. So if we increase the probability of branching even more, you get these really nice, thick organic-looking trees.
AUDIENCE: So do organisms [INAUDIBLE]?
JUSTIN CURRY: That's the question, right?
CURRAN KELLEHER: So she asks, do organisms actually use this sort of system to live?
AUDIENCE: And if they had ones that didn't have that system, and did have some system, is that [INAUDIBLE] or not?
CURRAN KELLEHER: Right, so you're asking this question sort of from a totally different angle. She's saying, for a given organism, if it develops this kind of recursive system, is it evolutionarily more advantageous, like trees? I think for plants that's one of the main things they developed, and that made them, evolutionarily speaking, more fit. So that's why.
JUSTIN CURRY: So that's an important point to consider. However, what would actually make a tree bend like that? Let's say that it's a relatively calm area, and we're not having hurricane force winds forcing our trees to bend that way. Why would a tree grow like that?
CURRAN KELLEHER: Yeah. [INAUDIBLE].
JUSTIN CURRY: Sunlight. Exactly. It's phototropism. And plants have this feedback mechanism where they're actually able to sense light. And it's just like if we were to have a potted plant here, with that light in the corner illuminating us-- it would actually be able to dynamically change the probability of splitting.
And of course the mechanism for that is actually a little different. You have these auxin chemicals being exchanged, collapsing cell walls. And a tree is actually very quickly able to grow and bend in a different direction.
So it's not exactly just a probabilistic, context-free grammar-- what happened there-- it's got to have some sort of feedback mechanism. And then, at the level of evolution, the rules themselves are changing.
So yeah, I mean look at all the number of leaf branches. And when I say leaf, I mean the one on the end. There are so many of them. And it maximizes the amount of sunlight that hits. It maximizes the amount of surface area on the tree. So I think that's one of the big reasons why it's more fit. You had a question?
AUDIENCE: No, I just had a comment. It also makes sense, like you said, with the surface area. And it wouldn't make any sense to have a branch at the bottom, because all the light was already gone from--
CURRAN KELLEHER: Right. It wouldn't make any sense--
AUDIENCE: Yeah, if you had some mutation where a branch did come at the bottom, that tree wouldn't have any advantage.
CURRAN KELLEHER: Right. So you're getting at selection. He said, if there were some trees whose set of rules made them start branching at the bottom as well as the top, they would be selected against, because that's not feasible-- the light would be blocked out by the higher branches. Yeah, that's the nature of evolution. Yeah.
AUDIENCE: And that also looks like a brain, because you've got this cerebral cortex on the top. That's [INAUDIBLE]. And you've got the connections and the [INAUDIBLE].
CURRAN KELLEHER: Exactly. So he said it's sort of like a brain, like this is a cerebral cortex. And you have all these connections that go out to the surface of the brain, which have all these brain cells.
JUSTIN CURRY: Or even a vein structure.
CURRAN KELLEHER: Yeah, veins. The structure of the veins in your body coming out of your heart. It just fractals out to all of your body. So, yeah, it's this sort of universal form that appears everywhere. It's really amazing.
So all we can do is sort of stand in awe of it and say, wow, "They're like the same, man!" But where does it come from? What does it mean? I don't know. It's something that needs to be explored, I think.
AUDIENCE: [INAUDIBLE] it had to be that way?
CURRAN KELLEHER: Could it be that it had to be that way? What do you mean by that?
AUDIENCE: You can see that it's the most efficient algorithm, and [INAUDIBLE] stress towards efficiency. So at some point we had to hit that algorithm. And when it gets something good, it doesn't want to let it go.
CURRAN KELLEHER: Yeah. So you're talking about it being the whole system of all of biology and things evolving. He said maybe it's the only thing that works. It has to exist, because it's the most efficient algorithm for developing our biology, our forms. And so once it appeared, it sort of took hold in this big evolutionary series of events.
And I think you're right. It's also coded in a very simple set of rules. Look at this-- it's very little text that encodes all of this. And similarly, in our genomes, if the evolutionary process comes up with an efficient way of encoding a set of rules to do something that makes us more fit, then it sticks. So I think this notion of fractals, and of recursive algorithms being encoded in our genomes, is a reasonable thing to hypothesize.
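The "very little text" observation can be made concrete with a tiny string-rewriting sketch, in the same spirit as the grammars the Context Free program renders. The single rule below is hypothetical-- it is not the rule text from the demo-- but it shows how a nine-character rule, applied recursively, blows up into a large branching description in just a few steps.

```python
# One rewrite rule: each branch symbol F grows a left and a right
# sub-branch. This rule is a hypothetical stand-in, not the actual
# grammar shown in class.
RULES = {"F": "F[+F][-F]"}

def rewrite(s, rules, steps):
    """Apply the rules to every symbol of s, `steps` times over."""
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Watch the description length explode from a one-symbol axiom.
for n in range(4):
    print(n, len(rewrite("F", RULES, n)))
```

Each pass triples the number of `F` symbols, so the output grows exponentially-- a small genome of rules, a large phenotype of structure.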
AUDIENCE: Doesn't that also show up in [INAUDIBLE] behaviors, [INAUDIBLE]?
CURRAN KELLEHER: Yeah. Ants. Ant colonies. Oh, man. Yeah. It's everywhere.
JUSTIN CURRY: Emergent properties.
CURRAN KELLEHER: Emergent properties. So I think I'm done. I think this is my spiel. And I'll give it back to Justin.
JUSTIN CURRY: I just want to kind of wrap things up, kind of give a sense of conclusion, and the direction of where we're going. Ah, if you want to kill the projector.
So exactly-- Curran's kind of hinting at an idea, and we're all kind of at the brink of this concept, which I told you at the very beginning is the stated thesis of Gödel, Escher, Bach: that the universe, at a fundamental level, is a formal system, and that it obeys certain deterministic rules, or perhaps probabilistic rules, but a kind of formal system nonetheless.
We have this kind of label which we just stick on something, and that being the [? I ?] label, which actually hides a lot of detail, just like in the way that--
But trying to understand them in a fundamental way is the stated goal of this course. And I'm still debating, because I'm kind of continuously and probabilistically modifying the course of this course. The stated plan right now is that after we do Mumon and Gödel, you'll get this weird Asian kick from both Curran and me, combining Zen and logic. And we finally talk about Gödel's incompleteness theorem.
And then we're going to leap forward to Chapter 16 in the book, the Self-Ref and Self-Rep chapter, which will be a little over the top. First of all, it's a very long chapter-- 57 pages. But it's going to introduce this idea of a kind of typographical genetics. And we're going to look at how genetics and protein folding-- the processes which kind of make us-- correspond in a prescribed way to some of the formal systems we've been talking about.
Then we're going to leap backwards. Since what I've realized this course has become is really a topics course on a bunch of things, we'll then leap to essentially brains and thoughts. Because the bottom line is that Hofstadter's thinking hasn't been patched through all the way. Otherwise we would have it solved. We'd be like, aha! Good thing we solved consciousness; we can go on and do other things.
It's not solved. And there are these huge gaps in between. OK, maybe I buy that the universe is a formal system-- but what on earth does Gödel's incompleteness theorem have to say about physical systems? And then we'll set all that aside and start talking about the brain, these meta-structures in the brain and the mind, and thinking and artificial intelligence. And that will kind of wrap up the course.
But then, of course, I'm also thinking about possibly showing a movie-- Waking Life. I don't know if any of you have seen it. That would essentially be my gift to you guys for working so hard in this class. But I might need to get permission slips. So that's yet to come.
And I kind of apologize for whatever slow pace today's lecture had. Hopefully the next one will be a little more exciting. But other than that, read Mumon and Gödel. I meant to do the dialogue that precedes the chapter today, but obviously we can't. We can do it next lecture, if we want. And then, yeah, that should be fun. And read that handout from I Am a Strange Loop, because I think it will explicate things. That's really what I'll be lecturing from for a large part of next lecture.