Description: Behavioral methods to study cognitive development in infants, probing infants’ evolving understanding of objects and their physical behaviors, and understanding of agents who engage in goal-directed activity and initiate social interactions.
Instructor: Liz Spelke
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
ELIZABETH SPELKE: I want to start with an observation about this summer school. There's a lot of development in this summer school. You've got two full mornings devoted to it-- today and on Thursday. It also came up pretty majorly in Josh Tenenbaum's class last Friday, and I learned early this morning that it also came up in Shimon Ullman's class yesterday afternoon, which I couldn't be here for. And the issues have come up in many other classes as well, including Nancy's, Winrich Freiwald's, and so forth.
Now, what's come up is not only the general questions about development, but specific questions about human cognitive development. Questions that have been addressed primarily through behavioral experiments, not experiments using neural methods or computational models. And the topic that I'm going to be trying to-- that Allie and I will try to get you to think about for this morning is even narrower than that. It's about the cognitive capacities of human infants.
And I think a fair initial question would be, why so much focus on early human development? And that question will get sharper if you look at where major organizations are putting their research money. They are not putting it into the kind of work that I'm going to be talking about today. There is no-- in the Obama BRAIN Initiative, where they're looking for new technologies, there's no call for new technologies to figure out what human knowledge is like at or near the initial state and how it grows over the course of infancy. And the European Human Brain Project doesn't have development as a major area in it, either.
So I think it's fair to ask, why is CBMM taking such a different approach and putting so much emphasis on trying to get you guys to think about and learn about human development? There are two general reasons, I think. One is, it's intrinsically fascinating. Come on. We are the most cognitively interesting creatures on the planet. And we're extremely flexible. At the very least, we know that a human infant can grow up to be a competent adult in any human culture of the world today and any human culture that existed in prehistory. And that means they've had to learn extremely different things under extremely varied circumstances and have succeeded at doing that.
We also know that by the time they start school, if they go to school at all, the really hard work of developing a common-sense understanding of the world is done. That is, it's not explicitly taught to children. Most of it isn't even very strongly implicitly taught to them in the form of other people trying to get them to learn things.
What you're trying to do when you have a young kid, as those of you who have them know, or have had them know, is you're trying to get them not to climb off cliffs or explore the hot pots on the stove and so forth. You're really not spending very much of your time trying to get them to learn new stuff. They're doing that on their own. So it's I think a really interesting question, how do we do that? Intrinsically interesting in its own right, even if it were of no other use to us.
But historically it's also been recognized as being really important for efforts to understand the human mind, understand the human brain, and build intelligent machines. So Helmholtz, who came up in Eero's talk last night, was not only a brilliant neurophysiologist and a physicist, he was extremely interested in perception and cognition. And he wrote about fundamental questions about human perceptual knowledge and experience. How is it that we experience the world as three-dimensional?
He concluded that we didn't know the answer and never could know the answer, unless we could find ways to do systematic experiments on infants of the sort that could already be done to reveal mechanisms of color vision, for example, as were described last night on adults-- systematic psychophysical experiments on infants. But he looked at infants and said, I don't see any way to do that. We can't train them to make psychophysical judgments and so forth. But he was aware of their centrality.
So was Turing, who in thinking ahead to how one might build intelligent machines, suggested that one aim to build a machine that could learn about the world the way children do. And in the work that's come up so many times here, the whole Hubel-Wiesel tradition that started in the late '50s, I think one of the most exciting and important developments within that field was the move from focusing just on the response properties of neurons in mature visual systems to focusing on the development of those neurons and the effects of experience on them.
Once it was discovered that you get these gorgeous stripes of monocularly-driven cells in V1, it immediately became really interesting to ask, suppose an animal were only looking at the world through one eye? Or suppose they could look at the world through the two eyes, but not at the same time, or not at the same things at the same time? What would happen to those cells? And there was gorgeous work addressing those questions from the beginning.
Now, that work has somewhat receded from attention. I think that's a mistake. I think that there's a great deal to be learned from those kinds of studies now. And if I get nothing else across over this time, I hope you'll at least get the idea that this is a field worth following, looking at development in humans, looking at development of perceptual and cognitive capacities in animal models of human intelligence as well.
So more specifically, I think there are three questions about human cognition on which studies of early development in general, and of human infants in particular, can shed light. Two of them I'm not going to really be talking about today, except indirectly.
One is the question, what distinguishes us from other animals? We come into the world with very similar equipment. But look what we do with it. We create these utterly different systems of knowledge that no other animal seems to share. What is it about us that sets us on a different path from other animals? That's question one.
And the other question I won't talk about is-- well, I'll talk about it a tiny bit, but not directly-- is, where do abstract ideas come from? It seems like we not only develop systems of knowledge, but those systems center on concepts that refer to things that could never in principle be seen or acted on. Like the concept "seven," or the concept "triangle," or the concept "belief," or ethical concepts and so forth. Abstract concepts organize our knowledge. But since they can't be seen or touched or produced through our actions, how do we come to know them? I think studies of early development can shed light on that as well.
But the question I want to focus on today is the third question, and it's the one that Josh raised on Friday. How do we get so much from so little as adults? As adults, you look at one of the photographs he showed of just an ordinary scene and you can immediately make predictions about, if you were to bang it, what would happen? What would fall? What would roll? We seem to get this very, very rich knowledge from this very, very limited body of information at any given time.
And what that suggests is that we are able to bring to bear in interpreting that scene a whole body of knowledge that we already have about the world and how it behaves. But that raises the question, what is it that we know and how is our knowledge organized? What aspects of the world do we represent most fundamentally? Which of our concepts are most important to us and generate the other concepts and so forth? How can we carve human knowledge at its joints?
And now this can be studied in adults and you've seen a number of examples of this. You saw it in Nancy's talk last Tuesday, right? Anyway, last week sometime. Studies using functional brain imaging to get at our representations of human faces. You saw it in Josh's talk. He was mostly using data from adults to probe the knowledge of intuitive physics that he was focused on and that his computational models are trying to capture.
You're going to see it on Thursday in-- no, tomorrow in Rebecca Saxe's talk, where she'll talk about human adults' attributions of beliefs and desires and other mental states to people. It's certainly studyable in adults, but it's difficult to answer these questions. It's difficult to answer these questions in any creature, but I think it's especially difficult to answer these questions in adults for a couple of reasons.
One is that our knowledge is simply too rich. By the time we get to be adults, we know so much and we have so many alternative ways of solving any particular problem, that it's a real challenge to try to sift through all our abilities and figure out what the most fundamental concepts that we have are. And the second problem with adults is we not only know too much, we're too flexible. We can essentially relate anything to anything. We can use information from the face to answer all sorts of questions about the world.
And here, I think, infants are useful for a maybe seemingly paradoxical reason. They're much less cognitively capable. They know much less about the world and they're far less flexible-- I'll show you examples of this-- far less flexible in the kinds of things that they can do with the knowledge that they do have. Nevertheless, they seem to come into the world equipped with knowledge that supports later learning. And because it's supporting later learning, it's being preserved over that learning. It's being incorporated in all of the later things that we learn. So it remains fundamental to us as adults. And I think this can help us think about how our own knowledge of the world is organized.
OK. So that's a general overview. How do we study infants? Now here's where the tables turn radically. We have way better methods for studying cognition in adults than we do in infants, just as Helmholtz thought. They can't talk to us. They don't understand us when we talk to them, so we can't give them instructions. Oh, and unlike willing trained animals, you can't train them to do things, at least not in any extended sense.
They can't do much. I'm most interested in infants in the first four months of life before they even start reaching for things, much less sitting up by themselves or moving around. The interesting thing is, from day one, from the moment that they're born, they're observing the world. They're looking at things and they're getting information from what they see.
Now, their observations-- we've learned over the last half century or so that their observations are systematic and they're reflected in very simple exploratory behaviors, like when a sound happens somewhere in the visual field, turning the head and orienting it toward the sound. Even newborn infants will do that. Or if something new or interesting is presented, infants will tend to look at it. And these behaviors I think can tell us something about what infants perceive and know.
And before getting to the real substance of what I want to focus on today, let me give you a few examples of this. What kinds of things do infants look at? Well, if you present even a newborn infant-- infants at any age, really-- with two displays side by side, and vary properties of those displays and the relation between them, you'll see that they look at some things more than others.
So they'll look at black-and-white stripes more than they'll look at a homogeneous gray field. That's useful. It allowed people to get initial measures of the development of visual acuity in infants-- and it actually overturned a somewhat popular view that at birth, infants couldn't see at all. We know from these simple studies that they can. And we also know that their acuity starts out very low but gets pretty good by the time they're four to six months of age. It doesn't reach full adult levels until about two years.
We also know that they look at moving arrays more than stationary arrays, and they look at three-dimensional objects more than two-dimensional objects. In addition to having intrinsic preferences between different things, they also have a preference for looking at displays that change or displays that present something new. So jumping from the '50s when those first studies were done up to the '80s, there was a whole flurry of studies showing babies pairs of cats on a series of trials and then switching to a cat and a dog. And the babies would look longer-- these are three-month-olds-- would look longer at the dog than at a new example of a cat. So they're able to orient to novelty.
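The logic of these looking-time comparisons can be made concrete with a small sketch. This is a hedged illustration, not an analysis from the studies just described: the function name and the looking times below are invented. The only substantive point is that a novelty-preference score above 0.5 means the infant looked longer at the novel display than at the familiar one.

```python
# Hedged sketch: computing a novelty-preference score from looking times.
# The numbers are invented for illustration, not data from the studies discussed.

def novelty_preference(novel_looks, familiar_looks):
    """Proportion of total looking time directed at the novel display.

    novel_looks, familiar_looks: lists of per-trial looking times (seconds)
    for one infant. A score above 0.5 means the infant looked longer at
    the novel display (e.g., the dog after a series of cat trials).
    """
    novel = sum(novel_looks)
    familiar = sum(familiar_looks)
    return novel / (novel + familiar)

# Example: a hypothetical infant who looked 6, 5, and 7 s at the dog
# and 3, 4, and 2 s at the new cat across three test trials.
score = novelty_preference([6.0, 5.0, 7.0], [3.0, 4.0, 2.0])
print(round(score, 2))  # 0.67
```

In practice such scores are averaged across many infants and tested against 0.5, since any single infant's looking is noisy.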
And they also look longer at a visual array that connects in some way to something they can hear. Now, one of the things I spend a lot of my time studying is foundations of mathematics-- numerical and spatial cognition in infants. I'm not going to talk about it at all today. But I kind of couldn't resist giving just one example of looking at what you hear that connects to infant sensitivity to a number.
This is a study that was conducted in France by Veronique Izard and her colleagues with newborn infants in a maternity hospital. She played infants sequences of sounds, and each sequence involved repetitions of a syllable. For half the infants, each syllable appeared four times. For the others, it appeared 12 times. And for the ones for which it appeared four times, each syllable was three times as long. So the total duration of a sequence was the same for the two groups, but one involved four syllables and one involved 12. And after they heard that for a minute, the sound continued to play and now she showed, side by side, an array of four objects versus an array of 12 objects. And the babies tended to look at the array that corresponded in number to what they were hearing.
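The duration-matching logic of that design can be sketched in a few lines. The base syllable duration here is an assumption for illustration; the actual timing parameters of the Izard study may differ. The point of the design is that total sequence duration is equated across groups, so only the number of syllables distinguishes them.

```python
# Hedged sketch of the duration-matching logic in the newborn number study.
# BASE_SYLLABLE_S is an assumed value, not the study's actual parameter.

BASE_SYLLABLE_S = 0.25  # assumed duration of one syllable in the 12-item sequences

def sequence(n_syllables, syllable_duration):
    """A sound sequence, represented as a list of per-syllable durations (s)."""
    return [syllable_duration] * n_syllables

# One group heard 12 short syllables; the other heard 4 syllables,
# each three times as long, so total duration cannot cue the difference.
twelve = sequence(12, BASE_SYLLABLE_S)
four = sequence(4, 3 * BASE_SYLLABLE_S)

assert sum(twelve) == sum(four)      # equal total duration for both groups
assert len(twelve) == 3 * len(four)  # but a threefold difference in number
```

Because duration is controlled, longer looking at the numerically matching visual array is evidence about number rather than about overall amount of stimulation in time.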
Now, all of this gives us something to work with, but it raises a nasty problem. And the problem is, what are babies perceiving or understanding? Today we're not going to be asking, how can babies classify things? What do they respond to similarly? What do they respond to differently? We're going to be asking, what sense do they make of them? What are they representing? What is the content of the representations that they're forming in each of these cases?
And these studies as I've just described them don't tell us. Let's take the case of the sphere versus the disk. When this study was first conducted, the author concluded that babies have depth perception, that they perceive three-dimensional solid objects. Is that a justifiable conclusion?
AUDIENCE: Not necessarily.
ELIZABETH SPELKE: Why not?
AUDIENCE: Because they are not [INAUDIBLE].
ELIZABETH SPELKE: Yeah. OK. So when you present things that differ in depth, you're presenting a host of different visual features that for us as adults are cues to depth. The question is, are they cues to depth for the infant? And the fact that the infant is looking longer at something we would call a sphere than at something we would call a disk, doesn't tell us whether they're looking longer because they're thinking, "sphere," or "3D," or "solid," or something like that, or whether they're looking longer because they're seeing a more interesting pattern of motion as they move their head around, or because as they converge on one part of the array they're getting interesting differences in how in-focus different parts of it are, and so forth.
All of the different cues to depth could-- what we want to know is, what's the basis of this preference? And the existence of the preference doesn't tell us that. Similarly for the cats, and similarly for this single isolated experiment that I gave you on number, right? Does this say anything whatsoever about number, or could there be some sensory variable where there's just more going on in a stream of 12 sounds and there's more going on in an array of 12 objects, and babies are matching more with more, independently of number? These studies in themselves don't tell us.
In order to find out, what we need to do is take these methods and do systematic experiments. And these experiments work best under the following conditions. When you're studying a function that exists in adults and whose properties have been explored in adults in detail systematically, when you have a body of psychophysical data that you can rest on in your understanding of what's happening in adults, and you can then apply that to infants.
So one example of that-- this is work by Richard Held, a wonderful perception psychologist who worked at MIT. He's still active, actually-- retired but still active. And he did these beautiful experiments that started with the sphere-versus-disk phenomenon. First of all, he tried to take it apart and say, let's just focus on one cue today, OK? Binocular disparity, the basis of stereo vision. So he put stereo goggles on babies. These were babies ranging in age from birth to about four months, I think.
He put stereo goggles on them and showed them, side by side, two arrays of stripes. In one of the arrays, the same image went to both eyes. In the other array, the edges of the stripes were offset in a way that leads an adult to see them as organized in depth-- some stripes in front of others. And he showed that infants looked longer at the array with the disparity-specified differences in depth than at the array without them.
He did not conclude from that that they have depth perception, but it gave him a basis for doing a whole series of experiments that asked, in effect, do you see this effect under all and only the conditions in which adults have functional stereopsis? So he showed, for example, that if you rotate the array sideways 45 degrees so that you still have double images on the stereo side, but we wouldn't see depth because our eyes are side by side, not one above the other, the effect goes away. He varied the degree of disparity and showed that you only get this preference within this narrow range where we have functional stereopsis. And he was able to show the striking continuity between all of the properties of stereo vision in adults and in these infants.
So that study and a bunch of others using other methods have, I think, resolved this question of when depth perception begins. It's beginning very early. Stereopsis comes in around two to three months of age; other depth cues come in at birth. It's beginning very, very early. But that answer didn't come from single experiments. It came from systematic patterns of experiments.
In the case of cats versus dogs, we don't really have a psychophysics of cat perception, but steps have been taken to try to get at the basis of infants' distinction between dogs and cats in those experiments. And interestingly, what's popped out are faces. Turns out, you can occlude the cats' and dogs' whole bodies, and if you leave their faces visible, you get these effects. If you occlude their faces and leave their bodies, you mostly do not, unless you cheat and give other obvious features, like all the dogs are standing and all the cats are sitting, or something like that. But in the normal case, faces are coming out as an important ingredient of that distinction.
In the case of abstract number, there's also a lot of work in adults on our ability to apprehend at a glance the approximate numerical value of sounds in a sequence or of objects in visual arrays. We've learned a lot about the conditions under which we can do that and the conditions under which we can't. That's not my topic for today, but Izard and her collaborators have been testing for all of those conditions in newborn infants. And so far, so good. It looks like there is a similar alignment between the factors that influence infants' responses in those studies where they hear sounds and see arrays of objects and the factors that influence adults' abilities to apprehend approximate number.
OK. So this gives us some good news and some bad news. The good news is that I think questions about the content of infants' perception and understanding of the world can be addressed. The bad news is that we can't do it very fast. You can't do it with a single silver-bullet experiment. You have to do it with a long and extensive pattern of research. In the past, research on infants has gone extremely slowly. Basically, the methods that we have allow you to ask each baby who comes into the lab maybe one, or if you're lucky, a couple of questions, but not more than that. So it takes a long time to do a single experiment.
I do think, though, that this work is poised to accelerate dramatically and that we're poised to-- this is a good time to be thinking about infant cognition because I think we're soon going to be in a different world, where we can start asking these questions at a much more rapid pace. That's for at least two reasons, both of which, by the way, are being fostered by the Center for Brains, Minds and Machines and undertaken by people who are part of that center.
One is, there are now efforts underway to be able to test infants on the web. For these basic, simple behavioral studies, you can assess looking time using the webcam in an iPad or a laptop, and you can test babies that way. And there are attempts to do that, which would make it possible to collect data doing the same kinds of experiments that have been done in the past, but much more quickly.
Two, as Nancy already mentioned and Rebecca may talk about tomorrow, there are efforts underway to use functional brain imaging to get at not only what infants look at, but what regions of the brain are activated when they look at those things, which will give us a more specific signal of what infants are attending to and processing, someday, hopefully, in the near future. And we just had a retreat of CBMM, where there was a lot of brainstorming about new technologies to try to get more than just simple looking time out of young babies. So maybe some of that will work as well.
But what I want to focus on today is that even this slow, plodding research has gone on for long enough at this point that I think we've learned something about what infants perceive and what they know. And I tried to put what I think we've learned into two slides. Here's the first one. I think that very early in development, maybe in the newborn period, but anyway, before babies are starting to reach for things and move around on their own, they already have a set of functioning cognitive systems, each specific to a different domain.
One is a system for representing objects and their motions, collisions, and other interactions. Another is a system for representing people as agents who act on objects, and in doing so, pursue goals and cause changes in the world. A third is a system for perceiving people as social beings who can communicate and engage with other social beings and share mental states.
And then there are three other systems that I won't talk about today. One is a system of number, which I think is being tapped in that first Izard experiment. And there are two systems capturing aspects of geometry: one supporting navigation of the sort that Matt Wilson studies, the other supporting visual form perception of the sort that IT and occipital cortex represent.
I think each of these systems operates as a whole. In Josh's terms from last Friday, it's internally compositional. Infants don't just come equipped with a set of local facts about how objects behave, they come equipped with a set of more general rules or principles that allow them to deal with objects in novel situations and make productive inferences about their interactions and behavior. Each of these systems is partially distinct from the other systems. It's distinct in three ways.
First, each of them operates on different information. It's elicited under different conditions. Second, it gives rise to different representations with different content. And third, most deeply, it answers different questions. So for example, infants have two systems for reasoning about people, but each system is answering a different question. The agent system is answering the question, what is this guy's goal? What is he trying to accomplish? What changes is he effecting in the world? The social system is asking, who is this guy related to? Who is he connected to? Who is he communicating with?
Each of these systems is limited, extremely limited relative to what we find in adults. Each captures only a tiny part of what we as adults know about objects or agents or social interactions. Each of them, I think, interestingly, is shared by other animals. I didn't expect that to be true when we started doing this research. But as far as we can see so far, it's hard to find anything that a young human infant can do that a non-human animal can't. And I'll give you examples of that, too.
And finally-- and I won't talk about this much, unfortunately. I think each of these systems continues to function throughout life and supports the development of new systems of knowledge. So when we think thoughts that only humans think, we engage these fundamental systems that we've had since infancy and other animals share. I also think this research tells us something about how we do that.
I think that in addition to having these basic early developing systems, we have a uniquely human capacity to productively combine information across these systems, and through those combinations, to construct new concepts. These new concepts tend to be abstract, and I think they underlie a set of very important later-developing systems of knowledge. One of those is the knowledge that allows us to form taxonomies of objects, of tools, of natural kinds like animals and plants, and to reason about their behavior, such that when we encounter some new thing, we already know a lot about the kind of thing it is and can use that to infer many of its specific properties, and also to direct our learning very explicitly to fill in the gaps in our knowledge.
Another is the systems of natural number and Euclidean geometry. Natural number, children seem to construct over the first three to five years of life. Euclidean geometry seems to take much longer, emerging much, much later. Molly Dillon, who's also here, has been trying to understand-- and so has Veronique Izard-- how children go from six years of age, where they seem absolutely clueless about the simplest properties of Euclidean geometry, to 12 years of age, where, whether they're in the Amazon and have never been to school or are studying geometry in school, they seem to have a basic, rudimentary understanding of points and lines and figures on the Euclidean plane.
A third is a system of persons and mental states. I won't talk about it much-- I'm only talking for the first half or so of this time, and then Alia Martin's going to take over, and she'll get to some of those issues.
Now, as Nancy said last week, I have this out-there hypothesis that I don't think anybody else in the world believes, but I still believe it. That this productive combinatorial capacity either is or is intimately tied to what's the most obvious cognitive difference between us and other animals, namely our faculty of natural language. In particular, I think that there are two general properties of natural language that make it an ideal medium for forming combinations of new concepts.
One is that the words and the rules of-- well, three properties, actually. One is that the syntactic and semantic rules of natural languages are combinatorial and compositional. That is, if you learn the meanings of words and you learn how to combine them, you get the meanings of the expressions for free. You don't need to go out and learn what a brown cow is if you know what brown is and you know what a cow is.
Second, the words and the rules of natural language apply across all domains. They're not restricted to one domain or another the way infants' other cognitive capacities seem to be. So if you learn how "cow" behaves in the expression "brown cow," and then you hear "brown ball," or something that a different domain of core knowledge would be capturing, you can immediately interpret that combination as well.
And then the last thing about natural language that I think makes it so useful for cognitive development is that it's learned from other people. And other people talk about the things that they find useful to think about, right? Word frequency is a really good proxy for what the useful concepts out there are. So a child who has a very powerful combinatorial system that can create a huge set of concepts is going to have a search problem when they try to apply those concepts to the world.
Something will happen in the world. And if they now have a million concepts that they could bring to bear, which one are they going to use? Are they going to test them all out? Having too many concepts, too many innate concepts, would not necessarily be a blessing. But if you use language to guide you to the useful concepts, I think you'll do better. The ones people around you talk about most frequently are going to be the ones that it's going to be most useful for you to be learning at that point.
So let's go back to that first set of questions, which is what I want to be focusing on today. And as I said, I'll talk particularly about three domains where infants seem to develop knowledge quite rapidly over the course of infancy. And I'll spend most of my time on the first one, objects.
So object cognition is really interesting and it seems to span this really big range. It seems to involve many different kinds of processes. If you're going to figure out what the objects are, what the bodies are in a scene, then you need segmentation abilities. You need to be able to take an array like this and break it down into units, figuring out which parts of that array lie on the same object and which parts lie on different ones. So early mechanisms for doing that can participate in object representation.
But there's more to perceiving objects. Arrays are cluttered and objects tend to be opaque. And when they are, it's never the case that all of the surfaces of one object are in view at the same time. And it's often the case that you're only seeing a little bit of any given object at a time. Yet somehow we're able to see this as a continuous table that's extending behind everything that's sitting on it, and even as a single continuous plate that's on the table behind the vase, and so forth. So to represent objects, we've got to be able to take these visual fragments and put them together in the right sorts of ways.
Something that's harder to show in a static image, but that of course is radically true about the world is that our perceptual encounters with objects are intermittent. We can look away and then look back, or an object can move out of view and then come back into view, yet what we experience is a world of persisting objects that are existing and moving on connected paths, whether we're looking at them or not.
And finally, objects interact with other objects and we need to work out those interactions. And the working out that I'm interested in is not what this little boy is doing, but what his younger sister is doing as she's sitting in her infant seat and observing him acting on these towers and wondering what's going to happen next. OK? At least that's the problem on the table for today.
OK, so a standard view for a very long time has been that different mechanisms solve these different aspects of the problem of representing objects. Segmentation depends on relatively low-level mechanisms. Completion and identity through time are going to depend on how much time we're talking about and how complicated the transformations are; they're sort of in the middle. And this is all about reasoning about concepts that go beyond perception altogether, like the mass of an object, which we can't see directly, and so forth.
And I kind of believed that that was true when we started doing this work. And because I did, and wanted to know where the boundaries were of what infants could do, I started by working on these problems here. And that's what I'm going to talk about today. But let me flag at the outset that I no longer believe that the representations of objects that organize infants' learning about the physical world are embodied in a set of diverse systems. I think there's a single system that's ultimately at work here. Of course it has multiple levels to it, including low-level edge detection, and so forth.
But there's a single system at work that tells us what's connected to what and where the boundaries of things are in arrays like this, how things continue when and where they're hidden, and how they interact with other things. That's one unitary system, and I'll try to show you what evidence supports that view-- though, of course, jump in with questions or criticisms or alternative accounts.
OK, so here's an intermediate case to start with. It was studied a lot by the Belgian psychologist Albert Michotte back in the 1950s or early '60s. Take a triangle, present it behind an occluder, and ask babies, in effect, what they see in that display. Do they see a connected object or do they see two separate visible fragments?
We did these studies with four-month-olds because they're not yet reaching for things and manipulating objects. We used the fact that they tend to like to look at things that are new. So we presented this display repeatedly-- we, by the way, is Phil Kellman, now at UCLA and studying all this stuff in adults primarily, also studying mathematics now. Anyhow, so we presented displays like this repeatedly to babies until they got bored with them. And then we took the occluder away and in alternation, presented them with a complete triangle and with a triangle that had a gap in the center.
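The habituation logic behind this method can be sketched as a simple rule: present the display repeatedly until looking time declines, then switch to test displays and see which one recovers attention. The 50% decline criterion and the three-trial windows below are common conventions in infant looking-time research, not the specific parameters of this study; the looking times are invented for illustration.

```python
def habituated(looking_times, window=3, criterion=0.5):
    """Return True once mean looking time over the last `window` trials
    drops below `criterion` times the mean over the first `window` trials.
    (A common convention in habituation studies; the exact parameters
    here are illustrative, not those of the original experiments.)"""
    if len(looking_times) < 2 * window:
        return False
    first = sum(looking_times[:window]) / window
    last = sum(looking_times[-window:]) / window
    return last < criterion * first

# Hypothetical looking times (seconds) declining over repeated trials
trials = [40.0, 35.0, 30.0, 22.0, 15.0, 12.0]
print(habituated(trials))  # True: looking has dropped, move to test trials
```

Once the criterion is met, the experimenter alternates the two test displays (complete triangle versus gapped triangle) and compares looking times to each.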
And we reasoned that there were two possible outcomes of the study. Possibility one is that, as empiricists and the then-very influential developmental psychologist Jean Piaget argued, for a four-month-old infant who isn't yet reaching for things, the world is an array of visible fragments. So they will see this thing as ending at the edge where the occluder begins, the display with the gap will look familiar to them, and they'll be more interested in the complete triangle.
There was also the theory from Gestalt psychologists and others that predicted the opposite, that there would be automatic completion processes that would lead any creature, whether they were experienced or not, to perceive the simpler arrangement, which is this one. Those, it seemed to us, were the only two options. Baby research is really fun because it can surprise you even when you think you've covered all the bases. Neither of those turned out to be true.
What happened instead was that when we took the occluder away, you still saw an increase in looking both to the connected object and to the separated fragments, and those two increases were equal. Now, this could have been for an extremely boring reason. Maybe babies were only paying attention to the thing that was closest to them in the array. So we very quickly tested for that in the following way.
Instead of contrasting an array with a small gap to an array that had it filled in, we contrasted an array with a small gap to an array with a larger gap, too large to have fit behind the occluder. And there, babies looked longer at the array with the larger gap. So we know it's not that they're not seeing this back form and its visible surfaces, but they seem to be uncommitted as to whether those surfaces are connected behind the occluder or not. They don't see them as ending where the occluder begins, but they don't clearly see them as connected, either.
And we showed that this was quite generally true, both for simpler arrays and for more complicated-- well, for richer ones, like a sphere. We did this with a bunch of different arrays. And under these conditions, where the arrays are stationary, that's what we found. But there was one condition where we got a different finding, and that's when we took one of these arrays and moved it behind the occluder, never moving it enough to bring the center into view, but moving it enough such that the top and bottom were moving together. And when we did that, now babies looked longer at the display that had the gap.
That raised the question, why is motion having this effect? And the immediate possibility, we thought, is that motion is calling their attention to the rod, so they're attending to it more than they otherwise would, and it's leading them to see its other properties, like the alignment of its edges. So to test that, we gave them misaligned objects differing in color and differing in texture, where none of the edges were aligned with each other. If motion were just calling attention to alignment, it shouldn't produce completion in this case.
But in fact, we found that after getting bored with that, infants expected something like this, not something like that. They looked longer at the display with the gap. So it looks like the motion is actually providing the information for the connectedness, and the alignment is not playing much of a role at all.
Now, what could be going on here? This is the kind of thing I think that Josh likes to call a suspicious coincidence, right? An infant is looking at this array, and isn't it odd that I'm seeing the same pattern of motion below the occluder as I'm seeing above it? Now, that could be two separate objects that just happen to be moving together, but that would be rather unlikely. You're much more likely to see a pattern like that if in fact there's a connection between them and it's just one object that's in motion. I think that's probably the right way to think about what's going on in these experiments.
But if it is, notice that not all coincidences that are suspicious for us are suspicious for infants. For us, it's a suspicious coincidence that this edge is aligned with that edge. For infants, it's not. I think this is a case where we can see infants can be useful for thinking about our own cognitive abilities because they seem to share some of our picture of the world, but not all of our picture of the world. And that can be a hint as to how that picture gets put together and how it's organized.
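The "suspicious coincidence" argument can be made concrete with a toy Bayesian calculation: common motion of the two visible fragments is nearly guaranteed if they belong to one rigid object, but improbable if they are two independent objects, so observing it shifts belief sharply toward connectedness. Every number below is invented for illustration; nothing here is fit to infant data.

```python
# Toy Bayesian reading of the common-motion cue. All probabilities
# are illustrative assumptions, not estimates from any experiment.
prior_one_object = 0.5           # prior that the fragments are connected
p_common_if_one = 0.99           # one rigid object: ends must move together
p_common_if_two = 0.05           # two objects rarely move in lockstep by chance

# Bayes' rule: posterior that it's one object, given common motion
evidence = (p_common_if_one * prior_one_object
            + p_common_if_two * (1 - prior_one_object))
posterior_one_object = p_common_if_one * prior_one_object / evidence
print(round(posterior_one_object, 3))  # about 0.952: "one object" now very likely
```

On these made-up numbers, an even prior becomes roughly 95% confidence in a single connected object; by contrast, a cue like edge alignment would only matter if its likelihoods differed between the one-object and two-object hypotheses, which, on the infant data, it apparently does not.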
So what kind of motion? We've tried a bunch of different ones. One of them is vertical motion, and another is motion in depth. They're both rigid displacements in three-dimensional space-- actually, all three of these are. But in the vertical case, you don't get any side-to-side changes in the visual field. I think I animated this. Yeah. So this is kind of what the baby is seeing.
By the way, all of these studies were done with real 3D objects and they had textures on them, and so forth. They've also all since been replicated in other labs using computer animated displays, which we didn't have-- which weren't available back in the day. And you get the same result. So I'm just doing cartoon versions of them here, but actually babies showed these effects across a range of different displays. So there's vertical motion. Here is motion in depth.
Oh, and by the way, we're not restraining babies' heads, so what actually reaches their eyes isn't going to be anything near as simple and uniform as what I'm showing here. And then there's rotational motion, like that, around the midpoint. And what we found is that babies used both vertical motion and motion in depth about as well as they used horizontal motion to perceive the connectedness of the object. They did not use rotary motion.
So I know there's a lot of interest and there are projects focused on perceptual invariance. And I think there's an interesting puzzle here, and it's one that Molly is very interested in, in the work that she's doing on geometry. These are all rigid motions. But somehow, rotation seems to be a whole lot harder for young intelligent beings to wrap their heads around than translation is-- including translation in depth or vertical translation. There's something hard about orientation changes. And in fact, I think they remain hard for us as adults. Think of how the shape of a square seems to change if you rotate it 45 degrees so it's a diamond: it's no longer obvious that it's got four right angles. There's something about orientation that's harder than these other things. And I think we were seeing that here.
When a baby is sitting still and a rod is moving behind an occluder, it's moving both relative to the baby and relative to the surroundings. Which of those things matters to the baby? So Phil Kellman did the ambitious experiment of putting a baby in a movable chair and moving the baby back and forth. In one condition, the baby is looking at a stationary rod, but his own motion is such that if you put a camera where the baby's head is, you'll see the image of that rod moving back and forth behind the block.
In the other condition, the motion of the rod is tied to the motion of the baby, so it's always staying in the middle of the baby's visual field, but it's actually moving through the array. And it turned out that whether the baby was still or moving didn't matter at all. If the object is still, completion doesn't occur, whether the baby is still or moving. If the object is moving, it doesn't matter whether it's being displaced in the infant's visual field or not; it's seen as moving, and completion occurs.
Now, this isn't magic. The studies are not being done in a dark room with a single luminous object, where the baby wouldn't be able to tell. There's lots of surround: it's a puppet stage, and that is stationary. So there's information for the object moving relative to its surroundings in all of the conditions of this study, and I'm sure that's critical.
But from the point of view of the infant's connecting of the visible ends of the object, the question he's trying to answer is: is that thing moving? Not: am I experiencing retinal movement in this changing scene?
What those last findings suggest is that the input representations to the system that's forming objects out of arrays of visual surfaces already capture a lot of the 3D spatial structure of the world. This is a relatively late process. And that allows us to ask: is it even specific to vision? Would we see the same process at work if we presented babies with the task of deciding whether two things that are moving in the world are connected in regions they're not perceiving? We can ask that in other modalities.
So we did a series of studies-- this is with Arlette Streri-- looking at perception of objects by active touch, by taking four-month-old babies and putting a bib over them. Now, I said they can't reach for objects, but if you put a ring in a baby's hand, even a newborn's hand, they'll grasp it. So we put rings in their two hands. And in one condition, the rings were rigidly attached, although the array was set up so that they couldn't actually feel that attachment, and they couldn't see anything about the object anyway, because the screen was blocking them.
But as they moved one, the other would move rigidly with it. In the other condition, the two were unconnected, so they would move independently. After babies explored that over a series of trials, we then, as in the other studies, presented visual arrays in alternation in which the two rings were connected or not. We found that in the condition where the rings had moved rigidly together, infants extrapolated a connection and looked longer at the arrays that were not connected. In the case where they moved independently, they did the opposite.
Now, that doesn't tell us that there is a single system at work here. It could be that there are, as Shimon, I believe, was saying yesterday afternoon, redundancies in the system: different systems that are capturing the same property. That could still be true. But we went on to ask not only what infants can do, but what they can't do. And I think the answer gives us reason to take seriously the possibility that there's actually a single system at work here.
What we did-- I haven't pictured it here-- is vary not only the motion of the things but also their other properties: their rigidity (we contrasted a ring made out of wood with a ring made out of some kind of spongy, foam-rubbery material), their shape, their surface texture. We asked, do infants take account of those properties in extrapolating a connection? Are they more likely to think two things are connected to each other if they're both made of foam rubber than if one of them is made of foam rubber and the other is made of wood?
We never found any effect of those properties, just as we didn't in the visual case. So we see not only the same abilities, but the same limits. And while that's not conclusive, I think it adds weight to the idea that what we could be studying here-- though we started in the visual modality-- is something that's more general and more abstract: basic notions about how objects behave that apply not only when you're looking at things, but when you're feeling them, actively manipulating them, exploring them in other modalities. So I put a question mark because it's not absolutely conclusive, but I think we should take seriously that possibility.
OK. Only motion? Is motion the only thing that works, or will other changes work-- say, if an object changes in color? We created a particularly exciting color change by embedding colored lights within a glass rod so it's flashing on and off. That succeeded in eliciting very high interest: babies looked at the array for a long time, but only the motion array was seen as connected behind the occluder. So it looks like not all changes elicit this perception. It's an open question what the class of effective changes is. Maybe it's broader than just motions, but it doesn't seem like all changes work.
Finally, is motion the only property of surfaces that influences infants' perception of objects? The answer to that seems to be no. We studied that in a different situation, for which this is just a very impoverished cartoon. We took two block-like objects-- of different colors and textures in some studies, the same color and texture in others; it didn't matter-- and put one on top of the other, and either presented them moving together or moving separately.
And then tested whether babies represented them as connected in either of two ways. Some of the studies were done with babies who were old enough to reach. And then we could ask, are they reaching for it as if it were a single body or as if there were two distinct bodies there? I could give you more information about that if you're interested.
The other was with looking time, where we had a hand come out and grasp the top of the top object and lift it. And the question is, what should come with it? Will the bottom object come with it as well, or will the top object come up on its own? When the things had previously moved together, infants expected it all to move together. When they'd moved separately, they expected only the top object to move by itself. And when there was no motion at all, findings vary somewhat from one lab to another, but mostly they tend to be ambiguous.
So there it looks like motion is doing all the work. But if you make one simple change to this array that you can't do in the occlusion studies, you simply change the size of this object and present it such that there's a gap between the two objects. And you can either do it with this guy floating magically in midair, or you can do it with two objects side by side, both stably supported by a surface. If there's a visible gap between them, the motion no longer matters. They will be treated as two distinct objects, no matter what.
So what I think is going on here is that babies have a system that's seeking to find the solid, connected bodies-- the bodies that are internally connected and will remain so over motion. And that's what's leading them to see these patterns of relative motion or these visible gaps as indicating a place where one object ends and the next object begins.
I did want to get on to the problem of tracking objects over time: perceiving not what's connected to what over space, but what's connected to what over time. Under what conditions is the thing that I'm seeing now the same thing that I was seeing at some place or time in the past? Conceptually, it feels like continuity of motion over time is related to connectedness of motion over space. And it's been tested in a variety of ways.
Here's one set of studies that we did, where we have an object that starts here, behind one screen, and ends up here, behind the other, and either is seen to move between the two screens or is not. And we ask babies, in effect, how many objects they think are in this display, by boring half the babies with this event and half the babies with that one, and then presenting them in alternation with arrays of one versus two objects, neither of which ever passes through the center, but the arrays differ in number. In the one-object case, it's either moving over here or it's moving over there on different trials.
And what we find is that in this case, they expect to see one object and look longer at two. In this case, they expect to see two objects and look somewhat longer at one. There's actually an overall preference for looking at two, but you get that interaction, and there's a slight preference for looking at one in that condition. That provides evidence, I think, that babies are tracking objects over time by analyzing information for the continuity or discontinuity of object motion.
Now, Lisa Feigenson has conducted stronger tests of this, I think, with somewhat older babies. When babies get older and they do more, you can do stronger tests. So these are babies who are old enough to crawl, old enough to eat, and old enough to like graham crackers. So she puts the baby back here, and in one set of studies, she takes a single graham cracker, puts it in one box, and then takes two graham crackers, one at a time, and puts them in the other box. And then the baby, who's being restrained by a parent, is let loose. And the question is, which box will they go to? And they go to the box with the two graham crackers.
My favorite study, though, in this whole series was one that she and Susan Carey ran as a boring control condition. I think it's the most interesting of the findings, though. In the boring control condition, they were worried about the fact that maybe babies are going to the box with two because they see a hand around that box for a longer period of time, doing more interesting stuff.
So they did the following boring control. The two-cracker condition was the same as before: a hand comes out with a single graham cracker, puts it in the box, comes out empty, returns with a second graham cracker, and puts it in the box.
In the other condition, the hand comes out with one graham cracker, puts it in the box, comes out again with the graham cracker, and then goes back into the box with that graham cracker. So you've got more graham cracker sightings on the left and the same amount of hand activity on the two sides, but the babies go to the box with two. They're tracking the graham crackers, not the graham cracker visual encounters. They're tracking a continuous object over time.
Finally, objects interact. Scenes don't usually just contain a single object that's either connected or not, or continuously visible or not. They contain multiple objects, and those objects interact with each other. Shimon talked yesterday afternoon about the evidence that babies are sensitive to these interactions, at least down to about six months of age, in the conditions he was talking about. In slightly different conditions, the sensitivity has been shown as young as three months of age.
Basically, here's a paradigm that will show that. You have a single object that's moving toward a screen. Another object is stationary behind the screen. But at the right time-- the time at which the first object, if it continued moving at the same rate, would contact the second-- the second object starts to move in the same direction. And now, after seeing that repeatedly, the screen is taken away, and babies either see the first object contacting the second and the second one immediately starting to move, or they see the first object stopping short of the second and then, after an appropriate gap in time, the second object starting to move. And they look longer at that second display, providing evidence that they inferred that the first object contacted the second at the point at which it started to move.
Interestingly, as in the case of the occluded object studies, if instead of having the second object move, you have it change color and make a sound-- so it undergoes a change in state, but no motion-- the babies no longer infer contact. They are attentive to those events; they watch them a lot. This is work of Paul Muentener and Susan Carey, relatively recently. It wasn't done with cylinders; it was done with a toy car that hits a block, I think, or doesn't hit the block. But infants are uncommitted as to whether the car contacted the second object or not, if the second object changes state but doesn't move.
Returning to the case where they succeed-- namely, one thing went behind a screen, the other thing started to move, and infants inferred that they came into contact-- that begins to suggest that maybe babies have some notion that objects are solid: that two things can't be in the same place at the same time, and that when one moving thing hits another thing, the motion of one or the other of them, or both, has to change, because they're not going to simply interpenetrate each other. And Josh already very briefly pointed to some very old studies suggesting that babies make some assumption that objects are solid quite early-- in the earliest studies done with babies, I think it's about two and a half months of age.
These are the studies that Renee Baillargeon did. They start with simply a flat screen rotating 180 degrees back and forth on a table, lying on the table with its back edge right here at the middle. Then she places an object behind that wall, and the screen starts to rotate up around the back edge, and the question to the infants, in effect, is: what should happen to that screen?
And the two options she presents to them are that the screen either gets to the point where it would contact the object, which is now fully out of view, and stops, and then returns to its first position-- a novel motion, but one consistent with the existence, location, and solidity of that hidden object-- or it continues merrily on its way in the same pattern of rotation as before. When it does that, of course, it's going to come back down flat on the table, and there's not going to be any object there. If there had been an object, it would have had to be compressed-- or, what I think actually went on in those studies, quickly and surreptitiously knocked out of the way.
And infants looked longer at the event in which the screen rotated all the way through than at the one in which it stopped, providing some evidence that they were representing these objects both as existing when they were out of sight and as solid. So this is just a summary, not a claim about knowledge development. I'm attempting to characterize here, with motion over just one dimension of space and time, what infants seem to represent about the behavior of objects.
Namely, each object moves on a continuous path through space and over time. It moves cohesively: it doesn't split into pieces as it's moving. So if you've seen something move like this, then you find it unlikely that if this part were lifted, it would go off on its own, and you look longer at that. There is no merging, where two things that previously moved independently now move together. So after looking at this, it would also be unlikely, if you lifted this, for the whole thing to jump up at once. Objects move without gaps. They move without intersecting other objects on their paths of motion, such that two things would be in the same place at the same time. And they move on contact with other objects and not at a distance from them. So that's just a summary of what I think these studies show about four-month-old infants, not newborns.
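Two of the principles in this summary, continuity and solidity, can be caricatured as checks on candidate trajectories. The representation here (one spatial dimension, sampled `(time, position)` pairs) and the thresholds are invented purely for illustration; they are not a model from the studies described.

```python
# Caricature of two object principles as trajectory checks.
# A path is a list of (time, position) samples for one object in 1D.
# The representation and thresholds are illustrative assumptions.

def continuous(path, max_speed=1.0):
    """Continuity: no teleporting; position changes by at most
    max_speed units per unit time between samples."""
    return all(abs(x2 - x1) <= max_speed * (t2 - t1)
               for (t1, x1), (t2, x2) in zip(path, path[1:]))

def no_interpenetration(path_a, path_b, min_gap=0.0):
    """Solidity: two objects sampled at the same times never occupy
    the same place at the same time."""
    return all(abs(xa - xb) > min_gap
               for (_, xa), (_, xb) in zip(path_a, path_b))

smooth = [(0, 0.0), (1, 0.8), (2, 1.5)]   # plausible object path
jumpy = [(0, 0.0), (1, 5.0)]              # violates continuity
print(continuous(smooth), continuous(jumpy))  # True False
```

A display that violates one of these checks is, on the looking-time logic above, the display infants should watch longer.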
They also show that infants' perception of objects is really limited. There are all these situations in which we see unitary, connected, bounded objects when they don't. And interestingly, research by Fei Xu and Susan Carey shows that even when you present really quite surprisingly old infants, 10-month-olds, with objects that should be really familiar to them, like toy ducks and trucks, they don't assume that these are two distinct objects if they undergo no motion. If they're simply presented stationary, the babies seem uncommitted as to whether there's a boundary between them or not. So infants are using very limited information to build these basic representations of what's connected to what, where one thing ends and the next begins.
Now, this changes very abruptly between about 10 and 12 months of age. Infants start treating those as two separate objects whether they're moving together or presented stationary. Now, infants' tracking of objects over time shows very similar limits. I told you they succeed in representing two distinct objects in a situation like this. But up until and including 10 months of age, they fail in this situation. If a truck comes out on one side of a single large screen-- so you're not getting information about the motion behind that screen-- and a duck comes out on the other side, and you ask babies, in effect, how many things are there, one or two, by removing the screen and alternately presenting those two possibilities, they are uncommitted between those two alternatives.
In this situation, as in the previous one, there's this very abrupt change between about 10 and 12 months of age. And I can't resist saying, even though I'm way over time, that Fei Xu has shown that that change is interestingly related to the child's developing mastery of expressions that name kinds of objects. She's been able to show, for example, that if you simply ask, for individual infants, when they start succeeding here, their success is predicted by their vocabulary as reported by their parents.
She's also shown that if you take a younger infant who would be destined to fail this study, but, as you bring objects out on the two sides-- either familiar ones or novel ones, starting at about nine months of age-- you name them and give them distinct object names, they now infer two objects. And in fact, they'll even do it if the two things you bring out from behind a single wide screen look the same. If you bring one thing out and say, look, a blicket, and put it back in, and then bring something out and say, look, a toma, even if it looks the same, they'll infer two objects. So there seems to be this change occurring quite dramatically at the end of the first year that's overcoming these basic limits that we're seeing earlier on.