Below, Professor Brian Williams describes various aspects of how he teaches 16.412 Cognitive Robotics.
Brian Williams: Artificial intelligence focuses on how people reason consciously. In cognitive robotics, we focus on developing robots that can think at the level that humans do. As humans, we're good at planning and executing tasks. Researchers working in the field of cognitive robotics focus on giving robots these abilities, too. But nothing ever goes as you expect. Humans constantly adapt. We try to create robots that can adjust their goals and develop contingencies. So the main things that students learn in the course are how to plan systems dynamically, and how to monitor and diagnose problems with cognitive robots. Students also learn how to program these systems.
Brian Williams: Students are accustomed to reading chapters in textbooks—material that took decades for scientists to understand. But cognitive robotics is an active research area. It’s moving so quickly that every three years or so it reinvents itself. This course is focused on helping students close the gaps in the research. To be at the cutting edge of research, students need to read across papers and understand core ideas that are developed from a collection of publications. And then they need to be able to reduce that understanding to practice.
There’s also no better way to understand something than to teach it, implement it, and put it in a bigger context of some real-world application. That’s why we have a grand challenge at the center of the course experience.
Brian Williams: I like the idea of learning communities, of everybody trying to learn about a topic together. The grand challenge is a communal learning experience driven by a cutting-edge research question in cognitive robotics that allows us to focus on core reasoning algorithms. Students work in teams to present advanced lectures about different aspects of the topic.
Brian Williams: It’s important for students to work in teams because research is a collaborative endeavor. The notion that doctoral students are lone wolves is just not accurate. The more students can practice effective collaboration, the better.
It’s also the case that developing lectures is hard work. Just producing a first draft of a lecture can take 20 to 30 hours. And then you need to spend another 6 hours improving it. So, to develop a high-quality lecture, you really need two people working together.
Brian Williams: Yes. As I mentioned, the field of cognitive robotics is moving really fast. What normally happens is that members of the research community will generate tutorials on emerging themes. These tutorials encapsulate core ideas that everybody should know. The problem is that there's just so much we need to know—but not enough time to write all the tutorials. So some of the students in the class are assigned to write tutorials related to the topic of the grand challenge. And a few others will write corresponding Jupyter (Python notebook) problem sets. Along with the lectures, students end up producing materials that are enormously helpful to researchers in the field. This is important because I want them to learn that as scientists, their role is to consolidate ideas and to teach the community.
Brian Williams: The first grand challenge I encountered was during my time at NASA. They had just lost a billion-dollar Mars mission. The spacecraft stopped working and the humans on the ground couldn't control it. The NASA administrator asked us to build a spacecraft that could do the thinking that the ground crew should have done to ensure the safety of the spacecraft. It was an interesting example of a visionary challenge. It was exciting. We learned a huge amount from that challenge. It drove research in the field for about 10 years.
Brian Williams: Probably deep down, I am not so driven by competition, which is one way for research to advance. My experience at NASA taught me that grand challenges are about people working together to tackle a big problem that nobody knows how to solve. In academia, we often have competitions when we know there are a number of ways to approach a problem. In these situations, scientists are driven by the challenge of examining the details of the problem. But grand challenges focus on an earlier stage of research—one in which people don't know even one way to solve the problem.
Brian Williams: That is an interesting problem, because when the whole class does a project collaboratively, teams can become too large. When that happens, people begin to feel disenfranchised. What I do to combat that is to make clear from the beginning what elements or materials individuals are responsible for contributing to the project. I have students write down what they are contributing so that I can assess their work accurately.
Another piece of the assessment puzzle is providing good feedback. The place where feedback matters the most is during the dry run for the students’ advanced lectures. A week before the students give their lecture to the class, they do a dry run for the teaching team and receive feedback. The process takes about two and a half hours. We teach them how to capture students’ interest at the beginning of the lecture and how to clarify the main points they want students to learn. We also help them convey the synergies between the main points and encourage them to consider the role of examples in their presentations.
Brian Williams: Yes! We were curious to see how we might teach this course to students who don't have programming backgrounds and who don't necessarily care about how the algorithms work, but who find it cool to work on grand challenges and to build robots. We experimented with teaching such students during an MIT Independent Activities Period. We gave participants a grand challenge—and 3 weeks to tackle it. We provided them with a system that could do a little bit of thinking. Then we explained the reasoning of the underlying methods. They played around with the system in the lab. Then we made the system a bit smarter and let them work on the challenge again. The system became increasingly smart and the challenge became increasingly easier to do—and that was fun. Students were excited! They didn't need to have any kind of deep background in cognitive robotics to engage with the work. Based on this experience, we believe teachers will soon be able to teach cognitive robotics to high school audiences.
Brian Williams: For research scientists around the world, we have the Summer School on Cognitive Robotics. It's five days long, and revolves around a grand challenge. But rather than me giving the lectures, several top people from the field give 15 tutorials. Often researchers become so specialized in their niche areas of inquiry, they lose sight of how their piece of the puzzle fits with all of the others. The Summer School helps researchers see the big picture. It builds community.
Brian Williams: And how to catalyze community. An engaged, collaborative community is absolutely key.
6.041 Probabilistic Systems Analysis and Applied Probability, 6.042 Mathematics for Computer Science, or 16.09 Statistics and Probability.
16.412 Cognitive Robotics can be applied toward a doctorate degree in Aeronautics and Astronautics.
Every spring semester
The students' grades were based on the following activities:
10% Participation & attendance (mini quizzes)
40% Problem sets
30% Advanced lecture & implementation
20% Grand challenge
Breakdown by Year
Mostly first- and second-year graduate students, with a few undergraduate students.
Breakdown by Major
Variety of majors
Typical Student Background
Students come to the course with a variety of interests, but all are passionate about some notion of robotic systems. Students from the Media Lab have explored smart kitchens and smart buildings. Other students are curious about underwater vehicles, while some are drawn to smart cars. Whatever a computer can make a little smarter—that’s what interests them.
During an average week, students were expected to spend 12 hours on the course, roughly divided as follows:
In Class
- Met 2 times per week for 1.5 hours per session; 25 sessions total.
- Sessions included advanced lectures, delivered by students, on topics related to the grand challenge.
- Some sessions include mini-quizzes to assess students’ understanding of content.
Out of Class
- Students completed required readings and worked on problem sets that asked them to perform modeling exercises using existing autonomy tools and to implement algorithms (sometimes from scratch).