Class 12, Part 2: The Future of Work & The Employment-Productivity Debate



Description: This class will discuss the ongoing debate over the “future of work.” Has the IT revolution advanced to the point that its productivity gains will be creating large-scale unemployment? The class will examine arguments advancing this position, including that the nature of work has fundamentally shifted with the entry of IT at scale into both the services and production sectors. It will then look at the response by a prominent economist that the linkage between productivity gains and rising net employment over time, in place since the industrial revolution, has not significantly shifted. It will also look at the linkage between education and higher-skill employment, and the argument that this is a key explanation for rising income inequality. The class will also look at the issue of "jobless innovation" and its linkage to fundamental problems in the production sector, and at a new book arguing that fully autonomous robotics is unlikely, and that robotics will continue for a long time on the path of deep integration with people, where robotics is an extension of human capabilities, not a displacer of them. Part two of two.

Instructor: William Bonvillian

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at

WILLIAM BONVILLIAN: This video that we saw by David Mindell is about his new book, which is called Our Robots, Ourselves: Robotics and the Myth of Autonomy. And he means exactly that. Autonomy really is a myth and will be so. And David has a unique perspective on this question, because this is his life.

In other words, he has been deeply involved with robotics at just cutting-edge levels for the last 30-plus years. So he was deeply involved in a lot of the early undersea exploration and working with Bob Ballard and Alvin and the pods that came off Alvin. And then he did a lot of work with NASA and was involved in the robotics on Mars and the planetary explorations using robotics.

And then he spent a lot of time with the US military that had been doing drone technology and has a real hands-on feel. He's also a pilot and flies extensively, both helicopters and aircraft. He's been involved in a helicopter-drone company, so he knows exactly what those are and how they work.

And in addition, he's just taken leave from MIT, and he set up his own robotics company, including with people from Lincoln Labs, which has the greatest radar expertise. And the issue there is, how do you create robots that can work in really intimate proximity with people, right? Not kind of arm's length distance, much less locked in on a floor, but really in close proximity. So they're developing interesting technologies to enable that.

So he's got a whole personal sense of what this robotics movement is all about. And he kind of takes us back from the literature that has portrayed this kind of ongoing nightmare of job displacement from robotics, and I think, frankly, puts us in a much more realistic posture of what it's actually going to be like when robotics start to scale up to a greater extent than they have so far.

And his work-- we read JCR Licklider when we took up the book The Dream Machine, his biography by Mitch Waldrop. And Licklider, as you remember, gave us this picture of how people and machines were going to work together. And Licklider, in a way, was responding to a period of technological concern that computers were going to replace people, because they could think better than people. And Licklider, although a great technologist, was trained as a psychologist. And his field was man-machine interface-- how do people work with machines? How does that actually function?

And so he comes to computing with this vision of a symbiosis in which computers are going to do what they're good at, and people are going to do what they're good at, and we're going to have a symbiosis between the two that's going to optimize both territories. And indeed, that's exactly the direction that computers go in. In other words, I'm not spending sleepless nights wondering when this thing is going to replace me, right? We thought it was. And Norbert Wiener at MIT had invented the term cybernetics and wrote the first book on the topic. But Wiener had a very dark vision of what computing was going to mean for the future of human thinking. It didn't turn out to happen.

So Mindell is arguing that even the most threatening technology, which arguably is robotics, it's just not going to work out that way. A symbiosis is going to be what's achieved here. And he takes apart each one of these territories in the book. I mean, I had you do the video. But it's a very rich and very well-written book. It's a tremendously fun read, and he tells a lot of great stories.

But his vision is that there can be a richer space when people and robots are joined. There's a richer understanding and ability to make perceptions and judgments that occurs from that mix. And that turns out to be exactly his experience undersea and doing space robotics, that there's just a different kind of environment that evolves here. And it's better than a machine alone or a person alone, that there's a new symbiosis that enhances both sides in a way that we can draw on.

And what he finds is that when people, in effect, start to operate the robotics-- and in the end, in these systems, autonomous systems don't work very well-- people really need to be part of the process with the robot-- that the robot becomes an extension of themselves. And that, in fact, all their perceptions are in the robot. So even if the robot is a mile deeper undersea or tens of thousands of miles away in space running around the moon, that the person operating the robot and cooperating with the robot is in the robot. And the sensory systems that are available to the robot really become theirs. In effect, we get extended with a whole new kind of reach.

And that's really his vision of what occurs here. That's the UAV story, that's the undersea exploration story, that's the space story. And, he argues, this is going to be the driverless car story, too.

So we have been proceeding-- and he goes through this at rich length in his book-- we've been proceeding under the assumption that we're going to get replaced by driverless cars. But he argues, wait a minute, 40 years of experience of robotics says that's not what happens. Instead, this symbiosis starts to occur, right? Because people are not going to want to give up certain kinds of level of controls and certain kinds of decision-making. They're going to want to be involved in those.

So he argues that, just as with undersea exploration and with the other robotics fields that he's talking about, a richer driver experience can occur in a driverless car setting-- one that involves much more engagement with the surroundings, much greater information and knowledge access, much better safety and comfort, and a lower-stress environment. People will still be in command of the car. They'll be able to cede a lot of lower-end territory to the robotic activities, but will retain certain kinds of charge and be able to operate at a different level of perception.

We'll see, you know? Time will tell here. The driverless car movement has been pushed by a whole community that wants to get people out of the driver's seat and into the trunk, or at least watching movies in the back seat. Mindell argues that's actually probably not going to occur. And then he lays out a series of technological challenges that have to be overcome.

So LIDAR, which is the fundamental laser-based sensing system we're using for driverless cars-- and David knows a lot about radar, because that's what his new company is organizing right now-- LIDAR happens to be very problematic on wet surfaces and snowy surfaces. It fails, right? That's why Google has based its driverless car operation out in sunny California, right? Uber is having much more trouble in Pittsburgh. And I'm sure your friends, Steph, are going to have trouble when they worry about Boston in the wintertime with their driverless car initiatives. It's highly problematic.

You can get around that by using ground-penetrating radar, which avoids the surface water problem and surface slickness problem. But at the moment, that's another $70,000 per car. So the prices will get driven down the cost curve, and maybe we can do that. But these are significant barriers here.

I was at a presentation recently by Gill Pratt and John Leonard, who are leading the Toyota driverless car initiative. And it was an interesting discussion. The leader of the discussion said, well, you know, we're going to have driverless cars within five years. And Gill Pratt, who led the whole robotics effort in DARPA-- and a former MIT faculty member, one of the most respected experts in robotics in the world, and named by Toyota to lead their effort-- Gill had this line and he said, you know, people tend to completely overestimate what we're going to accomplish in the next five years and completely underestimate what we're going to accomplish in the next 50. We're not good at making that balance. And in effect what he was saying is, the driverless car problem is not a five-year problem.

And then John Leonard, who is running MIT's part of the Toyota initiative, one of MIT's famous robotics experts, he put up a video. And the video was him taking his kids to school. And they drive down a residential street-- one of these kind of urbanized suburbs around here. And there's a T-intersection. So they leave the residential street, and they have to make a left turn to get to the school. The kids are sitting in the back.

And in the lane immediately in front of him going in that direction, going to the right, are cars that are erratically coming across at about 40 to 50 miles an hour. And the lane that he wants to get into, it's bumper to bumper, completely stalled. And he's got to turn left.

He finally is able to solve the problem, he indicates, by rolling down his window and making a judgment about which driver might be the most sympathetic-- waving, establishing eye contact. And this driver is willing to let him pull in in front of her. But he's then got to wait-- she's got to be at the right point in the line, and the erratic 50-mile-an-hour cars have got to let up a little bit so he can get across. He's finally able to achieve it. He does this every morning.

He said, look, I can't write the algorithm for the left-hand turn into heavy traffic. I can't do it. I've been thinking about it for a long time. That is at least a 10-year problem. How is my driverless car going to establish eye contact with the driver in that far lane? How are they going to signal out the window? How are they going to make that personal connection?

All of us, when we drive all the time, are making constant judgments about other drivers. John Leonard says, I can't write the algorithms to evaluate the crazy 18-year-old hot rod driver versus me versus an elderly 65-year-old driver. I can't make those differentials in my algorithms to make this thing work.

So the problem of driving in an urbanized setting is going to be profound. He said, look, we can probably figure out interstate driving-- the variables are much more manageable. But urbanized setting, it's really, really complicated.

So I guess what Mindell is telling us-- and those are just examples that are not in his book, but ones I'm pulling up from recent experience-- is that this autonomy is not necessarily at hand. And even when it is available-- he uses the example of the moon landing-- the astronauts preempt it, because they see things, i.e., a small crater, that would have prevented the landing in the right location. So they have to take over the controls and move the thing. People are not going to want to give up that level of involvement, he argues, and shouldn't, because there are some things they're going to be better at judging than even the best machinery we can come up with.

So there's an example that David uses, which is that a combination of a chess master and a computer always beats another chess master alone or a computer alone. And so far, that's our experience. Now, all of these things could change over time, and technology will definitely advance. But what David is arguing is that this robotics revolution is not going to be a revolution around autonomy. It's going to be, like the computing revolution, a revolution around symbiosis.

And I think that's a really important thing for us to keep in mind as we think about the speed with which these changes are going to start to happen-- the complexity is really high-- and the nature of the changes themselves. And in David Mindell's world of cobotics and assistive robotics, there is much more room for people than in a truly, totally autonomous system.

And that presents-- again, back to our education discussion, that's back to our training discussion-- that presents a whole new set of ways of thinking about what this is going to look like that we probably ought to keep in mind. And it also means that the timetable of these changes is not as abrupt as the media has been telling us in the last couple of years. It's a more manageable-- David Mindell would argue-- and more gradual kind of timetable.

All right. Sanam, it's all yours.

SANAM: Yeah. So I think that the question about symbiosis between people and machines is really interesting, especially considering the debate about whether people and robots are a lot like substitute goods or complementary goods. And I think that the consensus here is that they are largely complementary. But I think that also has some interesting implications for what that could do to the human environment. So even with goods that are complementary, it's likely that this will push out the demand for the types of labor that are more well-versed in high-skill [INAUDIBLE] in working with these types of technology, and reduce demand for labor that's less so. So I think that there are really important considerations about, when you're integrating technology into a human environment, how it's going to affect that environment.

So I think just a question that a lot of you had was, when you are looking at building these autonomous or semi-autonomous technologies into an existing human environment, it's going to require a lot of knowledge about that environment in order to make these important predictive models. So what are some of the biggest challenges in doing that? And where do you see the potential failings or pitfalls?

CHLOE: So one of the things I was interested in-- I really agree with a lot of the points that he makes. I think he lays out a really optimistic, but also appropriately realistic, future for our symbiotic relationships with robots in a lot of different fields. But to [INAUDIBLE] your question, he raises the issue of the pilot, the soldier, and the astronaut, who are all, to some extent, a little reticent to fully adopt complete autonomy in their own respective fields.

So I guess one of the places where you could consider a potential failure of that symbiosis is, is that hesitation to adopt the fully-autonomous system something we should train out of these humans? Or should we adapt our technological development to match it? Like is that a failure point or is that OK?

MAX: It's possible that the want or the desire to go away from autonomy-- maybe they have some sort of instinctual realization that the computer cannot do everything that they can. It's not as good at pattern recognition, that kind of thing. So maybe it's not a failure point. Maybe instead it's a--


I'm not sure what you would call it.

CHLOE: Interesting.

MAX: Bless you.

CHLOE: Like an actual benefit to that.

MAX: Yeah, almost like a warning system. Yeah?

AUDIENCE: Yeah. And on top of that, I feel like there's a whole-- especially thinking about cars, but I'm sure for pilots and other people as well-- there's a lot of culture around being a driver or being a pilot. You can define yourself as a pilot. And I think by taking away any control that you have over an aircraft or something, that can be something that people don't want to do just because people like to drive, they like the open road, they want to feel like they're in control. And taking that away could just be a cultural issue as well.

RASHEED: I thought autopilot was pretty well-accepted and kind of integrated by now. I know, especially for long-distance flights, cross-Atlantic, I definitely don't want the pilot of this airplane reading and charting the navigation across the Atlantic Ocean. Like, I'm good. We can definitely have more integrated technology to allow machines to do that. But at least within the pilot community, I thought this was pretty accepted. And now with fighter pilots and things like that, you're going so fast, it doesn't make sense to have a human interpret all these signals that are coming in.

CHLOE: Sorry.

RASHEED: Yeah, no, go ahead.

CHLOE: No, you are right. It is very widely accepted and integrated. But also, I think you make an interesting point in terms of how people define themselves and that job being actually part of their identity. Also, I think as we've developed more and more complex machines and those machines become integrated into our life, we interact with each other through those machines.

So even just taking the example of air traffic. There's a very complex communication network between air traffic controllers and different pilots, from commercial aviation to general aviation. And even though that system has a lot of autonomous components in it in this day and age, there's still generations of protocol, both very officially delineated and also sort of adopted as practice. And I think that the humans who are willingly part of that system as controllers and pilots know that very well. And so they know how to interact with the other pilots, both in manners and in how to make well-informed, safe decisions based on what other people are doing.

So I think-- yeah. I think that even though they might be flying, technically, a completely autonomous plane, they still serve as the end-point of the communication with other humans who are in charge of other completely autonomous systems.

WILLIAM BONVILLIAN: And Chloe, I think you're absolutely right on that from what I understand from David's book and discussions with him. In addition, he gives a remarkable example. It's a startling example that starts the book off. When you get a chance, the book is really worth reading. He tells the story of an Air France crash over the Atlantic several years ago. Chloe, I see you're smiling, so you're thinking about the story.

CHLOE: Yeah, I was just thinking about this.

WILLIAM BONVILLIAN: And you can extrapolate and see if I get the story right. But essentially, the story that he tells is that there's a moment in the flight they're on autopilot. As you point out, Rasheed, it can be a very useful tool for pilots and is used in many parts of a flight pattern. And they're on autopilot, and the plane enters an area of very low temperatures and ice is forming on the wings. So the autopilot system default mechanism, at that point, is to remove the autopilot control and switch it to the pilot control.

And what has happened is that the pilot of the aircraft is back in the restroom, because they're on autopilot over the middle of the Atlantic. And the flight engineer and the co-pilot are at the controls. They're relaxing. And suddenly, they're in charge. And each has a different reaction. And it ends up completely destabilizing the aircraft and throwing it into a spin right into the Atlantic Ocean, where all the lives and the aircraft are lost.

And they had the opposite-- they're not ready. So this moment of people being ready to take control again after having been out of control turns out to be a nightmare in all kinds of other settings, right? It's very unmanageable.

How do you keep the person who needs to be in overall control completely in the game? And how do you keep them completely involved? So that's one of the most complicated design problems for autonomous vehicles in general, and driverless cars in particular. Since we're probably not going to get to fully driverless cars soon, how is that relationship going to work? And how are we going to play that out? And how do you keep just enough of a level involved?

And interestingly, Rasheed, back to your point, the autopilot system in aircraft has actually now evolved to the point where they keep that balance right between the machine and the person-- keeping the person in overall control and respecting the person's desire to retain that control. They figured out what the signaling is so that that actually works well on both sides. But that's a really complicated balance to work out. Have I got that right, Chloe, would you say?

CHLOE: Yeah, yeah.

WILLIAM BONVILLIAN: Anything you have to say more on this? Are you studying this?

CHLOE: Well, in AeroAstro we do a little bit of failure investigation. But I also fly personally, and I've had a lot of experiences that validate that exact thing. So I used to fly a pretty small four-seater aircraft.

MAX: Cessna?

CHLOE: Yeah, well, a Columbia, which is basically the same size as a Cessna, but a lot faster. And I used to fly with my dad a lot. And one of the stories that we would repeatedly live out when we were landing at a new airport was that the air traffic controller, based on the size and type of our aircraft, would vastly underestimate the speed at which we were making our descent.

And a full descent into most of these airports is a multi-leg type thing, but you start up to 20 minutes before you're actually on the ground. So it's pretty complicated. There are other airplanes in the air. You have to match their speeds and rates of descent, so you don't have two people colliding on the runway.

And repeatedly, how fast we were going would be underestimated. And we would be coming down much more rapidly than the air traffic controller would think. And so we'd be going faster than a much, much larger aircraft was. So that was just another case of where establishing that human contact with other people to let them know where you were and with the people whose instrumentation was telling them something different, it's important to have people who know their own missions--

WILLIAM BONVILLIAN: Interesting example.

CHLOE: --involved in the process.

MARTIN: I think there's also an opportunity for using autonomy in an area where there's human bias. There was a case study of a Japanese flight, where the co-pilot knows that what the actual pilot is doing is wrong. But because the pilot is an authority figure, he won't contradict him. So using some autonomous system to say, no, this is wrong, we're definitely doing something wrong, or at least raise some kind of flag, is useful.

In sci-fi, it was really cool, because it talked about human bias in justice, like when you make a decision. And so there was this one science fiction story where, in the future, an AI was the judge, because it'd be unbiased. I don't know. I think that stuff's pretty interesting.

AUDIENCE: That's already kind of happening-- they've been using algorithms in some jurisdictions to determine whether someone should get bail or not based on their backgrounds. It's proprietary software, which is kind of scary to begin with, because people don't really understand--

MARTIN: How it works.

AUDIENCE: --how it's making its decisions. But you'll put in different factors about the person, and that can help inform the judge as to whether they should be let out on bail or not. And so I think there's a big-- someone just filed a lawsuit about that, since you're not allowed to face your accuser, if your accuser is an algorithm. So we're already getting into this--



RASHEED: Well, why would you even start down that road?

AUDIENCE: Well, because-- the justification--

MARTIN: It's more fair.

AUDIENCE: --whatever you think of it--

MAX: And it's faster.

AUDIENCE: --is that there are a lot of judges who are seeing hundreds of cases a day. A lot of things are affecting their judgment-- I think the paper I read said it could be whether their team won the night before, or whether they're hungry-- they're influenced by these outside factors. So if we automate it, then we take out some of those biases, and other biases that judges just have inherently from whatever their upbringing.

RASHEED: Or like compassion maybe? I don't know if there's a compassion--



AUDIENCE: It's a very hard--

MARTIN: I mean, also the way I would think about it is, great, because you get a nice heuristic. And if there's ever an issue, that's the one that goes to court and will get a full thing. Because there's probably a lot of issues that are very quick cases. But there's a whole thing at MIT about this now that they're doing, like lawyers and tech. And it's around Sloan and around the media lab. But it's a field that MIT is looking into, but I don't know if we'll come up with a law school.

STEPH: But there was a really cool study that I just read for my paper. I pulled it up. It got published literally this week by Carnegie Mellon, entitled "A Human-Centered Approach to Algorithmic Services: Considerations for Fair and Motivating Smart Community Service Management that Allocates Donations to Nonprofit Organizations." So it's a very long title. But it's essentially--


WILLIAM BONVILLIAN: Yes, it is. The title of your paper is going to be shorter, I'm sure, Steph.

STEPH: I hope so. So there are three people. One of them is from the Center for Machine Learning, another one's from the School of Design, and then there's a stakeholder from an organization called Food Rescue, which serves the greater Pittsburgh area. And what was really cool about this was essentially that they not only talked to the computer scientists who are designing the algorithms for who gets the food allocations, but they also evaluated, I guess, the ethos that was driving the way that the programmers were devising their algorithm and the decisions that they had made, like the cost-benefit analysis and who actually gets the food, why they should get it, what considerations their children should have, et cetera. And then from there, they started thinking about the way the algorithm was designed, and how that would impact, in implementation, the lives of the people receiving the food, and whether or not that was fair.

And, interestingly enough, what I think is really, really crucial-- not only in the design process, but generally in policymaking-- is the question that they bring up about empathy. Like, to what extent are the algorithms that are being created reflective of the individual programmers' ethics and personal values? And that was something that they really bring to light. And this, I think, is an incredibly innovative case study, and it's being applied to the nonprofit sector in a really small, piloted way. But I could see studies like this scaling up to other algorithms and really striving to understand the ways in which they prejudice decision-making, even if they act as heuristics-- heuristics being just shortcuts to decision-making based on your values and intuition.

So that could be really cool. But at the same time, for me, as someone who studies both design and politics, there's an enormous-- and I guess my religious studies also influence this a bit-- there's a huge movement in religious studies to move away from empathy, because it can be utilized as a tool for manipulation. Because if you have insight into what matters to an individual and what motivates them to action, then that can be used to manipulate them.

But what, indeed, you should have, as was stated by a divinity professor at Yale-- and I think he also does some work in philosophy-- is that what you should strive for, and what I think the algorithm should strive for, is compassion, which is distinct from empathy. The difference is that in empathy, you seek to feel the way that the other person feels. Whereas in compassion, you seek to understand how they feel, but you don't truly try to access those emotions.

And so I think that compassion, intellectually, is more sustainable and is something that you can actively implement in an algorithm. Whereas if you try to make your algorithm empathetic, it starts getting into the realm of making value judgments, which is ultimately going to be damaging and more prejudicial than if an individual is making a decision. Because then you can leave the prejudice to the machine and not place the impetus of whatever discrimination is happening on an individual.

MARTIN: Also, hacks.

LILY: Always hacks.

RASHEED: Thanks.


STEPH: Yeah, the Microsoft-- just quickly on that point.

LILY: Mm-hmm.

STEPH: There was just an article in The New York Times, I think it was yesterday, on the hack that happened in England on the NHS. And they were wondering, what is the role of Microsoft in all of this? Should they be to blame? So, a question.

LILY: Well, the whole time I was watching this, Mindell, he's brilliant, obviously. But I also think that he-- I couldn't help but think that he was being a little hypocritical. Because I think his whole message and premise was don't worry about job displacement by autonomy or robotics, autonomous robots. It'll never happen, because the symbiosis of humans and machines is more powerful than the machines alone or the humans alone.

But I don't think that we can deny that certain jobs have been displaced or replaced by automated machines, robotics, et cetera. And although more jobs can be created in the future or as a result of improved technologies, we can't really get around the fact, I think, that the people who do lose those jobs often have families. And then, as we've been talking earlier today, then they're out of the game. They don't go back for a four-year-- an advanced degree-- because, oh, now I need a PhD or a master's degree. They're often permanently unemployed or displaced into other service sectors.

So, yeah, David Mindell, you're right. We're probably never going to be completely autonomous. But then again, there are jobs-- like, you can't deny the fact that there are jobs that are replaced.

CHLOE: I had the exact same feeling. I wouldn't necessarily say hypocritical, but maybe biased.

LILY: He's never going to be out of a job. He's the person who invents these. Like, yeah, of course, he's the one who's staying ahead of the technological race, because he is the technological race, you know? [LAUGHS]

CHLOE: But the symbiosis is very real for the scientists and engineers whose limits have been extended by these types of endeavors. But for the people who have never gotten to that level of education in the first place, yeah, [INAUDIBLE].

MAX: The thing is, he pointed out like, OK, yes, there are people in manufacturing jobs who get replaced. But it's similar to how-- I think it was one of the other readings, where they were saying that people who would have to light lamps in the street got replaced by electrical lights, or people who would shout news to factory workers were replaced by radios. You know, it's going to happen.

LILY: Yeah, it's going to happen. But I think he's taking a--

MAX: It sucks, but it happens. That's how progress works.

LILY: --a self-centric view.

MAX: I don't know really what to-- what's the alternative, stop progressing? Because then other countries start progressing, and then you're left behind. And then everyone's unemployed, because now your country doesn't have money. That's not good.

MARTIN: Yeah, [INAUDIBLE] or somebody else will.

STEPH: [INAUDIBLE], I just wanted to make a quick point about the concept of fetishization.

MAX: Fetishization?

STEPH: Fetishization, yeah.

MARTIN: Technology fetishization or a product fetishization?

STEPH: Yeah, because I think, at least in my understanding, fetishization in post-modern philosophy is the process by which you love the idea of something so much that you relentlessly pursue it. And then you continue operating in the pursuit of that, right?

And the question that-- as much as I love technology and as I'm fascinated by it, and as much as I someday maybe want to be an engineer, I ask myself, how is it that we have come so far in autonomous vehicular technology, and we can't get more public bus routes? It doesn't seem like a trade-off to people, but to me, that's an enormous concern. And I'm constantly looking up questions like this on Quora. And people are like, well, it doesn't matter, because they don't have to be mutually exclusive.

WILLIAM BONVILLIAN: Well, this is the whole problem with change in legacy sectors. How do you introduce innovation in the legacy sectors? That's at the heart of that kind of question. And changing a public transportation system, an established sector, may, in many ways, be far harder than introducing the completely disruptive technology that's completely outside the scope of an existing realm.

MAX: Also, doesn't this fit in with what Bill was saying before, where city driving is unbelievably difficult, and we probably will never get autonomous city driving.

WILLIAM BONVILLIAN: Well, I'm not saying never. I'm just saying it's--

MAX: Well, it'll take a really long time.

WILLIAM BONVILLIAN: This is more than a decade of a problem.

STEPH: But I guess I'm curious, why do we view these-- I'm not going to say nearly impossible-- but these incredibly technically challenging problems as more solvable than getting more public buses on the route? Why do we have more persistence and resilience when it comes to technical innovation?

WILLIAM BONVILLIAN: Because it's a frontier territory, and the barriers aren't in the way.

MARTIN: Yeah, there's a lot more barriers for making a train and how much it costs.


STEPH: But I'm not talking about trains. Literally, public buses.

RASHEED: No, I think you were trying to get at, it's like there's no barrier to figuring out what the new autonomous vehicle is, because there's no existing infrastructure on autonomous vehicles that I have to overcome. Whereas in order to get new public bus routes, I have to contend with the established system of public bus routes and kind of go through that.


MARTIN: There was a start-up that was like Uber, but for buses, which is optimized for people's-- huh?


MARTIN: Something like that.

AUDIENCE: I think they just went under.

MARTIN: Yeah, it's just a lot of the financing-- the economic cycle doesn't churn continuously. It needs to get spun. But a quick point-- I think you've brought a really good point in terms of the people writing these papers. I was at a talk by a diplomat on Trump in Mexico. And he had this really funny term where he's like, yeah, it's kind of interesting to read a lot of these papers, because a lot of them-- or like when tech billionaires talk about, yeah, we're going to do this and help the [INAUDIBLE] go to their dinner, he's like, yeah, they're kind of like limousine liberals, which I thought was a really funny term, not to criticize liberals.

But I think the people that read 10,000-- there's also a Chinese saying, which is, much better to walk 10,000 miles and talk to the people on the way than to read 10,000 books, that I think is really true. Where it's like--

WILLIAM BONVILLIAN: Are you familiar with that one, Luyao? Good.


I wanted to certify--



LILY: Verified!

WILLIAM BONVILLIAN: It's a great line, Martin. I wanted to use it myself, but I wanted to validate it first.


MARTIN: I liked the limousine liberals. And I was like, yeah, that was pretty funny.

WILLIAM BONVILLIAN: That's an old term. That term's been around for [INAUDIBLE].

MARTIN: I didn't know.

STEPH: I'd never heard of that before, but it makes total sense.

MARTIN: But it does really classify the Silicon Valley kind of--

STEPH: Libertarianism?

MARTIN: Yeah. But I think the big issue is we really need to start talking to the people on the street and what they're thinking-- what they think about the issue, rather than just like, this is what could be. Also, put it into practice, because in practice it could be a lot of different things. And most people just want to do good. And sometimes their words-- I'm clouded by the way I view those words. You know what I mean?

STEPH: I'm just really perplexed by the fact that people will spend 50 years trying to develop a technology. But in two months or two years, you can't increase support for public transit. That, I think, is to me, what's insane about the public sector.

AUDIENCE: Maybe [INAUDIBLE] because of the nature of the good. One is public good and the other one is marketable, tradable, private goods, and there's profit.

CHLOE: Also, to tie it back to one of your earlier points about how we have these Snapchat-esque companies, Facebook-esque companies, before we have the big players. And also because it's easier maybe now for a group of 20 to 30 scientists and engineers to make a Mars rover and have that to be a successful mission-- to have a lot of little things like that, where they can define how they want their innovation to operate before we can then take that infrastructure that is established by those pioneers and apply it to public good work.

WILLIAM BONVILLIAN: You know, interestingly, Sanjay Sarma of MIT makes the argument that we're not going to get to anything resembling autonomous vehicles until we integrate the systems we're developing into the infrastructure itself. In other words, suppose that rather than trying to make the vehicle entirely independent of everything that's around it, we integrated the vehicle into a new set of smart infrastructure. So in effect, there are rails for every car, right? They just happen to be cyber rails. And that would simplify all kinds of things in the autonomy project.

LILY: Or like parking in a parking structure, where you just get to a dock and the parking structure deals with it.

WILLIAM BONVILLIAN: Right. So in other words, making the infrastructure smart in parallel with making the vehicle smart radically resolves a lot of the problems here. And yet, we haven't even thought about that. We're so busy embarked on making the vehicle totally independent that we haven't even thought about a much more logical path, he would argue, which is to upgrade the infrastructure at the same time we're upgrading the vehicles.

And he argues, what's going to happen to people? Well, you've got a massive infrastructure upgrade, which is going to be heavily employment-focused. It's a significant kind of offset. And we haven't thought about doing that. And that gets into, in a way, some of the issues that you're talking about.

STEPH: Yeah. And what's interesting is that in the interview that I conducted with the co-founder of New Urban Mechanics, he said that the firms that he has talked to, and NuTonomy specifically-- not to, so to speak, throw them under the bus-- don't think that those are considerations that fall in their domain. They think that the market is, at some point, going to decide these things for them, and so they should just focus on fixing the technology. But if there's anything that this class is teaching us, it's that we need to include these considerations within the research and development process, even at the basic or early stages.

WILLIAM BONVILLIAN: Right. So, Sanam, how about a closing thought on David Mindell. We've had a robust discussion. Thank you all. I think everybody got into it this time, too.

MAX: I love the under the bus thing.



SANAM: Yeah, I think it's interesting that this debate about autonomous, semi-autonomous technology raises some really philosophical questions-- first, about what we were talking about, human trust and agency when it comes to technology. And then also, when we talk about integrating technology into deeper and deeper parts of life, what should the goals be? How much intervention should there be? And how could we improve the overall infrastructure? So I think Mindell, at some point, talked about how this is engineering at its philosophical best. So I think that's an interesting point.

WILLIAM BONVILLIAN: Good point. Good closing points, Sanam. All right, one more to go, and you're back to me.

RASHEED: You saved the best for last?


WILLIAM BONVILLIAN: You could be the judge, Rasheed.

MARTIN: That was such a good line.


WILLIAM BONVILLIAN: But that is the cover of my next book written with Peter Singer. These are both Peter and I out hiking.

MARTIN: Nice pictures too, though.


RASHEED: So he took one of you, and then you took one of him?

WILLIAM BONVILLIAN: No, I think his mother took that picture.

MARTIN: Their new jackets.

WILLIAM BONVILLIAN: My wife took that picture of me. But this is, as you saw, a chapter that-- it's actually going through a fair amount of revision from the version you've got. But it's an attempt to kind of wrestle with this movement here around the future of work. And this concept that people are going to get displaced by automation, this has been with us for a long time.

So the first major episode is really in Britain in 1815 around the Luddite movement, with weavers smashing automated looms that are putting them out of work. And eventually, a whole division of the British Army, like 15,000 British soldiers, get pulled in to put down this pretty significant movement in Britain. So that's the first episode. We regularly go through this debate about every 30 years, roughly.

In 1950, I mentioned this before, but Norbert Wiener painted this very dark vision of computers displacing people. It took us a long time to evolve into Licklider's vision. In the 1960s, there was huge anxiety about workforce automation, as a number of new technologies were being introduced into industrial processes. In 2015, that concern came right back-- that a mix of artificial intelligence, machine learning, and robotics was going to be very threatening.

This is an old debate. So the famous economist John Maynard Keynes once famously wrote, "Thus we have been expressly evolved by nature with all our impulses and deepest instincts for the purpose of solving the economic problem. If the economic problem is solved, mankind will be deprived of its traditional purpose." In other words, our lives are organized around our work in many, many important kinds of ways. And that's how our lives get meaning. And if we're blowing up the work model, what's going to happen?

But the important point to realize here is, there's no sign of that yet. The American workweek is 47 hours at this point. So we're some distance away from this. We have time to reflect on this, if we ever indeed come close to that kind of level of realization. I just wanted to expose you to this famous Keynes point, because Keynes is thinking about this in the midst of the Depression. He's thinking about the whole future of work.

So the backdrop that we're facing now is significant work disruption. And we've talked a lot about that today. Half of the manufacturing jobs, as we talked back in class number three, were lost between 2000 and 2010. And we saw the data on median income decline, and we talked about the barbell problem. And Brynjolfsson and McAfee paint a picture of technological job displacement and describe the accelerating round of IT technologies that are headed towards the workplace.

Now, when we read about advanced manufacturing, the fix for the manufacturing sector is full of new technologies-- 3D printing, digital production, advanced sensors, photonics, advanced materials-- a raft of new technologies that are also going to be entering that sector to make it significantly more efficient and get the US back in the game. That's the whole object of that exercise, right?

So there is a raft of new technologies that is headed from the IT world and elsewhere into the workplace that we're all going to have to reckon with. And there are technological dystopians. So Martin Ford argues that there is a system of "winner take all distribution" that's evolving-- that the tendency of software towards monopoly and the ability of computers to do more than they're programmed for-- deep learning, as it's called-- is going to push out a whole lower end, and that this "winner take all distribution" is going to be very disruptive in this society.

Tyler Cowen argues that the country will be divided by this technological advance into two countries. We'll have a developed world, and we'll have an undeveloped world in the United States. And indeed, Peter Temin of MIT, the economist, has just written a book arguing that current inequality in the United States has reached such a level that the right way to understand it is to look at developing world economics in a US context.

Just for example, in Germany, the income of the highest 20% of the population compared to the lowest 20% is 4 to 1. In the United States, it's now 8 to 1. That's pretty dramatic income inequality. That's different than how the US was for the previous century.

In their Second Machine Age book, Erik and Andrew argue that low-skilled jobs replaced the middle-skilled jobs that were displaced by technology. They argue for tax policy as a fix here. Although, as we've discussed and Rasheed led us into it, that politically is a very tough proposition.

There are studies that project very high levels of technological job displacement. So out of Oxford, Frey and Osborne look at occupational descriptions and conclude, oh, 47% of all US jobs have a high likelihood of being replaced by automation. Rob Atkinson at ITIF has given a very strong critique of that study. He argues that it assumes a highly unlikely 3% labor productivity growth rate from the advent of all this automation. We haven't seen that since the 19th century. So don't expect it to happen anytime soon, Atkinson argues.

Atkinson also argues that Frey and Osborne engage in a lump of labor fallacy. In other words, they assume that there's a fixed amount of work and that the amount of work doesn't grow spurred by these new technologies. That's what David Autor led us to understand better, this whole complementarity. In other words, there isn't a fixed amount of work. The complementarity can create more, so that there are larger net potential gains here across the economy.

The most realistic study of technological displacement so far was done by the OECD last summer. They looked at 22 OECD nations. And rather than look at occupational descriptions, they decided, let's go talk to the workers who are actually doing those jobs. They actually went out and talked to the workforce in those 22 countries. And they would ask people, you may have a particular job title, but what are you actually doing? What does your work actually consist of?

And of course, it turns out, and we all surmise this in a way, people are doing many things that aren't necessarily captured in their job title. Heaven forbid if somebody held me to my job title. Nothing would ever get done. And we all know this, right? And that's what the OECD found-- people are doing much more stuff than their particularly narrow job description may say that they're doing.

So they concluded that over an extended period of time, across those developed countries, the effect of technological displacement was maybe 9%. Now, that's a significant number, even over an extended period of time. But it's not 47%. In the United States, the number was 10%. The range was 6% to 12%, depending on what kind of employment you had in your country. That means we have a job ahead of us, but it's not a nightmare. It's not something we should be sleepless over every night from now on.

There's a recent effort on-- this is not in the book, in the chapter-- but Daron Acemoglu, another very noted MIT economist who does a lot of work with David Autor and is a real student of innovation, he's a growth economist, Daron looked at robotics. And he looked at job displacement in the robotics sector. And he had a pretty useful-- had a good database that he was working from.

He concluded that over a 17-year period ending in 2007, the total technological job displacement caused by robotics across the entire United States economy was somewhere between 300,000 and 600,000 jobs. That is not a big number. Job churn in the United States is something like 75,000 jobs per week. Look at it in that kind of context.

So these changes are going to be significant. And I would argue they will occur. But we may have a time period that we're able to look at them and think and plan and act rather than being completely disruptive overnight.

So there's a quote by the vice president of the Danish Confederation of Trade Unions, Nanna Højlund. Her comment at an OECD conference I went to last fall was, "New technology is not the enemy of workers, old technology is." And what she's saying here is that the companies most likely to fail, where all jobs will be lost, are those that are not keeping up with the technology. Workers have a stake in keeping their firms current with current realities.

And here's a chart that just shows the rate at which robots are being installed into production. It's actually slowed fairly significantly. Now, look, we're in a process of moving from old-style industrial robotics, which used to weigh tons and were fixed in place and had to be fenced off, because you didn't want to be anywhere near them, because they were unsafe. And they would do the perfect weld, and that's all they could do, on an auto line.

We're moving to a whole new generation of robotics that are much more flexible, that are subject to voice command, that are a radically different kind of robotics than the old industrial robotics. So some of this data may reflect the fact that that new generation of robotics is only now just starting to be thought about and put together. Max?

MAX: Do you think that this has to do with some of the anxieties that workers have expressed around being replaced by robotics? Like there might be some sort of correlation, relation there?

WILLIAM BONVILLIAN: This slowdown rate?

MAX: Yeah.

WILLIAM BONVILLIAN: I don't know. I don't think so, because we don't have an organized workforce anymore. We're down to about 11% unionization in the United States.

MAX: Really?

WILLIAM BONVILLIAN: Yes. So workers don't have a lot to say at this point in the US about a lot of these changes. They don't have a lot of control over their employment, frankly, which is another set of issues.

But the chapter argues that the most immediate near-term problem is not technological displacement. That's going to be a longer-term problem. The most immediate near-term problem is what economists call "secular stagnation." And that's the problem we're in now. And this term was revived by Larry Summers in 2013 to essentially describe the situation that we've got at the moment-- interest rates around zero and US output insufficient to support full employment. He's writing in 2013. It's gotten better since then. But we do have this structural unemployment, where a significant number of people who had jobs are no longer in the workforce. That's the problem that's still with us.

He argues that there are a series of factors hampering investment demand: decreasing population growth-- we can all see that-- a relative decrease in the cost of capital goods, and excess money being retained and not used by large corporations-- retained earnings by large companies are very high at this point. On the savings side, he would argue, excess reserve holdings in developing countries due in part to post-crisis financial regulation, inequality, and increasing intermediation costs are all causative factors in this period of low growth. We have very low GDP growth, and we have very low productivity growth. Economy-wide, our productivity growth is about 1.12%. Not a good level, right?

MAX: But we still have low unemployment.

WILLIAM BONVILLIAN: Yeah, we have 4.9% unemployment. But go back to Rasheed's barbell example-- a lot of that is the thinning out of the center being pushed to lower-end services jobs.

RASHEED: Also, the labor force participation rate is low, too.

WILLIAM BONVILLIAN: Yeah. So we have low productivity, low capital investment, and a low growth rate, growing inequality, and a declining middle class. These are big problems. Now, low productivity rate, low capital investment, that tells us that automation is not happening at the moment, because automation is designed to increase productivity and is based on increased capital investment. We're not seeing either one of those, so that tells us that we're not entering a period of radical growth in automation. Martin?

MARTIN: So is this just the US or is it global? And also, is it taking into account the industrialization jobs leaving?

WILLIAM BONVILLIAN: Yeah. It does take into account the decline in manufacturing employment. This is a US picture that Summers is painting, and he's going to propose a US solution in a second, which is a major infrastructure package. But secular stagnation is a phenomenon now in European economies as well, and in the Japanese economy. So this is a worldwide phenomenon of the developed world at the moment. And it's a problem that you all have. You're going to have this future of work problem for sure, but now you've got this one.

AUDIENCE: Do we also have an aging workforce?

WILLIAM BONVILLIAN: Yes, aging workforce, right. And that's a powerful part of this story, too. OK?

So if you have a 1%-a-year population increase, that guarantees you one additional percentage point in your growth rate. If you have zero population increase, you just pulled one percentage point of growth out of your GDP growth rate. So that's a phenomenon that a lot of the developed countries are starting to wrestle with. And China's going to have to wrestle with this one fairly soon.

So here's what Summers is talking about in terms of excessive savings over investment. Look at the split between savings and investment levels. That means we're not investing our savings in the society, in the economy. And Summers identifies this gap.

RASHEED: And in 2008 it's like where?

WILLIAM BONVILLIAN: Right there, right? Well, that's when savings falls apart. But there's still a big gap in the system. And investment dips then, too.

I want you to know of this economist named Robert Gordon, because you all are hanging out at MIT, and we believe that technological advance is the most powerful thing ever. But Robert Gordon paints a different picture. And you just ought to be aware of this debate-- not that he's necessarily right. This was by far the best-selling book on economics for the last two years. And Gordon's book is called The Rise and Fall of American Growth. He argues that the current low-growth secular stagnation, and he uses the term, is the result not of insufficient demand, but of insufficient supply, i.e. insufficient technological supply.

So he argues that the IT revolution is fading and that we never got as much out of the IT revolution as we did out of prior innovation waves. This one just wasn't that big compared to electricity and compared to railroads and some of the other big waves that we've been through. And that we're locked on to an innovation wave that is just not, at this stage, producing that much productivity growth.

And he goes into enormous depth about each one of these technologies. So a complete rarity for an economist-- he actually has read into the technology literature and is looking at what these inventions actually were in the second half of the 19th century and what they meant. So it's an intriguing alternative story, one that's contrary to the story that we've been telling ourselves for a long period of time.

How do we fix this? Gordon says that fixing societal headwinds is the best way out of this box. And improving growth will depend on educational attainment. Remember when we read Goldin and Katz, we talked about how a key to the historic American growth rate was a continual rise in the technical requirements of the economy? And the genius of the American system, at least till the mid-'70s, was that we kept the education curve-- mass high school and then mass college education-- out ahead of that, right?

That, in turn-- those curves are related to each other. That talent base, as we know from Romer, helps drive the technological base. If you get a large part of your economy out in prospecting or involved in supporting a technology advance, the whole system will do better. So when we leveled out the graduation rate at the college level in the mid-'70s, we paid a big price in terms of ongoing technology productivity and technology advance, as well.

So that's kind of what he's suggesting. It's an interesting-- he doesn't spell it out quite that way. But it's an interesting idea. So tackling some of these underlying social problems, including inequality, i.e. bring more people into the economy, all of these are fixes, in his mind, for getting around the fact that the IT revolution isn't quite what we hoped for.

Now, what do I think of that? You know, time will tell. We're going to find out. I think it may be that we had the rapid rise, and we're now on a more moderate growth pattern in the IT revolution, as we talked about with innovation waves. And we can only expect more moderate growth as a result of that. But we are waiting for the next big thing, for the next big innovation.

So Summers' solution is to throw money into infrastructure-- and, well, let's do roads-- and that will help that community that got displaced from manufacturing jobs and isn't upskilling. That could help them. But the problem with Summers' solution, I would argue and the chapter argues, is that it doesn't do anything to address productivity, which is the heart of the problem. Getting that productivity number up, as we know, increases GDP. It's a key causative factor behind the growth rate. And if we get that growth rate up, overall economic well-being will improve, and you'll have more money to distribute, right? Let's at least get that right.

But an infrastructure program that's aimed at roads and sewers-- we already have our road system. We have 90,000 miles of interstate highway. We're not going to get, by repairing it, the innovation wave boost that the combination of building that interstate system and internal combustion engines got us back in the '40s and '50s. We're just not going to get that out of it. So Summers' prescription of just doing infrastructure will help people who are underemployed, but it's not going to wrestle with this underlying productivity rate, which is really the problem at hand.

And the alternative here, the book argues, is investing in innovation infrastructure. In other words, having a larger view of what that infrastructure ought to be and investing in that. All right. That's this text. And let's take it apart. Luyao, it's all yours. I'll go back to the hiking picture at the beginning.

RASHEED: Everybody's favorite.

MARTIN: So it's the last 16 minutes, pretty much. So do you want to wrap up, and then go back to the discussion?

WILLIAM BONVILLIAN: Well, let's let Luyao tell us some key observations about this chapter. I agree, we're heading towards the deadline, and there's some things I wanted to say, too. But go ahead.

AUDIENCE: Yeah, we've laid out basically the question of technology replacement, the possible solutions that we have, and also all the policy factors that we want to take into account. And I think technology replacement can actually be an opportunity for economies with an aging population. But at the same time, the United States facing a low investment rate, low savings, and low economic growth is probably problematic for this substitution to work well.

So what are the suggestions that we have to improve this-- maybe this is too general. Well, let's talk about this. One of the suggestions says that reducing the number of working hours can actually help encourage consumption and help solve unemployment. How do you guys like this idea?

MAX: Of reducing hours?

AUDIENCE: Yeah, reducing the number of working hours.

WILLIAM BONVILLIAN: France has been working on this for a while.

LILY: I would love that. But then I think you always have the people who-- I mean, how do you enforce it and how do you regulate it? Because you'll always have the people who will want to work more hours to get ahead. That's one of the reasons that--

AUDIENCE: Yeah. That's what Chinese do, I guess.


MAX: Yeah, you define when people start getting paid overtime. It's not that hard. And then, make it so that overtime, as it currently is, you have to get permission from your supervisor in order to actually get those hours.

AUDIENCE: At most companies, that's already in place. If you're going overtime, you need an excuse for it.

MAX: Exactly.

AUDIENCE: And people just do it under the table instead. Because by billing overtime, you're admitting you can't finish your work in the normal hours, so [INAUDIBLE].

LILY: Yeah. So in my field, we're not paid hourly. We're paid on-- the way that we are measured is in the output of publications that we have. So, yeah, I didn't want to work until 1 o'clock in the morning three nights last week. But I did, because I had a paper I had to get out. And so I can't imagine, in my field, an hour limit being in place.

MAX: Well, yeah, but that's different from a factory job or some sort of--

MARTIN: Yeah, I think cultural context is important. Just like I don't think it's very-- I don't think I went into the USC to do less.

WILLIAM BONVILLIAN: Right. This is not a very American solution, I would say.


MARTIN: I would try to think about-- well, maybe there's certain sectors--

WILLIAM BONVILLIAN: Because we are driven workaholics.

MARTIN: Yeah. It is very much the culture.

STEPH: Well, in the future of work, one of the trends I've seen is people talking about who is going to lose jobs. And they always say that the alternative is that the people who are going to gain jobs are creatives and that it's the creative economy that's going to gain the most from automation in manufacturing. So there's going to be really interesting opportunities in, say, virtual reality, for people who are interested in producing, say, entertainment specifically for that technological environment.

AUDIENCE: I think Romer proposed that there will be more job displacement in the middle class. And we'll see a higher demand for high-pay, high-skilled jobs and low-pay, low-skilled jobs, and we'll see a reduced size of the middle class. So in terms of the rising demand for low-skill, low-pay jobs, what would be the social problems associated with that kind of future?

RASHEED: Yeah, I think you just highlighted a pretty important point that we tried to get at, which is just this hollowing out of the middle class. But more importantly, the economic models that we're going to have to use are not going to be-- now, everyone's income levels are distributed relatively evenly in accordance with population numbers. But you have to use models from different countries and stuff like that.

It's going to be an entirely different way to think about the economy, because we've never been bimodal before-- we've never had to deal with an economy where there are only low-skill, low-pay jobs or high-skill, high-pay jobs. And on the [INAUDIBLE] the US sits on, I don't think we've had to deal with that before. And so--

STEPH: I don't think a democracy has had to deal with it before.

AUDIENCE: I mean, would you not say that early America was pretty bimodal? You had plantation owners, and you had the bankers in New York. Everyone else was pretty poor.

WILLIAM BONVILLIAN: Yeah. I think there were times in the later part of the 19th century where we had these kind of inequality numbers, too.

AUDIENCE: Yeah, so I think it was a huge problem--

WILLIAM BONVILLIAN: But we got out of them. We got out of them, essentially by raising the education level of the entire workforce.

STEPH: And increasing social labor protections.

WILLIAM BONVILLIAN: Yes, and we did that too.

MARTIN: Yeah, that is a fundamental assumption that I think is pretty interesting-- the assumption that there should be a middle class or that it's natural for there to be. Or if that's always going to be a case. Not to say it shouldn't be or it should be. Just it's interesting that we have the assumption that the fundamentals have always shown that that should be a thing.

WILLIAM BONVILLIAN: I think I'd argue that that's pretty fundamental to the ethos of the country.

STEPH: The American model, yeah.

WILLIAM BONVILLIAN: Overall, immigration occurred here, and social classes were significantly reduced in the United States from the founding period until the latter part of the 19th century, when industrialization took off. So there was a pretty strong middle-class base. And that talent base, that involved base, that citizen base, is pretty key to the ability of democracy to function. And what happens to a democracy when you start to really split the society and significant inequality occurs? I'm worried about some of the signals we're getting on that at the moment.

MARTIN: Yeah. I think it'd be interesting to do a comparison to another time in history--

STEPH: Like feudalism?

MARTIN: Well, yeah, just to go-- I mean, [INAUDIBLE] from there, because we definitely are living in interesting times.

AUDIENCE: But intuitively, after studying economics for several years, I think raising labor mobility is probably one of the solutions. So how effective and how feasible do you think this would be?

MARTIN: Labor mobility?

AUDIENCE: Raising labor mobility.

MAX: Doesn't that increase inefficiency, though?

WILLIAM BONVILLIAN: Well, we've decreased labor mobility in the United States significantly in the last several decades.

RASHEED: Wait, define labor mobility--

WILLIAM BONVILLIAN: The ability to move up in--

AUDIENCE: Yeah, once they lost their employment, they can move to find new employments with a new set of skills.

RASHEED: Got it.

WILLIAM BONVILLIAN: Right. But what happens in these failing industrial economies is that people-- homeownership is typically a family's key asset. If the community is failing because of industrial decline, that asset collapses. It's much harder for people to get out at that point. So there's been less labor mobility, and interestingly, less job churn in the United States in the last decade.

And then there's social mobility, which arguably has been pretty key to this country-- in other words, your ability to improve yourself, which has always been an operating assumption in the United States. You could always do better than your parents, right? And that may be coming to an end as well. That may be significantly foreclosed.

MARTIN: They've shown that this generation is worse off in real terms than their parents were, or less likely to do better than their parents did.

WILLIAM BONVILLIAN: Fortunately, you have more stuff. That's the one really--

MARTIN: Yeah, there's a saying where, if you could choose to be alive today and be poor, or be a billionaire--

WILLIAM BONVILLIAN: We may be poor, but we got iPhones.

MARTIN: --like a multi-millionaire. Yeah, it's like you're a rich-poor, right? It's like, yeah, I can't afford anything. But I got my phone, I got--

MAX: I got memes.

STEPH: Actually, interestingly, in terms of memes representing your culture, I saw a meme last night that said, why do you think I post so many selfies? I don't have a car, I don't have a house to post pictures of, so I'm just going to post photos of my face. And I think that's really--

MAX: That's so sad.

STEPH: No, I think that's representative of our social opportunities, right?

AUDIENCE: And you know that millionaire who was complaining that the reason that millennials don't have houses is because they're spending all their money on avocado toast.



WILLIAM BONVILLIAN: All right. So I'm going to take the opportunity to do a quick wrap-up of just this class, because we did the wrap-up of the overall course at the outset. But just very briefly here-- Brynjolfsson and McAfee tell us that the advent of IT is going to be significantly disruptive in the short- to mid-term future, and it's going to create significant technological job displacement.

And David Autor, in his piece "Why Do We Still Have So Many Jobs?", responds with a set of thoughtful, long-standing economic arguments: we can't just look at the displaced jobs. We also have to look at the new complementary jobs that are coming about, and the net increase in new technologies and production that will accompany these technological changes. They create a ground condition that may make this shift more manageable.

And Autor also looks at inequality in our economy and ties it to education and educational attainment. He argues that this inequality problem is tied to the education levels of our society, and that addressing it is going to require significant upskilling of our population.

And then David Mindell takes us back to the question, what's the future of work going to look like? He takes a deep dive into what most people view as the most threatening territory-- robotics-- and argues that, yes, there will be technological job displacement, but the relationship with robotics is a complementary one, that there's going to be a human-machine symbiosis. It's going to be a more complicated picture than straight technological job displacement. There will be new opportunities for people in this mix.

And then, reading from that chapter of the upcoming book that I'm doing with Peter Singer, looking through the literature, we conclude that the jobless future is not upon us today, and that low productivity rates and low investment rates suggest it's not evolving rapidly. So we may have a little time here.

And then, meanwhile, we do have this deep problem with a low growth rate, a low productivity rate, a low capital investment rate, which is creating secular stagnation, in Summers' terms. And that's something that we need to get on, because that's the immediate future. And then, overall, this general upskilling of the workforce serves both purposes. So let's get on with that.