Lecture 2: Introduction and Overview II


Description: In the second lecture of the course, Prof. Schilbach continues his overview by talking about how economists think about human behavior, utility functions, and social preferences.

Instructor: Prof. Frank Schilbach






FRANK SCHILBACH: So today what I'm going to try and do is go through some of the survey questions that we asked you last time at the end, try to illustrate some of the topics and some of the issues of behavioral economics using short survey answers, and try to give you a better or more precise overview of what kinds of topics we're going to talk about.

As you know, this survey was anonymous. Somebody was asking, do we need ethical IRB approval for that? We take ethical issues, in fact, very seriously in all of the studies that we're doing. The answer to that is, since the survey was anonymous, and there was no personally-identifiable information collected, it's OK to do that, in particular in situations where arguably nobody is harmed from answering those types of questions.

The surveys, you might say, are not a particularly rigorous experiment. That's exactly right. So the survey that we were asking, in some sense, provides some suggestive evidence of some of the behavioral issues that are going on. A lot of the evidence that I'm going to show you in class is much more rigorous, in terms of actual experimentation that's hard for me to do in a short survey. So think of the evidence I'm showing you as suggestive. And then I'm going to show you more rigorous evidence that those kinds of phenomena, in fact, also hold once you do things more rigorously.

So now when you think about how economists think about human behavior, there are broadly three aspects in which you can think about that. The first one is-- or broadly speaking, we have constrained optimization, where people have a utility function, which specifies in some mathematical form what makes people happy. There's stuff in the world, consumption and so on, that goes into the utility function. You can eat apples, bananas, and so on. And essentially, the more of those you eat, usually the happier you are, which economists have construed as a concept called utility. Higher utility is good. And it specifies what makes people happy.

Now there are different aspects to that. You can think of this as there's the instantaneous utility function. And I should have said, in recitation we're going to discuss a review of utility maximization. If you have taken 14.01, 14.03, or 14.04 and are very familiar with this kind of material, you might want to skip recitation this week. Future recitations will be much more specific to the class.

So what makes people happy is essentially two things. One is there's an instantaneous utility function, which is, at this moment in time, what makes you happy-- often it's defined on a daily or even yearly basis, at a specific period or moment in time. That's the instantaneous utility function. And then there's how you aggregate-- these are risk, time, and social preferences.

So time preferences is how do you aggregate over time, today versus tomorrow, two days, three days, four days, five days from now? How do you aggregate these up? So you might make a choice today that makes you today very happy. But you regret it five days from now. And the question is how do you aggregate these instantaneous utility functions into one big function? Those are time preferences.
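One simple way to sketch this aggregation is exponential discounting; the discount factor and the utility numbers below are illustrative, not figures from the lecture.

```python
# Sketch: aggregating instantaneous utilities over time with exponential
# discounting. delta is a hypothetical per-period discount factor.
def discounted_utility(flow_utils, delta=0.9):
    """Total utility of a stream u_0, u_1, ... is sum_t delta^t * u_t."""
    return sum(delta**t * u for t, u in enumerate(flow_utils))

# A choice that feels great today (+10) but brings regret five days later (-12):
stream = [10, 0, 0, 0, 0, -12]
print(discounted_utility(stream))  # ~2.914 -- still positive from today's view
```

With delta = 1 the same stream sums to -2, so how the instantaneous utilities are aggregated fully determines whether the choice looks worthwhile.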

Risk preferences is when there's risk involved, when things are uncertain. So if I offer you a lottery, if you want to play the lottery or the like, if you take certain classes, if you study for an exam or not, these are often risky choices. In a sense, you can do one thing or another, and often you don't know what the outcome might be in various choices that you make in life. And often some people are what we call risk averse, that is to say, they avoid risk. And that affects their choices.
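Risk aversion is often modeled with a concave utility function; here is a minimal sketch with made-up numbers, not the lecture's model.

```python
import math

# Sketch: risk aversion from concave utility (illustrative numbers).
def u(c):
    return math.sqrt(c)  # concave: each extra dollar adds less utility

# A 50/50 lottery over $0 and $100 versus its expected value, $50 for sure:
eu_lottery = 0.5 * u(0) + 0.5 * u(100)  # expected utility of the lottery: 5.0
u_sure = u(50)                          # utility of the sure thing: ~7.07
# u_sure > eu_lottery: the agent prefers the sure $50, i.e., is risk averse.
print(eu_lottery, u_sure)
```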

And then finally, as for social preferences, you could say social preferences in some sense go into the instantaneous utility function, in the sense that that's just another argument of that function. Or you might say, in some sense I care about myself, and I care about others. And so if I'm aggregating my own instantaneous utility and others' instantaneous utility, [INAUDIBLE] larger function overall.

But broadly speaking, one part of people's constrained optimization is the utility function. Second is people's beliefs. This is what people believe about their environment. When you think about purchasing things, that would not usually be prices; it would be the returns to certain investments. If you take one class versus another, or one major versus another, what does that lead to in terms of your future earnings, your happiness, and so on and so forth?

And then one part of that is essentially your priors-- what do you think about the world? And then how do you update once I provide you with new information? So there are some prior beliefs that you have. I'll provide you some new information. And you update to your posterior beliefs.
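The prior-to-posterior updating just described can be sketched with Bayes' rule; all the numbers below are made up for illustration.

```python
# Sketch of Bayesian updating: a prior belief, a new signal, a posterior.
def update(prior, p_signal_if_true, p_signal_if_false):
    """Posterior probability the hypothesis is true after observing the signal."""
    num = prior * p_signal_if_true
    return num / (num + (1 - prior) * p_signal_if_false)

# A prior of 0.5 that, say, a major has high returns; a signal that is twice
# as likely under high returns pushes the belief up to 2/3:
print(update(0.5, 0.8, 0.4))  # 0.666...
```

A signal that is equally likely either way leaves the belief unchanged, which is the benchmark against which behavioral deviations in updating are measured.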

And then when you think about the utility maximization problem from a mathematical perspective, you might say, once I know your preferences, and I know your beliefs or your information, I have all the information I need. I can solve for the optimum. And that's how you should behave. Then, in some sense, the whole problem is solved for you. So we are done.

But then even conditional on that, people seem to make choices that don't quite fit the classical economics framework. The question is, how do people use utility functions and beliefs to make certain decisions? And do we find certain anomalies or deviations from perfect utility maximization, in the sense of using preferences and beliefs? So some influences on behavior aren't just about utility and beliefs-- in particular, issues like, as I talked a little bit about last time, frames, defaults and nudges, and heuristics.

That's to say the way I present a problem to you might affect your choices greatly for given preferences and for given information. That is to say, you might think you like one thing or another, and you have certain information about the returns to these choices. But depending on the way I present information or set the default, for example for savings or other choices, people might decide dramatically differently.

So now what I'm going to do is essentially take each of these items-- first preferences, the utility function; second, beliefs; and then choices and decision making-- and show you how psychological insights might be used to understand people's decision making as a whole, or to understand certain choices that people make, and how psychological aspects might affect people's behavior.

So the first thing I'm going to talk about is social preferences: how do people think about themselves and others? And much of classical economics thinks that people are selfish. They essentially care about themselves and nobody else. Is that a good assumption? Or what do you think about that? Yes?

AUDIENCE: Yeah. I think you can think of it in many ways that eventually leads you to think [INAUDIBLE] you can say that you value [INAUDIBLE] other people [INAUDIBLE] caring about other people and comparing to caring about other people because it makes you happy somehow.

FRANK SCHILBACH: Right. So one thing you're saying is essentially that it's actually tricky to figure out what's selfish and what is not. In some sense, if I'm nice to all of you, it could be that I'm really nice. It could be that I just really care about evaluations or whatever. So in some sense, it's hard to interpret what people do in terms of understanding their motives. That's actually a very tricky question in behavioral economics, trying to figure out, are people truly altruistic? Or are they doing things for others for ulterior motives?

I want to step back a little bit and say that overall, the assumption of selfishness is actually a pretty good one. In many situations, when you think about people's choices, usually the choices affect people themselves. When you think about what kind of classes you choose, what kind of profession you want to get into, whom you want to marry, and so on, often that's a choice that affects yourself. These are individual choices. Of course, people are not selfish in all respects. But overall, it's actually mostly true-- it's a pretty good assumption in many situations.

What I'm going to talk about now is that in some situations, it's not a good assumption. So we should amend it, or think about this a little more broadly. And one way I think about this is to think about charity. 2% of GDP is spent on charity-- I believe it was $373.3 billion in 2015-- which is a lot of money. So there's some evidence that people seem to care about others in some way.

Now one way to think about this is to say you have a utility function that has, as an argument, other people's utility or consumption. So there are some people who say, I'm donating money to somebody in Kenya. I have a utility function that says, if that person in Kenya has higher consumption, that makes me happier. That's one way to think about this. But there are also broader ways. And you may call that pure altruism, in the sense of I'm just caring about that person: if that person does better, that makes me happier. But that's probably not the whole story. So what other reasons might people have to give or be nice to others? Yeah?
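One toy way to write down "other people's consumption enters my utility" is below; the square-root form and the altruism weight alpha are my assumptions for illustration, not the lecture's formula.

```python
import math

# Sketch: own utility plus an altruism weight on the other person's consumption.
# alpha is a hypothetical altruism weight; alpha = 0 is the purely selfish case.
def utility(own, other, alpha=0.5):
    return math.sqrt(own) + alpha * math.sqrt(other)

# With $10 to allocate, diminishing returns plus alpha > 0 can make sharing optimal:
keep_all = utility(10, 0)   # ~3.16
split = utility(5, 5)       # ~3.35 -- the even split beats keeping everything
print(keep_all, split)
```

With alpha = 0, keeping everything is always best, which is the strict standard-model prediction that people give exactly zero.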

AUDIENCE: [INAUDIBLE] warm glow from others knowing you gave?

FRANK SCHILBACH: Right, so warm glow-- can you say more? What do you mean exactly by that?

AUDIENCE: If other people know that you gave, you can get a warm glow feeling from people knowing that you gave things to other people.

FRANK SCHILBACH: So you're saying two things, I think. One is warm glow, just feeling good about updating about yourself in some sense. People call that often self image or the like, where essentially just you want to be a good person. Somebody asks you, would you like to give money to somebody else? It's not like you actually care about this other person. But you want to maintain your image of being a good person. And it makes you just feel good about yourself.

You said also something different, which is you care a lot about what others think. So one is about self-image, what you think about yourself: I'm a good person. I give to others. And if somebody asks me, I should probably give. And you said something else, which is I care a lot about what other people think about me. And usually we call that social image, which is to say, I care a lot about others' opinions. And if other people think I'm a nice person, giving money helps with that.

What else? What other motives do we have? Yes?


FRANK SCHILBACH: Right. So that's a little bit, in some sense, semantics-- in the sense that if I give money to pay lower taxes, if I looked at this more carefully, it may look like altruism when it isn't. And I'm going to show you some evidence on the fraction of money that's donated to [INAUDIBLE]. I'm going to split this up. And there are some aspects of that-- for example, if you donate money to Harvard for a building, it's not obvious that's necessarily altruism, as opposed to you feeling really good about having your name on a building, if you have $100 million or something at your disposal.

So you're saying essentially that some behaviors may look altruistic, but in fact they're not, in part because it's just individual optimization. So in some sense, that's more like a misspecification, a misunderstanding of people's motives, which I think is surely part of the story: some of what looks like altruism, in fact, is not. There was-- yeah?

AUDIENCE: [INAUDIBLE] more altruistic [INAUDIBLE] altruistic societies [INAUDIBLE] have access to [INAUDIBLE] then you could be acting altruistically, but in a sense of trying to ultimately optimize your own [INAUDIBLE].

FRANK SCHILBACH: So if I can rephrase-- we're going to get to this, in fact, in a few lectures on social preferences-- the point is, if you compare societies, in certain societies, for example, people do things like whaling or the like. Essentially, if you do occupations where you need cooperation to be successful in finding food or the like, that might lead to altruism, or altruistic behavior, where people help each other not necessarily because they like each other, but essentially because it's necessary to be able to survive. And then that might lead to either just cooperative behavior as a whole, or perhaps to people being, in fact, nicer to each other.

So we will actually get to that. And that's essentially exactly as you say, sort of an evolutionary perspective, saying that depending on what the social incentives are for a society, depending on what kinds of professions people do, people might be nicer or less nice to each other, because it's necessary to survive or be rich and so on. What else? Yes?

AUDIENCE: You might have expectations about whether other people will reciprocate [INAUDIBLE].

FRANK SCHILBACH: Exactly. So there is essentially reciprocity. If I do something really nice to you, and then ask you a favor in return, you might not actually want to do that. But you feel so inclined because I've been [INAUDIBLE] you. It's a thing that we do in society. And reciprocity is really important.

Quite related to that is fairness. In some sense, if somebody does something, or if somebody gets paid a lot more than somebody else, you might want to share with others, not necessarily because you want to, but because it's perceived as unfair otherwise, and people are uncomfortable with that. Anything else? Yes? Sorry.

AUDIENCE: [INAUDIBLE] stress. So maybe when you observe suffering, you take it on as your own suffering. So when you act altruistically, it's not necessarily because you care about these people directly. But you now want to alleviate your own suffering.

FRANK SCHILBACH: Yes, exactly. So that's what economists would refer to as inequity aversion, one way or the other-- essentially, seeing inequality or inequity makes people uncomfortable. And that comes in various forms. One form of that would be, as you say, in absolute terms-- and I do a lot of poverty and development economics-- here are people who are extremely poor. It's either our moral obligation, or I just feel uncomfortable seeing people suffering. We should help them get up to a certain standard, maybe reducing or eradicating absolute or extreme poverty.

There's a different version of that, which is just to say, inequality is just uncomfortable and unfair overall. For example, people in the US are a lot richer than, say, in India. Nevertheless, you might think you would like to live in a fair and just society, where people at least have a reasonably high living standard. And then it's either a moral or other obligation. And people might give because inequality just feels uncomfortable to them.

I think we've now mentioned all of the things that I have here. And so there are various questions about the motives for altruism and why people give and are nice to others. Part of what we're going to discuss when we talk about social preferences is trying to disentangle those different motives. And there's a bunch of different experiments that people have run to try to learn about why people give. What determines altruism? What kinds of circumstances make people become nicer to others? Or if we wanted to increase altruism, how would we do that? Yes?

AUDIENCE: Does caring about others mean the general population? How would they look at it as caring about your friends or family? [INAUDIBLE].

FRANK SCHILBACH: I think those are very much related. There's a bit of a question overall. When you think about your utility function, usually it's defined for individuals-- it's essentially you, yourself, how are you doing? There are other views, where you say it's actually the household as a whole. So essentially everybody in your household is a unit, and then you maximize as a whole. There's a bunch of issues about conflict within the household that arise. And decision making within households might not be optimal.

But you can think of altruism as narrowly defined about friends and people that you love. And presumably you have higher altruism towards those individuals. And then for people who are further away, there's less of that, or perhaps other motives. These things are surely related, but they're distinct. When you think about your parents or your family and what you do for them, often there are different motives-- for example, reciprocity, et cetera, seems a lot more important in some ways, or perhaps pure altruism, as opposed to inequity aversion when you think about poor people across the world. But I think these things are very much related. And the question is, which of those particular elements are more important in different settings?

So then the key questions we're going to ask-- one is, what is the nature of such social preferences, i.e. the motivation to help or hurt others? We mentioned most of them already. And we're going to try to understand which of those are important in which settings. So one of them is, what determines giving, and why do people give to others? Broadly speaking, you can think of this as: why are people nice to others-- or why do they appear nice, if you want?

And a second version is called social influences. How does the presence of others affect your behavior and your utility? That is to say, it might be that you care a lot about what other people think, or you are very jealous, and so on and so forth. That's not about being nice. That's just that your utility might be deeply affected by what other people think about you. We're going to talk a little bit about things like Facebook and Instagram, et cetera-- social media in particular. It might affect how you feel about yourself. And social influences might shape people's behavior.

So when you think about why people are nice-- and this is what you were saying earlier-- think about where the donations go. I was saying 2% of GDP is given to charity. When you look at some of those charities, for some of them it seems pretty clear that people give because they want to be nice to others. For other charities, perhaps not so much.

And some of the education ones are essentially donations to, say, sponsor a quarterback at your school. It's not clear that this is altruism. Other donations, if you give for health or poverty, I think that's pretty obviously altruism.

Now, one way to measure social preferences-- a very basic and coarse way of doing that-- is what's called the dictator game. And it's very simple. One of the questions we asked you was: if you get $10, how would you split it? It's a hypothetical question in class; in lecture 12 or so, we're going to do this at some point with real money. But the question is, if you had $10, what would you do with it? How much would you keep for yourself? And how much would you give to somebody else?

There were two versions of that question. One was the recipient is informed about the circumstances of the decision. The other one is the other person might never notice; the money is just given anonymously. What can we disentangle potentially from those two choices or options? If people choose differently in one and two, what does that tell us? Yeah?

AUDIENCE: [INAUDIBLE] one, they chose to split it, that might be more of a social thing because they know that the other person will know. So they might worry that [INAUDIBLE] too much. But if they answer the same for the current two, then they're a little bit more selfless because let's say then they would split [INAUDIBLE] whether or not the other person knows.

FRANK SCHILBACH: Exactly. So in some way, at least, you can think of some form of pure altruism, in some sense. I would be very happy if you had $10 more on your bank account, even if you never knew. There's no reciprocity. There's no you knowing or learning about what I did or didn't do. It's just like I'd be happier if you had more money. And therefore I'm giving it to you.

Of course, there are things like self-image and so on that are hard to rule out. In some sense, maybe it's not that I'm actually happy about you; I'm just happy about me being happy about you. So there are tricky issues with that. But it's getting close to a form of pure altruism. If instead you learn about it-- either that it was me explicitly who gave you money, or at least that there was somebody else doing it-- then that's much more about the person caring about where the money is coming from and so on. That tells you more about social motives, potentially-- about how the other person feels when they get money, how others were treating them, or perhaps how I feel when the other person gets it, and so on.

Is there a question? No.

AUDIENCE: [INAUDIBLE] I don't agree that it would necessarily have to be about repetition [INAUDIBLE]. It could simply be about promoting the idea of getting [INAUDIBLE]. So if a person [INAUDIBLE] it doesn't serve to promote the idea [INAUDIBLE]. But if it did, the person did know that somebody else was going to give them money, it would serve to [INAUDIBLE] even if-- they could be equally selfless even if [INAUDIBLE] your intentions are [INAUDIBLE].

FRANK SCHILBACH: So just to clarify, so in number two, I think what I'm saying is that's pretty close to pure altruism. In number one, I'm saying there's other forces at play. One of them is about people care about what others do. You were saying in some sense it's a version of that and saying, if it promotes altruism, in some sense, if I give you money, and you learn about that I give you money, and the hope is that you'll give money to others, in some sense you would only do that if you care about others one way or the other, or if you learn something from those kinds of actions.

That's all good. I think I'm just saying there's-- so those kinds of experiments will help us disentangle potentially different motives for giving. This is a whole course. And you can't rule out all sorts of things. But it's clear that in some sense, there is something different, which has to do with people caring about others one way or the other in some way. Yeah?

AUDIENCE: I actually did have a question. Do you think the results could change if we gave a larger sum of money, like $500 or $1,000 or something? Because with $10, the difference between getting $5 and getting $10 isn't going to be big. That's the difference between having a nice $5 and being able to get four bags of chips versus two bags of chips. So if you do it with $1,000, the difference between $500 and $1,000 could be the difference between-- something larger, like $500 would maybe pay for [INAUDIBLE] a while. But with $1,000, you could maybe buy a small car or something with it.

FRANK SCHILBACH: Yes, understood. So there are two issues here. One is I'm asking you hypothetical questions, and you might say all sorts of things. But once I actually put down money and say, here's actually $10, how are you going to behave? So what we're going to do in class is we're going to have some hypothetical choices, then some real choices, and we're going to look at how that's different.

In general, people choose pretty similarly in hypothetical and real choices. What we will not be able to answer in class is, how would you choose if I gave you $500? Because I'd be poor at the end of class. However, there have been some experiments with bankers and rich people and so on, where they actually gave people $1,000. And what you see overall is that people's giving behavior in those kinds of experiments is quite similar to smaller sums. I think there may be some differences.

And I think there's a huge difference potentially, in particular when you think about poverty or giving in developing countries. There's a huge difference potentially whether you give people large sums of money, which could change their lives-- people could invest and so on and so forth-- as opposed to very small amounts, or even repeated small amounts. If I give you $5 every few days for quite a while, that might not change things that much, because every small transfer feels really small.

But instead, if I give you $1,000 right away, you might invest and try to have a better life and so on. So in that sense, I think that's potentially different. But when you look at these kinds of experiments, behavior actually looks quite similar, at least in lab games.

So now, how much do people give? The strict version of the standard model, just to be clear, would say people should give exactly zero. If you care only about yourself, why would you give money to others? You just keep everything. Usually subjects give about $2 to $3. The average giving in class was about $3 in the first case and $2 in the second case. So in some sense, it seems that when the person knows about the circumstances, giving is valued more-- either because it signals altruism, or just because the other person might be happier, and we appreciate that in some way.

And so here is what this looks like. You can see the distribution. A lot of people give either $0 or $5. There are a few of you who would give everything. That's very nice. There I do wonder, if it was actually 10 real dollars, would you really follow through? But surely there are some really nice people.

There are some issues with those kinds of games, in the sense that I'm eliciting your behavior in terms of what you would do if I gave you $10 to split with some other student or some other person that's fairly rich. You could see some people actually saying, I'm giving $0, and instead I'm donating the money to somebody in Kenya. It would look in the lab as if I was really selfish. But in fact, I'm actually altruistic. So there are issues with these kinds of lab experiments. But in general, I think people's behavior in these lab games is fairly predictive of real-world things that we care about.


AUDIENCE: I'm curious about the order in which these questions were asked because I remember on the survey we had the question on the left being asked first. And I think my thought process would have been different if I had been asked the questions in the reverse order.

FRANK SCHILBACH: Yes, absolutely. So I think in an actual experiment that I or others would run, the order would be randomized, and then you can test for that directly. Or what you would do is randomize across people-- so some people get one question, and some people get the other. We haven't done that, in part just because otherwise we would have had to send you different links. And it was already chaotic enough to send you one link, which seems to be the most we can handle. But usually what you would do is randomize the order and then be able to test for that.

My guess is that broadly speaking, the results would be similar qualitatively. There might be some small differences in terms of the exact numbers. And surely the order of those kinds of questions often matters, which in some sense goes to the third aspect, choice. If you think about the classical model, the order should surely not matter. Essentially, I'm giving you some choices. You have a utility function. You have beliefs. And then you'd be done. The order is completely irrelevant. But it turns out that order effects are often, in fact, quite important in experiments, and I think in the real world as well.


AUDIENCE: [INAUDIBLE] think about how too much altruism may be [INAUDIBLE] if someone gave me $5 [INAUDIBLE] when you gave $10, they might think they're really weird or not [INAUDIBLE].


FRANK SCHILBACH: No, that's a great question. In fact, we do-- so a lot of these experiments, and a lot of the experimental evidence that's been collected so far, has been collected in Western, educated, industrialized, rich, and democratic populations. The acronym for that is WEIRD-- think of a lot of experiments being done with mostly college students at rich universities in the world. So in some sense, the hope is that we find universal characteristics or parameters of people's behavior.

But it's not unreasonable to think that in other societies, people behave quite differently. And for example, one thing that we find in India-- we do these kinds of games with people, and often we have games where it's not just one choice, but another choice afterwards. And if somebody, for example, offers you too much money, the other person reacts and says, this is weird. I can't accept this money, because otherwise I would owe them some debt or the like.

So there are these kinds of behaviors, where in some sense, it's more complicated because of reciprocity or other reasons. In China and other countries, there's a clear culture of reciprocity that people are fairly careful about-- for example, whom to invite and so on and so forth, and from whom to accept presents, because you know, in some sense, you have to reciprocate at some point. And if you're not able to do that, that's very bad. So I think it's exactly as you say: it's a lot more complicated in many settings. Yeah?

AUDIENCE: So I just have a question about the graphs. What is the y-axis?

FRANK SCHILBACH: It's the density. It aggregates-- it adds up to 1. So essentially, think of this as the fraction overall-- it's essentially--

AUDIENCE: [INAUDIBLE] that are greater than 1?

FRANK SCHILBACH: It's the PDF, essentially. It aggregates up to 1. So another version of that would be just the fraction of people.

AUDIENCE: But you can't have a fraction that's larger than 1. I'm confused.

FRANK SCHILBACH: No, this is not the fraction. This is the PDF, right?

AUDIENCE: That's the PDF.

FRANK SCHILBACH: Yeah, exactly. It's the PDF that essentially aggregates to 1. But you could relabel this and look at the fraction of people. The graph would look exactly the same. Maybe I should just do that.
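The point about the y-axis can be seen with a tiny computation: a density is count per unit of x, so with narrow bins a bar can exceed 1 even though the fractions themselves never do. The answers below are hypothetical, not the class data.

```python
# Hypothetical dictator-game answers (dollars given out of $10):
gifts = [0, 0, 0, 5, 5, 5, 5, 5, 5, 10]

bin_width = 0.5  # a narrow histogram bin
# Density of the bin containing $5 is count / (N * bin_width):
count_at_5 = sum(1 for g in gifts if 5.0 <= g < 5.5)
density_at_5 = count_at_5 / (len(gifts) * bin_width)
print(density_at_5)  # 1.2 -- a valid density above 1; the bars still integrate to 1
```

Relabeling the same plot as fractions (count / N, ignoring bin width) would cap every bar at 1 without changing the shape.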

So second, let's talk a little bit about time preferences. One interesting example-- and in some sense, one of the things we'd like to encourage you to do in behavioral economics, or in psychology and economics, is to look around in the world and think about what's weird, or what kinds of things that we see in the world can be explained with behavioral economics.

And here's one example of that, which is just a weird thing if you think about it. There are these betting agencies, and one of these bets is where dieters can bet on their own weight loss success. And they often lose. So how does this work? There's William Hill, which is a sports betting agency, and they offer all sorts of weird bets. One of them, apparently, is a somewhat unusual wager where people are allowed to bet on their own weight loss. And in fact, they do that. But they often lose while doing this. Now why is this a weird bet? What's weird about this? Yes?

AUDIENCE: [INAUDIBLE] you have control technically over the outcome [INAUDIBLE].

FRANK SCHILBACH: Yes, exactly. So if you had perfect self-control, and if you had perfect foresight, in some sense, if you understood what your preferences are, and you have plans for the next month or two-- of course, there's unexpected things that could happen-- but if you understood your preferences well and had perfect self-control, you would know that either you can lose weight, and then you would win the bet. Or you would not lose weight, but then you would not even start the bet in the first place. But here, over 80% of bettors, in fact, lose. So now how do we think about that? How do we explain this behavior?




FRANK SCHILBACH: And so more specifically, what does that mean?

AUDIENCE: So [INAUDIBLE] starting tomorrow. But when tomorrow comes, they do not [INAUDIBLE].

FRANK SCHILBACH: Right. But there's two things going on, potentially. So one is self-control, in the sense of I have low self-control. I like eating donuts. And I like eating donuts a lot. So I'm not going to lose any weight any time soon. Can that explain the behavior by itself? Or what else do we need? Yes?

AUDIENCE: A part of it.

FRANK SCHILBACH: Oh sorry, yeah?

AUDIENCE: That's part of the answer. The other part is I'm stopping [INAUDIBLE] tomorrow [INAUDIBLE] that I want to lose some weight [INAUDIBLE].

FRANK SCHILBACH: Exactly. So what you need is some form of naivete or optimism, in the sense of saying, maybe today I know I really like donuts. But tomorrow I will just eat lots of salad. And so in some sense, there's this overoptimism or naivete, where people are essentially naive about their future preferences. Plus there needs to be some form of self-control problems. And that combination leads to these kinds of bets.

So one part is self-control problems. The second part is naivete or overconfidence. Putting these together could get some people to essentially engage in these kinds of bets. Is there some other motivation why you would engage in this bet? Yes?


FRANK SCHILBACH: Exactly. So it could be that in some sense-- so one explanation is, I think this is a really great deal. Somebody offers me this great deal. I'm just going to lose some weight. And I'm going to make a bunch of money. And then afterwards, I guess I'll eat a bunch of donuts. So I don't necessarily need to want to lose weight. I just think it's a great deal. So that's overconfidence about my self-control. Then I engage in the bet. It turns out actually I don't have self-control. And I lose the money and eat lots of donuts.

Now the second thing that you're saying is actually, I might actually be aware of my self-control problem. And so I might actually understand that I have a self-control problem. Now I'm engaging in this bet because as it turns out, on average, I guess there's a 20% chance of me actually succeeding. And perhaps the bet itself might help me in achieving that goal. So that's essentially some form of a commitment device, where engaging in that is helping me lose weight.

I might actually know. And there's a question of, are you fully sophisticated, or perhaps partially or fully naive? In some sense, I might know that there's a good chance that I might actually lose the money. But it might be worth it for me to engage in the bet because the expected value of losing weight might be higher than the expected loss of money overall. So I think we said all there is to say here.

So essentially, either people are just overconfident or naive. Or second, people might want to set themselves incentives to lose weight. Again, you need some naivete, or some stochasticity or something. Otherwise you would only engage in the bet if it were actually helpful. So it needs to be that either the outcome is stochastic, or people are partially naive. In some sense, they think there's a chance it'll work. But in fact, maybe they overestimate how high the chance of succeeding is.
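One way to see how the commitment-device story can make sense even with an 80% failure rate is a back-of-the-envelope expected-value calculation. All the numbers below are hypothetical except the rough 20% success rate mentioned above:

```python
# Hypothetical numbers for a dieter considering the weight-loss bet.
stake = 100                  # money lost if the dieter fails (assumed)
prize = 500                  # payout if the dieter succeeds (assumed)
p_success = 0.2              # roughly the success rate cited in the lecture
value_of_weight_loss = 2000  # subjective value of losing the weight (assumed)

# Expected value of taking the bet, counting the weight loss itself as a gain.
ev_bet = p_success * (prize + value_of_weight_loss) - (1 - p_success) * stake
print(ev_bet)  # positive even though the bettor expects to lose 80% of the time
```

With these assumptions, the bet is worth taking as a commitment device even for a sophisticated bettor who knows the odds are against her.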

So overall what this reflects is there's a conflict between short-run and long-run plans. There's actually two things going on. One is, what you want in the short run might be different from what you want in the long run. The second is, you might have imperfect foresight about that. Yes?

AUDIENCE: Were people allowed to bet they would gain weight?


FRANK SCHILBACH: Actually, I don't know. You could try. But I think that's probably easier to do. So maybe they wouldn't accept those bets. Or your odds would be worse. Yeah?

AUDIENCE: Could there also be a change in utility, where you [INAUDIBLE] I'll get-- if I lose 10 pounds, I'll get $500. But over time, you're like, oh, changing my eating habits is difficult. Working out is difficult. So now I'd rather pay the $500 but not have to lose the weight because I'm no longer as interested.

FRANK SCHILBACH: Right. She's saying, in some sense, there's some learning or uncertainty involved. Yeah, I think that could be the case for some people. Though you would think, for over 80% of people, they know their preferences by now. So a lot of behaviors-- and I have the same issue. I'm chronically overoptimistic about when I get up in the morning. Every night I'm sort of, tomorrow at 7:00, I'll get up. And I'll exercise and write a poem and so on and so forth. And then that never happens.

So I think, as you say, there is potentially uncertainty about the world. And you might just not know how hard losing weight is, in this example. There's plenty of examples, which we're going to show you, where people over and over make the same choices that seem to be not optimal, where you'd think people should have all the information they need and should have learned over time. But part of the explanation, potentially, is learning. That's exactly right.

So then what we have done here-- in our survey, we asked you two questions. One was, how often do you think you should be exercising? The other one is, how often do you, in fact, exercise? These actually look quite similar. There seems to be one person who thinks they should, and they actually do exercise 50 times a month--


--which is very impressive. I don't know whether you were truthful. You know who you are. So if you plot these against each other, what you essentially see is quite a bit of mass below the 45-degree line, which is to say there's a bunch of people who say they should be exercising more than they actually do. And that's a typical pattern in the world. Lots of people think that in the future, they should be doing more virtuous things. In the future, they would like to be a better, more virtuous, more healthy person than they actually are.
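The "mass below the 45-degree line" comparison is easy to sketch. The rows below are made-up illustrative responses, not the actual survey data:

```python
# Hypothetical (should_exercise, actually_exercise) pairs, times per month.
responses = [(12, 4), (20, 20), (8, 8), (30, 10), (16, 12), (4, 6)]

# A point below the 45-degree line means actual exercise < desired exercise.
below_line = sum(actual < should for should, actual in responses)
print(below_line, "of", len(responses), "below the 45-degree line")
```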

So if you had to choose for next month or next week or the like, you would choose lots of virtuous and good things for yourself. But then if you had to choose for right now, that might not actually happen. And that leads to this discrepancy between people's desires and the actual behaviors, which we're going to then study and try and understand.

And then the broad question there, then is to say, given that that's the case, do people understand their behavior? Are they sophisticated? And are they engaging in certain commitment devices or certain behaviors that might help them overcome those types of issues?

So at the source of this conflict is essentially what we call present bias, which is manifested in what we asked you about as well. We asked you two questions, one of which was a choice between $100 in cash 52 weeks from now versus $x in 54 weeks from now. If you think about 52 versus 54 weeks from now, that's really far in the future. And two weeks earlier or later really doesn't make a difference. So most people would probably say something close to $100.

If I instead ask you, would you like $100 right now versus $x two weeks from now, a lot more people say they'd rather have the $100 now. Or they'd take a lower amount right now than $100 in the future, essentially because right now, they put a lot of weight on the immediate present. And that's what we refer to as present bias. Essentially, people put disproportionately high weight on the present compared to anything that's in the future.

And when you look at that, it's a little bit noisy. And the question is not quite ideal because, in some sense, what I really would like to do is ask you about 52 weeks from now versus 54 weeks from now, then wait for a year, and then ask you the exact same question again. Now, I can't really wait a year to do that. And in some sense, the circumstances a year from now might be quite different from what they are now. You might be richer, or more educated, wiser, and so on.

But for what it's worth, it does seem to be the case that when you ask people about $100 in 52 weeks versus 54 weeks from now, the distribution is very close to 100. Essentially, you don't care very much about whether you get $100 in 52 weeks versus 54 weeks from now. If you look at $100 now versus $100 in two weeks from now, I have to pay you a higher amount in two weeks to make you indifferent between now and two weeks from now, which is saying you care more about today versus two weeks from now. Your discount rate between today and two weeks from now is higher than it is between 52 and 54 weeks from now. Yeah?
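These two indifference questions are exactly what a quasi-hyperbolic (beta-delta) discounter would answer differently. Here is a minimal sketch; the beta and delta values are illustrative assumptions, not estimates from the lecture:

```python
def weight(t_weeks, beta=0.7, delta=0.999):
    """Quasi-hyperbolic weight on utility t weeks from now: 1 if t = 0, else beta * delta**t."""
    return 1.0 if t_weeks == 0 else beta * delta ** t_weeks

def indifference_amount(earlier, later):
    """The $x at the later date that matches $100 at the earlier date."""
    return 100 * weight(earlier) / weight(later)

x_near = indifference_amount(0, 2)    # $100 now vs $x in 2 weeks: beta bites
x_far = indifference_amount(52, 54)   # 52 vs 54 weeks: beta cancels out

print(round(x_near, 2))   # well above 100: a big premium is demanded to wait now
print(round(x_far, 2))    # barely above 100: two weeks hardly matter far out
```

The asymmetry between `x_near` and `x_far` is the present-bias pattern in the survey answers: patience far out, impatience right now.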

AUDIENCE: I was wondering how you could distinguish this time preference from expectations of risk. For example, the money you offer now is more certain than two weeks from now, [INAUDIBLE] I do not see you again. But if I look at 52 versus 54, they are both uncertain.

FRANK SCHILBACH: Yes. So just to repeat the question, the question is what about risk? So if I ask you $100 now, and I give you $100 bill, and say-- or some money in the future, who knows whether I'm going to show up and actually give you the money. And there's reason to be skeptical. I will be here in two weeks. Don't worry. So there's a bunch of experiments that, in fact, try to deal with that fairly carefully.

So they will do things like-- in some experiments with students, they give the professor's office name. They have the phone number, the business card-- they even give checks that are sent automatically. So people try to deal with that. It's a great question. In fact, there's lots of research that tries to do that. Some people said this present bias that you find in these types of experiments is actually not real. But then people have been very careful in trying to disentangle risk from time preferences. And they find that even if you try to minimize risk as much as possible in those experiments, present bias seems to persist in those kinds of choices. Yeah?

AUDIENCE: I'm curious about the reasoning for being indifferent between $100 now and less than $100 in the future.

FRANK SCHILBACH: You have to ask your classmates about that. It could be that in some sense, you say-- if you, for example, say you know that you tend to waste money in certain situations-- maybe you're really tired today, and you're just going to say, I'm going to waste the money on hamburgers-- you might say maybe in a week from now, things are different. I think there's lots of explanation that you can come up with. Again, you have to ask your classmates what they were thinking. If anybody has an explanation, please let us know.

I think one broad bottom line from these kinds of experiments is that you see a lot of behavior with lots of potential anomalies. There's lots of noise, essentially, in some of those kinds of data. In aggregate, the patterns emerge. But lots of people make choices that are in fact hard to rationalize. So we try and look at aggregate patterns. Most of these things are systematic. But for some people, like those on the very left, maybe these are just mistakes, for example, or they just misunderstood the question. Or there's typos and so on and so forth.

So let me summarize here. People tend to be fairly patient for far-off decisions. So in the future, you're actually quite patient between stuff that happens a year from now, two years from now, five years from now. People invest in all sorts of things for the far-away future. But when it comes to immediately relevant decisions, people tend to be quite impatient. You care a lot about what happens right now, maybe the next day or the next few hours. You care about that much more than about a week or two or even a year from now.

So there's then two broad questions to begin to look at. One is evidence of those kinds of conflicts between the current and future selves in terms of short-run desires and long-run goals. And we can try and find evidence of that. Second, we're going to look at whether and how people predict their future utility and behavior. And that's a general pattern in behavioral economics and psychology and economics, is to say actually the fact that people have present bias itself is not a huge problem in the sense of causing welfare losses or making people unhappy, as long as people are sophisticated.

So as long as I know I have self control problems, and I'm fully aware of them, I can set my decision environment or my choices such that I can actually mitigate a lot of these biases, in part because I can buy certain commitment devices. I can set alarm clocks and all sorts of things that might help me improve my behavior. The problem comes from naivete.

And then to say, if I have self-control problems and I'm naive on top of that, then I might get really screwed, because I think, I'm going to do stuff in the future. I'll do it in the future, and so on. And then the future comes. And surprise, I'm still present-biased and have self-control problems. So if I think I have more self-control than I actually have, that might cause a lot of bad behaviors or problems, because I make mistakes that are then hard to fix in the future. That's a general pattern in behavioral economics.

Similar issues arise, for example, when you think about memory. If I have imperfect memory, that's of course potentially a problem. But I can fix it in some sense. I can set myself reminders. I can have other people remind me, and so on. I can set up my decision environment so that I account for imperfect memory. The problem comes if I'm overconfident. If I think I can remember everything, I talk to people and never take notes and [INAUDIBLE] I'll remember, but then an hour later I've forgotten everything-- that really leads to problems. And again, it's not the bias itself, in terms of having imperfect cognitive function or the like, but rather the overconfidence about it that often leads to welfare losses, or making people unhappy, or making inferior choices.

So let me talk about beliefs then for a bit. So a broad set of beliefs or-- sorry, were there any other questions about preferences? Yes?

AUDIENCE: Yeah, for present bias, is it in general symmetrical compared to loss versus pain? Do people prefer to lose in the future rather than to lose now? And is it [INAUDIBLE] in general?

FRANK SCHILBACH: Yeah. We'll talk about that. In fact, in some sense, people tend to-- so essentially people like to have gains in the present and losses in the future. In some sense-- think of it like this. You put more weight on the present utility than you put on the future utility. How do you do that? You want to have more gains and fewer losses or negative things going on.

That is not to say that loss aversion always happens-- so loss aversion as a concept of gains and losses, you are loss averse in the present, and you also loss of in the future. But if you could pick, would you rather have a gain in the present or a loss-- or a gain or loss in the present versus in the future, you would tend to say, I'd rather have the gain now and the loss in the future, precisely because you overweigh, essentially, the present period or utility. I don't know if that-- but we'll talk about that.

So let me talk now about beliefs and information updating. One very basic question that I think we asked you, and that's a classic question that people like Kahneman and Tversky would often ask, is essentially the following. Suppose 1 in 100 people has HIV. We have a test for HIV that's 99% accurate. This means that if a person has HIV, the test returns a positive result with 99% probability. And if the person does not have HIV, it returns a negative result with 99% probability. If a person's HIV test came back positive, what's the probability that she has HIV?

Now that's a very straightforward question in the sense that there's a clear correct answer. If you know Bayes' rule, you can essentially just calculate it. I'm sure somewhere in statistics, et cetera, people have taught you that. Now, 22% of this class answered 99%. That's the classic mistake, what's called base rate neglect. For what it's worth, you guys are much better than previous years. I don't know why that is. But maybe people get smarter over time. Still, this is MIT. So if 20% of MIT students get this wrong, surely the fraction in the general population who get it wrong is a lot higher.

And the typical explanation is what's called base rate neglect. The logic is something like this: a positive person will probably receive a positive result. So if she's tested positive, she's likely to be HIV positive. And then the answer is 99%. Now, that's a very natural and easy answer. In particular, if you only had five or 10 seconds to answer this question, that's the natural response many people would give. Usually the fraction of people who say 99% is a lot higher, maybe half or more. In previous years, I think it was about half. And very few people, in fact, give the correct answer of 50%.

And the mistake here is essentially what's called base rate neglect: ignoring the fact that the base rate in the population is, in fact, very low. It's only 1%. And you should condition on that when you try to solve this problem. So here's what you see. 50 is the correct answer, if I remember right. But there's lots of spread around it. And the fraction of people who get it right is maybe 50% or even lower than that.
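The 50% answer drops straight out of Bayes' rule with the numbers from the question (1% base rate, 99% accuracy):

```python
# Bayes' rule for the HIV-test question.
base_rate = 0.01     # P(HIV): 1 in 100 people
sensitivity = 0.99   # P(positive | HIV)
specificity = 0.99   # P(negative | no HIV)

# P(positive) mixes true positives and false positives, weighted by the base rate.
p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
posterior = sensitivity * base_rate / p_positive  # P(HIV | positive)

print(round(posterior, 3))  # 0.5 -- true and false positives are equally common here
```

Intuitively, out of 10,000 people, about 99 true positives and 99 false positives test positive, so a positive result is a coin flip.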

The key point is not this specific problem, where some people maybe misunderstood and so on and so forth. The point is that this is actually a fairly basic problem in terms of probability theory. This is a very simple problem to solve, in the sense that if you think about real problems you need to solve in the world, the world is way more complicated than that. However, even among MIT students, something like half of the students-- when given several minutes at least to think about this, and I don't know how much time you took, but when given quite a bit of time to think about it-- get the answer wrong.

That means, essentially, that in many other decisions, where you have to act much faster, people don't think about these things that much. People have not taken statistics. People have not taken math, and so on. And so the fraction of people who would get these very basic problems wrong is probably much higher, let alone for much more complicated problems.

So broadly speaking, these problems are hard. Often, when we have cognitively demanding tasks, we tend to take quick, intuitive shortcuts. In some sense, when problems are really hard, we try to simplify and make things easier for ourselves. We give intuitive answers to those types of questions. These intuitive answers are often not bad. They're actually pretty close in many situations. But often they systematically lead to incorrect answers.

And then there's a whole field of cognitive psychology and so on that tries to think about what kinds of biases arise. In particular, Kahneman and Tversky-- Danny Kahneman, a psychologist who got the Nobel Prize in economics, and you may have read Thinking, Fast and Slow-- with their experiments essentially demonstrated a large number of those kinds of anomalies and biases in people's cognition, in terms of the heuristics that they use for these kinds of probability and other problems. And these are systematic biases. These are not just subtle random mistakes that are added to people's choices.

People deviate from optimal choices or correct calculations in systematic ways. And the question is, can we understand these systematic biases better and then try to explain behavior systematically? And if we understood these mistakes better, then we could potentially fix the mistakes and improve people's decision-making.

There's another question that worked reasonably well-- though MIT is a little bit of a special population-- which is the question of overconfidence. It's a little bit hard to elicit with a simple survey. So we asked you, what's the probability that you yourself will earn a certain amount of money, versus what's the probability that other students in your class will make this type of money? Now, if you think about these two questions, at least under some assumptions, the answers should add up to the same number. Essentially, if you aggregate over everybody, and people's expectations are correct, they should add up to the same number, at least under some assumptions. Usually they do not.

The difference here is actually relatively small. MIT is a little bit of a special population in that sense. If you look at surveys across the world-- for example, if you think about driving behavior-- something like 90% to 95% of drivers think they're better than the median driver. Lots of people think they're smarter than the median person, and so on. There's lots of evidence of overconfidence in the world.

So MIT, to some degree, actually has the opposite issue, which is underconfidence. When you look at student surveys at MIT, there's quite a bit of evidence-- and I'll show you this, in fact, in class-- that the typical MIT student thinks they're worse than the average MIT student. So--


--there seems to be some overconfidence here. But it's, in fact, not particularly strong. So there you can see there's some overconfidence and some underconfidence. Perhaps there's a little bit of overconfidence here. But in fact, it's fairly weak. Overall, though, as a pattern in the world, there seems to be quite a bit of overconfidence in many situations. And then the question is, does overconfidence lead to bad outcomes? If you're overconfident in certain situations, are you going to fail a lot because you do stuff that is too difficult for you?

If you're underconfident, you might not engage in certain behaviors and not try things that would be good for you. In fact, Maddie, who is currently traveling-- one of your TAs-- has some very interesting work in India about women being underconfident and thus not trying to convince their household members or their husbands to let them work in the labor market. And that leads to lower labor supply, earnings, and so on and so forth.

So if you are underconfident, you might not try certain things that might be good for you. And then the problem is, if you never try it, you might never learn about it. If you never even try it yourself, then you might never fail, but you also might never learn that you might actually be able to do it. So there are these potential traps of underconfidence. Or, if you're overconfident, you might just fail because you try things that are actually not good for you.

And then there are other issues about motivated beliefs, which I discussed already last time. This is the idea that you might be overconfident not because you actually deeply believe what you'd like to believe, but because it just makes you feel good about the world or about yourself. So if you really ask me whether I'm a better driver than the median driver, and you make me think about it, I might actually agree that that's not the case. But I like to think that I do things well. So in some sense, I might just say yes because it makes me feel good about myself. So essentially, overconfidence could be driven by motivated beliefs. And that's another topic that we're going to try and study.

So now, as I said, there are preferences and beliefs that we study. In some sense, preferences are about what you want, and beliefs are about what you think the world is like. Putting these together, as I said, people's choices should be entirely determined. But that tends not to be the case. And one classic survey question-- a version of a question that Richard Thaler, who recently won the Nobel Prize in economics, has asked a bunch of people-- is the following. Imagine that you're about to purchase an iPad for $500. The salesman tells you that you can get the exact same good in a nearby location for $15 off. You would need to walk for 30 minutes in total. Would you go to the other store?

So you look at this question. If an economist looks at this, there's some cost of walking. It's tedious to walk. Maybe it's raining, maybe not. And then you have some value of time. Either your value of time for 30 minutes is below or above $15. If your value of time is higher, you're not going to walk. If your value of time is lower, you're going to walk. Case closed. You're done. You know your preferences. And you have all the information you need.

Now I'm asking you a related question, which is, in fact, the exact same question if you think about it, except that it's about an iPad case instead of an iPad. I'm asking the exact same question: would you walk for half an hour for $15? So again, either your value of time for 30 minutes is above or below $15. If your value of time is higher, you should say no to both questions. If your value of time is lower, you should say yes to both questions.

But notice the questions are exactly the same. The only thing that's changing is essentially the value of the item. Now why might people still answer differently? Or what's potentially affecting behavior here? Yes?


FRANK SCHILBACH: Yes, exactly. So we're thinking in relative terms, which is to say the $15 seems pretty small when compared to $500. That is, $500 is a lot of money. So if you spend $500 anyway-- whatever, $15 more or less doesn't really matter. But when you spend $30, $15 is half of that. It's a lot of money. So getting $15 off would be a really great deal.

But when you go back-- and this is, I guess, when you go back to recitation and look at utility maximization-- there's no place for these relative considerations. When you maximize utility, it's the utility of these consumption items and so on. These are all in absolute terms. Either money makes you happy or not, or walking makes you unhappy. There's no relative comparison here at all.
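In the standard model, the decision rule literally doesn't take the item's price as an input. Here is a toy sketch; the $15 and 30 minutes come from the question, while the $10 value of time is a hypothetical assumption:

```python
def should_walk(discount, value_of_time, item_price=None):
    """Standard decision rule: walk iff the absolute discount exceeds the
    opportunity cost of the 30-minute walk. item_price never enters."""
    return discount > value_of_time

value_of_30_minutes = 10  # hypothetical: this person's time is worth $10 per half hour

# Same $15 off, same person: the model forces identical answers for both items.
print(should_walk(15, value_of_30_minutes, item_price=500))  # the iPad
print(should_walk(15, value_of_30_minutes, item_price=30))   # the iPad case
```

The survey pattern, where answers flip between the iPad and the case, is exactly what this rule cannot generate.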

Yet what seems to be really important is all these relative concerns. And when you look at this question-- this is, I think, one of the most robust results I've found in any of the surveys I did. This always works, essentially. People are much more likely-- or say, at least, that they're much more likely-- to walk for an iPad case than they are for an actual iPad. There are various reasons why you might object. In this case, maybe if you buy an iPad for $500-- or whatever I said-- you might have a lot more money. And so you're richer, and so on and so forth. And maybe that can explain the results.

There are 10 different versions of this question that all have different flavors. And essentially, this result is extremely robust. People tend to think in relative terms and not in absolute terms when making these kinds of choices. And their choices are extremely malleable to those kinds of comparisons. Any questions about that? Yeah?

AUDIENCE: So how about joint evaluations? So if you were to ask people that you would buy this iPad, you get the $15 off and also for the same iPad case, you get $15 off. And if you go there and buy both, you get $30 off. So do you think the same behavior to walk would [INAUDIBLE]?

FRANK SCHILBACH: I think what people in some ways do in their heads is ask, what fraction do you get off? And if you then bought both, I guess it would be-- I don't know what the prices were-- I guess $530 in total. I think the behavior would probably be somewhere in between, probably quite similar to the iPad case overall, because-- I think what people do in their mind is to say, I'm already spending x dollars-- in this case, $500 or $530-- now, does the $15 look large or small compared to that?

$15 looks really large compared to $30. So let's just walk. I might as well do it and get a really good deal-- I get 50% off. But when you spend $500 or $530, the $15 seems really small. That's not really a great deal. So why bother? Let's just spend it all anyway. But of course, the issue is, it's $15 either way, and you're going to use that $15 for something else, presumably not just iPads. And so you should give essentially the same answer.

There's some very interesting evidence I'm going to show you at the very end of the class, which is about poor versus rich people. So there's some evidence that [INAUDIBLE] and others have done exactly these kinds of relative-thinking comparisons with rich people and with poor people. And what you find overall is that rich people do this a lot, but poor people actually not so much. And why is that?


FRANK SCHILBACH: No. It's for the same amount. They keep the amounts the same.

AUDIENCE: They value it a lot more than rich people.

FRANK SCHILBACH: So they value it more. Yes, that's right. Yeah?

AUDIENCE: Because rich people have more money, they can more afford to spend less time thinking about the $15 and therefore, might make more of a snap judgment [INAUDIBLE] poor people might deliberate about it for a minute.

FRANK SCHILBACH: Right. Exactly. So there are two different explanations. One is, in a way, the classical explanation: there are certain amounts of money that, for the rich, just don't matter. If you have a million dollars, why walk anyway? I just don't pay attention. And in some sense, I make certain mistakes. But the mistakes are really not very costly. It's just that I don't really care. I might make mistakes. But it's really not an issue because I'm rich anyway.

There's another view, and that is to say, the poor, on a daily basis, evaluate sums like $15. The poor know what $15 is worth. For them, it's not a comparison to $500 or whatever. $15 is a meal for your family. It's the difference between your child being hungry versus getting food, and so on. So for them, $1 is $1. And they're less affected by these framings and other choices.

And that's some of the most interesting evidence in poverty research, where people essentially try to understand how people's monetary choices are affected by these framings. And the view overall is that the poor might, in fact, be more rational in some sense-- in the sense of being less swayed by framings and other factors that might lead to inconsistent choices-- while the rich do that a lot more.

And the downside of that seems to be that, by thinking a lot about money, the poor then spend a lot of cognitive energy on these monetary issues. And that might lead you to make worse decisions in other domains of life. You might just have less cognitive function or cognitive energy-- or resources, if you want-- available for other potentially important choices in life. And that's what we talk about in the poverty and development section.

Then there was another exercise that we did that usually tends to work. And this is a very interesting line of research by Dan Ariely and others, which essentially is about anchoring. In some sense, looking at this ex post, I think the questions were not quite right. So the idea of anchoring is to say, I set you a very arbitrary anchor by letting you think about certain numbers-- in this case, it was the sum of the digits of people's phone numbers. Then essentially, when you have a high sum of phone number digits, you're willing to pay more. When you have a low sum, you're willing to pay less.

There's a bunch of experiments actually done at MIT-- with MBA students mostly, I have to say-- by Dan Ariely, who used to be a faculty member at MIT. In a bunch of these kinds of experiments, essentially, you can anchor people fairly arbitrarily. If you make them think about the last two digits of their social security number, then if that number is high, they're willing to pay more for wine than if it is low. And this is a real choice, and it tends to be quite robust.

One problem with the experiment that we did here is that the effect didn't seem to show up-- in 2017 there was more anchoring. Here it doesn't seem like it-- maybe it's upward sloping, maybe not. One problem here is that if you look at the values of the anchor that we were setting, they're actually pretty low-- essentially between something like 10 and 30-- while people's valuations tend to be often a lot higher. So maybe that's why it potentially didn't work. It could also be that you guys are less prone to anchoring.

But overall, I think the bottom line is that there's quite a bit of evidence that when you anchor people, that affects their choices. And again, of course, it shouldn't affect your choices at all. Making you think about your social security number, whether that number is large or small, should by no means affect how much you're willing to pay for a bottle of wine or anything else. Yet there's a relatively robust line of research that shows that, in fact, that's the case. And, for example, it seems like, at least in 2017, that was in fact going on.

Finally, I want to talk briefly about something from a quite interesting literature in social psychology. It's this paper by Darley & Batson, which is a classic study in psychology of helping behavior. And broadly speaking, you can think of social psychology perhaps as a way of thinking about how social circumstances or your environment affect your behavior or your preferences. And that's a deviation from how economists think about this. Usually, economists would say you have certain preferences. You either want to be nice or not. Or you either want to eat certain things or not. It shouldn't depend that much on your social environment. And social psychology studies in detail how your choices or behaviors-- in this case, helping behavior-- are affected by your environment.

And so what they did is they studied helping behavior. These are Princeton theology students on their way to a seminar. And they pass an ostensibly injured man slumped in a doorway, coughing and groaning. This is somebody in the experiment. So social psychology studies are very rich in creating interesting environments. So they had this man who essentially is looking very injured. And they were looking at determinants of helping behavior.

So the question is, do these theology students stop and help this man when they're on the way to their seminar? And they had three different manipulations. I should say very clearly, this comes with a small-sample alert-- the study is very small. So there's some question of whether this replicates, which is often an issue for classic social psychology studies-- sample sizes are relatively small. But I think the point, broadly speaking, is a good one, which is why I'm telling you about it.

So what are the three manipulations? Three things. One was they had a lecture on the parable of the Good Samaritan versus some other content. What is the Good Samaritan parable? Yes?

AUDIENCE: It's about helping someone in need on the side of the road?

FRANK SCHILBACH: Exactly. So it's about essentially this person who-- long story short, there's a person in need on the side of the road. There are people who are from the same area, the priests and so on-- nobody is helping because they're really busy. And then the Samaritan comes along, who is an outcast in society and so on. And he's helping. And the point is you should help anybody, regardless of background and so on and so forth.

So the whole story is essentially saying you should be helping somebody. So there are these students now who have been thinking about this parable. They've been thinking about helping behavior. They're supposed to give a lecture about the parable of the Good Samaritan. In their mind is the idea that you should be helping others. So we prime that mindset [INAUDIBLE], and then, in fact, they see the practical version of that in life. The question is, does that affect behavior?

The second variation is variation in time pressure: are people in a hurry or not? And the third one is personality measures, in particular religiosity. So people are more or less religious. And you might think that that also affects or determines how much people help. So now you can look at which of those three things matters for behavior. Who thinks one is most important? What about two? Three?

I set this up, in some sense, maybe not the right way. So it turns out two is, in fact, as you said, the most important thing. The hurry condition-- again, it's a small sample. But with no hurry, about 60% of people stopped; with medium hurry, 45%; and in a high hurry, essentially nobody stops. People are essentially worried about getting to that seminar about the Good Samaritan, and so they're not particularly good Samaritans themselves. It turns out the personality characteristics don't seem to really predict behavior.

And so what we learn from that is that situations can matter a great deal. Now, in some sense, that's very much consistent with what I was saying to you earlier, that social influences affect behavior-- in the sense that how much you care about others depends a lot on your circumstances, on whether you are observed or not, or on malleable situations. But I think, in a sense, here's a choice that's exactly the same choice. You might either be nice or not. But in some sense, the value of time seems to be quite important here for people.

So people's choices, in some sense, might actually be-- it's not like you are a nice person versus not. But in some situations, you might behave very nicely and in a friendly way, and in others, not so much. And that's a much more fundamental problem, perhaps, for economics or for writing down utility functions, because essentially the issue is that it's not even clear what your social preference parameter is. It's not even clear whether you are a nice person or not so much a nice person, because it depends so much on your environment. And people are shaped by their environments overall.

So one thing we're going to think about towards the end of the class is that it's not just that people's preferences or their beliefs systematically deviate, in ways that we can quantify, from the classical model. In fact, they are actually very volatile and maybe even hard to elicit in some ways, because they change so much. And people might not even necessarily know their preferences. Or their preferences are just malleable, in the sense that they depend a lot on circumstances.

So then let me summarize quickly what we discussed. I think-- and this is what I was saying earlier-- one goal of this class is really to get you to think more and observe the world more through the lens of economics, but also through the lens of psychology and economics. That is to say, you can look at a lot of situations, at how people behave, and try to understand why they behave in the ways they do. And you often see certain puzzles or certain behaviors that don't look quite right.

It seems like people are making mistakes in some ways. They seem to be erratic in various ways. But often you can actually put a lot of structure on their behavior. We can actually understand systematically, through a certain puzzle, through certain [INAUDIBLE], through certain theories or certain topics that we discuss in class, what people do and why they do that.

There are also similar issues with firms. When you get credit card offers, when you see a sales item, when you think about the pricing decisions that firms make-- a lot of these pricing decisions actually involve thinking very carefully about the psychology of consumers. So essentially, what firms are doing is trying to think about what they call behavioral biases, or issues that consumers have. And when you think about what you see in the world, often that actually makes a lot of sense through the lens of psychology and economics, or behavioral economics.

So you can think about yourself or others when you see certain products. Think about how this relates to the content of the course. And I think that's a general lesson. So what we try to do in the course, in part, is think about individual choices and some theories in economics and psychology, but in particular think about how these theories relate to real-world outcomes, how we can perhaps apply them to situations in the world and potentially improve choices.

So here's broadly where we are. As I said, we're going to talk about preferences for the next few weeks, in particular time preferences and self-control. As I said, a lot of that has to do with procrastination, including for problem sets. As I said, the problem set will probably come around Monday. Here's the reading for next week. This is on the course website. Please read Frederick, Loewenstein, and O'Donoghue 2002, section 1 as well as sections 4.1 to 5.1. Thank you very much.