Description: In this lecture, Prof. Schilbach discusses how quasi-hyperbolic discounting applies to many real-world domains, including work, exercise, credit cards, drinking, and smoking, among others.
Instructor: Prof. Frank Schilbach
FRANK SCHILBACH: So I'm going to briefly recap what we discussed, and then talk about a number of different applications, ranging from work, exercising, credit cards, savings behavior, drinking, smoking, fertilizer use, and so on. That'll spill into the next lecture as well. I'm going to summarize where we are. What do we know about time preferences? What have you learned? What's useful from what you have learned for the world, and what things might still be up for investigation?
OK, so what happened so far is, I showed you a simple model of exponential discounting. Again, that's the workhorse model of discounting in economics; it's one of the most successful and most important models that people have written down. The Solow model and many other important models of long-run growth and so on use this particular form of discounting. So it's been tremendously important and successful.
It has different implications that we discussed at length, both in lecture and in recitation: constant discounting, dynamic consistency, and no demand for commitment. We discussed some evidence showing that these implications are not warranted. And then we talked about an extension of that model, the quasi-hyperbolic discounting model, which adds an additional parameter that measures people's present bias, or present focus, as people sometimes call it these days. That allows us to be more flexible and to look at short-run and long-run discounting in the same model: there is one parameter, beta, that measures people's short-run discount factor, and another parameter, delta, which is close to one, that measures the long-run discount factor. That makes the model more flexible, and we're able to explain some phenomena that might be hard to explain otherwise. Any questions on that so far, or on last time, or the like?
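As a concrete illustration, the two discounting schemes can be sketched in a few lines. The parameter values here are made up for the example, not numbers from the lecture:

```python
# Discount weight placed today on utility received t periods from now.
def exp_weight(delta, t):
    # Exponential discounting: delta**t, so the ratio between any two
    # adjacent periods is always delta (constant discounting).
    return delta ** t

def qh_weight(beta, delta, t):
    # Quasi-hyperbolic (beta-delta) discounting: everything in the
    # future, as a whole, is additionally scaled down by beta.
    return 1.0 if t == 0 else beta * delta ** t

beta, delta = 0.7, 0.99  # hypothetical: strong present bias, patient long run

# Sharp drop between today and tomorrow...
drop_today = qh_weight(beta, delta, 1) / qh_weight(beta, delta, 0)
# ...but between any two future periods the ratio is just delta again.
drop_future = qh_weight(beta, delta, 2) / qh_weight(beta, delta, 1)
```

Here `drop_today` is beta times delta (about 0.69), while `drop_future` is just delta (0.99): impatience between today and tomorrow, near-patience between any two future dates, which is exactly the flexibility the exponential model lacks.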
OK, so then next, we talked about sophistication versus naiveté. So this is the issue that we discussed before, which is that present bias creates time inconsistency, right? When thinking about the future, we want to be patient. When the time actually comes, when the future actually arrives, we are impatient. So then the key question you might ask is, well, do people understand this time inconsistency?
We talked about two different extreme assumptions or versions of this. One is full naiveté, which is the idea that the person does not realize that she will change her mind. When thinking about the future, she thinks she's going to follow through on her favorite plan: that when the future comes, she will be patient. But then of course, the future arrives and surprises happen. The person is surprised by her own present bias. There is false optimism about future patience, and over and over again, the person might say, this time is different.
The second extreme assumption is full sophistication. That is perfect foresight. The person understands her beta perfectly well, and understands that she, in the future, might not stick to the plans that she has, and might change her mind. So she does her best, given the future self's anticipated changes in behavior. Taking into account as a constraint what the future self will do, the person optimizes that way. There are no surprises about future present bias. The person has rational expectations.
Those are the two broad extreme assumptions that we discussed. Now, how can we tell-- how can we actually tell-- so if you wanted to know about your friend, and try to understand, is this person naive or sophisticated, how can we actually tell whether that's the case? What do we do? What data could you collect from your friend or from people that you see? Yes?
AUDIENCE: Just giving them [INAUDIBLE]. Trying to see, like, what they procrastinate [INAUDIBLE].
FRANK SCHILBACH: And what exactly would you collect?
AUDIENCE: I guess if you could, asking them what they think that they're going to do, and then seeing what they actually do.
FRANK SCHILBACH: Right, so one thing you can do is collect the person's beliefs. So you can ask them, what are you going to do in the future? And if the person mispredicts what they're going to do in the future-- in particular, if the person thinks they're going to be more patient in the future than they actually are, so if they think their beta in the future is higher than it actually is-- that would suggest at least some naiveté, right? What else could we elicit? Yes?
AUDIENCE: There are some choices that people make that wouldn't make sense if they're not being sophisticated. So for example, if we see them restricting their choices in some way that we would expect of a sophisticated person-- like that would make sense if they're [INAUDIBLE] naive.
FRANK SCHILBACH: Exactly. So you could offer them commitment devices, and say, here's a commitment device: you can change your future behavior in certain ways that make certain behaviors in the future more expensive. So for example, you might tell your friend, or your friend might offer to you: if I don't do the problem set by Friday, 5:00 PM-- because I want to have fun Friday night-- I'm going to pay you $100.
Now, if you make that choice-- if somebody offers you that option to pay them $100 in case you haven't done the problem set by Friday 5:00 PM, and you take it-- that choice doesn't make any sense if you are an exponential discounter. That choice only makes sense if you are present-biased, or if you have self-control problems in some way. And you must be sophisticated in some sense. So it must indicate some form of sophistication.
To be clear, it doesn't indicate perfect sophistication. You might be only partially sophisticated. But at least it indicates some form of sophistication. I'm going to talk about partial sophistication in a bit.
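To see why accepting such a bet only makes sense for a (somewhat sophisticated) present-biased person, here is a minimal numeric sketch. All the numbers below are hypothetical, chosen just for illustration: finishing the problem set costs effort tonight and pays off tomorrow, and the $100 bet penalizes not finishing.

```python
def does_pset(beta, effort=6.0, benefit=8.0, penalty=0.0):
    # Tonight's self compares working (effort cost now, benefit tomorrow)
    # against shirking (penalty paid tomorrow). Everything tomorrow is
    # discounted by the short-run factor beta.
    work_value = -effort + beta * benefit
    shirk_value = -beta * penalty
    return work_value > shirk_value

exponential_works = does_pset(beta=1.0)               # works even with no penalty
present_biased_works = does_pset(beta=0.5)            # procrastinates without one
with_commitment = does_pset(beta=0.5, penalty=100.0)  # the bet changes behavior
```

The exponential discounter finishes anyway, so paying anything to add the penalty is pure waste for her; only a present-biased person gains from the bet, and only if she foresees her own procrastination well enough to demand it.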
The same awareness issue does not arise with exponential discounting. Why is that? If you look at the exponential discounting model, there's nothing about sophistication and naiveté. There's no parameter that measures that. And why is that? Why do we need something like a beta hat here, but not there? Yes?
FRANK SCHILBACH: Right, exactly. So the whole issue only arises because there's time inconsistency. The future self wants different things than the current self does. In the exponential discounting model, there's no such issue. There's no time inconsistency. So the future self will always do what the current self actually wants to do, unless circumstances change. So we don't need any parameter that looks at how much the future self deviates, because that's not even an issue in the first place.
OK, so then we talked about sort of extreme assumptions. So these are extreme assumptions on the two ends. One is, like, full naiveté. You're entirely naive. You just cannot imagine that your beta in the future will be different from one. That's full naiveté. And then there is the other extreme assumption, which is full sophistication, which is beta hat equals beta, right? Essentially, it's like you understand entirely what's going on with your future beta.
But of course, there's a whole range in between beta and 1 that beta hat could take. And that's what we refer to as partial naiveté. That is to say, beta hat measures the beliefs about the future beta. The extreme cases are useful to think about, but presumably the truth is somewhere in between. So the intermediate case might be the most relevant one, which is beta hat in between beta and 1.
So that is to say, the individual understands that they will experience present bias in the future, but underestimates its degree. As an example, say my beta is 0.6. I understand to some degree that my future beta is not 1, but I might think it's 0.8 or the like. So I understand that I will be present-biased in the future, but I underestimate the degree of present bias in the future.
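That misprediction can be sketched with the same hypothetical numbers: a true beta of 0.6 and a believed beta hat of 0.8, facing a task whose effort cost and reward are also made up for the example.

```python
def will_work_today(beta_value, cost_now=7.0, reward_later=10.0):
    # The self facing the task works iff the beta-discounted reward
    # exceeds the immediate effort cost.
    return beta_value * reward_later > cost_now

beta, beta_hat = 0.6, 0.8  # true short-run discount factor vs. the belief

predicted = will_work_today(beta_hat)  # the partially naive person predicts she will work
actual = will_work_today(beta)         # in fact, she procrastinates
```

With these numbers the believed self clears the bar (0.8 × 10 > 7) while the true self does not (0.6 × 10 < 7), which is exactly the "surprise" a partially naive person keeps experiencing.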
And so if I'm partially naive-- and this is what Maya was saying earlier-- I might demand commitment devices anyway. I might understand I have a problem in the future. So I want some commitment devices. But I could also overcommit. I could demand commitment devices that are actually not useful for me.
Why is that? Because I sort of underestimate how bad my self-control problem is. I understand there's a self-control problem. Somebody offers me a commitment device that's fairly weak. I say great, that's going to help me. That's going to help me follow through. But then, surprise, my self-control problem is actually worse than I anticipated. And then I have a commitment device and I'm actually failing with that commitment device, because the self-control problem happens to be worse than anticipated. Any questions on this?
OK, we're going to talk tomorrow about, like-- a little bit about solving problems with partial naiveté. We talked about solving problems with full naiveté and full sophistication. Partial naiveté is a little bit trickier, because it sort of requires iterating forwards and backwards. We'll talk about this briefly tomorrow.
OK, so now, demand for commitment-- we already talked about this before. Here is a formal definition: an arrangement entered into by an agent who restricts his or her future choice set by making certain choices more expensive, perhaps infinitely expensive. That's to say, at the margin, you might pay for something that restricts your choices, or makes your choices in the future more expensive. If a choice is not available at all, you can think of its price as infinite, right?
So I don't want to eat donuts tomorrow. I can make donuts more expensive-- more and more expensive. If it's infinitely expensive, donuts are just not available to me.
So now of course, as we said before, with time-inconsistent preferences, the selves differ. And this is, again, repetition: selves differ between what you want today versus in the future. You might worry about misbehaving in the future. If you understand that, you might want to discipline your future self by demanding a commitment device.
OK, so which of you has used commitment devices, or can you give me an example of a commitment device that you have used in the past, successful or not? Yes?
AUDIENCE: I have an app on my phone where I can set a timer [INAUDIBLE].
FRANK SCHILBACH: Does it work?
FRANK SCHILBACH: I should try that. Yes?
AUDIENCE: I have a browser extension that blocks certain sites at certain times of the day.
FRANK SCHILBACH: Does it work for you, too?
FRANK SCHILBACH: Why does it not work?
AUDIENCE: Because it's way too easy to just turn it off.
FRANK SCHILBACH: Right, exactly. So that's an example of a commitment device that's partial in some ways-- it's not strong enough. Either you can substitute to Firefox or another browser, you can substitute your phone, you can substitute your friend's phone even, and so on. Or you might just actually be able to turn it off yourself.
But you can circumvent it. I think there are some apps that have options that don't allow you to turn them off at all. But you guys are, like, a lot of CS majors, so maybe you can get around that as well. Any other examples? Yes?
AUDIENCE: When you go shopping, buying more vegetables or something so that you feel like, oh, I have to eat them, otherwise I just wasted my time and money.
FRANK SCHILBACH: So what do you do actually then? So what's your commitment device?
AUDIENCE: I guess it's like making the choice to eat something else, like, more expensive. Because I'd have to, like, go back to the grocery store and I'd have to pay for it.
FRANK SCHILBACH: But how do you commit then? Or, like, what's restricting your options?
AUDIENCE: The fact that I'm lazy and I don't want to go somewhere to get other food.
FRANK SCHILBACH: But, like, so you know that you're going to do that in the future potentially. So now, how do you avoid or change your future behavior? Can you sort of incentivize yourself? Or can you make sure that-- another version of that would be, like, so what people often do is when they buy, for example, potato chips or the like, they buy really small bags, in part sort of knowing that if they buy a big bag, which actually would be cheaper, they would just eat it all. So then, you have, like, only very small portions. And then if you want another one, you have to sort of go to the store and buy more.
So that's like a version-- it's sort of a version of a commitment device, where essentially, you commit yourself to, if you want to eat more potato chips in the future, you have to go back to the store as opposed to having, like, a big bag that you can just consume in one session. Yeah. Any other examples? Yes?
AUDIENCE: Carrying cash instead of card. Like, if you [INAUDIBLE] cash on you, you won't overspend.
FRANK SCHILBACH: I see. That's interesting. Because in some other settings, carrying cash is not helping. But you're saying, instead of having a credit card where you can usually spend as much as you like, depending on what your credit limit is, you might say, I'm going to go out with $100, and once I've spent the $100, I'm not going to spend more. I'd have to go back home or the like. And then I might not give in to temptations.
The reason I was hesitating a little bit is, in developing countries, in some settings where I work, lots of people have lots of cash on hand from their work. For example, cycle rickshaw drivers have a lot of cash on hand because they get paid for every single trip. That's actually quite bad, because then they can spend it on lots of things any day. Having it in an illiquid savings account or the like would be a different form of commitment device that might help them.
There was more-- yes?
AUDIENCE: There's a show, Nathan For You, where he wants to help people lose weight. So he takes a picture of them-- like a very embarrassing picture-- and gets a notarized letter that gets sent out in two weeks if they don't lose five pounds.
FRANK SCHILBACH: And does it work?
FRANK SCHILBACH: Wow, interesting. I've not heard of this. What is it called?
AUDIENCE: That's how embarrassing the picture is.
FRANK SCHILBACH: I see. Yeah, it depends a little bit, I guess-- when we ask does it work, we have to ask, does it work on average, what's being shown, how costly is it to fail-- but it sounds like, at least for some people, that's, in fact, effective. Yeah?
AUDIENCE: There are some accounts where you can put money, and it's a bit harder to take that money out [INAUDIBLE]. So that's kind of [INAUDIBLE] and you can take it out [INAUDIBLE].
FRANK SCHILBACH: Right.
AUDIENCE: If you really need it [INAUDIBLE].
FRANK SCHILBACH: Right, in fact, a lot of retirement accounts in the US, in many places, have penalties for early withdrawal. Many employers offer 401(k)s and other savings vehicles-- essentially tax-deferred retirement savings, often subsidized by the employer. But often, one condition is that you have to pay a 10% or so penalty to withdraw early. Similarly, I think some savings accounts are like that.
And the idea is very much to help you resist temptations to change your plan. So you're planning to save for retirement or something else, but in fact, you might withdraw early if you're tempted. And usually, the penalty is only something like 10% or the like, because when people have actual shocks, like health shocks or other issues, they want to be able to take out the money at some penalty. So they don't want it to be entirely illiquid, because that would be bad for them. Any other example? Yes?
AUDIENCE: I have an alarm clock on my phone where I have to take a picture of something. So I'll get up, and go to the bathroom, and take a picture of the thing.
FRANK SCHILBACH: Does it work?
AUDIENCE: No, I take the picture and I go back to bed.
FRANK SCHILBACH: I see, fair enough. Does it work sometimes?
AUDIENCE: Yeah, kind of.
FRANK SCHILBACH: Sometimes, I see. Yeah, so there the issue, in some sense, is that if it doesn't work, then you're kind of worse off than before, because you slept worse than you would have otherwise, and there's no benefit of you actually getting up. Yeah. One more? Yes?
AUDIENCE: Would, like, societally implied commitment-- something like No Shave November-- would some sort of variant on that work?
FRANK SCHILBACH: Of what, exactly?
AUDIENCE: Like No Shave November or variants of this, would that imply a commitment device?
FRANK SCHILBACH: Yeah, I mean, I think in some sense, some commitment devices are essentially public attention or the like, where people announce something publicly one way or the other-- which is kind of what you were saying earlier-- where society has some influence. People would publicly declare that they're going to lose weight, or publicly declare that they're going to save money, and so on. And then there would be social shaming in some way or the other if they don't follow through. Now, if it's just society imposing things on people, that's not enough. What you want is the person actively making some announcement or statement with the goal of making it more costly not to follow through.
OK, so as I said before, demand for commitment requires at least some partial sophistication. I have some examples for you here. One is StickK. Has any of you used StickK? No? OK, so StickK is a website founded by academics, Dean Karlan and co-authors. What StickK does is, it's a commitment device that works, in some sense, in the way I was saying before. For example, if you want to commit to certain behaviors-- say, if I wanted to finish a paper draft by Friday night-- I would have to find a referee.
I would, for example, ask Aaron to be my referee. And I would say, Aaron, I'm going to give you my credit card information. And if I don't finish the paper by Friday night, I'm going to pay $100 either to Aaron or to some charity-- it could be an anti-charity, the pro-smoking, pro-whatever society. The money is going to be gone.
And so then, Friday night comes. Aaron is the referee who can then decide, or will decide, then, whether I have actually followed through. Did I actually write a draft? Is there an actual paper that's done? And if it's done, then nothing happens. I don't have to pay any money. If it's not done, the money is gone. And since the credit card information is on the website, they can actually just withdraw that money.
What's the problem with this commitment device, potentially? Yes?
AUDIENCE: The referee not wanting to enforce it?
FRANK SCHILBACH: Yes, exactly. So I tried to do this a few times in school, and my friend was my referee. And then Friday night would come, and I would just convince my friend-- I'll buy you dinner if you let me get through. So you need a very good friend who actually insists, and is credible, potentially. Or your enemy could be your referee, if you wanted-- someone who is happy to take your money.
But I think it's worth trying. You can try it out and see whether it works. I'd love to hear some thoughts on whether it works or not.
There's Clocky, which I think is coming out of MIT-- an alarm clock that runs away, which is kind of similar to what you mentioned earlier. There's also Tocky, an alarm clock that jumps around, and then you have to find it.
There are some forms of-- this is a fake alarm clock; I don't think it actually exists-- where essentially, if you don't get up and press a button on your alarm, you give money to charity, or to anti-charities if you want.
There's a drug called Antabuse, which is quite interesting-- a drug that interferes with the metabolism of alcohol. Some people have trouble metabolizing alcohol anyway. When they drink, they flush, they get headaches, they have to throw up and feel unwell, and so on. So for them, drinking is very unpleasant as it is.
It turns out, there's a drug that can reproduce those kinds of reactions or feelings. You can take it, and then for the next 24 to 48 hours, you will not be able to metabolize alcohol. There are also versions that are implants lasting a month or the like. But essentially, they make alcohol consumption very unpleasant.
And this is meant to help people who have problems with alcohol consumption: if you don't want to drink tomorrow, or two days from now, or the next month, you can take it and thereby commit to not drinking, because alcohol consumption becomes very unpleasant if you drink anyway. It's a little bit dangerous-- or it can be quite dangerous-- if people then drink anyway, because there's an alcohol-Antabuse reaction that's quite dangerous.
So it hasn't been particularly successful, in part because people fall off the bandwagon. If you take it daily, or every second day, what happens is that people start wanting to drink. And then they just stop taking the Antabuse, wait it out for 24 hours or the like until the drug is out of their system, and then they drink anyway. But in principle, it's the ideal commitment device, because essentially the only reason for you to take this drug would be to reduce your future drinking.
There's also sort of various versions of self-control devices which essentially either restrict certain websites that you can go to-- you can restrict yourself-- or it restricts them to certain hours, or you can sort of give yourself a budget for a number of hours that you can use certain websites for. Any questions?
So I also have a video of commitment devices, which I'm not sure whether the audio actually works. So we have to try this out. If not, I'll show it to you tomorrow.
OK, so as you can see, I think we can learn a lot from watching movies. And I love TV. So what can we learn from this? I think we can learn a lot about commitment devices: whether they work, why they work, and why they might not work. So what did we learn? Yes?
AUDIENCE: [INAUDIBLE] reversible commitment devices are not as effective.
FRANK SCHILBACH: Right, so just making, like, future choices more expensive might not be enough, right? I might sort of say I might have self-control problems. Now, I'm going to increase the price of my future behavior. But that doesn't mean that it actually works. It might just be more costly. Now, I have to climb up the ladder or whatever. But then I might still eat the cookie, and that's not helping. What else did we learn? Yeah?
AUDIENCE: But I think you have to find a balance. Because if you make the penalty too high, you're not going to use the commitment device.
FRANK SCHILBACH: Right, exactly. So what they did, essentially, they made the penalty-- or they essentially made the price of cookies infinitely high. They just sort of gave them away to the birds. Now they don't have any cookies at all, and sort of now, they're going to eat no cookies whatsoever, which is kind of not what they wanted either, right?
AUDIENCE: But I think another lesson is that once you've made one of these commitment devices so costly that you actually just transfer it to another temptation [INAUDIBLE].
FRANK SCHILBACH: Yes, exactly. That's what I was saying earlier. You might restrict your browser. You might shut down Chrome, or even your entire laptop. But then, if you start surfing on your phone, that's not really helping. So in some sense, there's substitution to different goods, or devices, or technologies that might undo your commitment device. So they need to be foolproof, in the sense that you can't substitute to other things.
OK, so first, substitution across temptation goods can mitigate the usefulness of commitment. If you have other vices or other things you're going to do instead, then it's not really helpful to shut down one thing only. So you have to think about: if I don't engage in the behavior I'm trying to prevent, are there other behaviors that I'm going to substitute towards, such that the commitment isn't helpful either?
Now, it's also helpful to think about what is actually required for a commitment device to be helpful to a person. So first, kind of obviously, the person needs to have some self-control problem in the first place-- there needs to be some behavior in the future that they want to change. Second, there needs to be some sophistication, right? Like Frog and Toad: they kind of know that they're going to eat too many cookies. They need to understand that.
Third, and this is what was said earlier, the commitment device needs to be effective. It needs to be strong enough to help us overcome our self-control problems. And fourth, the person needs to actually believe that the device is effective. If I don't have faith in it-- if I think it's not useful-- then I'm never going to take it up.
Or conversely, I guess, there are issues potentially with naiveté. There are two things that naiveté could do. One, which I was already talking about before: if I'm fully naive, I might think I don't have a self-control problem. It's not really bad. So I don't really need a commitment device in the first place. That's one version, where I under-demand commitment by saying I don't actually need it-- it's not helpful for me anyway; in the future, I will behave. So I might not demand commitment.
Another version is that I might demand commitment devices where I overcommit myself, in the sense that I think the commitment device is actually helpful when in fact it is not. Frog and Toad were not doing that. In some sense, they understood, to some degree at least, pretty well what they were going to do in the future.
I think they underestimated the substitution in this case: they thought giving the cookies to the birds would be helpful, but they underestimated the substitution. And in the end, they're going to be worse off, because now he's going to make the cake anyway, and then eat a lot of cake. He might as well have just eaten the cookies in the first place. Any questions on this?
OK, so now let me turn toward some academic papers and actual empirical setups. This is the paper that I hope you all read, Ariely and Wertenbroch. Dan Ariely was actually a professor here at Sloan, and did a bunch of experiments, mostly with MIT students at Sloan-- MBAs. So these are 51 executives at Sloan. These are highly incentivized individuals, in the sense that they take classes, and if they fail a class, or do badly in a class, they have to actually pay for it themselves. So they really are incentivized to do well.
They had to submit three papers in their class, with a 1% grade penalty for late submissions, which is quite a bit. There were two groups. Group A had evenly-spaced deadlines. Group B had the option to set their own deadlines. Now, the first result here is that there was demand for commitment, in the sense that 68% of people chose deadlines prior to the last week. So when given the option of when to set their deadlines over the course of the semester, 68% of people chose deadlines that were not in the last week. Why is that demand for commitment? Yes?
AUDIENCE: More choices as to when to do the paper, just to make a deadline at the end.
FRANK SCHILBACH: Exactly. Unless you have issues with self-control and the like, there's no reason to do that-- you just make it costly for yourself to submit late. If you are worried about submitting everything late, that's a good thing for you to do, and it might help you. Again, notice that it requires some sophistication: you need to understand your future behavior to do that.
Now, what they find is, they find no late submissions, which is interesting by itself. But in particular, they find that group A has higher grades than group B, OK? Group A is the group that has evenly-spaced deadlines, compared to group B, which has the option to set their own deadlines. Setting early deadlines is consistent with self-control problems. What else is it consistent with? What do we learn from the fact that group A does better than group B? Yes?
AUDIENCE: That the commitment device wasn't strong enough?
FRANK SCHILBACH: Yeah, so in some sense, the commitment devices that people chose for themselves were not strong enough-- weaker than the evenly-spaced deadlines. Now, notice that evenly-spaced deadlines are in people's choice set. If you choose your own deadlines, you can just say, I'm choosing evenly-spaced deadlines. So if you're perfectly sophisticated, you should do as well as group A.
But the fact that group B is doing worse tells us that people are not setting deadlines optimally. There's a fraction of people who don't choose any early deadlines-- at least 32% of people say, my deadlines are all at the end. So maybe these are naive people. Or even among the people who do set deadlines, they might choose deadlines later in the semester, and they do worse than with evenly-spaced deadlines.
So what did we learn? People do set early deadlines-- quite a few of them. So we learn that people are at least partially sophisticated, or that some people are sophisticated. But they're only partially sophisticated: they don't set deadlines optimally, which is consistent with people being naive, at least in some way. Yeah?
AUDIENCE: I was wondering, does it make sense to try to find the difference between not adequately measuring your commitment [INAUDIBLE] estimating how long it would take you to do the paper?
FRANK SCHILBACH: Right, that's a great question. So the question is, so there's two things that are perhaps uncertain when you think about the future. One is your present bias-- your preference parameter about how much work you're going to put in today versus tomorrow, and how much you're going to procrastinate. A second question is about your beliefs about your effort costs in some way or the other, which is how costly is it for you to actually do the problem set, or write the paper, and so on. And you might underestimate how tedious it is to do.
There's a large literature on what's called the "planning fallacy." People plan too many things, and think whatever they plan will take less time than it actually does. Or put differently, things always take longer than people think when they make plans. And the odd thing is that this remains the case even when you take it into account. So even if you're aware of the planning fallacy and plan knowing that things take longer, somewhat oddly, things still end up taking longer than planned.
But anyway, the question is how we can separate those. I think, from this evidence, we cannot. There are cleaner experiments that are more careful about this. In some sense, if it were just about underestimating effort costs, then you would not necessarily set your own early deadlines. Setting early deadlines is consistent with self-control issues. But just from the performance, you're right-- that could just be people underestimating effort costs as well.
In general, I think this is hard to separate in many situations. The key signature of self-control problems-- of time inconsistency-- is demand for commitment, which reveals that people have some form of self-control problem related to time preferences. Any other questions?
So now, one concern is that the two sessions are not randomly assigned. Why is that a problem, potentially? We have 51 executives-- 26 are in one session and 25 in the other-- but they're not randomly assigned across the two sessions. Yes?
AUDIENCE: Well, if you're just [INAUDIBLE] you might want to pick the class with the [INAUDIBLE].
FRANK SCHILBACH: Right, so there could be selection-- people sort of switch sessions. I'm not sure that's actually possible with recitations. But that's surely one problem: selection into the different groups. Setting that aside, what other problem do we have? Yes?
AUDIENCE: [INAUDIBLE] something like if the two sessions are at different times, and there's some advanced class that we can see as one session but not the other, you could end up with more advanced students in [INAUDIBLE] section, which could affect how good they are at writing [INAUDIBLE].
FRANK SCHILBACH: Exactly, there could just be other reasons-- sorry, this is a version of what you were saying earlier; I misunderstood a little bit. Exactly: for reasons other than the actual treatment-- one session is early, one is late; one is on Thursday, one is on Friday; one has a better TA, one has a worse TA, or whatever-- people might select into the different sessions. So the characteristics of the people who end up in session A versus session B might be different, and what you compare is essentially different types of people.
Second, there are only two sessions or sections. That's essentially an issue of sample size and statistics. We'll talk about that in more detail in one of the recitations. But the bottom line is that if you were to randomize at the level of a section-- which they haven't even done here-- then having only two sections is essentially not enough, because you might face correlated shocks.
What is that? For example, it might be like one TA is better than the other. And one class, A or B, the group might do much better than the other, not because of the deadline policy, but just because of the TA. Or again, it could be that, like, the recitation is early in the morning, and people don't pay attention, or whatever.
There might be many other things happening, and so on. So in some sense, the sample size is, A, too small to start with-- 50 people is very small for an experiment. And B, if you randomize with only two treatment groups, you need more clusters, as people call it-- more units of randomization overall.
So now, there's a second experiment that they did to deal with the first issue. Again, not a huge sample, but somewhat better-- 60 people. And now, it's randomized into three treatment groups: A is evenly-spaced deadlines, B is no deadlines, C is self-imposed deadlines. This is a proofreading exercise over 21 days, and it's now randomized at the individual level, not in groups anymore. So we get away from the cluster issue that we had before.
Now, what does an exponential discounter do in terms of performance? What do we predict is going to happen? A person with beta equals 1-- what's going to happen to the performance of these groups?
AUDIENCE: It should be the same?
FRANK SCHILBACH: Yes. Which one? All of them?
AUDIENCE: I'm not sure.
FRANK SCHILBACH: So let's start with the self-imposed deadlines and the no deadlines. What kinds of deadlines is the exponential discounter going to set?
FRANK SCHILBACH: None, somebody said?
FRANK SCHILBACH: Yes, and--
AUDIENCE: I don't think they need the deadline.
FRANK SCHILBACH: Yeah, why would you set a deadline? You don't need a deadline. You're going to just do it whenever it's best to do it. You might get sick, or some other shocks might appear in the course of the semester, so deadlines would just restrict you. So you will not self-impose any deadlines.
So early deadlines essentially just limit flexibility. That's not helpful. You don't need to self-impose any deadlines. And then, in group A, you might do weakly worse, because shocks might happen or the like. There's no reason to believe that restricting your options makes you do better. Group A might happen to do as well as C and B, but at best as well-- they're going to do weakly worse, in part because people get sick around deadlines or the like. OK?
So what about a sophisticated person with beta hat equals beta smaller than 1? Yes?
AUDIENCE: Self-imposed deadlines should equal evenly-spaced deadlines?
FRANK SCHILBACH: Can you explain more?
FRANK SCHILBACH: So the evenly-spaced deadline will do better or worse than no deadlines, to start with?
AUDIENCE: They would do better?
FRANK SCHILBACH: And why is that?
AUDIENCE: Because they-- if they have present bias, then they'll procrastinate. So then they would have three things due at the same time.
FRANK SCHILBACH: Exactly. Exactly, so that should help. And then with the self-imposed deadlines, you can set the deadlines optimally, so you could do at least weakly better than the evenly-spaced ones, because you can take into account constraints that you have in the semester-- like your family visiting-- that might make evenly-spaced deadlines not optimal. If you self-impose them and you're perfectly sophisticated, you can take that into account.
So deadlines can help. And flexible deadlines tend to be preferable, because you can take into account your individual specific costs that you might face. OK, what about the fully naive person? Yes?
AUDIENCE: So no deadlines and self-imposed deadlines should be the same. Because if you're fully naive, you don't think you need that much [INAUDIBLE] commitment devices, so you just set them to the end as if you're exponential. And evenly-spaced should be weakly better, because even if it's not fully optimal, it should stop some of the [INAUDIBLE].
FRANK SCHILBACH: Correct. Correct, so as before, since some people have beta smaller than 1, the evenly-spaced deadlines likely help people do better, unless they miss a lot of deadlines anyway because of procrastination. But assuming the deadlines help people space things out better, they're going to do better than in the no-deadlines case.
And then, the fully naive person says, why would I set any deadlines? I don't need any deadline. Flexibility is good. I'm going to do it early anyway. Of course, that's not going to happen. But then essentially, B is doing the same as C, because the person does not set any deadlines at all, OK? So now, what about the partially naive person? Yes?
AUDIENCE: A is better than C which is better than B?
FRANK SCHILBACH: Can you say more?
AUDIENCE: Sure, so I mean, I guess if they're sufficiently sophisticated, then it's possible that C is better than A, which is better than B. But if they're mostly naive, then they will do some kind of deadline imposition which is not the absence of deadlines in its entirety, but it might not be as good as [INAUDIBLE].
FRANK SCHILBACH: So I guess the answer is, to some degree, it depends. We still have A better than B. If people have beta smaller than 1, the evenly-spaced deadlines, as before, are going to do somewhat better than no deadlines.
Now, for the self-imposed deadlines, essentially two things can happen. One is the person might set deadlines that happen to be helpful. If they're sufficiently sophisticated, the deadlines might actually help at least some people. But some people might set overambitious deadlines and say, oh, I'm going to get it all done next week. Well, it turns out that's actually not true. And then they've set too-early deadlines and have to rush, or they miss the deadline, or the like. They might actually do worse.
So there, we have essentially no clear prediction. Some commitment should help, so C could be better than B. But individuals might also overcommit, so C might actually be worse than B. So essentially, there's no clear empirical prediction. Any questions on what I showed you here? Yes?
FRANK SCHILBACH: Yeah.
FRANK SCHILBACH: So you mean, like, a subsample of people who happen to choose evenly-spaced deadlines? Yeah.
AUDIENCE: So the concerns [INAUDIBLE]-- so I guess I'm just not sure [INAUDIBLE] could that possibly [INAUDIBLE] leave the concerns that you [INAUDIBLE].
FRANK SCHILBACH: Yeah, so the issue there is essentially-- one thing they do in the paper is to look at the people who set evenly-spaced deadlines and ask, how are they doing compared to people in the control group? The problem with that is that these people are selected in certain ways-- this is a subset of people that's non-randomly selected. We know that they chose certain deadlines.
So in some sense, we know that they're different, in some ways, from the average person in the group. These could be people who are more productive, less productive, et cetera. The clean design would be: ask people for their deadline choices, and then randomize them into different deadline regimes-- so either they get their choice or not. Then you can actually estimate the treatment effect on those kinds of people, and look at whether the treatment effects of evenly-spaced deadlines on those people are worse or not.
But I worry that in that kind of analysis-- where they take a non-random subset and compare it with a control group-- the selection is correlated with other stuff, and that's not necessarily getting you a causal effect. Having said that, evenly-spaced deadlines could make things worse in some ways: even for people with beta smaller than 1, people might miss the deadlines because they're procrastinating, or they have shocks and the like, so it's not necessarily obvious that they do better. That's all under some assumptions. Any other questions?
OK, so now, what we find is that A is doing better than C, which is doing better than B. This is essentially true for errors detected. It's true for delays in submission. It's also true for people's earnings in the entire exercise. So what is that consistent with? Well, it seems consistent with some partial naiveté.
Admittedly, the prediction of partial naiveté is a little loose, in the sense that all we're saying is A is better than B, and the comparison with C could go either way. So sure, it's consistent with partial naiveté, but it's also only suggestive, because there's no clear prediction in that case.
However, we can reject the other cases-- exponential discounting, the fully sophisticated present-biased person, and the fully naive present-biased person-- because those predictions are not borne out. So the only case that really is left is partial naiveté. So, to summarize what we saw: result 1 is that deadline setting improves performance, which is evidence of some present bias-- beta being smaller than 1.
It's also consistent with partial naiveté: people set deadlines, and they set early deadlines, which shows some demand for commitment-- some form of sophistication. Then, result 2 says deadline setting is suboptimal, which suggests that beta hat is bigger than beta-- people seem to underestimate the self-control problems they'll have in the future. So that's, broadly speaking, some support for the beta-delta model with partial naiveté. Any questions on this paper?
OK. Yes, sorry, go ahead.
AUDIENCE: I was confused by what you meant by partial sophistication or partial naiveté.
FRANK SCHILBACH: So this is what I was discussing earlier-- let me just go back for a second. We discussed two cases before: full naiveté, meaning that beta hat equals 1, and full sophistication, meaning that beta hat equals beta. Partial naiveté is the case in between, which I have here at the bottom: beta hat is in between beta and 1.
So essentially, you understand that your beta in the future is not 1, but you underestimate how small it is. As I said before, my beta might be 0.6. I might understand my future beta is not 1-- I understand it's smaller than 1-- but I might think it's, like, 0.8 or 0.9. That is, I overestimate my future beta to some degree: while I get it right that it's not 1, I get it wrong in thinking it's higher than 0.6. OK.
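To make that concrete, here is a small numerical sketch-- my own illustration, not anything from the papers discussed, with hypothetical costs-- of how a partially naive agent mispredicts her own procrastination. The agent must do a task by day 2, its cost rises the longer she waits, and delta is set to 1 so that only present bias matters.

```python
# A beta-delta agent does a task today iff today's cost is weakly
# below the (present-biased) discounted cost of doing it tomorrow.
def does_today(cost_now, cost_next, beta):
    return cost_now <= beta * cost_next

costs = [10, 12, 15]       # hypothetical task cost on days 0, 1, 2
beta, beta_hat = 0.6, 0.9  # true short-run discount vs. perceived

# Day 0: the current self decides with the TRUE beta and delays,
# since 10 > 0.6 * 12 = 7.2.
day0 = does_today(costs[0], costs[1], beta)                # False -> delay

# But she predicts tomorrow's self using beta_hat (partial naivete)
# and expects the task done on day 1, since 12 <= 0.9 * 15 = 13.5.
predicted_day1 = does_today(costs[1], costs[2], beta_hat)  # True

# Tomorrow's actual self again decides with the true beta and
# delays once more, since 12 > 0.6 * 15 = 9.
actual_day1 = does_today(costs[1], costs[2], beta)         # False -> delay
```

So the agent expects to finish on day 1 at cost 12, but in fact finishes on the last day at cost 15: she gets it right that she procrastinates, but underestimates by how much.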
OK, let me go back. So in some sense, these are nice experiments. They're useful for helping us understand whether there is evidence for this model and for some of its predictions. But it's still fairly contrived-- especially the second experiment, which is essentially a lab experiment.
Really, what we want to know is, in the real world, are commitment devices, A, helpful in some sense-- do people want them? And B, do they actually improve behavior in certain ways? Should a firm use commitment devices? Is this actually helpful for improving firm performance?
So there's a very nice paper by Supreet Kaur, Sendhil Mullainathan, and Michael Kremer, who investigate this question using data entry workers in India. This is a full-time job for these people-- their primary source of earnings. These are people recruited as data entry workers for the course of 13 months-- so over a year. In some sense, there was a job ad that said, we're looking for data entry workers; we have data to be entered. People answered the job ad and then started doing data entry work.
Output is measured, as is usually done in many data entry companies, by the number of accurate fields entered in a day. This is measured by dual entry: the data is entered twice, matched against another person's entry, and the discrepancies define an error rate. You're matched to different people over time, so one can back out your error rate-- how good or bad you are. So people are incentivized to work fast and to enter things correctly.
Workers are paid a piece rate via a weekly paycheck-- they're paid once a week. There are no restrictions on hours. People can show up whenever they like and stay as many hours as they like on any given day. There are no penalties for absences-- if you don't show up on a given day, nothing happens.
This is what the data entry task looks like. There's essentially some data. This is kind of interesting. This is data coming from Kenya. So Michael Kremer and Sendhil Mullainathan were working on some Kenyan data where they had a huge amount of data from Kenyan sugarcane farmers over the course of 30 years-- some historical records that they found. They wanted them to get entered. So like, how do we get them entered? Well, India is the place to do that. So they got them entered. And then, when they hired lots of people to enter the data, they thought, well, why not run an experiment to learn about self-control among workers while doing that?
So what they would do is show these scanned images of the pages. And then there's a computer software, pretty rudimentary, where you enter the fields into your data set. Now, the commitment device is a dominated contract that looks as follows. As I told you before, workers are paid a piece rate w. It's a linear piece rate-- that's the control contract for their production. Think of production as the number of correct entries that you produce: for each correct entry that you make, you're going to get paid w.
Now, the dominated contract-- this is the red one that you see here-- looks like this: there's a target T, which we'll talk about in a second. Until you reach the target T, you're paid w over 2, half as much as before. And as soon as you reach the target T, you're paid w for your entire production. Why is this a commitment contract? If I ask you to choose your target T and you choose a positive target, why is this a commitment contract? Yes?
AUDIENCE: So if you go less then you're penalized for it by making less money [INAUDIBLE].
FRANK SCHILBACH: Right. That's right. So there's no value in choosing this contract. If you are an exponential discounter, you would just choose T equals 0, the reason being there's no point in choosing a high target. Who knows what's going to happen? You might get a headache, your kid might get sick, the computer might not work. For whatever reason, there might be shocks-- uncertainty-- that make you not reach your target.
And if you choose a positive target, the only thing that can happen relative to that is that you do worse, right? It's a weakly dominated contract in terms of your payments, so there's no reason to actually choose it. But why might a worker choose it anyway? Yes?
AUDIENCE: Because earlier, you specified that the workers can come and go as they please once they get to work. And so there's no real set schedule. So [INAUDIBLE] or think that they know themselves, and think that they'll push off all the work until the very last minute, et cetera, et cetera. And so they might want the commitment device to incentivize themselves to finish their work [INAUDIBLE].
FRANK SCHILBACH: Exactly. I might be slightly below T or the like, and be tempted to go home-- my friends call, or I just don't want to do it, it's a tedious day, it's really hot. I had goals to work a lot, and I want to incentivize my future self to reach certain targets. That's exactly right.
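The payment scheme just described can be sketched as follows-- a minimal illustration in my own notation, with made-up values for w and T, showing that the commitment contract is weakly dominated: it never pays more than the control contract at any output level.

```python
# Control contract: a linear piece rate w per correct entry.
def control_pay(output, w):
    return w * output

# Commitment contract: half the piece rate below the target T;
# once output reaches T, the full rate w applies to ALL output.
def commitment_pay(output, w, T):
    return w * output if output >= T else (w / 2) * output

w, T = 1.0, 100  # hypothetical piece rate and self-chosen target

# Weak dominance: for every output level, the commitment contract
# pays at most as much as the control contract.
for output in range(0, 300):
    assert commitment_pay(output, w, T) <= control_pay(output, w)
```

An exponential discounter therefore loses nothing by setting T equal to 0, while a present-biased worker may still value a positive T precisely because falling just short of the target is costly, which pushes the evening self to keep working.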
So I think we've already said all of that. Workers can choose the T in advance-- essentially, on the previous day-- and they can choose T equals 0. The study also randomizes paydays, to be able to look at payday effects.
What do I mean by that? I told you workers are paid weekly. So they randomize the paydays to be Tuesdays, Thursdays, or Saturdays, which allows for day-of-the-week fixed effects. That's kind of [INAUDIBLE] interesting. But essentially, you can look at whether a worker is more or less productive on the day they're paid in a given week, compared to the previous days, and so on.
What does the exponential discounting model predict about paydays or payday effects? Should you work harder on a payday? Yes?
AUDIENCE: No, because you're time-consistent.
FRANK SCHILBACH: Right, exactly. Your delta is 0.95 per year or the like, so there should be essentially no discounting between different days. You might have a yearly delta of 0.95, but between today and tomorrow, or between two and three days from now, it doesn't really matter. So if I'm working today and paid today, it's the same as working today and being paid in three days.
However, if I'm a quasi-hyperbolic discounter, then if I'm working today and I'm going to be paid today, I'm going to work harder. I get the money right away, I can buy a nice meal, my family is going to be happier, and so on and so forth.
Now notice, that requires some form of liquidity constraint, right? If I have a bunch of cash anyway, it doesn't really matter whether I'm paid today versus tomorrow. But assuming some workers are liquidity constrained, whether they're paid today versus tomorrow matters a lot for when the reward comes.
Think about today: it's 10:00 AM, I'm typing and typing, and I think about when I get the reward. On a payday, you're paid at 5:00 PM, so you get a nice meal the same day. Versus, if you're paid in five days, that's really far away-- in the beta-delta world, your reward for working hard comes much later, OK?
So you would think quasi-hyperbolic discounters would put in higher effort on paydays. And assuming there is some form of liquidity constraint, there should be close to no difference between other days. That depends a little bit on the horizon of the beta that we talked about before.
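As a back-of-the-envelope sketch-- with hypothetical parameter values, not estimates from the paper-- the beta-delta wedge between being paid today and being paid in five days looks like this:

```python
beta, delta = 0.7, 0.999  # hypothetical daily parameters; delta near 1

def present_value(days_until_paid, beta, delta):
    """Value today of a reward of 1 paid `days_until_paid` days from now,
    under quasi-hyperbolic (beta-delta) discounting with daily periods."""
    if days_until_paid == 0:
        return 1.0  # paid within the current period: no discounting
    return beta * delta ** days_until_paid

# Quasi-hyperbolic worker: a large wedge between payday and 5 days out.
qh_wedge = present_value(0, beta, delta) - present_value(5, beta, delta)

# Exponential worker (beta = 1): essentially no wedge across days.
exp_wedge = present_value(0, 1.0, delta) - present_value(5, 1.0, delta)
```

With these made-up numbers, the quasi-hyperbolic worker values payday work about 30% of a day's reward more than work paid in five days, while for the exponential worker the wedge is about half a percent-- which is why payday effects are evidence for present bias, given liquidity constraints.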
OK, so what do they find in this paper? Essentially three results. The first result is that there's demand for commitment-- people choose dominated contracts. Workers select dominated contracts about 36% of the time. This is a lower bound for the extent of time inconsistency, for three reasons.
Reason one is that some workers might be naive. They might think they don't need any targets when, in fact, they would benefit from them, so they underinvest in this technology. Second, people might think this commitment device is ineffective-- this is what we were talking about with Frog and Toad. They might have time-inconsistency issues, but this just might not be a strong enough commitment device. If I have a day where I just don't want to work, and the penalty isn't going to get me over the target, then I might not choose this commitment device-- not because I don't have present bias, but because it's not an effective enough punishment for me.
Number three is that people might prefer flexibility because they're risk-averse. They might have children that get sick, they might have headaches, and so on-- risks that are unrelated to their present bias. The computers might be bad-- they had different computers with different speeds, so you might end up with a really bad computer and not reach the target. That has nothing to do with self-control or present bias, but rather with external risk. If you're really worried about risk, then you might not choose a positive target either.
Second, they find offering the dominated contracts increases output. That is, if you compare groups that were offered those contracts on some days to groups randomly selected not to be offered them on those same days, being offered this commitment contract increases production by 2.3%. Now you might say 2% is actually pretty low, and that's true in absolute terms-- that's not a lot of money.
But think about what other options the employer has. One thing you might do to increase workers' output is to double their wages, or just increase wages overall. It turns out they also have some piece rate variation, so they can estimate how much of a piece rate increase would be needed to achieve these effects on productivity. And it turns out the impact corresponds to an 18% increase in the piece rate wage.
Now, if you're an employer, you don't want to pay an 18% higher piece rate for a 2% or 3% increase in productivity. That's just not worth doing-- it's a bad deal. It depends a little bit on your costs and benefits of production, but that's a very ineffective and costly way of getting workers to be more productive. You're going to pay a bunch of inframarginal people more for what they would have done anyway.
Instead, offering commitment devices in this setting is actually free. In some sense, the employer might even pay people less-- if some people don't reach their target, you pay w over 2. Now, you don't actually want that, because workers will be unhappy and probably get annoyed with you if it happens too much. But the technology itself is free: here's a thing you can offer your workers, and they're going to be 2% to 3% more productive.
That's actually a big deal for a lot of companies, given that margins are often small, and it's a much better instrument for getting workers to be more productive than increasing wages-- because wages, obviously, you need to pay. Any questions on that?
And then a third result-- and this is consistent with a model in which self-control is really the driver-- is that payday effects predict people's demand for commitment. I'm going to show you this in a second. But essentially, what they find is that, A, there are payday effects. What do I mean by that?
If you plot production over the course of the pay cycle-- on the very right-hand side of this graph, you see people's payday productivity compared to their productivity on the day after the payday, or seven days before the payday-- you see that people are more productive on paydays than on the previous days. What you see, essentially, is a steady increase toward the day of the payday.
Now, somebody was asking me previously about quasi-hyperbolic versus hyperbolic discounting-- which model is right. When you look at this graph, it actually doesn't look very much like quasi-hyperbolic discounting. It looks a lot more like hyperbolic discounting: there's a smooth increase, not a jump up. People are more productive on the payday itself, but the pattern is more consistent with true hyperbolic discounting.
That's more of a detail here. That's not that important for you. But I wanted to point that out. Yes?
AUDIENCE: What's the scale and unit on the y-axis?
FRANK SCHILBACH: That is a great question. I think it's units of production. This is not a great axis-- I don't know exactly what the fraction is in terms of production, but it must be a few percent. I can look this up. But it's units of production, which is surely not what you want to use here.
Yeah, so what we find here is that people are more productive on paydays compared to the day after the payday, or compared to six days before the payday. And then there's another graph, which is quite nice, which says that high-payday-effect workers are more likely to select positive targets.
What I mean by that is they split the sample into workers with high payday impacts and low payday impacts. That is, when you look at which workers are more productive on paydays compared to other days, you can split the sample in two: some workers are much more productive on paydays than on other days, and some are not.
So in the graph, the blue dots are workers with high payday impacts-- workers who are much more productive on paydays than on other days. And the red dots are workers with low payday impacts, meaning they work pretty much the same on paydays as on other days.
Then, on the x-axis, they show experience-- think of this as the course of the study, something like 150 workdays, where over time people get to choose their targets over and over. And what you see, essentially, is that the blue dots tend to be, A, higher than the red dots.
That is, workers with high payday effects-- who are more productive on paydays-- are more likely to choose positive targets. Sorry, I should have said: on the y-axis, you see the fraction of people choosing positive targets, OK?
So we see essentially two things. One is that the blue dots are higher than the red dots, meaning that people with high payday impacts are more likely to choose positive targets-- suggesting that the underlying reason for high payday impacts is self-control. You're more productive on paydays because you're present-biased one way or the other, and that also predicts whether you choose a target. So the people with larger self-control problems, as revealed by the payday effects, are the ones more likely to choose positive targets.
Second, we also see that the blue dots tend to trend upwards: on the right of the graph, the blue dots are higher than on the left, meaning that as people gain experience in the study-- when they have 100, 150 days in the study-- they're more likely to choose positive targets, consistent with some learning about self-control over time. They learn that choosing positive targets is potentially a good thing for them, that it makes them more productive. You don't see such a thing for the red dots, which are essentially flat over time.
Sorry, that was a lot of information. Are there any questions on that? Yes?
AUDIENCE: Arguably, a lot of this has to do with the pay arriving on the payday, right? On the payday, I have to come in to the office anyway to collect my pay, so I might as well type a little longer to earn a little more. On other days, you don't really have that incentive, since you're not getting paid. And there are other jobs where you get paid at the end of every day, and you don't really find payday effects in those kinds of jobs, right?
FRANK SCHILBACH: Right, so exactly. So there's an issue on, when you think about these payday effects, what exactly do we learn from that? One way to think about the payday effects is kind of like the story I was telling you, which is to say, on days when you come in, when you type, when you work, you're going to be paid in the evening. And you're going to work harder, because your reward is closer to you-- to your effort.
Another explanation that you were saying is like, well, what if workers are just, like, liquidity constrained. What if their children are hungry, et cetera, and so on. So you come in on that day because you want a paycheck. Once you show up anyway, you might as well do some work. And you're going to end up being more productive that way.
That's surely, in part, going on. I have two responses to that. One response is, well, we find that the workers with high payday impacts are more likely to choose positive targets, which sort of suggests there is an underlying issue-- self-control problems-- that drives both the payday effects and the positive targets that people choose, the demand for commitment. Second, and more subtle in some ways: when people are liquidity constrained-- when people don't have cash-- there's often a reason why they don't have cash, which is often because they haven't saved in the first place.
Again, that's a little complicated. But essentially, if somebody is very liquidity constrained, never has cash, the underlying reason might be present bias, or some form of self-control problems, to start with. The reason being that, if you really had such a high value of having cash, you should just save it and have it at home, or try to save money in a bank account, and so on. So usually, we think that when people are really liquidity constrained, often the underlying reason is present bias or some form of self-control problems.
Having said that, there are also other reasons why people can't save. It might be that they don't have access to bank accounts, and so on, and so forth. OK.
OK, so let me tell you one more application. And let me continue with the rest of those tomorrow. So this is a classic study, in fact, done in Boston-area health clubs. This is DellaVigna and Malmendier. And what they did is they looked at health clubs where people had a choice between two options: a monthly fee of over $70 for unlimited use of the gym, or a pay-per-visit fee of $10.
If you had that option or those two options, which options would you choose? Or why would you choose one or the other? Yes?
AUDIENCE: If you think you're going to use the gym more than seven times in a month, [INAUDIBLE].
FRANK SCHILBACH: Right, exactly. So why pick option number one if you only go three times? When I was teaching the first time, I was doing exactly that, in fact. But exactly as you say, if you understand how often you go to the gym, why choose option one if you don't go at least seven times? There could be some transaction costs and the like. But surely, if you go only once or twice, you should not choose option one.
Now of course, some people might choose option one anyway. Why might you do that? Yes?
AUDIENCE: Would it be for a commitment device? Like, I would pay for this month [INAUDIBLE] I want to get my money's worth.
FRANK SCHILBACH: Right. So one option is, essentially, to change your future prices. So notice that the marginal cost of going to the gym in option two is $10, right? Every time you go, your marginal cost is $10. In option one, the marginal cost is 0. So what I'm trying to do is change the marginal cost in the future to make it more favorable for me to go. What else could be going on? So one is they could use this as a commitment device. What's another explanation? Yeah?
AUDIENCE: Overestimation of how many times you would go?
FRANK SCHILBACH: Exactly. And there are two versions of that. One version has to do with beta-delta, or present bias, which is to say I'm just underestimating my future present bias. I think my beta is, like, 1, but in fact, it's 0.5. I think I'm going to go 17 times, but in fact, I'm going only twice. That's one option.
Another option, which was also mentioned earlier, is underestimating the costs of future exercising. That is to say, when people sign up for the gym, often they are already at the gym. They are really excited to exercise, and so on. And they might underestimate how costly it actually is once they're sitting at home in the evening, having come back from work, and so on.
They might underestimate how it feels, how costly it is for them to actually exercise. Nothing to do with self-control specifically-- it's just a tedious thing to do. They might underestimate how tired they are, and so on, and so forth-- so some form of underestimation of the cost. That's, again, hard to separate in many cases. But that could also be going on.
So what they find, then, is people exercise, on average, 4.3 times a month in the first year. That's about $17 per visit, so of course, you should have been choosing option two. Before canceling, consumers go 2.3 months, on average, without using the gym at all-- it's even tedious to cancel the gym membership and actually go there and do it. And I did exactly that as well, in fact.
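To make the arithmetic concrete, here is a quick back-of-the-envelope sketch in Python, using round numbers of a $70 monthly fee and a $10 per-visit fee (the exact fees in the study vary somewhat, so this is illustrative):

```python
# Illustrative numbers from the gym example: a ~$70 monthly flat fee
# versus a $10 pay-per-visit fee, and observed attendance of 4.3
# visits per month in the first year.
monthly_fee = 70.0      # flat fee, unlimited visits
per_visit_fee = 10.0    # pay-per-visit alternative
avg_visits = 4.3        # average monthly attendance

# Break-even attendance: the flat fee only pays off beyond this many visits.
break_even = monthly_fee / per_visit_fee      # 7.0 visits per month

# Realized price per visit under the flat fee at observed attendance.
price_per_visit = monthly_fee / avg_visits    # roughly $16-17 per visit

print(f"break-even visits per month: {break_even:.1f}")
print(f"realized price per visit: ${price_per_visit:.2f}")
```

At 4.3 visits, the realized price per visit comes out well above the $10 pay-per-visit alternative, which is the sense in which these consumers are "paying not to go to the gym."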
And so how do we think about quasi-hyperbolic discounting here? Well, the gym-goers would like to exercise a lot in the future. Being naive, that's what they think they will do-- they overestimate how much they will go. To save on gym costs, they buy the monthly membership. So that's what the naive person does. But when it comes down to exercising, their short-run impatience kicks in, and they end up not going much. That's very much overestimating their beta-- underestimating their self-control problems.
Now, what does the sophisticated person do? We already discussed that as well. They also prefer to exercise a lot in the future, but they realize they won't want to do so later. So what they do is they choose the monthly contract as a form of commitment device, in the sense of changing prices in the future: by changing the marginal cost in the future to zero, they make it cheaper in the moment to exercise, compared to sitting at home. And they might even be willing to pay for that. So they might actually say, I know that I'm only going to go three or four times. But if I don't choose that contract, I'm going to go only 0 or 1 times, and it's worth it for me to do that.
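One way to see the beta-delta logic here is a minimal sketch with made-up utility numbers-- the effort cost and health benefit below are purely illustrative, not from the study:

```python
# A minimal beta-delta sketch of the daily gym decision. Going costs
# effort c today and yields a delayed health benefit b. A (beta, delta)
# discounter goes iff -c + beta * delta * b > 0.

def goes_to_gym(beta, delta, effort_cost, benefit):
    """True if a (beta, delta) discounter chooses to exercise today."""
    return -effort_cost + beta * delta * benefit > 0

beta_actual = 0.5   # true present bias
delta = 1.0         # long-run discount factor, close to 1
c, b = 10.0, 15.0   # illustrative effort cost and delayed benefit, in utils

# The planning (or naive) self applies beta = 1 to future days:
# -10 + 15 = 5 > 0, so the person expects to go a lot.
plans_to_go = goes_to_gym(1.0, delta, c, b)               # True

# The actual present-biased self: -10 + 0.5 * 15 = -2.5 < 0, so
# when the day arrives, the person stays home.
actually_goes = goes_to_gym(beta_actual, delta, c, b)     # False

# A sophisticate can respond by lowering the immediate cost of going--
# e.g., the flat-fee contract drops the $10 marginal price to zero.
# If commitment shaves 5 utils off the immediate cost, the sign flips:
# -5 + 0.5 * 15 = 2.5 > 0.
goes_with_commitment = goes_to_gym(beta_actual, delta, c - 5.0, b)  # True
```

The naive person signs the monthly contract because the planning self's forecast is wrong; the sophisticate signs it precisely because she knows the forecast would otherwise be wrong and wants to change the future price.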
So now, there's a different version of commitment devices, about dealing with temptation, which is a very clever temptation-bundling paper by Milkman and others. Instead of offering commitment devices directly, what they did is they bundled two things. I told you previously about investment goods and leisure goods. Investment goods are things, like going to the gym, that you don't do enough of. Leisure goods are things that you enjoy in the present and might do too much of.
So now, what you can do is essentially bundle those things. What they did is they offered people the option to listen to addictive audiobooks only at the gym. So you can only do that while you're at the gym. And you might come back to the gym, not necessarily because you want to exercise, but because you want to listen to your audiobook.
This can also backfire. A friend of mine in grad school was watching lots of TV shows. And he convinced himself that he could only watch TV shows while being at the gym. So then, I would be in the office, and he would sometimes come back from the gym totally exhausted, because he had watched three episodes of certain TV shows while on the treadmill. And so he was kind of overexercising because of his bundling of temptations.
So you have to figure out how to calibrate this, right? But by bundling pleasant and unpleasant things in certain ways, it might help you overcome two issues at the same time: one, you go to the gym more often; two, you might not watch too much TV, if you can convince yourself to actually follow through. They do find, in fact, that temptation bundling is effective-- until the Thanksgiving break, when everything goes downhill in their study.
OK, that's all I have for now. I'm going to continue on tomorrow with the remaining parts of time preferences. Thank you.