1 00:00:00,000 --> 00:00:01,992 [SQUEAKING] 2 00:00:01,992 --> 00:00:03,984 [RUSTLING] 3 00:00:03,984 --> 00:00:05,976 [CLICKING] 4 00:00:11,200 --> 00:00:14,750 PROFESSOR: So let me very briefly recap 5 00:00:14,750 --> 00:00:19,770 what we discussed last time. 6 00:00:19,770 --> 00:00:23,820 So we started off thinking about choices and risk. 7 00:00:23,820 --> 00:00:27,500 How do people think about economic behavior 8 00:00:27,500 --> 00:00:31,290 when things are uncertain, when there's risk involved? 9 00:00:31,290 --> 00:00:33,380 I sort of discussed and showed you 10 00:00:33,380 --> 00:00:37,220 sort of the main workhorse model that economists 11 00:00:37,220 --> 00:00:38,120 use to study risk. 12 00:00:38,120 --> 00:00:40,340 That's sort of the expected utility. 13 00:00:40,340 --> 00:00:44,000 That is a very commonly extremely useful 14 00:00:44,000 --> 00:00:45,530 model for many situations. 15 00:00:45,530 --> 00:00:47,270 So it's like very widely used. 16 00:00:47,270 --> 00:00:50,250 If you ask 100 economists randomly 17 00:00:50,250 --> 00:00:52,250 which model explains choices [INAUDIBLE] 18 00:00:52,250 --> 00:00:54,110 the majority will surely tell you expected 19 00:00:54,110 --> 00:00:55,850 utility is what you should use. 20 00:00:55,850 --> 00:00:59,000 It explains a wide range of phenomena in very useful ways. 21 00:00:59,000 --> 00:01:01,040 You can think about lots of different things. 22 00:01:01,040 --> 00:01:03,380 You can, for example, think about investment behavior, 23 00:01:03,380 --> 00:01:04,920 finance. 24 00:01:04,920 --> 00:01:08,150 When you think about what should you invest in, 25 00:01:08,150 --> 00:01:14,060 if things become more risky, if assets are more volatile, 26 00:01:14,060 --> 00:01:16,363 you need to have a higher return for that, 27 00:01:16,363 --> 00:01:17,780 or you need to be offered a higher 28 00:01:17,780 --> 00:01:20,760 return to invest in those assets and so on and so forth. 29 00:01:20,760 --> 00:01:23,180 There's lots of useful applications in finance 30 00:01:23,180 --> 00:01:24,725 using expected utility. 31 00:01:24,725 --> 00:01:26,600 You can think of a range of different issues. 32 00:01:26,600 --> 00:01:28,880 You can think about, for example, criminal behavior, 33 00:01:28,880 --> 00:01:30,590 about sort of the risk of getting caught. 34 00:01:30,590 --> 00:01:32,600 What happens when the risk of caught goes up? 35 00:01:32,600 --> 00:01:35,282 People engage in less crime and so on and so forth. 36 00:01:35,282 --> 00:01:36,740 There's lots of different behaviors 37 00:01:36,740 --> 00:01:39,780 that you can think about and explain using expected utility. 38 00:01:39,780 --> 00:01:43,250 So I do want you to sort of take away the expected utility model 39 00:01:43,250 --> 00:01:46,075 is a very useful model for various applications. 40 00:01:46,075 --> 00:01:47,450 What we're trying to do is trying 41 00:01:47,450 --> 00:01:50,660 to understand are there some applications for which perhaps 42 00:01:50,660 --> 00:01:54,530 the expected utility model has some limitation, perhaps 43 00:01:54,530 --> 00:01:57,080 because of its simplicity or parsimony because there's 44 00:01:57,080 --> 00:01:58,730 only one parameter in there? 45 00:01:58,730 --> 00:02:01,130 Can we sort of alter that in some ways 46 00:02:01,130 --> 00:02:05,790 and try to make it more realistic in some situations? 
47 00:02:05,790 --> 00:02:07,815 So the key parameter of interest when 48 00:02:07,815 --> 00:02:09,440 you try to sort of estimate this model, 49 00:02:09,440 --> 00:02:12,035 trying to sort of match the data in some ways, 50 00:02:12,035 --> 00:02:13,910 the parameter that you'd estimate here is you 51 00:02:13,910 --> 00:02:15,950 need to sort of assume some functional form. 52 00:02:15,950 --> 00:02:18,420 This is what I did last time. 53 00:02:18,420 --> 00:02:19,460 I can show you this. 54 00:02:23,650 --> 00:02:26,850 So this here, one very commonly used functional form 55 00:02:26,850 --> 00:02:30,120 is the CRRA utility function that's very widely used 56 00:02:30,120 --> 00:02:32,530 in a wide range of settings. 57 00:02:32,530 --> 00:02:35,460 The feature of that is it has a constant relative risk 58 00:02:35,460 --> 00:02:38,640 aversion, and that has a bunch of useful properties 59 00:02:38,640 --> 00:02:41,423 for estimating things or making predictions. 60 00:02:41,423 --> 00:02:42,840 So what you then need to do is you 61 00:02:42,840 --> 00:02:45,220 try to sort of estimate somebody's risk preferences. 62 00:02:45,220 --> 00:02:47,508 How do people behave under risk? 63 00:02:47,508 --> 00:02:49,800 What you need to do then is sort of assume some functional 64 00:02:49,800 --> 00:02:52,052 form, for example, this CRRA utility function. 65 00:02:52,052 --> 00:02:53,760 And then the question is kind of like how 66 00:02:53,760 --> 00:02:56,910 do we estimate gamma, the risk aversion parameter? 67 00:02:56,910 --> 00:02:59,280 That's the key parameter in this model. 68 00:02:59,280 --> 00:03:02,338 Now, how do you do that? 69 00:03:02,338 --> 00:03:04,380 I showed you some different choices [INAUDIBLE]. 70 00:03:04,380 --> 00:03:05,430 Essentially we use revealed preference. 71 00:03:05,430 --> 00:03:07,305 Economists believe in revealed preference. 72 00:03:07,305 --> 00:03:09,160 I give you some choices that involve risk. 73 00:03:09,160 --> 00:03:10,618 And depending what you choose, that 74 00:03:10,618 --> 00:03:12,390 reveals what your gamma is. 75 00:03:12,390 --> 00:03:14,592 So you can do like small-scale gambles, 76 00:03:14,592 --> 00:03:17,050 which is just like small choices between different options. 77 00:03:17,050 --> 00:03:18,790 Some entail more risks than others, 78 00:03:18,790 --> 00:03:20,165 and then you can essentially just 79 00:03:20,165 --> 00:03:22,440 sort of estimate using those choices, 80 00:03:22,440 --> 00:03:25,740 or people's certainty equivalents for such choices, 81 00:03:25,740 --> 00:03:27,270 what people's gamma is. 82 00:03:27,270 --> 00:03:29,820 Now, what we found is that when you have small-scale gambles, 83 00:03:29,820 --> 00:03:32,100 people look or appear very risk-averse. 84 00:03:32,100 --> 00:03:39,917 They'll often decline gambles with positive expected value, 85 00:03:39,917 --> 00:03:42,000 which makes them appear quite risk-averse once you 86 00:03:42,000 --> 00:03:43,830 sort of estimate this parameter gamma. 87 00:03:43,830 --> 00:03:47,580 Gamma looks like gamma is above 10, above 20, above 30, 88 00:03:47,580 --> 00:03:48,990 really, really high. 89 00:03:48,990 --> 00:03:52,510 Now, at the same time, you can look at large-scale risk. 90 00:03:52,510 --> 00:03:55,278 There, when you look at large-scale choices, 91 00:03:55,278 --> 00:03:57,570 when you sort of think about what's a reasonable gamma, 92 00:03:57,570 --> 00:04:00,780 people actually only appear moderately risk-averse.
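For reference, the CRRA (constant relative risk aversion) utility function mentioned here is usually written in its standard textbook form as

u(x) = x^{1 - \gamma} / (1 - \gamma) for \gamma \neq 1, and u(x) = \ln(x) for \gamma = 1,

where \gamma is the coefficient of relative risk aversion being estimated; the slides may use a slightly different but equivalent normalization.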
93 00:04:00,780 --> 00:04:03,430 It doesn't look like they're particularly risk-averse. 94 00:04:03,430 --> 00:04:05,940 When you sort of look at those large-scale choices 95 00:04:05,940 --> 00:04:08,610 and we look at finance or housing or other applications 96 00:04:08,610 --> 00:04:11,010 where people have estimated such models, 97 00:04:11,010 --> 00:04:15,030 you get like gamma sort of between 0 and 2 roughly. 98 00:04:15,030 --> 00:04:17,970 Now, what that then implies is if your gamma is 99 00:04:17,970 --> 00:04:21,240 between 0 and 2, that means essentially 100 00:04:21,240 --> 00:04:23,083 for small-scale gambles you should 101 00:04:23,083 --> 00:04:24,250 be essentially risk-neutral. 102 00:04:24,250 --> 00:04:26,083 You should not care about really small risks 103 00:04:26,083 --> 00:04:28,443 that are about a dollar or two. 104 00:04:28,443 --> 00:04:30,360 So that sort of poses a problem because now we 105 00:04:30,360 --> 00:04:32,220 have sort of two contradicting answers. 106 00:04:32,220 --> 00:04:34,950 We have for small-scale risk, it looks like people 107 00:04:34,950 --> 00:04:36,180 are really risk-averse. 108 00:04:36,180 --> 00:04:38,550 For a large-scale gamble it looks like people are not 109 00:04:38,550 --> 00:04:39,570 so risk-averse. 110 00:04:39,570 --> 00:04:41,130 Now, we only have like one parameter, 111 00:04:41,130 --> 00:04:43,290 this gamma, which is coming from the concavity 112 00:04:43,290 --> 00:04:44,727 of the utility function. 113 00:04:44,727 --> 00:04:46,560 And when we only have one parameter and sort 114 00:04:46,560 --> 00:04:48,510 of two contradicting pieces of evidence, 115 00:04:48,510 --> 00:04:50,280 we can't sort of match both, right? 116 00:04:50,280 --> 00:04:52,440 Because if you match one, then essentially 117 00:04:52,440 --> 00:04:55,240 you can't match the other and vice versa. 118 00:04:55,240 --> 00:04:59,142 Now, I showed you a little bit Matthew Rabin's-- 119 00:04:59,142 --> 00:05:01,475 what he calls the calibration theorem, which essentially 120 00:05:01,475 --> 00:05:04,590 is sort of calibrating, showing in a fairly compelling way 121 00:05:04,590 --> 00:05:09,030 that, in fact, you can formally show it's 122 00:05:09,030 --> 00:05:11,220 not about sort of assumptions of a specific utility 123 00:05:11,220 --> 00:05:12,360 function or the like. 124 00:05:12,360 --> 00:05:15,930 Under these very sort of minimal assumptions, which 125 00:05:15,930 --> 00:05:20,640 is just that the utility function is weakly concave, 126 00:05:20,640 --> 00:05:24,390 you can essentially show that declining small-scale gambles 127 00:05:24,390 --> 00:05:28,140 with positive expected value implies 128 00:05:28,140 --> 00:05:35,767 that people make absurd choices when the gambles become larger. 129 00:05:35,767 --> 00:05:37,350 The recitation will discuss this in a bit 130 00:05:37,350 --> 00:05:40,320 more detail and sort of walk you, 131 00:05:40,320 --> 00:05:43,870 at a somewhat slower speed, through the specific example. 132 00:05:43,870 --> 00:05:46,080 Now then, where we started then last time 133 00:05:46,080 --> 00:05:48,330 was thinking about insurance choices. 134 00:05:48,330 --> 00:05:52,320 This is a very nice paper by Justin Sydnor, 135 00:05:52,320 --> 00:05:55,770 and one very nice feature of this paper 136 00:05:55,770 --> 00:05:57,420 is that it involves real-world choices. 137 00:05:57,420 --> 00:05:59,250 So it's not like some lab experiment 138 00:05:59,250 --> 00:06:00,840 with some undergrads.
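As a rough numerical illustration of the calibration point above (a sketch, not Rabin's actual proof: the $20,000 wealth level, the lose-$100/gain-$110 small gamble, and the lose-$1,000/gain-$10,000,000 large gamble are made-up illustrative values, and CRRA is assumed even though the theorem only needs weak concavity):

import math

def u(w, gamma):
    # CRRA utility of wealth; log utility at gamma = 1
    return math.log(w) if gamma == 1 else w ** (1 - gamma) / (1 - gamma)

W = 20_000.0  # hypothetical wealth level

# Smallest gamma (on a coarse grid) at which a 50-50 lose-$100 / gain-$110
# gamble is declined, i.e. its expected utility falls below u(W):
gamma_small = next(g / 10 for g in range(1, 1000)
                   if 0.5 * u(W - 100, g / 10) + 0.5 * u(W + 110, g / 10) < u(W, g / 10))
print(gamma_small)  # comes out around 18 under these assumptions

# That same gamma then implies declining a 50-50 lose-$1,000 / gain-$10,000,000 gamble:
eu_big = 0.5 * u(W - 1_000, gamma_small) + 0.5 * u(W + 10_000_000, gamma_small)
print(eu_big < u(W, gamma_small))  # True: absurd risk aversion once the stakes are large

The point is the same as in the lecture: a gamma large enough to rationalize turning down the small gamble makes the person turn down enormously favorable large gambles.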
139 00:06:00,840 --> 00:06:03,365 Of course, some people care a lot about undergrads. 140 00:06:03,365 --> 00:06:04,740 Some people might say, well, what 141 00:06:04,740 --> 00:06:06,210 are these undergrads choosing anyway? 142 00:06:06,210 --> 00:06:08,293 What does this have to do with real-world choices? 143 00:06:08,293 --> 00:06:10,440 I think undergrads are great. 144 00:06:10,440 --> 00:06:13,140 But one might wonder, like, if you recruit people 145 00:06:13,140 --> 00:06:16,200 into some experiments and you see some choices, like what 146 00:06:16,200 --> 00:06:17,580 do these choices really reveal? 147 00:06:17,580 --> 00:06:20,190 Like, do we really find that these choices are predictive 148 00:06:20,190 --> 00:06:21,498 of real-world behaviors? 149 00:06:21,498 --> 00:06:23,040 So one answer to that is, well, let's 150 00:06:23,040 --> 00:06:24,550 find some data from the real world. 151 00:06:24,550 --> 00:06:26,133 Let's look at real choices that people 152 00:06:26,133 --> 00:06:27,600 have made in real-world settings, 153 00:06:27,600 --> 00:06:30,480 and this is exactly what Sydnor does. 154 00:06:30,480 --> 00:06:34,380 So what he does is he has this data set from a large home 155 00:06:34,380 --> 00:06:35,220 insurance provider. 156 00:06:35,220 --> 00:06:37,290 This is sort of 50,000 standard policies 157 00:06:37,290 --> 00:06:39,210 that are sort of like representative of what 158 00:06:39,210 --> 00:06:41,490 people choose overall. 159 00:06:41,490 --> 00:06:45,180 And the key outcome of interest in this study 160 00:06:45,180 --> 00:06:47,257 is like people's deductible choices. 161 00:06:47,257 --> 00:06:48,090 What's a deductible? 162 00:06:48,090 --> 00:06:50,010 Again, these are expenses paid out-of-pocket 163 00:06:50,010 --> 00:06:52,900 before the insurer pays any expenses. 164 00:06:52,900 --> 00:06:55,180 So if you have like a deductible of $500 165 00:06:55,180 --> 00:06:58,413 and you have a damage of like $200, you 166 00:06:58,413 --> 00:06:59,580 have to pay it all yourself. 167 00:06:59,580 --> 00:07:02,160 If you have a damage of $1,000, you pay the 500 168 00:07:02,160 --> 00:07:05,080 and then the insurer pays the rest. 169 00:07:05,080 --> 00:07:07,500 And so what he has is he has choices 170 00:07:07,500 --> 00:07:14,490 of a menu of four deductibles for each customer or client. 171 00:07:14,490 --> 00:07:16,470 So you can see both people's choice sets, 172 00:07:16,470 --> 00:07:18,570 and you can see people's preferred options. 173 00:07:18,570 --> 00:07:20,320 And that allows him to sort of say, well, 174 00:07:20,320 --> 00:07:23,310 if you have four options, you picked one of them. 175 00:07:23,310 --> 00:07:26,740 That means you preferred that one over all three others. 176 00:07:26,740 --> 00:07:29,430 So we can sort of essentially put some bounds 177 00:07:29,430 --> 00:07:31,470 on people's risk aversion. 178 00:07:31,470 --> 00:07:33,260 And so we looked at this already. 179 00:07:33,260 --> 00:07:35,318 This is kind of what this roughly looks like. 180 00:07:35,318 --> 00:07:37,860 There are different deductibles, which is essentially, again, 181 00:07:37,860 --> 00:07:42,570 like how much you have to pay yourself 182 00:07:42,570 --> 00:07:45,342 in case of damage until the insurance payments kick in. 183 00:07:45,342 --> 00:07:47,550 You have the premium, which is like how much for sure 184 00:07:47,550 --> 00:07:49,200 you have to pay every year.
185 00:07:49,200 --> 00:07:53,280 And then there is the premium relative to the $1,000 policy. 186 00:07:53,280 --> 00:07:54,735 How much more expensive is it? 187 00:07:54,735 --> 00:07:55,860 That's in the third column. 188 00:07:55,860 --> 00:07:58,860 How much more expensive is it to choose a lower deductible 189 00:07:58,860 --> 00:08:01,170 relative to the $1,000 premium? 190 00:08:01,170 --> 00:08:03,840 And then we have people's choices which is, in this case, 191 00:08:03,840 --> 00:08:08,400 I guess, policyholder one chose a deductible of $250. 192 00:08:08,400 --> 00:08:11,100 That has a premium of $661, which is 193 00:08:11,100 --> 00:08:17,430 $157 more expensive than the $1,000 deductible option. 194 00:08:17,430 --> 00:08:18,180 OK? 195 00:08:18,180 --> 00:08:22,710 And for each policyholder, the company 196 00:08:22,710 --> 00:08:26,730 was, in fact, sort of providing individual prices. 197 00:08:26,730 --> 00:08:29,530 So essentially they were looking at where 198 00:08:29,530 --> 00:08:32,669 do they live, what's the housing value, and so on and so forth. 199 00:08:32,669 --> 00:08:33,900 Sydnor knows all of that. 200 00:08:33,900 --> 00:08:35,490 So he knows the full set of options 201 00:08:35,490 --> 00:08:37,950 that people had available and their actual choices, 202 00:08:37,950 --> 00:08:40,200 and the options available vary by person. 203 00:08:44,133 --> 00:08:45,800 How do we learn now about risk aversion? 204 00:08:45,800 --> 00:08:47,810 Well, so the losses to the customers 205 00:08:47,810 --> 00:08:49,610 are capped by the deductible, right? 206 00:08:49,610 --> 00:08:52,910 So any loss you have from any damage that you get, 207 00:08:52,910 --> 00:08:56,400 the losses are only up to the deductible. 208 00:08:56,400 --> 00:08:58,550 So if you have a deductible of $500, 209 00:08:58,550 --> 00:09:01,670 the most you can lose or have to pay in any case 210 00:09:01,670 --> 00:09:04,800 if any loss occurs is $500. 211 00:09:04,800 --> 00:09:06,410 So choosing a lower deductible then, 212 00:09:06,410 --> 00:09:08,810 what it does is it amounts to essentially reducing 213 00:09:08,810 --> 00:09:10,620 that loss in case you have a damage. 214 00:09:10,620 --> 00:09:13,370 So if you have a deductible of $500 215 00:09:13,370 --> 00:09:17,035 and decide to instead choose a deductible of $250, that 216 00:09:17,035 --> 00:09:18,410 means essentially in case there's 217 00:09:18,410 --> 00:09:21,050 a damage, in case you have to pay something, 218 00:09:21,050 --> 00:09:22,490 you don't have to pay 500. 219 00:09:22,490 --> 00:09:25,490 You only have to pay 250. 220 00:09:25,490 --> 00:09:28,080 But of course, if you lower your deductible, 221 00:09:28,080 --> 00:09:32,270 the price of your insurance goes up, the premium goes up, 222 00:09:32,270 --> 00:09:34,703 and the premium you have to pay for sure. 223 00:09:34,703 --> 00:09:36,370 So the way you can think about this then 224 00:09:36,370 --> 00:09:38,750 is like, if you choose a lower deductible, for sure 225 00:09:38,750 --> 00:09:40,580 you have to pay more money. 226 00:09:40,580 --> 00:09:42,920 But in case there's some damage to you 227 00:09:42,920 --> 00:09:45,775 with some probability that happens, 228 00:09:45,775 --> 00:09:47,150 if you have some claims, you have 229 00:09:47,150 --> 00:09:50,880 to like pay less because your deductible is now lower. 230 00:09:50,880 --> 00:09:52,268 OK? 231 00:09:52,268 --> 00:09:53,810 So now what info do we actually need?
232 00:09:53,810 --> 00:09:55,542 We need the available deductibles. 233 00:09:55,542 --> 00:09:58,000 Like, essentially what are the deductibles for each choice? 234 00:09:58,000 --> 00:10:00,050 We need the premium for each option. 235 00:10:00,050 --> 00:10:03,740 We need the claim probabilities and people's wealth levels 236 00:10:03,740 --> 00:10:05,610 because you have a utility function where 237 00:10:05,610 --> 00:10:06,610 there's wealth in there. 238 00:10:06,610 --> 00:10:08,550 I'll talk about this in one second. 239 00:10:08,550 --> 00:10:09,520 Any questions so far? 240 00:10:16,160 --> 00:10:16,940 OK. 241 00:10:16,940 --> 00:10:19,400 So now one important feature in this I think 242 00:10:19,400 --> 00:10:21,830 was asked about last time, like, well, 243 00:10:21,830 --> 00:10:23,060 what about the claim rates? 244 00:10:23,060 --> 00:10:24,935 Well if the claim rates are really high 245 00:10:24,935 --> 00:10:26,810 or if people think the claim rates are really 246 00:10:26,810 --> 00:10:29,630 high, in some sense then, having very low deductibles 247 00:10:29,630 --> 00:10:31,550 makes a lot of sense because then 248 00:10:31,550 --> 00:10:33,950 it very often happens that you have to pay. 249 00:10:33,950 --> 00:10:38,930 Then it makes lots of sense to have like lower deductibles. 250 00:10:38,930 --> 00:10:41,960 But it turns out claim rates are actually very low. 251 00:10:41,960 --> 00:10:44,173 So you can see overall-- 252 00:10:44,173 --> 00:10:45,590 this is like the full sample, this 253 00:10:45,590 --> 00:10:47,962 is everybody-- people's claim rate is 4.2%. 254 00:10:47,962 --> 00:10:49,170 These are yearly claim rates. 255 00:10:49,170 --> 00:10:52,850 This is like out of 100 customers, 4.2 per year 256 00:10:52,850 --> 00:10:55,640 actually claim any damage. 257 00:10:55,640 --> 00:10:59,490 And then it varies a little bit also by choice of deductible. 258 00:10:59,490 --> 00:11:01,340 So there's the people who happened 259 00:11:01,340 --> 00:11:05,880 to choose in the end like $1,000, $500, $250, and $100. 260 00:11:05,880 --> 00:11:07,880 But for each of them essentially, the claim rate 261 00:11:07,880 --> 00:11:09,290 is below 5%. 262 00:11:09,290 --> 00:11:11,120 So it's very low. 263 00:11:11,120 --> 00:11:12,530 OK? 264 00:11:12,530 --> 00:11:16,070 The second fact from this data is that reducing the deductible 265 00:11:16,070 --> 00:11:17,460 is very expensive. 266 00:11:17,460 --> 00:11:21,620 So for example, this is the full sample again. 267 00:11:21,620 --> 00:11:24,860 On average, purchasing the insurance 268 00:11:24,860 --> 00:11:28,760 with the deductible of $1,000 costs $615. 269 00:11:28,760 --> 00:11:31,310 We can't say very much about that choice 270 00:11:31,310 --> 00:11:33,882 because who knows how much the actual damages are and so on. 271 00:11:33,882 --> 00:11:35,840 In some sense, that's sort of irrelevant for us 272 00:11:35,840 --> 00:11:36,950 what that number is. 273 00:11:36,950 --> 00:11:38,840 What we're interested in is like what 274 00:11:38,840 --> 00:11:41,270 are the differences in costs of different deductibles? 275 00:11:41,270 --> 00:11:43,670 How much do you have to pay to lower your deductible 276 00:11:43,670 --> 00:11:46,320 to like $500, $250, and so on?
277 00:11:46,320 --> 00:11:48,710 Now what you see here is like on average, 278 00:11:48,710 --> 00:11:51,840 reducing the deductible from $1,000 to $500, 279 00:11:51,840 --> 00:11:54,290 which is sort of what this column shows that's shown 280 00:11:54,290 --> 00:11:59,720 in red, costs $99.91. 281 00:11:59,720 --> 00:12:02,840 So if you choose then like $500, is this 282 00:12:02,840 --> 00:12:04,220 a risk-averse choice or not? 283 00:12:04,220 --> 00:12:07,200 How do we think about that? 284 00:12:07,200 --> 00:12:09,165 Suppose your claim rate is like, say, 5%. 285 00:12:17,430 --> 00:12:19,120 Yes. 286 00:12:19,120 --> 00:12:22,030 AUDIENCE: Well, I think it would be a risk-averse decision 287 00:12:22,030 --> 00:12:25,500 because you're paying $100 more and your deductible 288 00:12:25,500 --> 00:12:27,965 has gone down by $500. 289 00:12:27,965 --> 00:12:31,356 So claim rate for that back of the envelope calculation 290 00:12:31,356 --> 00:12:33,628 would need to be about 20%. 291 00:12:33,628 --> 00:12:34,420 PROFESSOR: Exactly. 292 00:12:34,420 --> 00:12:37,000 So what you're saying is you're reducing the deductible 293 00:12:37,000 --> 00:12:39,880 from $1,000 to $500. 294 00:12:39,880 --> 00:12:44,260 Now, if you think that happens with a 5% chance, on average 295 00:12:44,260 --> 00:12:47,600 you're going to reduce your payments by $25. 296 00:12:47,600 --> 00:12:54,310 So 5% times $500, which is $25. 297 00:12:54,310 --> 00:12:57,260 But people are willing to pay about $100 for that. 298 00:12:57,260 --> 00:13:01,930 So for sure they're paying $100, and the benefit that they get 299 00:13:01,930 --> 00:13:04,450 is with 5% chance, at least the average customer, 300 00:13:04,450 --> 00:13:09,920 with 5% chance they're going to pay $500 less in case 301 00:13:09,920 --> 00:13:10,960 there's some damage. 302 00:13:10,960 --> 00:13:13,323 That looks already pretty risk-averse. 303 00:13:13,323 --> 00:13:14,740 Because as Ben says, surely you're 304 00:13:14,740 --> 00:13:16,948 not risk-neutral, because then you would not do that. 305 00:13:16,948 --> 00:13:19,000 You would choose the $1,000. 306 00:13:19,000 --> 00:13:21,160 It looks fairly risk-averse. 307 00:13:21,160 --> 00:13:26,410 Now if you go down then, if you go from $250 to $100, 308 00:13:26,410 --> 00:13:30,840 there's an additional $133.22. 309 00:13:30,840 --> 00:13:35,100 So that's to say reducing your deductible by another $150, 310 00:13:35,100 --> 00:13:39,930 from $250 to $100, means for sure you have to pay an additional $133. 311 00:13:39,930 --> 00:13:42,990 And now if your chance is like 5% of getting like essentially 312 00:13:42,990 --> 00:13:47,430 a damage, that is for a 5% chance of saving $150, 313 00:13:47,430 --> 00:13:51,200 people are willing to pay $133 for sure. 314 00:13:51,200 --> 00:13:51,900 OK? 315 00:13:51,900 --> 00:13:54,090 So now if you try to calibrate this, what we already 316 00:13:54,090 --> 00:13:55,830 know from this, the simple example 317 00:13:55,830 --> 00:13:58,020 is that people look extremely risk-averse. 318 00:13:58,020 --> 00:13:58,980 OK? 319 00:13:58,980 --> 00:14:05,182 So that's kind of like the exercise that Sydnor is doing, 320 00:14:05,182 --> 00:14:07,640 just saying, look, let's take these choices very seriously. 321 00:14:07,640 --> 00:14:09,580 Let's look at what people have done in real-world situations. 322 00:14:09,580 --> 00:14:10,997 These are repeat customers, people 323 00:14:10,997 --> 00:14:13,080 who have done this for a long time and so on.
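To restate the back-of-the-envelope comparison above in one line: with a claim probability of about 5%, lowering the deductible from $1,000 to $500 is worth roughly

0.05 \times \$500 = \$25

in expected savings, yet it costs about $100 in extra premium for sure. A risk-neutral customer would only break even if the claim probability were about \$100 / \$500 = 20\%, roughly four times the observed claim rate.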
324 00:14:13,080 --> 00:14:14,320 What are people choosing? 325 00:14:14,320 --> 00:14:19,550 And if you sort of assume expected utility, what would 326 00:14:19,550 --> 00:14:21,180 people's gamma need to look like to be 327 00:14:21,180 --> 00:14:23,460 able to explain this data? 328 00:14:23,460 --> 00:14:26,670 And we can do this customer-- this is like the average rates. 329 00:14:26,670 --> 00:14:29,780 They can do this sort of customer by customer. 330 00:14:29,780 --> 00:14:33,240 Now, what he then finds is the majority of people 331 00:14:33,240 --> 00:14:34,950 choose small deductibles. 332 00:14:34,950 --> 00:14:39,090 Lots of people choose $250, $500. 333 00:14:39,090 --> 00:14:42,990 Very few people choose $1,000, even among people, 334 00:14:42,990 --> 00:14:45,720 and this is on the x-axis, who have been at the company 335 00:14:45,720 --> 00:14:47,493 for 15-plus years. 336 00:14:47,493 --> 00:14:49,410 You would say like the first time you do this, 337 00:14:49,410 --> 00:14:50,970 maybe you don't understand your claim rate, 338 00:14:50,970 --> 00:14:53,100 you don't understand what's going on or whatever. 339 00:14:53,100 --> 00:14:56,400 But there's people who have been at this company for 15 years. 340 00:14:56,400 --> 00:14:58,170 They should kind of know at some point 341 00:14:58,170 --> 00:15:02,830 that claim rates are pretty low, at least on average. 342 00:15:02,830 --> 00:15:04,950 And so like if you have 15 years at this company, 343 00:15:04,950 --> 00:15:08,280 it's hard to believe that you'd still think that your claim 344 00:15:08,280 --> 00:15:10,260 rate is, say, about 10% or the like 345 00:15:10,260 --> 00:15:13,470 since it just doesn't happen very often. 346 00:15:13,470 --> 00:15:16,160 OK. 347 00:15:16,160 --> 00:15:19,310 Now, how do we think about people choosing a deductible? 348 00:15:19,310 --> 00:15:22,250 Again, what you need is like the following parameter. 349 00:15:22,250 --> 00:15:24,320 You need to have the yearly premium. 350 00:15:24,320 --> 00:15:27,350 You need to have the deductible D. You're 351 00:15:27,350 --> 00:15:29,483 assuming no other risks to lifetime wealth, which 352 00:15:29,483 --> 00:15:31,400 is a bit of a distraction, but essentially you 353 00:15:31,400 --> 00:15:35,270 can diversify risk and so on and so forth. 354 00:15:35,270 --> 00:15:38,427 You also can assume that's at most one risk per year. 355 00:15:38,427 --> 00:15:40,010 This is, again, sort of simplification 356 00:15:40,010 --> 00:15:44,900 and doesn't really matter very much for the probability pi. 357 00:15:44,900 --> 00:15:48,110 And then for now at least, we assume 358 00:15:48,110 --> 00:15:52,150 accurate, subjective beliefs about the likelihood of a loss. 359 00:15:52,150 --> 00:15:56,200 Now then, what is then the indirect utility function 360 00:15:56,200 --> 00:15:56,700 of wealth? 361 00:15:56,700 --> 00:15:58,158 What does the utility function look 362 00:15:58,158 --> 00:16:00,290 like depending on these parameters? 363 00:16:00,290 --> 00:16:02,290 Can somebody explain this what I'm showing here? 364 00:16:02,290 --> 00:16:08,010 What is this equation? 365 00:16:08,010 --> 00:16:09,324 Yes. 366 00:16:09,324 --> 00:16:13,116 AUDIENCE: The first part says pi [INAUDIBLE] w 367 00:16:13,116 --> 00:16:15,380 minus P minus D. The w minus P minus D 368 00:16:15,380 --> 00:16:18,290 is your wealth if something happens. 369 00:16:18,290 --> 00:16:19,270 [INAUDIBLE] pi. 
370 00:16:19,270 --> 00:16:23,200 And the 1 minus pi is on your utility of your wealth 371 00:16:23,200 --> 00:16:25,014 if nothing bad happens. 372 00:16:25,014 --> 00:16:26,448 [INAUDIBLE] 373 00:16:26,448 --> 00:16:27,240 PROFESSOR: Exactly. 374 00:16:27,240 --> 00:16:31,260 So for sure, so with probability 1 minus pi, nothing happens. 375 00:16:31,260 --> 00:16:34,080 You have your wealth W that you had before. 376 00:16:34,080 --> 00:16:35,730 You have to pay the premium for sure. 377 00:16:35,730 --> 00:16:37,813 So in that case, you also have to pay the premium. 378 00:16:37,813 --> 00:16:40,950 So you're going to end up with W minus P, the premium. 379 00:16:40,950 --> 00:16:43,380 And then with probability pi, you also 380 00:16:43,380 --> 00:16:47,065 have to pay the premium, which is W minus P, 381 00:16:47,065 --> 00:16:48,690 but also you have to pay the deductible 382 00:16:48,690 --> 00:16:50,920 because some damage occurred. 383 00:16:50,920 --> 00:16:51,420 All right. 384 00:16:51,420 --> 00:16:54,600 And then your indirect utility-- your expected utility 385 00:16:54,600 --> 00:16:56,718 for that year is essentially then 386 00:16:56,718 --> 00:16:58,260 the weighted average of these things. 387 00:16:58,260 --> 00:17:01,270 And pi is essentially the weight on that, 388 00:17:01,270 --> 00:17:03,880 which is the subjective or, in this case, 389 00:17:03,880 --> 00:17:06,910 assumed actual probability of a damage occurring. 390 00:17:06,910 --> 00:17:11,670 Now, I sort of said the indirect utility function, 391 00:17:11,670 --> 00:17:14,430 utility of wealth function, what is that? 392 00:17:14,430 --> 00:17:16,170 What's A, an indirect utility function? 393 00:17:16,170 --> 00:17:18,630 And B, why is there wealth in it and not consumption? 394 00:17:18,630 --> 00:17:20,510 Usually we think people eat stuff 395 00:17:20,510 --> 00:17:23,770 and there should be like consumption here. 396 00:17:23,770 --> 00:17:25,020 Why do we have wealth in here? 397 00:17:25,020 --> 00:17:25,829 What is this? 398 00:17:35,840 --> 00:17:36,920 Yes. 399 00:17:36,920 --> 00:17:46,290 AUDIENCE: I think they might be like assuming [INAUDIBLE] 400 00:17:46,290 --> 00:17:47,100 PROFESSOR: Exactly. 401 00:17:47,100 --> 00:17:50,014 What is the indirect utility function? 402 00:17:50,014 --> 00:17:51,928 AUDIENCE: [INAUDIBLE] 403 00:17:51,928 --> 00:17:52,720 PROFESSOR: Exactly. 404 00:17:52,720 --> 00:17:56,280 So usually you think what you do is if you go back 405 00:17:56,280 --> 00:17:59,520 to like 14.01 notes or what was done in the first 406 00:17:59,520 --> 00:18:01,470 I think recitation, usually what you 407 00:18:01,470 --> 00:18:05,430 do is you maximize consumption with several goods 408 00:18:05,430 --> 00:18:07,630 or one good or whatever over time, 409 00:18:07,630 --> 00:18:09,300 and usually there's a budget constraint 410 00:18:09,300 --> 00:18:11,592 and wealth is usually in your budget constraint, right? 411 00:18:11,592 --> 00:18:14,160 You can only consume as much as how much money you have. 412 00:18:14,160 --> 00:18:16,050 Could be like your income or your wealth 413 00:18:16,050 --> 00:18:17,700 if it's over your lifetime. 414 00:18:17,700 --> 00:18:19,510 Now when you do that and maximize it, 415 00:18:19,510 --> 00:18:20,760 then you end up at an optimum. 416 00:18:20,760 --> 00:18:23,280 What you can then do is like essentially express 417 00:18:23,280 --> 00:18:24,810 the optimum. 
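Written out, the expression being discussed is, in the lecture's notation, with W wealth, P the yearly premium, D the deductible, and \pi the claim probability:

U(\text{policy}) = \pi \, u(W - P - D) + (1 - \pi) \, u(W - P),

where u(\cdot) is the indirect utility of wealth explained just below.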
418 00:18:24,810 --> 00:18:26,520 Assuming that you have chosen optimally, 419 00:18:26,520 --> 00:18:28,170 your consumption is saying you already 420 00:18:28,170 --> 00:18:31,130 chose whether you wanted apples or bananas or whatever. 421 00:18:31,130 --> 00:18:34,130 If we assume that you optimize, I can then essentially just say 422 00:18:34,130 --> 00:18:36,390 assuming that you're optimizing, what 423 00:18:36,390 --> 00:18:39,960 is your optimized utility for different levels of wealth? 424 00:18:39,960 --> 00:18:43,140 And usually it's a function of wealth and prices, 425 00:18:43,140 --> 00:18:48,120 and that's what the indirect utility function is. 426 00:18:48,120 --> 00:18:50,920 We can very briefly also go over that in recitation. 427 00:18:50,920 --> 00:18:55,720 But if you go back to your 1401 or other notes, 428 00:18:55,720 --> 00:18:57,870 you will see essentially that it's 429 00:18:57,870 --> 00:18:59,760 the outcome of a maximization problem. 430 00:18:59,760 --> 00:19:01,710 Usually it's like for two goods or whatever. 431 00:19:01,710 --> 00:19:03,540 Like, it's like income. 432 00:19:03,540 --> 00:19:06,980 In this case, it's wealth because it's like over-- 433 00:19:06,980 --> 00:19:10,320 Yeah, it's wealth, but it could be available income 434 00:19:10,320 --> 00:19:11,850 as well if you wanted. 435 00:19:11,850 --> 00:19:14,280 Now, what the person is then going to do 436 00:19:14,280 --> 00:19:19,740 is like each contract gives you an expected utility in terms 437 00:19:19,740 --> 00:19:22,500 of how much do you expect your utility to be if you choose 438 00:19:22,500 --> 00:19:24,450 that specific contract. 439 00:19:24,450 --> 00:19:26,220 And now the maximization problem is now 440 00:19:26,220 --> 00:19:29,520 you choose the contract J that maximizes the expected 441 00:19:29,520 --> 00:19:33,750 utility or the expected indirect utility as a function 442 00:19:33,750 --> 00:19:36,180 of these parameters. 443 00:19:36,180 --> 00:19:38,960 Any questions on this? 444 00:19:38,960 --> 00:19:41,260 So for each contract, we can write down what's 445 00:19:41,260 --> 00:19:43,795 the indirect utility function. 446 00:19:43,795 --> 00:19:45,273 It depends on people's wealth. 447 00:19:45,273 --> 00:19:46,690 So we have to make some assumption 448 00:19:46,690 --> 00:19:47,982 of how wealthy people are. 449 00:19:47,982 --> 00:19:49,690 And it depends on these other parameters. 450 00:19:49,690 --> 00:19:53,200 It depends on the premium, it depends on the deductible, 451 00:19:53,200 --> 00:19:57,970 and it depends on the subjective probability of a claim 452 00:19:57,970 --> 00:19:59,200 occurring in that year. 453 00:19:59,200 --> 00:20:01,158 We assume that there's only one claim per year. 454 00:20:05,250 --> 00:20:06,090 OK. 455 00:20:06,090 --> 00:20:11,040 So now what we can do is then we can back out the implied risk 456 00:20:11,040 --> 00:20:14,190 aversion from people's choices. 457 00:20:14,190 --> 00:20:15,720 And in fact, what we can do is we 458 00:20:15,720 --> 00:20:18,150 can get upper and lower bounds on people's risk aversion 459 00:20:18,150 --> 00:20:19,275 from what they have chosen. 460 00:20:19,275 --> 00:20:21,990 Let me sort of give you an example for that. 461 00:20:21,990 --> 00:20:26,460 Suppose a person chooses a $100 deductible. 462 00:20:26,460 --> 00:20:27,340 What does this mean? 
463 00:20:27,340 --> 00:20:30,630 Well, this means essentially that he or she preferred 464 00:20:30,630 --> 00:20:34,650 the $100 deductible over all the other deductibles that 465 00:20:34,650 --> 00:20:35,470 were available. 466 00:20:35,470 --> 00:20:35,970 Right? 467 00:20:35,970 --> 00:20:37,990 So essentially, if you choose the 100, 468 00:20:37,990 --> 00:20:39,180 you get three inequalities. 469 00:20:39,180 --> 00:20:41,430 You get like the $100 deductible is better 470 00:20:41,430 --> 00:20:43,890 than the $250 deductible, the $100 471 00:20:43,890 --> 00:20:46,650 is better than the $500 deductible, and the $100 is 472 00:20:46,650 --> 00:20:49,500 better than the $1,000 deductible. 473 00:20:49,500 --> 00:20:51,870 Now, this gives us a bound on people's risk aversion. 474 00:20:51,870 --> 00:20:55,690 Is it the lower or an upper bound and why? 475 00:21:15,690 --> 00:21:16,462 Yes. 476 00:21:16,462 --> 00:21:21,900 AUDIENCE: I think it's a lower bound because $100 [INAUDIBLE] 477 00:21:21,900 --> 00:21:29,940 is like the lowest you can go, and a lower deductible 478 00:21:29,940 --> 00:21:32,470 can cause more risk-aversion. 479 00:21:32,470 --> 00:21:33,660 PROFESSOR: Right, exactly. 480 00:21:33,660 --> 00:21:34,952 Sort of like a corner solution. 481 00:21:34,952 --> 00:21:36,930 So like, if the person who chooses $100-- 482 00:21:36,930 --> 00:21:39,097 that's the person that I just showed you previously. 483 00:21:39,097 --> 00:21:42,370 This is the example that I showed you here. 484 00:21:42,370 --> 00:21:43,020 Where was it? 485 00:21:45,720 --> 00:21:49,170 This is a person who looks extremely risk-averse, right? 486 00:21:49,170 --> 00:21:50,790 This is a person essentially saying 487 00:21:50,790 --> 00:21:54,390 like, I'm choosing the lowest possible deductible. 488 00:21:54,390 --> 00:21:56,640 I'm for sure paying quite a bit of money 489 00:21:56,640 --> 00:21:59,850 compared to all these other options, for just a 4% or 5% chance 490 00:21:59,850 --> 00:22:02,740 of having a damage. 491 00:22:02,740 --> 00:22:05,240 So this person will look very risk-averse. 492 00:22:05,240 --> 00:22:06,620 It's the lowest possible option. 493 00:22:06,620 --> 00:22:10,350 So maybe the person, if there had been a $50 or zero-dollar 494 00:22:10,350 --> 00:22:12,300 option, would have even chosen that option. 495 00:22:12,300 --> 00:22:14,940 We don't know because that's not available. 496 00:22:14,940 --> 00:22:18,420 What you can then do is, however, you 497 00:22:18,420 --> 00:22:20,190 can just write down these inequalities. 498 00:22:20,190 --> 00:22:22,410 And so if you choose the $100 deductible 499 00:22:22,410 --> 00:22:24,870 compared to the $250 deductible, it 500 00:22:24,870 --> 00:22:27,480 will be the case that if you solve for gamma, 501 00:22:27,480 --> 00:22:30,030 this gives you a lower bound for gamma. 502 00:22:30,030 --> 00:22:31,320 So what does that mean? 503 00:22:31,320 --> 00:22:33,480 We know that gamma is at least as high 504 00:22:33,480 --> 00:22:37,650 as the solution of this inequality will tell us, 505 00:22:37,650 --> 00:22:40,200 but in fact, their gamma could be even higher. 506 00:22:40,200 --> 00:22:43,650 We just don't know it because we don't have additional choices. 507 00:22:43,650 --> 00:22:47,200 Now, if you choose the $1,000 deductible on the other hand, 508 00:22:47,200 --> 00:22:49,830 you'll get like an upper bound. 509 00:22:49,830 --> 00:22:52,500 The reasoning is exactly the same.
510 00:22:52,500 --> 00:22:54,840 Essentially, if you choose the $1,000 deductible, 511 00:22:54,840 --> 00:22:56,310 that's like the riskiest option you 512 00:22:56,310 --> 00:22:58,620 can choose because essentially you're 513 00:22:58,620 --> 00:23:01,590 choosing not 514 00:23:01,590 --> 00:23:03,450 to reduce your risk in any way. 515 00:23:03,450 --> 00:23:05,330 So you're not willing to pay to do that. 516 00:23:05,330 --> 00:23:08,790 It's kind of like accepting a gamble 517 00:23:08,790 --> 00:23:10,208 and not choosing the safe options. 518 00:23:10,208 --> 00:23:12,000 But we don't know whether this person would 519 00:23:12,000 --> 00:23:15,000 have chosen like a $1,000 deductible, 520 00:23:15,000 --> 00:23:17,640 or $2,000, or $5,000, what deductible would 521 00:23:17,640 --> 00:23:19,980 have been chosen because there are no other options 522 00:23:19,980 --> 00:23:22,540 of deductibles. 523 00:23:22,540 --> 00:23:24,913 And then in between, we have essentially lower 524 00:23:24,913 --> 00:23:26,580 and upper bounds, because essentially we 525 00:23:26,580 --> 00:23:28,680 know that if you choose 500, we know 526 00:23:28,680 --> 00:23:30,960 that you didn't choose 1,000, and we know also 527 00:23:30,960 --> 00:23:32,610 that you didn't choose 250. 528 00:23:32,610 --> 00:23:35,250 So your gamma must be in between the lower and upper bounds 529 00:23:35,250 --> 00:23:38,970 implied by those two options. 530 00:23:38,970 --> 00:23:41,550 There's a previous problem set that we posted that sort 531 00:23:41,550 --> 00:23:43,200 of walks you through that. 532 00:23:43,200 --> 00:23:46,890 We'll also go through that in recitation 533 00:23:46,890 --> 00:23:50,200 to go through the mechanics of that. 534 00:23:50,200 --> 00:23:51,240 Any questions on this? 535 00:23:54,990 --> 00:23:55,808 Yes. 536 00:23:55,808 --> 00:23:57,950 AUDIENCE: [INAUDIBLE] 537 00:24:06,270 --> 00:24:10,590 PROFESSOR: I think that's not a problem because it's just 538 00:24:10,590 --> 00:24:12,180 sort of flipped. 539 00:24:12,180 --> 00:24:14,280 So we have the gamma-- 540 00:24:14,280 --> 00:24:17,350 that's why you have the gamma in the denominator as well. 541 00:24:17,350 --> 00:24:19,590 So you get a problem if your gamma is 1, 542 00:24:19,590 --> 00:24:21,750 because you'd be dividing by zero, so usually people 543 00:24:21,750 --> 00:24:24,796 use a log utility for that. 544 00:24:24,796 --> 00:24:26,260 Yeah. 545 00:24:26,260 --> 00:24:28,700 AUDIENCE: [INAUDIBLE] 546 00:24:39,703 --> 00:24:41,370 PROFESSOR: Yes, that's a great question. 547 00:24:41,370 --> 00:24:43,440 So I'll get to this in a second. 548 00:24:43,440 --> 00:24:46,050 So what I have done right now, and this 549 00:24:46,050 --> 00:24:48,350 is the typical way economists think about these things, 550 00:24:48,350 --> 00:24:50,990 is sort of say, let's take a model very seriously. 551 00:24:50,990 --> 00:24:52,760 Here's the model that I'm using to try 552 00:24:52,760 --> 00:24:56,210 to explain people's choices, and I'm essentially 553 00:24:56,210 --> 00:24:57,920 assuming everything else away. 554 00:24:57,920 --> 00:25:00,440 I'm essentially assuming that the person optimizes. 555 00:25:00,440 --> 00:25:01,310 There are no mistakes. 556 00:25:01,310 --> 00:25:02,910 There are no framing effects. 557 00:25:02,910 --> 00:25:05,570 There's no other stuff going on, liquidity constraints 558 00:25:05,570 --> 00:25:06,322 and so on. 559 00:25:06,322 --> 00:25:07,780 And I'm taking this very seriously.
560 00:25:07,780 --> 00:25:09,380 I'm estimating gamma. 561 00:25:09,380 --> 00:25:11,900 Now, the typical-- and this is section four of the paper, 562 00:25:11,900 --> 00:25:12,590 in fact-- 563 00:25:12,590 --> 00:25:14,117 the typical thing then that happens 564 00:25:14,117 --> 00:25:15,950 is other people are going to say like, well, 565 00:25:15,950 --> 00:25:19,040 what about if people don't understand what they're doing? 566 00:25:19,040 --> 00:25:20,850 What about people misperceiving the risk? 567 00:25:20,850 --> 00:25:23,840 What about people framing effects in terms of the way 568 00:25:23,840 --> 00:25:25,880 you present the choices? 569 00:25:25,880 --> 00:25:28,130 People like to not choose extremes, but like to choose 570 00:25:28,130 --> 00:25:29,240 and the middle. 571 00:25:29,240 --> 00:25:33,060 Can that explain the results? 572 00:25:33,060 --> 00:25:35,360 There's some concerns about that. 573 00:25:35,360 --> 00:25:37,790 I think one thing that your explanation, for example, 574 00:25:37,790 --> 00:25:41,450 would be able perhaps to explain is people choosing $500 over 575 00:25:41,450 --> 00:25:42,560 like $1,000. 576 00:25:42,560 --> 00:25:44,690 It's hard for the framing effects 577 00:25:44,690 --> 00:25:47,850 to explain why people are choosing 250. 578 00:25:47,850 --> 00:25:51,950 So if you look at this figure in particular, the people who 579 00:25:51,950 --> 00:25:55,010 are in the company for 15 years, lots of people 580 00:25:55,010 --> 00:25:57,770 choose deductibles of 250. 581 00:25:57,770 --> 00:26:00,170 It's a little bit harder to explain with framing effects. 582 00:26:00,170 --> 00:26:04,830 Why wouldn't you choose like 500 as opposed to 250? 583 00:26:04,830 --> 00:26:07,395 Sydnor argues that's really not what's going on. 584 00:26:07,395 --> 00:26:09,020 I think at the end of the day, probably 585 00:26:09,020 --> 00:26:11,420 it's the case that people do not want these extremes 586 00:26:11,420 --> 00:26:13,033 and, to some degree, I think some 587 00:26:13,033 --> 00:26:14,450 of what's going on in this perhaps 588 00:26:14,450 --> 00:26:17,000 at least contributed by some form of framing effect, maybe 589 00:26:17,000 --> 00:26:17,510 marketing. 590 00:26:17,510 --> 00:26:21,660 People who sell the insurance choices really want people-- 591 00:26:21,660 --> 00:26:24,933 they get paid presumably if they sell essentially 592 00:26:24,933 --> 00:26:27,350 these low deductibles because that's how the company makes 593 00:26:27,350 --> 00:26:28,930 a lot of money. 594 00:26:28,930 --> 00:26:30,680 What Sydnor says there is like, well, it's 595 00:26:30,680 --> 00:26:33,138 actually hard to sell people on stuff that they don't like. 596 00:26:33,138 --> 00:26:35,390 It seems like people really seem to want these things, 597 00:26:35,390 --> 00:26:37,973 and maybe some of that is sort of sales pressure, but probably 598 00:26:37,973 --> 00:26:38,942 not everything. 599 00:26:38,942 --> 00:26:40,400 So I think some of what is going on 600 00:26:40,400 --> 00:26:44,040 is a little bit hard to rule out all of those things, 601 00:26:44,040 --> 00:26:46,943 but if you read the paper it's reasonable. 
602 00:26:46,943 --> 00:26:48,860 And what I'm going to show you next 603 00:26:48,860 --> 00:26:51,590 is that the implied gammas are so large 604 00:26:51,590 --> 00:26:54,050 that even if you sort of said, OK, half of this effect 605 00:26:54,050 --> 00:26:56,240 is driven by other things, you would still get 606 00:26:56,240 --> 00:27:00,263 like really absurdly large estimates of risk aversion. 607 00:27:00,263 --> 00:27:01,430 But that's a great question. 608 00:27:01,430 --> 00:27:02,195 Yes. 609 00:27:02,195 --> 00:27:04,520 AUDIENCE: [INAUDIBLE] 610 00:27:22,160 --> 00:27:24,420 PROFESSOR: Well, to some degree, in some sense, 611 00:27:24,420 --> 00:27:26,030 I think the way they sort of-- 612 00:27:26,030 --> 00:27:28,340 The question was like, is the company deliberately 613 00:27:28,340 --> 00:27:31,580 giving people choices that lead them to choose low deductibles? 614 00:27:31,580 --> 00:27:36,020 And therefore, do we sort of overestimate people's gamma? 615 00:27:36,020 --> 00:27:37,460 To some degree, yes, but it's not 616 00:27:37,460 --> 00:27:40,460 like only low deductibles are available. 617 00:27:40,460 --> 00:27:42,560 There's a $1,000 option available. 618 00:27:42,560 --> 00:27:44,090 I think maybe what you're alluding 619 00:27:44,090 --> 00:27:46,945 to is like there's some sales pressure and so on going on. 620 00:27:46,945 --> 00:27:48,320 That may well be true, and people 621 00:27:48,320 --> 00:27:51,620 sort of might emphasize risk and make it particularly salient 622 00:27:51,620 --> 00:27:54,210 and make customers nervous and say like, look, these floods 623 00:27:54,210 --> 00:27:56,540 and so on are going on, and really, low deductibles 624 00:27:56,540 --> 00:27:57,800 are good. 625 00:27:57,800 --> 00:27:59,730 I think to some degree, that's true, 626 00:27:59,730 --> 00:28:03,920 but you have to be pretty compelling in your reasoning. 627 00:28:03,920 --> 00:28:06,230 One other comment is like, there's 628 00:28:06,230 --> 00:28:08,990 lots of other examples of people choosing low deductibles 629 00:28:08,990 --> 00:28:10,850 and sort of extended warranties and so on. 630 00:28:10,850 --> 00:28:12,690 And in lots of cases, for example, 631 00:28:12,690 --> 00:28:16,160 if you look at like iPhones or iPads and so on and so forth, 632 00:28:16,160 --> 00:28:19,580 laptops, et cetera, Apple in particular, 633 00:28:19,580 --> 00:28:23,000 but other companies try to sell extended warranties 634 00:28:23,000 --> 00:28:26,420 that, if you actually did this exact same calculation, 635 00:28:26,420 --> 00:28:30,020 are not worth engaging in or that 636 00:28:30,020 --> 00:28:33,740 reveal essentially extreme risk aversion among customers. 637 00:28:33,740 --> 00:28:35,953 I myself am looking at this kind of research. 638 00:28:35,953 --> 00:28:38,370 There's lots of research that argues that people shouldn't 639 00:28:38,370 --> 00:28:39,830 choose extended warranties. 640 00:28:39,830 --> 00:28:46,298 Of course, then when I bought a laptop the last time, 641 00:28:46,298 --> 00:28:48,590 I was like, of course I don't need extended warranties. 642 00:28:48,590 --> 00:28:55,610 And then, of course, soon after, the laptop broke and so on, 643 00:28:55,610 --> 00:28:57,390 and I didn't have a warranty and so on. 644 00:28:57,390 --> 00:29:00,140 So of course, in specific examples that may happen, 645 00:29:00,140 --> 00:29:05,780 but on average it's not a good idea to do it. 646 00:29:05,780 --> 00:29:06,280 OK.
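Before turning to the estimates, here is a minimal numerical sketch of how the bound logic described above can be turned into numbers under CRRA utility. The wealth level, claim probability, and deductible-premium menu below are made-up illustrative values, not Sydnor's data, and amounts are in thousands of dollars only to keep the powers numerically well-behaved:

import math

def u(w, gamma):
    # CRRA utility of wealth; log utility at gamma = 1
    return math.log(w) if gamma == 1 else w ** (1 - gamma) / (1 - gamma)

def eu(w, premium, deductible, pi, gamma):
    # Expected utility of a policy: pay the premium for sure,
    # and the deductible as well with claim probability pi
    return pi * u(w - premium - deductible, gamma) + (1 - pi) * u(w - premium, gamma)

W, pi = 35.0, 0.05                         # hypothetical wealth and claim probability
menu = {1.0: 0.50, 0.5: 0.60, 0.25: 0.66}  # hypothetical deductible -> premium menu

# A customer who picks the $500 deductible reveals EU(500) >= EU(1000) and
# EU(500) >= EU(250); scan a grid of gammas for those consistent with both.
grid = [g / 10 for g in range(1, 2001)]
consistent = [g for g in grid
              if eu(W, menu[0.5], 0.5, pi, g) >= eu(W, menu[1.0], 1.0, pi, g)
              and eu(W, menu[0.5], 0.5, pi, g) >= eu(W, menu[0.25], 0.25, pi, g)]
print(min(consistent), max(consistent))  # both bounds land far above the 0-to-2 range

Sydnor's actual exercise is richer (individual-level prices, several wealth assumptions, CARA as well as CRRA), but the mechanics of getting a lower and an upper bound from one observed choice are the ones sketched here.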
647 00:29:06,280 --> 00:29:09,560 So now what does Sydnor find? 648 00:29:09,560 --> 00:29:14,080 So now here in this table, you can see the implied estimates 649 00:29:14,080 --> 00:29:15,065 of gamma. 650 00:29:15,065 --> 00:29:16,690 Remember, what we said is what we think 651 00:29:16,690 --> 00:29:19,270 is a reasonable gamma is somewhere between 0 and 2 652 00:29:19,270 --> 00:29:22,150 for large-scale choices. 653 00:29:22,150 --> 00:29:23,980 He has different types of assumptions. 654 00:29:23,980 --> 00:29:27,520 He has like lower bounds and upper bounds of gammas. 655 00:29:27,520 --> 00:29:30,970 And what essentially you see is the gammas are like, 656 00:29:30,970 --> 00:29:33,220 depending what the assumptions are, in the hundreds 657 00:29:33,220 --> 00:29:34,405 or in the thousands. 658 00:29:34,405 --> 00:29:36,490 It depends a little bit what people's wealth is. 659 00:29:36,490 --> 00:29:39,110 So essentially it depends on how much you assume people have-- 660 00:29:39,110 --> 00:29:40,810 the data that he does not have 661 00:29:40,810 --> 00:29:43,990 is what people's wealth actually looks like. 662 00:29:43,990 --> 00:29:45,880 How much money do people actually have? 663 00:29:45,880 --> 00:29:48,050 So he makes some reasonable assumptions by saying, 664 00:29:48,050 --> 00:29:50,755 look, these are people whose houses are like $200,000, 665 00:29:50,755 --> 00:29:52,960 $300,000, $400,000 worth. 666 00:29:52,960 --> 00:29:54,880 So presumably, we know from other data 667 00:29:54,880 --> 00:29:57,800 sets roughly how much money people have available. 668 00:29:57,800 --> 00:30:00,160 So then it depends on essentially what utility function 669 00:30:00,160 --> 00:30:04,390 you assume-- he is mostly using like CRRA utility-- 670 00:30:04,390 --> 00:30:09,910 and what wealth you assume, like $1,000,000. 671 00:30:09,910 --> 00:30:15,050 You can look at like $100,000, $50,000, $5,000, and so on. 672 00:30:15,050 --> 00:30:17,890 You can also use CARA utility, which is constant absolute risk 673 00:30:17,890 --> 00:30:18,970 aversion, and so on. 674 00:30:18,970 --> 00:30:21,370 And essentially, what you have to assume 675 00:30:21,370 --> 00:30:25,690 is like extremely low levels of wealth, something like $5,000, 676 00:30:25,690 --> 00:30:27,670 to get into like sort of single digits 677 00:30:27,670 --> 00:30:29,980 or double digits of gamma. 678 00:30:29,980 --> 00:30:32,620 It's extremely hard to sort of get 679 00:30:32,620 --> 00:30:34,960 estimates that are reasonable in the sense 680 00:30:34,960 --> 00:30:39,940 that we think are actually reasonable parameters of gamma. 681 00:30:39,940 --> 00:30:40,990 Any questions on this? 682 00:30:45,500 --> 00:30:46,420 OK. 683 00:30:46,420 --> 00:30:50,560 So now, why do people choose those small deductibles? 684 00:30:50,560 --> 00:30:52,360 And this is what you were saying before. 685 00:30:52,360 --> 00:30:56,257 So one classical explanation would 686 00:30:56,257 --> 00:30:58,340 be, well, they must be really, really risk-averse. 687 00:30:58,340 --> 00:31:00,580 Their gamma must be really high. 688 00:31:00,580 --> 00:31:02,920 There you might sort of say, well, 689 00:31:02,920 --> 00:31:04,780 we know already that gamma shouldn't be that 690 00:31:04,780 --> 00:31:06,100 high from some other choices. 691 00:31:06,100 --> 00:31:07,600 So that's sort of hard to reconcile. 692 00:31:07,600 --> 00:31:08,560 Was there a question? 693 00:31:08,560 --> 00:31:09,060 No.
694 00:31:11,498 --> 00:31:13,290 Second, you could say, well, there's really 695 00:31:13,290 --> 00:31:15,035 high objective probability of claims. 696 00:31:15,035 --> 00:31:16,910 We know that the objective probabilities 697 00:31:16,910 --> 00:31:19,063 are only 4% or 5%. 698 00:31:19,063 --> 00:31:20,730 You'd have to sort of have probabilities 699 00:31:20,730 --> 00:31:23,550 of claims that are like 20%, 30%, 40% 700 00:31:23,550 --> 00:31:25,470 to be able to match these data. 701 00:31:25,470 --> 00:31:27,570 Now, it could be that people have risk misperception. 702 00:31:27,570 --> 00:31:29,280 It could be that people really think 703 00:31:29,280 --> 00:31:32,660 the probability is actually 20% when it's, at the end, only 5% 704 00:31:32,660 --> 00:31:33,580 or 4%. 705 00:31:33,580 --> 00:31:35,790 Now, what's sort of inconsistent with that 706 00:31:35,790 --> 00:31:37,710 is that like repeat customers, people that 707 00:31:37,710 --> 00:31:40,410 have been at the company for 10, 15, 20 years, 708 00:31:40,410 --> 00:31:42,510 are making very similar choices. 709 00:31:42,510 --> 00:31:45,360 In some sense, it's hard to reconcile that everybody 710 00:31:45,360 --> 00:31:48,180 is misperceiving this risk year after year after year 711 00:31:48,180 --> 00:31:50,202 and spending lots of money. 712 00:31:50,202 --> 00:31:52,410 There's some questions on whether it is a borrowing constraint. 713 00:31:52,410 --> 00:31:54,750 Is it like people just don't have enough money? 714 00:31:54,750 --> 00:32:00,400 Are they worried about having to pay these deductibles? 715 00:32:00,400 --> 00:32:02,430 That also seems quite unlikely because, in fact, 716 00:32:02,430 --> 00:32:04,740 the deductibles are not particularly large-- 717 00:32:04,740 --> 00:32:06,600 we're not talking about like $5,000. 718 00:32:06,600 --> 00:32:08,100 People could sort of, if they really 719 00:32:08,100 --> 00:32:09,030 faced these borrowing constraints, 720 00:32:09,030 --> 00:32:10,450 they could save and so on. 721 00:32:10,450 --> 00:32:13,830 So Sydnor also argues that that's not going on. 722 00:32:13,830 --> 00:32:16,110 There's some questions about marketing, social pressure, 723 00:32:16,110 --> 00:32:17,153 and so on, which is-- 724 00:32:17,153 --> 00:32:19,320 of course, the company has very much like incentives 725 00:32:19,320 --> 00:32:22,430 to sell people these kinds of deductibles. 726 00:32:22,430 --> 00:32:24,180 I think some of that is probably going on, 727 00:32:24,180 --> 00:32:26,340 and it's hard to rule out entirely. 728 00:32:26,340 --> 00:32:28,810 Again, like, it's hard to actually sell people stuff they 729 00:32:28,810 --> 00:32:31,230 don't really want. 730 00:32:31,230 --> 00:32:34,750 So there would have to be lots of social pressure to, in fact, do that. 731 00:32:34,750 --> 00:32:35,470 Menu effects. 732 00:32:35,470 --> 00:32:38,400 Already talked about it a little bit. 733 00:32:38,400 --> 00:32:41,730 There we think that maybe menu effects 734 00:32:41,730 --> 00:32:44,550 can explain why people choose, say, 500 735 00:32:44,550 --> 00:32:47,100 or, like the interior choices, why they don't choose 1,000. 736 00:32:47,100 --> 00:32:48,810 It's hard to explain with menu effects 737 00:32:48,810 --> 00:32:52,260 why people choose 250 over 500. 738 00:32:52,260 --> 00:32:54,240 So then Sydnor's preferred explanation 739 00:32:54,240 --> 00:32:56,670 is then reference-dependent preferences and loss aversion, 740 00:32:56,670 --> 00:32:58,560 which we're going to talk about next.
741 00:32:58,560 --> 00:32:59,470 Yes. 742 00:32:59,470 --> 00:33:02,298 AUDIENCE: Is there any data on whether the probability 743 00:33:02,298 --> 00:33:07,040 of the claim is correlated with which menu option the people 744 00:33:07,040 --> 00:33:08,340 chose? 745 00:33:08,340 --> 00:33:10,320 PROFESSOR: Yes, there is. 746 00:33:10,320 --> 00:33:13,500 So you have it here. 747 00:33:13,500 --> 00:33:17,610 So it's sort of weakly correlated in a sense. 748 00:33:17,610 --> 00:33:19,200 Like, if you choose-- 749 00:33:19,200 --> 00:33:21,210 or weakly negatively correlated. 750 00:33:21,210 --> 00:33:32,700 So people who choose $1,000 have 2.5% claim rates, and the $100 751 00:33:32,700 --> 00:33:34,060 people are at 4.7%. 752 00:33:34,060 --> 00:33:36,730 So that's kind of not quite explaining things either. 753 00:33:36,730 --> 00:33:38,165 Yeah. 754 00:33:38,165 --> 00:33:40,290 AUDIENCE: I think the use of [INAUDIBLE] regression 755 00:33:40,290 --> 00:33:43,700 to control for the fact that those with lower deductibles 756 00:33:43,700 --> 00:33:45,612 may claim multiple times. 757 00:33:45,612 --> 00:33:46,320 PROFESSOR: I see. 758 00:33:46,320 --> 00:33:48,480 Yes. 759 00:33:48,480 --> 00:33:49,200 Great, yes. 760 00:33:54,670 --> 00:33:56,510 OK. 761 00:33:56,510 --> 00:33:57,760 So what do we learn from this? 762 00:33:57,760 --> 00:34:00,400 I think this is very much sort of confirming 763 00:34:00,400 --> 00:34:04,030 some of the lab evidence on relatively small-scale gambles. 764 00:34:04,030 --> 00:34:06,670 These are not like small-scale gambles of like a dollar 765 00:34:06,670 --> 00:34:08,199 or 2 or 5 or 10. 766 00:34:08,199 --> 00:34:10,989 These are about several hundreds of dollars, 767 00:34:10,989 --> 00:34:13,900 but they're not about hundreds of thousands of dollars. 768 00:34:13,900 --> 00:34:16,840 So these are relatively small relative to like people's 769 00:34:16,840 --> 00:34:17,584 lifetime wealth. 770 00:34:17,584 --> 00:34:19,459 And what we see for those kinds of choices is 771 00:34:19,459 --> 00:34:24,219 it really looks like people appear 772 00:34:24,219 --> 00:34:27,280 to be very, very risk-averse. 773 00:34:27,280 --> 00:34:29,262 This is what I was saying already previously. 774 00:34:29,262 --> 00:34:31,179 When looking at sort of reasonably small-scale 775 00:34:31,179 --> 00:34:31,780 choices-- 776 00:34:31,780 --> 00:34:34,505 and I count the Sydnor evidence as reasonably small-scale 777 00:34:34,505 --> 00:34:36,880 choices because it's not about like hundreds of thousands 778 00:34:36,880 --> 00:34:37,690 of dollars-- 779 00:34:37,690 --> 00:34:42,550 you essentially see that such choices imply 780 00:34:42,550 --> 00:34:43,900 people seem very risk-averse. 781 00:34:43,900 --> 00:34:47,080 These choices imply enormous risk aversion 782 00:34:47,080 --> 00:34:49,000 for large-scale risks. 783 00:34:49,000 --> 00:34:51,940 But people are not avoiding all sorts of large-scale risks. 784 00:34:51,940 --> 00:34:53,780 People take on lots of large-scale risk. 785 00:34:53,780 --> 00:34:56,989 So in some sense, that can't be true in some ways. 786 00:34:56,989 --> 00:34:59,080 We also find that individuals are moderately 787 00:34:59,080 --> 00:35:00,470 risk-averse at large-scale risk. 788 00:35:00,470 --> 00:35:02,740 People are taking on some risk, as I said.
789 00:35:02,740 --> 00:35:04,750 Now, if you sort of take that seriously 790 00:35:04,750 --> 00:35:08,500 and say, well, people are only moderately risk-averse for large-scale risks, that in turn 791 00:35:08,500 --> 00:35:10,870 implies that people should be nearly risk-neutral. 792 00:35:10,870 --> 00:35:12,670 They should be nearly risk-neutral 793 00:35:12,670 --> 00:35:14,633 for small-scale risk. 794 00:35:14,633 --> 00:35:16,300 So it can't be that both of these things 795 00:35:16,300 --> 00:35:18,060 are true at the same time. 796 00:35:18,060 --> 00:35:21,790 Now, in fact, there's a much older paper 797 00:35:21,790 --> 00:35:25,750 that's a very famous and seminal one, by Kahneman and Tversky. 798 00:35:25,750 --> 00:35:29,000 Kahneman got the Nobel Prize, in fact, 799 00:35:29,000 --> 00:35:32,440 for this work and similar work. 800 00:35:32,440 --> 00:35:34,210 Kahneman is a psychologist, and they 801 00:35:34,210 --> 00:35:36,100 were doing psychology experiments, 802 00:35:36,100 --> 00:35:39,080 and it just turned out that these psychology experiments 803 00:35:39,080 --> 00:35:42,610 were extremely influential in affecting how economists think 804 00:35:42,610 --> 00:35:46,330 about risk and risk preferences, and in particular, 805 00:35:46,330 --> 00:35:49,390 sort of reference-dependent preferences. 806 00:35:49,390 --> 00:35:52,708 And so what Kahneman and Tversky were doing at the time way 807 00:35:52,708 --> 00:35:54,250 before a lot of this other literature 808 00:35:54,250 --> 00:35:56,160 that I just showed you-- 809 00:35:56,160 --> 00:35:58,540 A, they showed even more sort of evidence 810 00:35:58,540 --> 00:36:01,325 against the expected utility model. 811 00:36:01,325 --> 00:36:02,950 But perhaps more importantly, they also 812 00:36:02,950 --> 00:36:04,840 proposed an alternative model in saying, 813 00:36:04,840 --> 00:36:06,370 look, here's some choices that we 814 00:36:06,370 --> 00:36:10,330 think are hard to explain and hard to rationalize 815 00:36:10,330 --> 00:36:11,930 using expected utility. 816 00:36:11,930 --> 00:36:16,510 Now here's a different model that can explain things perhaps 817 00:36:16,510 --> 00:36:19,400 better in some situations. 818 00:36:19,400 --> 00:36:21,400 So what did Kahneman and Tversky actually do? 819 00:36:21,400 --> 00:36:23,320 The experiments are actually extremely 820 00:36:23,320 --> 00:36:24,490 simple in various ways. 821 00:36:24,490 --> 00:36:26,808 They're very clever and very clean, 822 00:36:26,808 --> 00:36:27,850 but actually very simple. 823 00:36:27,850 --> 00:36:29,710 And so what do these experiments look like? 824 00:36:29,710 --> 00:36:31,245 These are survey responses. 825 00:36:31,245 --> 00:36:32,620 Like, essentially they just asked 826 00:36:32,620 --> 00:36:35,050 people about what would you do in different situations. 827 00:36:35,050 --> 00:36:36,820 This is not what economists were doing at the time. 828 00:36:36,820 --> 00:36:38,410 Economists at the time were saying 829 00:36:38,410 --> 00:36:40,450 like, revealed preferences are important. 830 00:36:40,450 --> 00:36:44,032 I need to make you do actual choices. 831 00:36:44,032 --> 00:36:45,490 Whatever you say in surveys doesn't 832 00:36:45,490 --> 00:36:48,922 matter because who knows whether you actually mean what you say.
833 00:36:48,922 --> 00:36:51,130 It turns out that the survey responses are actually-- 834 00:36:51,130 --> 00:36:53,290 these are hypothetical stakes, but it turns out 835 00:36:53,290 --> 00:36:54,970 that if you do this with actual stakes, 836 00:36:54,970 --> 00:36:57,940 you find very similar results. 837 00:36:57,940 --> 00:37:01,630 And the experiments were as follows. 838 00:37:01,630 --> 00:37:05,230 There were things like questions like, which of the following 839 00:37:05,230 --> 00:37:08,740 would you prefer, kind of like I showed you in the first class 840 00:37:08,740 --> 00:37:11,962 at the end of the survey that you did. 841 00:37:11,962 --> 00:37:13,420 Would you prefer option A, which is 842 00:37:13,420 --> 00:37:17,800 a 50% chance of winning $1,000 or a 50% chance of winning nothing, 843 00:37:17,800 --> 00:37:22,550 versus option B, which is like $450 for sure? 844 00:37:22,550 --> 00:37:25,130 And so they did a series of these types of questions. 845 00:37:25,130 --> 00:37:29,340 Now, one of the things that then they 846 00:37:29,340 --> 00:37:32,220 showed is one key prediction of expected utility 847 00:37:32,220 --> 00:37:34,830 is that, as I said before, people only 848 00:37:34,830 --> 00:37:39,592 care about final outcomes and their associated probabilities. 849 00:37:39,592 --> 00:37:41,010 Kahneman and Tversky show a bunch 850 00:37:41,010 --> 00:37:44,013 of different striking contradictions of that. 851 00:37:44,013 --> 00:37:46,680 I want you to focus on the first row-- problem three and problem 852 00:37:46,680 --> 00:37:47,700 three prime. 853 00:37:47,700 --> 00:37:51,930 And look at that and tell me what about that 854 00:37:51,930 --> 00:37:55,745 example is contradicting expected utility. 855 00:37:55,745 --> 00:37:57,120 I don't know if you can see this. 856 00:38:02,770 --> 00:38:05,010 So let me just read this for you if it's hard to see, 857 00:38:05,010 --> 00:38:07,050 but think of them for a second. 858 00:38:07,050 --> 00:38:12,780 Problem three says, "Would you prefer 859 00:38:12,780 --> 00:38:18,435 an 80% chance of $4,000 over $3,000 for sure?" 860 00:38:18,435 --> 00:38:19,560 What do you see below then? 861 00:38:19,560 --> 00:38:21,210 These are always 100 people. 862 00:38:21,210 --> 00:38:23,010 You see the number of people who preferred 863 00:38:23,010 --> 00:38:24,580 one option over the other. 864 00:38:24,580 --> 00:38:27,810 So 80 people preferred the 3,000 option. 865 00:38:27,810 --> 00:38:32,560 20 people said I'd rather have the 80% chance of $4,000. 866 00:38:32,560 --> 00:38:33,930 That's problem three. 867 00:38:33,930 --> 00:38:37,110 Problem three prime is about what 868 00:38:37,110 --> 00:38:39,090 they call negative prospects. 869 00:38:39,090 --> 00:38:43,920 Would you prefer an 80% chance of losing $4,000, 870 00:38:43,920 --> 00:38:47,970 or for sure losing $3,000? 871 00:38:47,970 --> 00:38:55,650 And there you see 92% choose the first option and 8% 872 00:38:55,650 --> 00:39:01,790 choose the $3,000 loss for sure. 873 00:39:01,790 --> 00:39:05,840 Let's start with the left example. 874 00:39:05,840 --> 00:39:08,150 What did we learn from the left example? 875 00:39:08,150 --> 00:39:10,610 Are people risk-averse, risk-neutral, risk-loving, 876 00:39:10,610 --> 00:39:13,963 or what have we learned from that? 877 00:39:13,963 --> 00:39:14,880 AUDIENCE: Risk-averse. 878 00:39:14,880 --> 00:39:17,610 PROFESSOR: Risk-averse, and why is that?
879 00:39:17,610 --> 00:39:22,306 AUDIENCE: If you calculate the expected monetary value, 880 00:39:22,306 --> 00:39:25,470 I guess you would expect to [INAUDIBLE].. 881 00:39:29,056 --> 00:39:29,850 PROFESSOR: Right. 882 00:39:29,850 --> 00:39:31,860 So we said before somebody is risk neutral 883 00:39:31,860 --> 00:39:34,770 if the person is indifferent between two options 884 00:39:34,770 --> 00:39:37,830 when they have the same expected monetary value. 885 00:39:37,830 --> 00:39:40,530 And so in this case, an 80% chance of $4,000 886 00:39:40,530 --> 00:39:44,477 is $3,200 in expectation, right? 887 00:39:44,477 --> 00:39:46,060 But there's a bunch of people who say, 888 00:39:46,060 --> 00:39:51,120 I'd rather get the $3,000, which is less than 3,200, for sure. 889 00:39:51,120 --> 00:39:52,920 Which means essentially they take a surer-- 890 00:39:52,920 --> 00:39:55,950 like an option that's for sure they get. 891 00:39:55,950 --> 00:39:57,690 They prefer that over the uncertainty 892 00:39:57,690 --> 00:40:01,230 of getting 0 versus 4,000, which on average gets them more, 893 00:40:01,230 --> 00:40:03,210 3,200. 894 00:40:03,210 --> 00:40:05,580 Expected utility would say if you choose that option, 895 00:40:05,580 --> 00:40:09,690 it must be that you are risk-averse. 896 00:40:09,690 --> 00:40:12,510 Now, not everybody seems risk-averse. 897 00:40:12,510 --> 00:40:15,120 There's 80 people out of 100 choose that, 898 00:40:15,120 --> 00:40:20,520 or I think it's 80% choose that and the remaining ones 899 00:40:20,520 --> 00:40:21,870 choose the other option. 900 00:40:21,870 --> 00:40:25,170 These other people we don't know much about. 901 00:40:25,170 --> 00:40:26,900 They're not very risk-averse, but they 902 00:40:26,900 --> 00:40:28,650 could be also risk-averse just like it was 903 00:40:28,650 --> 00:40:30,310 revealed in their choice there. 904 00:40:30,310 --> 00:40:30,810 OK. 905 00:40:30,810 --> 00:40:32,352 So from the left side, we say we know 906 00:40:32,352 --> 00:40:36,485 at least 80 people, or 80% of the sample, is risk-averse. 907 00:40:36,485 --> 00:40:37,860 Now let's look at the right side. 908 00:40:37,860 --> 00:40:39,277 What do you see on the right side? 909 00:40:53,800 --> 00:40:54,567 Yes. 910 00:40:54,567 --> 00:40:56,650 AUDIENCE: Well, they seem to be risk [INAUDIBLE],, 911 00:40:56,650 --> 00:41:03,570 because unexpected risk minus 3,200 versus minus 3,000, 912 00:41:03,570 --> 00:41:07,585 but they prefer the minus 3,200 on expectation. 913 00:41:07,585 --> 00:41:09,260 PROFESSOR: Right. 914 00:41:09,260 --> 00:41:13,740 So if you look at the left option, it's minus 3,200. 915 00:41:13,740 --> 00:41:16,240 So the left option has more risk, 916 00:41:16,240 --> 00:41:18,958 and it also has a lower expected monetary value. 917 00:41:18,958 --> 00:41:20,750 As you said in the expected monetary value, 918 00:41:20,750 --> 00:41:23,920 it's just a flip of the problem three. 919 00:41:23,920 --> 00:41:29,440 It's minus 3,200, and the other one is minus 3,000. 920 00:41:29,440 --> 00:41:31,540 So if you're in a risk-neutral, surely you 921 00:41:31,540 --> 00:41:33,640 would choose the minus 3,000. 922 00:41:33,640 --> 00:41:36,860 If instead you choose the minus 3,200, well, 923 00:41:36,860 --> 00:41:38,860 it must be that you really appreciate that there 924 00:41:38,860 --> 00:41:40,280 is additional risk there. 925 00:41:40,280 --> 00:41:43,270 So it looks like you're risk-loving. 926 00:41:43,270 --> 00:41:43,990 OK. 
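To make the arithmetic just discussed concrete, here is the expected monetary value of each option, writing A for the risky option and B for the sure option in each problem (the dollar amounts and probabilities are the ones stated above):

\[
\text{Problem 3: } \mathbb{E}[A] = 0.80 \times 4000 + 0.20 \times 0 = 3200 > 3000 = \mathbb{E}[B]
\]
\[
\text{Problem 3 prime: } \mathbb{E}[A] = 0.80 \times (-4000) + 0.20 \times 0 = -3200 < -3000 = \mathbb{E}[B]
\]

So choosing the sure $3,000 in problem 3 means giving up expected value to avoid risk, which reveals risk aversion, while choosing the 80% chance of losing $4,000 in problem 3 prime means accepting both more risk and a lower expected value, which is what the lecture describes as risk-loving behavior.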
927 00:41:43,990 --> 00:41:47,170 And now we find here that 92% of the sample 928 00:41:47,170 --> 00:41:50,800 choose the first option, but at the same time, 929 00:41:50,800 --> 00:41:55,720 with the same people, when they're offered or given 930 00:41:55,720 --> 00:41:58,480 the choice between problem three, 80% of people 931 00:41:58,480 --> 00:41:59,760 choose the other option. 932 00:41:59,760 --> 00:42:01,510 So there are a bunch of people essentially 933 00:42:01,510 --> 00:42:05,350 that choose the $3,000 for sure in problem three. 934 00:42:05,350 --> 00:42:07,600 At the same time, they choose the minus 4,000 935 00:42:07,600 --> 00:42:12,050 with an 80% chance when given problem three prime. 936 00:42:12,050 --> 00:42:14,350 So that means essentially we have two choices here. 937 00:42:14,350 --> 00:42:18,160 One choice says people are risk-averse. 938 00:42:18,160 --> 00:42:21,810 The other choice says people are risk-loving. 939 00:42:21,810 --> 00:42:25,840 In expected utility terms in our world, this cannot happen. 940 00:42:25,840 --> 00:42:28,560 The reason being that we have one parameter, which is gamma. 941 00:42:28,560 --> 00:42:31,260 Gamma tells us how risk-averse or how risk-loving 942 00:42:31,260 --> 00:42:32,970 or the risk-neutral alike. 943 00:42:32,970 --> 00:42:34,800 That parameter tells us everything 944 00:42:34,800 --> 00:42:37,140 about your risk preferences for all choices 945 00:42:37,140 --> 00:42:38,340 that I'm giving you. 946 00:42:38,340 --> 00:42:40,800 It cannot be that you're simultaneously risk-loving 947 00:42:40,800 --> 00:42:41,970 and risk-averse. 948 00:42:41,970 --> 00:42:45,750 So this evidence essentially sort of rejecting the expected 949 00:42:45,750 --> 00:42:49,195 utility model or cannot explain this behavior. 950 00:42:49,195 --> 00:42:50,070 Does this make sense? 951 00:42:52,978 --> 00:42:54,520 I sort of wrote this down here, but I 952 00:42:54,520 --> 00:42:57,370 think I said everything that's to be said. 953 00:43:00,730 --> 00:43:01,720 Any questions on this? 954 00:43:08,070 --> 00:43:10,080 So where we're going to go is essentially A, 955 00:43:10,080 --> 00:43:12,330 these people seem to be behaving differently 956 00:43:12,330 --> 00:43:15,630 for gains versus losses and, in addition 957 00:43:15,630 --> 00:43:18,630 to that, so not only do people seem to dislike losses-- 958 00:43:18,630 --> 00:43:20,530 I'll show you some evidence of that-- 959 00:43:20,530 --> 00:43:23,550 but in addition, people seem to be risk-loving for losses 960 00:43:23,550 --> 00:43:25,193 and risk-averse for gains. 961 00:43:25,193 --> 00:43:27,610 And that's kind of what Kahneman and Tversky are claiming. 962 00:43:27,610 --> 00:43:29,130 And once you make that claim, you'll 963 00:43:29,130 --> 00:43:33,390 be able to explain these patterns in the data. 964 00:43:33,390 --> 00:43:37,080 Now, a second thing that they show is the following. 965 00:43:37,080 --> 00:43:38,970 They show in problem 11 and problem 12. 966 00:43:38,970 --> 00:43:43,480 I'll let you read it for yourself. 967 00:43:43,480 --> 00:43:45,780 And essentially, you see that 84%-- 968 00:43:45,780 --> 00:43:50,885 so the choice here is in addition to whatever you have, 969 00:43:50,885 --> 00:43:52,260 for sure you're going to be given 970 00:43:52,260 --> 00:43:55,230 $1,000, or shekels I guess. 971 00:43:55,230 --> 00:43:59,040 You are now asked to choose between 1,000 with a 50% chance 972 00:43:59,040 --> 00:44:01,410 and 500 for sure. 
973 00:44:01,410 --> 00:44:01,980 OK? 974 00:44:01,980 --> 00:44:04,290 84% say option B. 975 00:44:04,290 --> 00:44:06,690 Problem 12 is, in addition to whatever you own, 976 00:44:06,690 --> 00:44:09,480 you have been given 2,000. 977 00:44:09,480 --> 00:44:11,670 You're now asked to choose between option 978 00:44:11,670 --> 00:44:14,490 C, which is minus 1,000 with 50% chance, 979 00:44:14,490 --> 00:44:17,928 and option D, which is minus 500. 980 00:44:17,928 --> 00:44:25,670 69% of people here say they choose option C. OK? 981 00:44:25,670 --> 00:44:27,200 So what's the problem with this? 982 00:44:32,320 --> 00:44:33,130 Yes. 983 00:44:33,130 --> 00:44:35,470 AUDIENCE: I mean, at the end of it, 984 00:44:35,470 --> 00:44:38,056 you would say that problem 11 option 985 00:44:38,056 --> 00:44:41,210 B, you end up with 1,500. 986 00:44:41,210 --> 00:44:46,200 Whereas problem 12 option D, you still end up with 1,500. 987 00:44:46,200 --> 00:44:49,210 People were inconsistent with these decisions 988 00:44:49,210 --> 00:44:51,068 based on how the question's framed. 989 00:44:51,068 --> 00:44:51,860 PROFESSOR: Exactly. 990 00:44:51,860 --> 00:44:55,910 So framing matters, or reference points matter. 991 00:44:55,910 --> 00:44:58,330 And so what assumption of the expected utility model 992 00:44:58,330 --> 00:45:01,805 is that rejecting? 993 00:45:01,805 --> 00:45:03,588 AUDIENCE: [INAUDIBLE] 994 00:45:03,588 --> 00:45:04,380 PROFESSOR: Exactly. 995 00:45:04,380 --> 00:45:10,640 So we postulated that only final outcomes matter. 996 00:45:10,640 --> 00:45:13,280 Here, if you write this down, it turns out 997 00:45:13,280 --> 00:45:15,770 option A and option C are exactly the same in terms 998 00:45:15,770 --> 00:45:17,690 of final outcomes. 999 00:45:17,690 --> 00:45:20,480 Turns out option B and D are also exactly the same. 1000 00:45:20,480 --> 00:45:23,051 I'll let you look at this for a second. 1001 00:45:23,051 --> 00:45:26,150 So A and C are the same in terms of the final outcomes. 1002 00:45:26,150 --> 00:45:29,360 B and D are also exactly the same 1003 00:45:29,360 --> 00:45:31,380 in terms of final outcomes. 1004 00:45:31,380 --> 00:45:34,370 So that means essentially when you compare A versus B and C 1005 00:45:34,370 --> 00:45:37,400 versus D, if you only cared about final outcomes, 1006 00:45:37,400 --> 00:45:41,180 you cannot choose different things here in this lottery. 1007 00:45:41,180 --> 00:45:44,150 It cannot be that your utility is defined over just final 1008 00:45:44,150 --> 00:45:46,610 outcomes, because you just told me you like two different 1009 00:45:46,610 --> 00:45:49,980 things for the exact same thing. 1010 00:45:49,980 --> 00:45:54,000 And there's 84% who choose option B while 69% choose 1011 00:45:54,000 --> 00:45:57,000 option C. So there's a bunch of people who essentially switch. 1012 00:46:00,107 --> 00:46:00,690 Is that clear? 1013 00:46:03,400 --> 00:46:04,090 OK. 1014 00:46:04,090 --> 00:46:06,430 So now, what is that? 1015 00:46:06,430 --> 00:46:07,670 What's going on here? 1016 00:46:07,670 --> 00:46:09,490 Well, where we're going to go is like well, 1017 00:46:09,490 --> 00:46:11,733 it seems like, exactly as you say, framing matters. 1018 00:46:11,733 --> 00:46:13,150 How you frame the problem matters, 1019 00:46:13,150 --> 00:46:14,270 but why does it matter? 1020 00:46:14,270 --> 00:46:16,840 Well, it's because we're setting a reference point.
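To verify the final-outcome equivalence described above, here is a minimal worked check using the stated amounts (an initial 1,000 in problem 11 and an initial 2,000 in problem 12):

\[
\text{Problem 11: } A = 1000 + \{1000 \text{ or } 0, \text{ each with prob. } 0.5\}, \qquad B = 1000 + 500 = 1500
\]
\[
\text{Problem 12: } C = 2000 + \{-1000 \text{ or } 0, \text{ each with prob. } 0.5\}, \qquad D = 2000 - 500 = 1500
\]

So A and C both end at 2,000 or 1,000 with equal probability, and B and D both end at 1,500 for sure. Anyone who cares only about final outcomes would therefore have to rank A versus B exactly the same way as C versus D.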
1021 00:46:16,840 --> 00:46:19,180 In the first place, in the first example, 1022 00:46:19,180 --> 00:46:22,420 the reference point is like 1,000 and there is like a gain 1023 00:46:22,420 --> 00:46:23,720 relative to 1,000. 1024 00:46:23,720 --> 00:46:27,340 We look at like people evaluate this gamble as like gains. 1025 00:46:27,340 --> 00:46:29,350 In the second example, the reference point 1026 00:46:29,350 --> 00:46:32,080 is set to like 2,000, and now people essentially 1027 00:46:32,080 --> 00:46:35,800 evaluate this gamble as losses relative to the 2,000. 1028 00:46:35,800 --> 00:46:37,390 And of depending on how people think 1029 00:46:37,390 --> 00:46:39,860 about gains versus losses, it turns out 1030 00:46:39,860 --> 00:46:42,625 that people make different choices. 1031 00:46:42,625 --> 00:46:44,920 I'll be more formal about this, but that's essentially 1032 00:46:44,920 --> 00:46:47,140 the idea of, like in Kahneman and Tversky, 1033 00:46:47,140 --> 00:46:49,730 their prospect theory they proposed, 1034 00:46:49,730 --> 00:46:55,600 why can explain these behaviors as opposed to expected utility? 1035 00:46:55,600 --> 00:46:56,170 OK. 1036 00:46:56,170 --> 00:46:58,570 So now what are sort of the most important points? 1037 00:46:58,570 --> 00:46:59,750 There's three of them. 1038 00:46:59,750 --> 00:47:01,333 I'm going to show you two and then get 1039 00:47:01,333 --> 00:47:03,910 to the third one at the end. 1040 00:47:03,910 --> 00:47:07,570 So one is like what matters a lot for people's behaviors 1041 00:47:07,570 --> 00:47:09,920 is changes rather than levels. 1042 00:47:09,920 --> 00:47:13,450 So what they argue is utility is not defined by people's final 1043 00:47:13,450 --> 00:47:15,910 status-- how much they end up with-- 1044 00:47:15,910 --> 00:47:19,660 but as opposed to like how much changes 1045 00:47:19,660 --> 00:47:21,820 relative to some reference point. 1046 00:47:21,820 --> 00:47:24,650 That could be changes relative to like a status quo. 1047 00:47:24,650 --> 00:47:27,400 How much do I have right now and how much 1048 00:47:27,400 --> 00:47:29,800 does it change positively or negatively? 1049 00:47:29,800 --> 00:47:32,050 Or, it could be relative to some expectation 1050 00:47:32,050 --> 00:47:33,490 or some other reference point. 1051 00:47:33,490 --> 00:47:35,350 What I showed you here in this example 1052 00:47:35,350 --> 00:47:38,140 is like this is not the status quo that people evaluate 1053 00:47:38,140 --> 00:47:41,440 their utility against because the outcome is always the same. 1054 00:47:41,440 --> 00:47:43,930 The reference point here is kind of like the expectation 1055 00:47:43,930 --> 00:47:46,450 I sent you and say, OK, here's 1,000. 1056 00:47:46,450 --> 00:47:48,490 And now what seems to be happening 1057 00:47:48,490 --> 00:47:51,820 is that people's expectations, in fact, become 1,000. 1058 00:47:51,820 --> 00:47:53,410 And then relative to that expectation, 1059 00:47:53,410 --> 00:47:57,220 people are going to evaluate gains and losses. 1060 00:47:57,220 --> 00:48:00,250 And similarly, if I give you-- like, say you get 2,000, 1061 00:48:00,250 --> 00:48:02,560 again, people will evaluate the utility 1062 00:48:02,560 --> 00:48:08,080 or the outcomes as gains and losses relative to that. 1063 00:48:08,080 --> 00:48:12,400 Second, there seems to be loss aversion. 1064 00:48:12,400 --> 00:48:14,500 Losses loom larger than gains. 
1065 00:48:14,500 --> 00:48:16,870 That is to say people dislike a loss that's 1066 00:48:16,870 --> 00:48:21,850 as large as a gain by a lot more than they like the gain. 1067 00:48:21,850 --> 00:48:22,370 OK? 1068 00:48:22,370 --> 00:48:26,470 So it's like if you say you lose 100 versus you gain 100, 1069 00:48:26,470 --> 00:48:28,840 people really dislike losing 100 a lot more 1070 00:48:28,840 --> 00:48:32,350 than they like gaining 100. 1071 00:48:32,350 --> 00:48:34,160 And once you sort of postulate that, 1072 00:48:34,160 --> 00:48:37,000 well then, that can explain why you reject a bunch of gambles. 1073 00:48:37,000 --> 00:48:41,260 Because if say if the gamble is like minus 10 with 50% chance 1074 00:48:41,260 --> 00:48:44,410 and plus 11 with 50% chance, well, 1075 00:48:44,410 --> 00:48:46,810 if you dislike the losses a lot more, if you dislike 1076 00:48:46,810 --> 00:48:51,430 losing $10 a lot more than gaining $10 or $11, well then, 1077 00:48:51,430 --> 00:48:54,130 you're going to reject this gamble 1078 00:48:54,130 --> 00:48:59,440 even if the expected value is positive. 1079 00:48:59,440 --> 00:49:00,192 OK? 1080 00:49:00,192 --> 00:49:02,650 We'll write down sort of like a utility function next time. 1081 00:49:02,650 --> 00:49:04,750 That's sort of does this more formally, 1082 00:49:04,750 --> 00:49:07,270 but essentially those are the two key areas. 1083 00:49:07,270 --> 00:49:10,370 There's a third one, which I'm going to get to in a second. 1084 00:49:10,370 --> 00:49:13,300 Now, the key part here is like there is essentially 1085 00:49:13,300 --> 00:49:14,260 reference dependence. 1086 00:49:14,260 --> 00:49:18,880 People evaluate their outcomes relative to some reference 1087 00:49:18,880 --> 00:49:20,038 point. 1088 00:49:20,038 --> 00:49:21,580 What kinds of examples do we actually 1089 00:49:21,580 --> 00:49:22,540 have a reference point? 1090 00:49:22,540 --> 00:49:23,920 When you look in the world-- and again, 1091 00:49:23,920 --> 00:49:26,110 that's kind of very much what I'd like you to do-- 1092 00:49:26,110 --> 00:49:28,510 when you think about things that you see in the world, 1093 00:49:28,510 --> 00:49:31,270 what kinds of examples do we have that people care 1094 00:49:31,270 --> 00:49:34,270 about reference points as opposed to final outcomes? 1095 00:49:46,140 --> 00:49:47,354 Yeah. 1096 00:49:47,354 --> 00:49:51,448 AUDIENCE: My first thought was, you know how we have to pay 1097 00:49:51,448 --> 00:49:54,900 $0.10 for shopping bags? 1098 00:49:54,900 --> 00:49:57,226 Where I was from a couple years ago, 1099 00:49:57,226 --> 00:50:00,653 we did it so that if you used a reusable bag, 1100 00:50:00,653 --> 00:50:02,006 you got $0.05 back. 1101 00:50:02,006 --> 00:50:05,470 That was like not effective. 1102 00:50:05,470 --> 00:50:09,970 [INAUDIBLE] 1103 00:50:09,970 --> 00:50:11,350 PROFESSOR: Right. 1104 00:50:11,350 --> 00:50:13,540 Yeah. 1105 00:50:13,540 --> 00:50:14,200 So exactly. 1106 00:50:14,200 --> 00:50:16,420 When you think about pricing of different options, 1107 00:50:16,420 --> 00:50:17,830 it matters a lot to people. 
1108 00:50:17,830 --> 00:50:20,440 It seems that when you add $0.05, 1109 00:50:20,440 --> 00:50:23,410 were to subtract $0.05 or $0.10 or whatever, 1110 00:50:23,410 --> 00:50:26,530 the same changes about like it's just $0.05 at the end 1111 00:50:26,530 --> 00:50:29,392 of the day that you have to pay more or less, 1112 00:50:29,392 --> 00:50:30,850 depending on whether you use a bag, 1113 00:50:30,850 --> 00:50:33,400 your choice of using a bag versus not should not be 1114 00:50:33,400 --> 00:50:37,990 affected by whether it's framed or put as like a loss versus 1115 00:50:37,990 --> 00:50:42,045 a gain or like a whatever rebate or whatever that you might get. 1116 00:50:42,045 --> 00:50:44,170 And I think that's true for many different-- that's 1117 00:50:44,170 --> 00:50:48,900 true for shopping plastic bags or any other shopping bags, 1118 00:50:48,900 --> 00:50:50,650 but it's also true for many other options. 1119 00:50:50,650 --> 00:50:53,050 It depends a lot whether you sort of add on that option 1120 00:50:53,050 --> 00:50:54,730 or whether you get sort of essentially 1121 00:50:54,730 --> 00:50:57,025 subtracted the option as some discount. 1122 00:50:59,660 --> 00:51:01,535 Yes. 1123 00:51:01,535 --> 00:51:03,510 AUDIENCE: [INAUDIBLE] 1124 00:51:12,992 --> 00:51:13,700 PROFESSOR: Right. 1125 00:51:13,700 --> 00:51:15,710 I think some of it is not exactly-- 1126 00:51:15,710 --> 00:51:18,020 I think there's some legal constraints 1127 00:51:18,020 --> 00:51:20,565 to that specific behavior because you're not 1128 00:51:20,565 --> 00:51:22,940 supposed to sort of trick people, but surely some of that 1129 00:51:22,940 --> 00:51:24,530 is going on. 1130 00:51:24,530 --> 00:51:25,770 People love discounts. 1131 00:51:25,770 --> 00:51:27,950 Like, they love making good deals. 1132 00:51:27,950 --> 00:51:29,990 They love sort of getting things cheaper 1133 00:51:29,990 --> 00:51:32,342 in various ways compared to some reference point. 1134 00:51:32,342 --> 00:51:33,800 And the reference point, since it's 1135 00:51:33,800 --> 00:51:35,810 hard to tell what actually should the price be, 1136 00:51:35,810 --> 00:51:37,280 is often like the previous price. 1137 00:51:37,280 --> 00:51:39,500 It's something that seems really expensive 1138 00:51:39,500 --> 00:51:42,590 and now they're getting it for less money. 1139 00:51:42,590 --> 00:51:44,532 And so if you get it for half the price, 1140 00:51:44,532 --> 00:51:45,990 even if it's still quite expensive, 1141 00:51:45,990 --> 00:51:50,030 you think like you saved half of the price somehow. 1142 00:51:50,030 --> 00:51:51,530 AUDIENCE: So how you feel about what 1143 00:51:51,530 --> 00:51:54,482 you have may depend on what you see your neighbor having. 1144 00:51:54,482 --> 00:51:55,190 PROFESSOR: Right. 1145 00:51:55,190 --> 00:51:56,810 So reference points could not just 1146 00:51:56,810 --> 00:52:00,980 be prices or sort of things that affect yourself 1147 00:52:00,980 --> 00:52:04,100 in certain ways, and price is the key example overall. 1148 00:52:04,100 --> 00:52:06,440 The reference point could just be your environment, 1149 00:52:06,440 --> 00:52:07,520 your social environment. 1150 00:52:07,520 --> 00:52:09,228 We kind of talk about this to some degree 1151 00:52:09,228 --> 00:52:11,120 also when we talk about social preferences. 
1152 00:52:11,120 --> 00:52:12,920 People care a lot about what others do, 1153 00:52:12,920 --> 00:52:14,628 and their reference point might very much 1154 00:52:14,628 --> 00:52:16,490 be formed by what their neighbors do. 1155 00:52:16,490 --> 00:52:17,760 Like, how big is their house? 1156 00:52:17,760 --> 00:52:19,430 How big is their car? 1157 00:52:19,430 --> 00:52:21,065 Do they have a swimming pool or not? 1158 00:52:21,065 --> 00:52:22,940 It could be also like your neighbor's-- like, 1159 00:52:22,940 --> 00:52:24,107 how good are they at school? 1160 00:52:24,107 --> 00:52:26,000 Are they smart, how good looking? 1161 00:52:26,000 --> 00:52:27,220 And so on and so forth. 1162 00:52:27,220 --> 00:52:29,030 So when people evaluate certain outcomes, 1163 00:52:29,030 --> 00:52:32,390 they often don't evaluate the levels, but rather 1164 00:52:32,390 --> 00:52:34,940 kind of how much another person makes 1165 00:52:34,940 --> 00:52:37,160 or whatever other people's outcomes are. 1166 00:52:37,160 --> 00:52:38,750 And part of the reason might be that 1167 00:52:38,750 --> 00:52:40,857 like evaluating absolute outcomes is really hard. 1168 00:52:40,857 --> 00:52:42,440 It's very hard to actually understand. 1169 00:52:42,440 --> 00:52:45,920 Is $10,000, $50,000, $100,000, a million-- 1170 00:52:45,920 --> 00:52:49,850 how much money do you actually need to be happy 1171 00:52:49,850 --> 00:52:53,330 or that you like, or what kinds of outcomes 1172 00:52:53,330 --> 00:52:55,400 are you excited about and so on? 1173 00:52:55,400 --> 00:52:57,350 But it's much easier to say you have something 1174 00:52:57,350 --> 00:52:59,270 and another person does not or the other way around. 1175 00:52:59,270 --> 00:53:01,130 It's much easier to compare with others. 1176 00:53:01,130 --> 00:53:04,040 It's very hard to actually evaluate absolute outcomes, 1177 00:53:04,040 --> 00:53:07,890 because who knows how you should feel about that. 1178 00:53:07,890 --> 00:53:09,440 Much easier to say I have this, they 1179 00:53:09,440 --> 00:53:12,600 don't, or the other way around. 1180 00:53:12,600 --> 00:53:13,860 Yeah. 1181 00:53:13,860 --> 00:53:15,660 AUDIENCE: People sometimes don't want 1182 00:53:15,660 --> 00:53:20,730 to let go of fallen stock they own 1183 00:53:20,730 --> 00:53:27,390 because they don't want to see that they sold at a good price. 1184 00:53:27,390 --> 00:53:28,450 PROFESSOR: Right. 1185 00:53:28,450 --> 00:53:28,950 Exactly. 1186 00:53:28,950 --> 00:53:31,950 That's called what people call the disposition effect. 1187 00:53:31,950 --> 00:53:34,200 There's actually pretty large literature 1188 00:53:34,200 --> 00:53:35,725 studying this and lots of debates 1189 00:53:35,725 --> 00:53:37,350 on whether it's really going on or not, 1190 00:53:37,350 --> 00:53:40,420 how important it is, and so on and so forth. 1191 00:53:40,420 --> 00:53:42,000 But sort of one very basic stylized 1192 00:53:42,000 --> 00:53:45,060 fact is that when people are looking at stocks that they 1193 00:53:45,060 --> 00:53:48,420 might want to sell, they are much more likely to sell 1194 00:53:48,420 --> 00:53:50,440 winners compared to losers. 1195 00:53:50,440 --> 00:53:54,070 Now, why is that bad, or should you not do that? 1196 00:53:57,120 --> 00:54:04,957 AUDIENCE: Is that because the winners [INAUDIBLE] 1197 00:54:04,957 --> 00:54:06,540 PROFESSOR: There's a bit of a question 1198 00:54:06,540 --> 00:54:09,300 of is there like momentum or reversal or the like. 
1199 00:54:09,300 --> 00:54:11,610 But if you believe in efficient markets, which 1200 00:54:11,610 --> 00:54:14,670 many economists do, or at least some 1201 00:54:14,670 --> 00:54:18,600 of them who are in Chicago-- 1202 00:54:18,600 --> 00:54:20,280 if you believe in efficient markets, 1203 00:54:20,280 --> 00:54:24,630 then the current stock price should essentially 1204 00:54:24,630 --> 00:54:26,700 not be-- or like the previous losses 1205 00:54:26,700 --> 00:54:28,920 and so on should not be informative of what's 1206 00:54:28,920 --> 00:54:30,220 going on in the future. 1207 00:54:30,220 --> 00:54:32,730 So in expectations, your losers and your winners, 1208 00:54:32,730 --> 00:54:35,130 if there's any information about the future, 1209 00:54:35,130 --> 00:54:37,320 about the future valuation, that should already 1210 00:54:37,320 --> 00:54:39,065 be incorporated in the price. 1211 00:54:39,065 --> 00:54:40,440 So if you look at two stocks, one 1212 00:54:40,440 --> 00:54:42,660 lost some money and one gained some money, 1213 00:54:42,660 --> 00:54:45,730 they're equally likely to make you money. 1214 00:54:45,730 --> 00:54:48,210 And so if anything, what they show in these papers 1215 00:54:48,210 --> 00:54:49,440 is like there's momentum. 1216 00:54:49,440 --> 00:54:51,732 This is what you're saying is that the winners actually 1217 00:54:51,732 --> 00:54:55,020 are, in fact, going to be more likely to try to increase value 1218 00:54:55,020 --> 00:54:56,250 compared to the losers. 1219 00:54:56,250 --> 00:55:00,720 But people tend to essentially want to sort of realize gains. 1220 00:55:00,720 --> 00:55:02,790 People seem to be happy about making gains. 1221 00:55:02,790 --> 00:55:06,960 They seem to be very reluctant to sell the losers, 1222 00:55:06,960 --> 00:55:09,900 and there's some questions on how costly is that actually, 1223 00:55:09,900 --> 00:55:13,022 but that's a very robust pattern in the data. 1224 00:55:13,022 --> 00:55:13,980 The same is also true-- 1225 00:55:13,980 --> 00:55:16,050 I'm going to show you on Monday, the same 1226 00:55:16,050 --> 00:55:17,520 is also true for houses. 1227 00:55:17,520 --> 00:55:20,970 People are much less likely to sell their house 1228 00:55:20,970 --> 00:55:24,420 when it has lost value compared to when it gained value. 1229 00:55:24,420 --> 00:55:27,120 I'm controlling for a bunch of things. 1230 00:55:27,120 --> 00:55:28,600 Yeah. 1231 00:55:28,600 --> 00:55:30,839 AUDIENCE: [INAUDIBLE] 1232 00:55:38,118 --> 00:55:39,910 PROFESSOR: Yeah, so that's a little tricky. 1233 00:55:39,910 --> 00:55:42,170 There might be other things going on, but exactly. 1234 00:55:42,170 --> 00:55:45,500 It could just be that like if you don't go to a movie 1235 00:55:45,500 --> 00:55:47,390 when you have bought a ticket yourself, 1236 00:55:47,390 --> 00:55:50,300 it's sort of perceived as a loss and you really don't like that. 1237 00:55:50,300 --> 00:55:51,800 It's a little complicated to think 1238 00:55:51,800 --> 00:55:53,883 about this in simple terms, because in some sense, 1239 00:55:53,883 --> 00:55:55,697 the loss is still there. 1240 00:55:55,697 --> 00:55:56,780 But anyway, yeah, exactly. 1241 00:55:56,780 --> 00:55:57,990 It's like you lost money. 1242 00:55:57,990 --> 00:56:00,230 It's almost like as if you lost the movie ticket 1243 00:56:00,230 --> 00:56:01,965 and didn't get some value from it. 
1244 00:56:01,965 --> 00:56:04,340 There's a bit of a question whether you sort of integrate 1245 00:56:04,340 --> 00:56:06,233 these two things or not because people think 1246 00:56:06,233 --> 00:56:08,150 about monetary terms versus other things often 1247 00:56:08,150 --> 00:56:10,400 in separation, but I think some of that is exactly. 1248 00:56:10,400 --> 00:56:13,460 People feel like they have a loss if they don't take 1249 00:56:13,460 --> 00:56:14,877 advantage of the movie tickets. 1250 00:56:14,877 --> 00:56:17,210 So I'm going to show you some examples that are actually 1251 00:56:17,210 --> 00:56:19,610 much more basic in some ways. 1252 00:56:19,610 --> 00:56:22,360 When you think about like visual illusions, 1253 00:56:22,360 --> 00:56:24,860 this is what's called like the size contrast illusion. 1254 00:56:24,860 --> 00:56:26,030 So one of those things is like when 1255 00:56:26,030 --> 00:56:27,530 you look at circles or things that's 1256 00:56:27,530 --> 00:56:30,257 supposed to look the same, and in fact are the same size, 1257 00:56:30,257 --> 00:56:31,340 they look quite different. 1258 00:56:31,340 --> 00:56:33,263 Depending on what you contrast things with, 1259 00:56:33,263 --> 00:56:34,430 things look quite different. 1260 00:56:34,430 --> 00:56:36,770 For example, if you look at these circles in the middle, 1261 00:56:36,770 --> 00:56:38,540 they're in fact exactly the same size, 1262 00:56:38,540 --> 00:56:40,800 but in fact, they don't really look like that. 1263 00:56:40,800 --> 00:56:42,290 Similarly, if you look at these two circles, 1264 00:56:42,290 --> 00:56:43,290 it's perhaps more stark. 1265 00:56:43,290 --> 00:56:45,560 These circles are, in fact, exactly the same size 1266 00:56:45,560 --> 00:56:46,530 that are in the middle. 1267 00:56:46,530 --> 00:56:47,840 Every time I'm teaching this class, 1268 00:56:47,840 --> 00:56:50,173 I sort of have to convince myself that they are actually 1269 00:56:50,173 --> 00:56:50,760 the same size. 1270 00:56:50,760 --> 00:56:52,370 So I print it out and sort of measure 1271 00:56:52,370 --> 00:56:54,870 to make sure that it's actually the same size, and they are. 1272 00:56:54,870 --> 00:56:56,720 I checked. 1273 00:56:56,720 --> 00:56:59,190 But when you see this, even if you know it's an illusion, 1274 00:56:59,190 --> 00:57:02,090 it's very hard to convince yourself that it's not. 1275 00:57:02,090 --> 00:57:03,535 But they are the same size. 1276 00:57:03,535 --> 00:57:04,910 When you look at these bars, when 1277 00:57:04,910 --> 00:57:07,243 you look at the upper black bar and the lower black bar, 1278 00:57:07,243 --> 00:57:10,467 again, it seems like somehow the upper one is like wider, 1279 00:57:10,467 --> 00:57:12,050 but in fact, it's not, and again, it's 1280 00:57:12,050 --> 00:57:15,050 all about the contrast to the other side. 1281 00:57:15,050 --> 00:57:17,240 There's this one which throws me off the most. 1282 00:57:17,240 --> 00:57:22,190 When you look at fields A and B, they are the same color. 1283 00:57:22,190 --> 00:57:24,595 That's hard to believe once you see it. 1284 00:57:24,595 --> 00:57:25,970 They are actually the same color, 1285 00:57:25,970 --> 00:57:27,710 and again, I would print it out and actually 1286 00:57:27,710 --> 00:57:28,877 put them next to each other. 1287 00:57:28,877 --> 00:57:30,140 They are the exact same color. 1288 00:57:30,140 --> 00:57:30,865 I checked. 
1289 00:57:30,865 --> 00:57:32,240 There's also a video that you can 1290 00:57:32,240 --> 00:57:34,460 watch that sort of shows this. 1291 00:57:34,460 --> 00:57:36,650 And exactly what's happening is the way 1292 00:57:36,650 --> 00:57:41,360 we perceive colors is coming from contrast, right? 1293 00:57:41,360 --> 00:57:45,440 So like black does not or gray doesn't look as gray when white 1294 00:57:45,440 --> 00:57:46,730 or darker gray is next to it. 1295 00:57:50,090 --> 00:57:53,030 Similarly here, if you look at the gray of this bar, 1296 00:57:53,030 --> 00:57:54,410 this is the exact same gray. 1297 00:57:54,410 --> 00:57:56,120 Again, you can print it out and sort of look at it. 1298 00:57:56,120 --> 00:57:56,960 It's the exact same. 1299 00:57:56,960 --> 00:57:58,335 When you look at it, it just does 1300 00:57:58,335 --> 00:58:01,580 look like on the right side it's darker than on the left. 1301 00:58:01,580 --> 00:58:04,730 Now, there's plenty of examples of reference dependence 1302 00:58:04,730 --> 00:58:05,685 for vision. 1303 00:58:05,685 --> 00:58:06,560 There's tons of them. 1304 00:58:06,560 --> 00:58:08,540 They're kind of quite interesting. 1305 00:58:08,540 --> 00:58:10,820 At some level, it tells us something about the brain. 1306 00:58:10,820 --> 00:58:14,210 Like, in some ways, we can learn about essentially the way we 1307 00:58:14,210 --> 00:58:17,030 evaluate certain outcomes overall is essentially-- 1308 00:58:17,030 --> 00:58:19,370 it's much easier for us, or we think about in contrast, 1309 00:58:19,370 --> 00:58:22,332 it's very hard for us often to think about levels. 1310 00:58:22,332 --> 00:58:23,540 But of course, that's vision. 1311 00:58:23,540 --> 00:58:26,510 So in some sense, what did we learn really about utility? 1312 00:58:26,510 --> 00:58:31,370 One thing you can look at is bronze and silver medal winners 1313 00:58:31,370 --> 00:58:32,270 at the Olympics. 1314 00:58:32,270 --> 00:58:35,090 Presumably or arguably, winning a silver medal 1315 00:58:35,090 --> 00:58:37,370 is better than a bronze medal winner. 1316 00:58:37,370 --> 00:58:39,710 If you look at these two women, one of them 1317 00:58:39,710 --> 00:58:43,003 won the silver medal, one of them won bronze. 1318 00:58:43,003 --> 00:58:45,170 So what people have done and psychologists have done 1319 00:58:45,170 --> 00:58:46,940 is like took actually a bunch of pictures 1320 00:58:46,940 --> 00:58:50,600 from like ceremonies of bronze and silver winners 1321 00:58:50,600 --> 00:58:52,680 and just looked at who looks happier. 1322 00:58:52,680 --> 00:58:55,180 And when you do that, essentially 1323 00:58:55,180 --> 00:58:57,680 the bronze medalists look on average happier than the silver 1324 00:58:57,680 --> 00:58:58,580 medalists. 1325 00:58:58,580 --> 00:59:01,160 Presumably it's because they just missed gold 1326 00:59:01,160 --> 00:59:03,860 while the other person sort of won bronze 1327 00:59:03,860 --> 00:59:09,080 because it's great to be third as opposed to fourth. 1328 00:59:09,080 --> 00:59:09,950 And there's more. 1329 00:59:09,950 --> 00:59:12,620 You can read more about this, but that's sort of like one 1330 00:59:12,620 --> 00:59:13,610 finding. 1331 00:59:13,610 --> 00:59:19,790 Now, it's true for lots of different feelings, 1332 00:59:19,790 --> 00:59:22,330 perceptions, judgment, and so on. 
1333 00:59:22,330 --> 00:59:25,413 People compare stimuli in various ways 1334 00:59:25,413 --> 00:59:26,830 when it comes to temperature, when 1335 00:59:26,830 --> 00:59:30,800 it comes to all sorts of things relative to reference levels. 1336 00:59:30,800 --> 00:59:34,970 It's very hard for them to evaluate it in absolute terms. 1337 00:59:34,970 --> 00:59:37,220 So it's easy to say-- when you look at water, 1338 00:59:37,220 --> 00:59:39,160 it's very hard to say what's the temperature 1339 00:59:39,160 --> 00:59:41,080 even if you sort of practice it a lot. 1340 00:59:41,080 --> 00:59:43,480 It's very easy to say one bucket of water 1341 00:59:43,480 --> 00:59:44,690 is warmer than another. 1342 00:59:44,690 --> 00:59:48,760 It's very hard to sort of say it's like 70 degrees or 60 1343 00:59:48,760 --> 00:59:49,270 or whatever. 1344 00:59:49,270 --> 00:59:50,770 It's very hard to sort of understand 1345 00:59:50,770 --> 00:59:52,630 what absolute temperatures are, and I 1346 00:59:52,630 --> 00:59:59,770 think that's the same in some ways for a lot of consumption 1347 00:59:59,770 --> 01:00:00,650 or other decisions. 1348 01:00:00,650 --> 01:00:02,858 It's very hard to say how much would you pay for this 1349 01:00:02,858 --> 01:00:05,590 or how happy should I be with certain outcomes, 1350 01:00:05,590 --> 01:00:07,090 because people need some reference, 1351 01:00:07,090 --> 01:00:08,470 and often then the reference are either 1352 01:00:08,470 --> 01:00:10,210 some expectations, or their neighbors, 1353 01:00:10,210 --> 01:00:15,500 or just comparing between different options. 1354 01:00:15,500 --> 01:00:18,470 So that's the example I mentioned previously, 1355 01:00:18,470 --> 01:00:20,050 which is it's much easier to compare 1356 01:00:20,050 --> 01:00:21,980 your income or any outcomes-- your grades, 1357 01:00:21,980 --> 01:00:24,580 et cetera-- compared to what your friend has. 1358 01:00:24,580 --> 01:00:28,780 It's much harder to say how much an extra $1,000 or having 1359 01:00:28,780 --> 01:00:31,870 $50,000 per year, is that good or bad? 1360 01:00:31,870 --> 01:00:36,472 It depends a lot on what you compare it against. 1361 01:00:36,472 --> 01:00:37,930 So what people tend to do, and this 1362 01:00:37,930 --> 01:00:40,990 is exactly what Kahneman and Tversky were postulating, 1363 01:00:40,990 --> 01:00:43,270 is that people compare the outcomes 1364 01:00:43,270 --> 01:00:46,452 relative to reference points. 1365 01:00:46,452 --> 01:00:48,160 And again, we're going to write this down 1366 01:00:48,160 --> 01:00:51,055 in more detail at the beginning of next class, 1367 01:00:51,055 --> 01:00:53,980 but what Kahneman and Tversky were postulating 1368 01:00:53,980 --> 01:00:55,960 were essentially two things. 1369 01:00:55,960 --> 01:00:58,420 They're postulating A, there's a reference level 1370 01:00:58,420 --> 01:01:00,385 of consumption or any outcomes. 1371 01:01:00,385 --> 01:01:03,280 We can talk a little bit about what actually is this reference 1372 01:01:03,280 --> 01:01:04,600 level, where it's coming from. 1373 01:01:04,600 --> 01:01:08,320 For now, we just assume there's a reference level that people 1374 01:01:08,320 --> 01:01:10,430 compare the outcomes against. 1375 01:01:10,430 --> 01:01:14,440 And then against that, outcomes are compared. 1376 01:01:14,440 --> 01:01:17,770 And in particular, the function is 1377 01:01:17,770 --> 01:01:20,890 like steeper on the left compared to the right, right? 
1378 01:01:20,890 --> 01:01:24,730 Essentially losses loom larger than gains. 1379 01:01:24,730 --> 01:01:28,990 So like going down by 10 units on the left or 1 unit 1380 01:01:28,990 --> 01:01:32,650 on the left is more painful than going up 1381 01:01:32,650 --> 01:01:35,400 on the right relative to those reference points. 1382 01:01:35,400 --> 01:01:36,580 OK? 1383 01:01:36,580 --> 01:01:39,460 And so that's all said. 1384 01:01:39,460 --> 01:01:43,540 So then what experimental evidence do we in fact 1385 01:01:43,540 --> 01:01:45,480 have for that? 1386 01:01:45,480 --> 01:01:49,612 So one example-- so we have some experimental evidence of this. 1387 01:01:49,612 --> 01:01:51,070 And I'm going to show you next time 1388 01:01:51,070 --> 01:01:54,220 a bunch of different examples of real choices, 1389 01:01:54,220 --> 01:01:56,500 starting from golfing, to selling houses, 1390 01:01:56,500 --> 01:01:58,000 to lots of other outcomes. 1391 01:01:58,000 --> 01:02:00,443 But sort of the experimental evidence, the earlier 1392 01:02:00,443 --> 01:02:02,110 experimental evidence that people showed 1393 01:02:02,110 --> 01:02:04,510 were preferences over risky gambles. 1394 01:02:04,510 --> 01:02:07,430 I showed you some of those already, and then in particular 1395 01:02:07,430 --> 01:02:10,297 unwillingness to trade different options compared 1396 01:02:10,297 --> 01:02:12,130 to like an alternative option that I showed. 1397 01:02:12,130 --> 01:02:15,178 That's what people refer to as the endowment effect. 1398 01:02:15,178 --> 01:02:16,720 So let me show you first the gambles. 1399 01:02:16,720 --> 01:02:17,990 We already had that in some sense. 1400 01:02:17,990 --> 01:02:19,448 These are like essentially gambles. 1401 01:02:19,448 --> 01:02:21,640 People seem really risk averse when 1402 01:02:21,640 --> 01:02:24,070 they are offered these gambles. 1403 01:02:24,070 --> 01:02:26,890 Kahneman and Tversky would say people are essentially 1404 01:02:26,890 --> 01:02:27,490 loss averse. 1405 01:02:27,490 --> 01:02:30,760 People really dislike the loss of $10 1406 01:02:30,760 --> 01:02:32,170 relative to the gain of $11. 1407 01:02:32,170 --> 01:02:33,580 We can explain that behavior. 1408 01:02:33,580 --> 01:02:36,247 And that's a very robust finding that people decline these kinds 1409 01:02:36,247 --> 01:02:37,130 of types of gambles. 1410 01:02:37,130 --> 01:02:39,310 Kahneman and Tversky would say that's evidence 1411 01:02:39,310 --> 01:02:40,557 of loss aversion. 1412 01:02:40,557 --> 01:02:42,640 Now, you might say these are really small gambles. 1413 01:02:42,640 --> 01:02:43,990 Do we really care about them? 1414 01:02:43,990 --> 01:02:46,803 Well, once you do this with $500, $550 1415 01:02:46,803 --> 01:02:48,970 or whatever, large amounts, people also do the same. 1416 01:02:48,970 --> 01:02:50,890 It's hard to do this with like real-world money 1417 01:02:50,890 --> 01:02:52,240 because it's quite a bit of money. 1418 01:02:52,240 --> 01:02:53,860 It turns out there's actually some studies who 1419 01:02:53,860 --> 01:02:54,527 do that as well. 1420 01:02:54,527 --> 01:02:57,670 They did this for real with MBA students, financial analysts, 1421 01:02:57,670 --> 01:02:58,810 and rich investors. 1422 01:02:58,810 --> 01:03:02,992 And even those people like tend to then turn down 1423 01:03:02,992 --> 01:03:03,950 those kinds of gambles. 1424 01:03:03,950 --> 01:03:05,410 So there seems to be quite evidence 1425 01:03:05,410 --> 01:03:08,928 of lots of loss aversion. 
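As a minimal sketch of how loss aversion can rationalize turning down this kind of gamble, suppose gain-loss utility is piecewise linear with a loss-aversion coefficient \lambda (the particular value \lambda = 2 below is only an illustrative assumption, not a number from the lecture):

\[
v(x) = \begin{cases} x & \text{if } x \ge 0 \\ \lambda x & \text{if } x < 0 \end{cases}, \qquad
\mathbb{E}[v] = 0.5\,v(+11) + 0.5\,v(-10) = 0.5\,(11) - 0.5\,\lambda\,(10)
\]

With \lambda = 2 this is 5.5 - 10 = -4.5 < 0, so the gamble is rejected even though its expected monetary value, 0.5(11) - 0.5(10) = 0.5, is positive; in fact any \lambda above 1.1 produces the same rejection.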
1426 01:03:08,928 --> 01:03:10,720 Given the choices between gains and losses, 1427 01:03:10,720 --> 01:03:13,455 people seem to really dislike losses. 1428 01:03:13,455 --> 01:03:14,830 Now, what's the endowment effect? 1429 01:03:14,830 --> 01:03:20,410 That's sort of the perhaps most famous evidence in this domain. 1430 01:03:20,410 --> 01:03:24,250 It essentially is people, when endowed with a certain item, 1431 01:03:24,250 --> 01:03:26,380 they really do not like to trade. 1432 01:03:26,380 --> 01:03:30,235 And so the way you would do an experiment-- 1433 01:03:30,235 --> 01:03:32,620 you would give people an item, and then 1434 01:03:32,620 --> 01:03:37,420 you ask them essentially-- and a randomly selected fraction 1435 01:03:37,420 --> 01:03:39,910 of people are given an item. 1436 01:03:39,910 --> 01:03:41,980 And then either people are offered the choice-- 1437 01:03:41,980 --> 01:03:43,480 you would you either keep your item, 1438 01:03:43,480 --> 01:03:45,938 or would you get another item that's sort of the same value 1439 01:03:45,938 --> 01:03:46,960 overall? 1440 01:03:46,960 --> 01:03:50,770 Or, what people do is like they have some experiments-- 1441 01:03:50,770 --> 01:03:52,750 there's many of these experiments-- 1442 01:03:52,750 --> 01:03:56,470 you give some people one item and some other people 1443 01:03:56,470 --> 01:03:57,970 another item, and then see would you 1444 01:03:57,970 --> 01:03:59,380 like to trade with each other. 1445 01:03:59,380 --> 01:04:01,040 And what you see is essentially people 1446 01:04:01,040 --> 01:04:03,560 are extremely reluctant to trade. 1447 01:04:03,560 --> 01:04:06,190 There's many of these kinds of examples. 1448 01:04:06,190 --> 01:04:12,170 Like, for example, so one example-- sorry, 1449 01:04:12,170 --> 01:04:13,800 I skipped that. 1450 01:04:13,800 --> 01:04:16,037 One example is like if you just give people an item-- 1451 01:04:16,037 --> 01:04:18,120 usually it's like a mug-- if you give people a mug 1452 01:04:18,120 --> 01:04:21,030 and asked like, how much do I have to pay you 1453 01:04:21,030 --> 01:04:24,270 to give me to sell me this mug? 1454 01:04:24,270 --> 01:04:25,510 People say large amounts. 1455 01:04:25,510 --> 01:04:27,110 They would say like $5. 1456 01:04:27,110 --> 01:04:28,860 If, instead, you asked them, here's a mug. 1457 01:04:28,860 --> 01:04:30,360 Would you like to purchase this mug? 1458 01:04:30,360 --> 01:04:33,432 And sort of controlling for how much money they have and so on. 1459 01:04:33,432 --> 01:04:35,640 If you ask them like, would you like to buy this mug, 1460 01:04:35,640 --> 01:04:37,380 people would say like $2 or something. 1461 01:04:37,380 --> 01:04:40,110 It's the same mug, and essentially their choice 1462 01:04:40,110 --> 01:04:43,410 depends on whether you give them the mug versus whether you 1463 01:04:43,410 --> 01:04:45,910 endow them with the mug, which is where the endowment effect 1464 01:04:45,910 --> 01:04:48,420 name comes from, or whether you just sort of like 1465 01:04:48,420 --> 01:04:51,040 ask their willingness to pay when they don't have it. 1466 01:04:51,040 --> 01:04:52,590 There's tons of different experiments 1467 01:04:52,590 --> 01:04:54,630 that do exactly that. 1468 01:04:54,630 --> 01:04:56,340 Now, a different version of that is like 1469 01:04:56,340 --> 01:04:59,940 so this one is about like buying a mug, selling a mug, 1470 01:04:59,940 --> 01:05:01,660 buying and selling prices. 
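One stylized way to see how loss aversion can generate this gap between selling and buying prices is the following back-of-the-envelope sketch, assuming piecewise-linear gain-loss utility applied separately to the mug and to money, with the mug worth m dollars in consumption terms and a loss-aversion coefficient \lambda (the numbers at the end are purely illustrative, not estimates from these experiments):

\[
\text{Owner asked to sell at price } p: \quad p - \lambda m \ge 0 \;\Rightarrow\; \text{WTA} = \lambda m
\]
\[
\text{Non-owner asked to buy at price } p: \quad m - \lambda p \ge 0 \;\Rightarrow\; \text{WTP} = m / \lambda
\]

The selling price then exceeds the buying price by a factor of \lambda^2; for example, m = 3.2 and \lambda = 1.6 would give a WTA of about $5 and a WTP of about $2, in the ballpark of the mug numbers mentioned above.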
1471 01:05:01,660 --> 01:05:03,600 Different versions of that is to say 1472 01:05:03,600 --> 01:05:07,823 you have a population of students or different people, 1473 01:05:07,823 --> 01:05:09,240 and what you do is essentially you 1474 01:05:09,240 --> 01:05:12,230 find two items that on average have the same valuation. 1475 01:05:12,230 --> 01:05:13,870 Knetsch was doing that. 1476 01:05:13,870 --> 01:05:16,470 And so he had like mugs and pens, 1477 01:05:16,470 --> 01:05:19,170 and he sort of calibrated such that on average, when you just 1478 01:05:19,170 --> 01:05:21,828 asked, people's willingness to pay for these mugs and pens 1479 01:05:21,828 --> 01:05:23,370 are like on average roughly the same. 1480 01:05:23,370 --> 01:05:26,610 Of course, there's variation, but on average it's the same. 1481 01:05:26,610 --> 01:05:28,620 And then you offered half the students mugs 1482 01:05:28,620 --> 01:05:31,740 and half the students pens and then offered in exchange. 1483 01:05:31,740 --> 01:05:35,915 He also offered in exchange for an addition of $0.05 1484 01:05:35,915 --> 01:05:37,290 and saying like, OK, maybe you're 1485 01:05:37,290 --> 01:05:38,332 just exactly indifferent. 1486 01:05:38,332 --> 01:05:40,898 So I'm giving you $0.05 in case you exchange. 1487 01:05:40,898 --> 01:05:43,440 And it turns out that the mug people like to keep their mugs, 1488 01:05:43,440 --> 01:05:47,190 and the pen people like to keep the pens, and 90% of people 1489 01:05:47,190 --> 01:05:47,910 do that. 1490 01:05:47,910 --> 01:05:50,670 That's one of the most robust findings 1491 01:05:50,670 --> 01:05:54,650 in experimental economics. 1492 01:05:54,650 --> 01:05:56,660 There are some questions on expectations. 1493 01:05:56,660 --> 01:05:59,130 Do people expect to keep the mugs and so on and so forth? 1494 01:05:59,130 --> 01:06:00,630 There's some complications for that, 1495 01:06:00,630 --> 01:06:02,570 but the basic result is very robust 1496 01:06:02,570 --> 01:06:05,510 and has been shown in many different settings. 1497 01:06:05,510 --> 01:06:07,400 Any questions on this? 1498 01:06:14,200 --> 01:06:14,730 OK. 1499 01:06:14,730 --> 01:06:15,870 So what's going on here? 1500 01:06:15,870 --> 01:06:18,560 Well, essentially, the reference point 1501 01:06:18,560 --> 01:06:22,330 seems to be essentially affected by ownership. 1502 01:06:22,330 --> 01:06:24,360 So if you own a mug, your reference point 1503 01:06:24,360 --> 01:06:25,800 is owning a mug. 1504 01:06:25,800 --> 01:06:29,040 So now if I asked you, would you like to sell me this mug? 1505 01:06:29,040 --> 01:06:30,750 Well, now you have a loss of a mug. 1506 01:06:30,750 --> 01:06:32,860 So essentially, you're in the lost domain of mugs. 1507 01:06:32,860 --> 01:06:36,540 So I have to pay you more money to compensate you for that. 1508 01:06:36,540 --> 01:06:39,330 If, in contrast, you do not own the mug, so like you have zero 1509 01:06:39,330 --> 01:06:41,910 mugs, your reference point is zero mugs, and I'm asking you, 1510 01:06:41,910 --> 01:06:46,050 would you like to receive a mug or like buy a mug from me, 1511 01:06:46,050 --> 01:06:48,240 then you're essentially on the gain domain. 1512 01:06:48,240 --> 01:06:51,630 And sort of what I showed you here previously-- 1513 01:06:51,630 --> 01:06:52,690 one second. 1514 01:06:52,690 --> 01:06:55,080 So when I'm asking you to purchase your mug, 1515 01:06:55,080 --> 01:06:57,357 essentially you're on the left side of this figure. 
1516 01:06:57,357 --> 01:06:58,440 You're in the loss domain. 1517 01:06:58,440 --> 01:07:00,600 Your marginal utility of mugs is very high. 1518 01:07:00,600 --> 01:07:04,530 I have to pay you a lot of money to receive the mug from you. 1519 01:07:04,530 --> 01:07:06,750 In contrast, if you have zero mugs, I'm asking you, 1520 01:07:06,750 --> 01:07:08,125 would you like to purchase a mug, 1521 01:07:08,125 --> 01:07:10,125 you're on the right side of the suffering point. 1522 01:07:10,125 --> 01:07:12,130 So essentially now you're in the gain domain. 1523 01:07:12,130 --> 01:07:15,480 Now you're not willing to pay a lot of money 1524 01:07:15,480 --> 01:07:17,940 because you're gaining a mug. 1525 01:07:17,940 --> 01:07:20,670 On top of that, you also on the gain loss domain of money, 1526 01:07:20,670 --> 01:07:23,950 but like I'm setting that sort of aside. 1527 01:07:23,950 --> 01:07:25,450 Does that make any sense? 1528 01:07:27,750 --> 01:07:28,250 OK. 1529 01:07:32,740 --> 01:07:35,125 And so people hate losses more than they like the gain. 1530 01:07:35,125 --> 01:07:36,250 So they stick with the mug. 1531 01:07:36,250 --> 01:07:40,610 And similarly, that's the same thing for the pen owners. 1532 01:07:40,610 --> 01:07:42,140 There's lots of different examples 1533 01:07:42,140 --> 01:07:44,300 of those kinds of behaviors. 1534 01:07:44,300 --> 01:07:46,610 For example, law school students were 1535 01:07:46,610 --> 01:07:50,000 asked to assess compensation for pain and suffering damages 1536 01:07:50,000 --> 01:07:51,710 in one study. 1537 01:07:51,710 --> 01:07:53,600 This is expected to last-- 1538 01:07:53,600 --> 01:07:56,210 or in this example-- is expected to last three years 1539 01:07:56,210 --> 01:07:58,040 and be quite unpleasant. 1540 01:07:58,040 --> 01:08:01,740 There's no impact on earnings capacity. 1541 01:08:01,740 --> 01:08:04,390 For example, it would be extreme stiffness in the upper back 1542 01:08:04,390 --> 01:08:04,890 and neck. 1543 01:08:04,890 --> 01:08:07,530 I think that would probably affect your earning capacity, 1544 01:08:07,530 --> 01:08:12,040 but anyway let's assume that's not the case. 1545 01:08:12,040 --> 01:08:13,740 So then some students are led to imagine 1546 01:08:13,740 --> 01:08:15,060 they were being injured. 1547 01:08:15,060 --> 01:08:19,210 How much would you be willing to pay to get better? 1548 01:08:19,210 --> 01:08:19,710 OK. 1549 01:08:19,710 --> 01:08:21,750 So your reference point is like you're injured. 1550 01:08:21,750 --> 01:08:25,470 How much are you willing to pay to get better? 1551 01:08:25,470 --> 01:08:30,270 And people said 151,448 on average. 1552 01:08:30,270 --> 01:08:31,770 Now, another group of students was-- 1553 01:08:31,770 --> 01:08:35,250 this is randomized-- was led to imagine being uninjured. 1554 01:08:35,250 --> 01:08:39,600 Like, how much would I need to pay you to accept the injury? 1555 01:08:39,600 --> 01:08:40,920 You're not injured. 1556 01:08:40,920 --> 01:08:43,535 Now I'm asking you, how much are you willing to pay, 1557 01:08:43,535 --> 01:08:44,910 or how much do I have to pay you, 1558 01:08:44,910 --> 01:08:48,390 how much do I have to compensate you to accept this injury? 1559 01:08:48,390 --> 01:08:52,529 Now, you would think this is like the same thing, 1560 01:08:52,529 --> 01:08:54,870 but what's your price of not being injured? 1561 01:08:54,870 --> 01:08:57,600 The price of health should be independent on which way I'm 1562 01:08:57,600 --> 01:08:58,380 asking you. 
There are lots of examples of these kinds of behaviors. For instance, in one study, law school students were asked to assess compensation for pain-and-suffering damages. The injury in this example is expected to last three years and be quite unpleasant, but to have no impact on earnings capacity; for example, extreme stiffness in the upper back and neck. I think that would probably affect your earnings capacity, but let's assume it doesn't.

Some students were led to imagine they had been injured: how much would you be willing to pay to get better? So your reference point is being injured, and the question is how much you would pay to recover. On average, people said $151,448. Another group of students, randomly assigned, was led to imagine being uninjured: how much would I have to pay you, how much would I have to compensate you, to accept this injury? You would think these are the same question; both ask for the price of your health, and that price should not depend on which way I ask. It turns out that when people have their good health, they demand a lot more to give it up, or to accept losing it, than they are willing to pay to regain it once they have lost it.

There's another quite nice example I'm showing you here. One second. In case you didn't see, that was Dan Ariely, who has written several nice books; one of them is Predictably Irrational, which is quite a nice read.

So there are lots of these examples of loss aversion, or of what's called the endowment effect: when people have something, they're not willing to part with it, and when they don't have it, they're willing to pay less for it. One nice thing Ariely also talked about is how people then explain these kinds of behaviors. We'll get back to that in, I think, lecture 20 or so. I think what's going on here is fairly obvious: people are presumably loss-averse, they're randomized into gains or losses, and that's what explains their behavior. But people don't necessarily understand that they've been randomized into one condition or the other, so they try to explain their behavior in various ways, saying things like, "I want to tell my grandchildren about this." In some sense they're rationalizing their preferences; they don't necessarily understand where those preferences are coming from, even though we do, because they've been manipulated into those choices. We'll get back to that.

Now, the third part of Kahneman and Tversky's prospect theory paper is what's called diminishing sensitivity. I'll get back to this and write it down in more detail on Monday, but essentially it says that people are risk-averse in the gain domain but risk-loving in the loss domain.
What does that look like? Essentially, the utility function is a little different from what I showed you before. It's not only steeper to the left of the reference point than to the right; it's also concave on the right and convex on the left. What that buys you is exactly that people are risk-averse in the gain domain but risk-loving in the loss domain, and that is needed to explain some of the behaviors I showed you early on in the Kahneman and Tversky evidence. Again, I'll tell you about this in more detail and write it down more precisely.

And then we're going to talk about many different applications. In particular, there's the endowment effect, which I already talked about. There's labor supply, employment, and effort: depending on what people expect to earn, they make different choices about how many hours they actually work, depending on whether earnings are above or below a reference point. People are reluctant to sell their houses at a loss relative to what they paid, even for otherwise very similar houses. In marathon running, people try to hit certain targets; they like to finish below four hours, or three hours, and the like. There's the disposition effect that we mentioned, which is that investors prefer to sell winners rather than losers. There's the insurance choice I already showed you before. And there's some evidence on violence in the household: the particular example involves football games, where, depending on whether people expected their team to lose or win and what actually happens, there is more violence after unexpected losses than after expected losses, which again is consistent with loss aversion.
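Before turning to what's next, here is a small sketch of the value function just described, again my own illustration rather than anything from the slides: it uses the standard Kahneman-Tversky power form with the commonly cited parameter values of roughly 0.88 for curvature and 2.25 for loss aversion (assumptions here, not numbers from the lecture), and compares a sure outcome to a 50/50 gamble on each side of the reference point.

```python
# Sketch of the Kahneman-Tversky value function and how diminishing
# sensitivity flips risk attitudes around the reference point.
# alpha and lam are commonly cited illustrative estimates, not values
# given in the lecture.

alpha = 0.88   # curvature: concave over gains, convex over losses
lam = 2.25     # loss aversion: losses weighted more heavily than gains

def value(x):
    """Prospect-theory value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# Gains: a sure $50 versus a 50/50 gamble over $0 or $100.
sure_gain = value(50)
gamble_gain = 0.5 * value(0) + 0.5 * value(100)
print(f"gains:  v(sure 50) = {sure_gain:.1f} > E[v(gamble)] = {gamble_gain:.1f}"
      "  -> take the sure thing (risk-averse)")

# Losses: a sure -$50 versus a 50/50 gamble over $0 or -$100.
sure_loss = value(-50)
gamble_loss = 0.5 * value(0) + 0.5 * value(-100)
print(f"losses: v(sure -50) = {sure_loss:.1f} < E[v(gamble)] = {gamble_loss:.1f}"
      "  -> gamble to avoid the sure loss (risk-loving)")
```

With these parameters the sure $50 is valued above the gamble in the gain domain, while the sure $50 loss is valued below the mirror-image gamble, which is the pattern needed to match the early Kahneman and Tversky choice evidence.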
What's next? On Monday we're going to talk about many applications of reference dependence. I'd like you to read the Kahneman and Tversky paper, at least the first few pages, to get some sense of their work. And then on Wednesday we're going to start talking about social preferences; in particular, we'll run some experiments in class. There are no readings for that, and there will be an opportunity to make at least some money in class.