The following content is provided by a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

PROFESSOR: I just want to be explicit about it, because with all of the abstraction of QAM, looking at it in terms of Hilbert filters and all of that, you tend to forget that what's really going on in terms of implementation is something very, very simple. We said that what we were doing with QAM is essentially building two PAM systems in parallel. So we have two different data streams coming in. In each one there's a new signal every T seconds; T is the time separation between times when we transmit something. So the sequence of symbols comes in. What we form is then -- well, actually this isn't a real operation, because everybody does this in whatever way suits them. And the only thing that's important is getting from here over to there.
And what we're doing in doing that is taking each of these signals and multiplying them by a pulse to give this waveform, the pulses delayed according to which signal it is. And if you like to think of that in terms of taking the signal and viewing it as an impulse, so you have a train of impulses weighted by these values here, and then filtering that, fine. That's a good way to think about it. But at any rate, you wind up with this. With the other data stream, which is going through here -- yes?

AUDIENCE: [INAUDIBLE]

PROFESSOR: You couldn't argue that was band limited also.

AUDIENCE: [INAUDIBLE]

PROFESSOR: But it's band limited in the same way. I mean, visualize -- the Fourier transform of p(t - kT) is simply the Fourier transform of p(t) multiplied by a sinusoid in f at this point. So it doesn't change the bandwidth of that. In other words, when you take a function and you move it in time, the Fourier transform is going to be rotating in a different way, but its frequency components will not change at all. All of them just change in phase because of that time change.
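The time-shift property being described can be checked numerically. This is a minimal sketch, not from the lecture: the pulse shape and shift amount are made up, and the point is only that shifting a pulse in time rotates the phase of its transform while leaving the magnitude (and hence the bandwidth) untouched.

```python
import numpy as np

# A time shift multiplies the Fourier transform by e^{-2*pi*i*f*shift}:
# the phase rotates with frequency, but |P(f)| is unchanged.
n = 256
t = np.arange(n)
pulse = np.exp(-0.5 * ((t - 40) / 8.0) ** 2)   # an illustrative pulse p(t)
shifted = np.roll(pulse, 50)                    # p(t - kT), circular shift by 50

P = np.fft.fft(pulse)
S = np.fft.fft(shifted)

# Magnitudes agree to machine precision; only the phases differ.
assert np.allclose(np.abs(P), np.abs(S))

# And the phase difference is exactly the linear-in-frequency rotation.
f = np.arange(n)
assert np.allclose(S, P * np.exp(-2j * np.pi * f * 50 / n))
```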
AUDIENCE: [INAUDIBLE]

PROFESSOR: When you look at it in frequency? Well, no, because we're adding up all these things in frequency at that point. But each one of these pulses is still constrained within the same bandwidth.

AUDIENCE: [INAUDIBLE]

PROFESSOR: The transform of a pulse train does not look like a pulse train in frequency, no. When you take a pulse train in time, and you take the transform of each one of these pulses, what you get is a pulse where, in frequency, there is a rotation. In other words, the phase is changing. But in fact, what you wind up with is a pulse within the same frequency band. I mean, simply take the Fourier transform and look at it, and see which frequencies it has in it. They don't change. And when you weight it with coefficients, the Fourier transform will be scaled -- multiplied by a scalar. But that won't change which frequencies are there, either. So in fact, when you do this -- I should make that clearer in the notes. It's a little confusing.

AUDIENCE: [INAUDIBLE]

PROFESSOR: It'll be band limited in the same way, though, yes.
AUDIENCE: [INAUDIBLE]

PROFESSOR: It won't look the same. But the Nyquist criterion theorem remains the same. I mean, if you look at the proof of it -- well, I'll go back and look at it. I think if you look at it, you see that this time variation here in fact does not make any difference. I mean, I know it doesn't make any difference. But the question is how to explain that it doesn't make a difference. And I'll go through this, and I will think about that, and I'll change the notes if need be. But anyway, it's not something you should worry about.

Anyway. What happens here is, you get this pulse train, which is just a sum of these pulses. And they all interact with each other. So at this point we have intersymbol interference. We multiply this by a cosine wave. We multiply this by a sine wave. And what we get then is some output, x(t), which is in the frequency band. But if you take the low pass frequency here, which goes from 0 up to some W, up at passband we go from fc minus W to fc plus W.
Same thing here. So we have this function, which is then constrained. But it's a passband constraint to twice the frequency which we have here. When we take x(t) and we try to go through a demodulator, the thing we're going to do is to multiply, again, by the cosine, and multiply, again, by the sine. Incidentally, this is minus sine and minus sine here. When you implement it, you can use plus sine or minus sine, whichever you want to, so long as you use the same thing here and there. It doesn't make any difference. It's just that when you look at things in terms of complex exponentials, this turns into a minus sine. Oh, and this turns into a minus sine. We then filter by q(t). We filter this by q(t). The same filter. And we wind up then going through the T-spaced sampler, and the output u sub k prime. And the reason for this is that if you look at the system broken right here, this is simply a PAM waveform. And when we take this PAM waveform, the only thing we're doing is multiplying it by a cosine.
Then we're multiplying it by a cosine again. When we get a cosine squared term, half of it comes back down to baseband, and half of it goes up to twice the carrier frequency. And this filter, in principle, is going to filter out that part at two times the carrier frequency. And it's also going to do the right thing to this filter, to turn it into something where, when we sample, we actually get these components back again. So the only thing that's going on here is simply, this is a PAM system where, when we multiply by cosine twice, the thing we're getting is a baseband component again, which is exactly the same as this quantity here. Except it's -- why, when I multiply this by this, don't I have to divide by -- don't I have to multiply by 2? I don't know. But we don't. And when we get all through the thing, we come out with u sub k prime, and we come out with u sub k double prime. It is easier to think about this in terms of complex frequencies. When we think about it in terms of cosines and sines, we're doing it that way because we have to implement it this way.
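The factor-of-2 question just raised can be seen directly from cos squared x = 1/2 + cos(2x)/2. A minimal sketch, with made-up carrier frequency and symbol values, where a plain average over an integer number of carrier cycles stands in for the lowpass filter q(t):

```python
import numpy as np

# Modulate one QAM symbol (a on cosine, b on sine), demodulate each rail,
# and average away the 2*fc term. The baseband output is a/2 and b/2,
# which is why a factor of 2 restores the amplitudes.
fs, fc = 10000.0, 1000.0            # illustrative sample rate and carrier
t = np.arange(0, 1.0, 1 / fs)
a, b = 0.7, -1.3                     # one symbol on each rail, held constant

x = a * np.cos(2 * np.pi * fc * t) - b * np.sin(2 * np.pi * fc * t)

# Multiply by the carriers again; the mean removes the double-frequency part
# and the cos*sin cross terms (orthogonality over whole cycles).
i_rail = np.mean(x * np.cos(2 * np.pi * fc * t))    # equals a/2
q_rail = np.mean(x * -np.sin(2 * np.pi * fc * t))   # equals b/2

assert abs(2 * i_rail - a) < 1e-6
assert abs(2 * q_rail - b) < 1e-6
```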
Because if you implement a Hilbert filter, you're taking a complex function and filtering it by a complex waveform, which means you've got to build four separate filters: one to take the real part of the waveform with the real part of the filter, the real part with the imaginary part of the filter, also the imaginary part of the waveform with the real part of the filter, and so forth. So you don't implement it that way, but that's the easy way to look at what's happening. It's the easy way to analyze it. This just does the same thing.

When we use this double sideband quadrature carrier implementation of QAM, the filters p and q really have to be real. Why do they have to be real? Well, because otherwise we're mixing what's going on here with what's going on here. In other words, this will not be a real waveform, so you can't just multiply it by a cosine wave. In the other implementation we had, you have this, which is a complex waveform, plus this, which is another complex waveform. You multiply it by e to the 2 pi i fc t, and then you take the real part of it, and everything comes out in the wash.
Here we need this to be real and this to be real in order to be able to implement this just by multiplying by cosine and multiplying by sine. So the standard QAM signal set makes it just parallel PAM systems. If, in fact, you don't use a standard QAM set -- namely, if these two things are mixed up in some kind of circular signal set or something -- then what comes out here has to be demodulated in the same way. So you don't have the total separation. But the real part of the system is the same in both cases. I think the notes are a little confusing about that.

Finally, as a practical detail here: we have talked several times about the fact that every time people do complicated filtering, that complicated filtering involves very sharp filters. Every time you make a very sharp filter at baseband, the way you do it is digitally. Namely, you first take this waveform and you sample it very, very rapidly. And then you take those digital signals and you pass them through a digital filter. And that gives you the waveform that you're interested in.
When you do that, you're faced with having a waveform coming into the receiver which has a band down at baseband, which has width W. You also have a band up at twice the carrier frequency. And if you sample that whole thing, according to all of the things we've done with aliasing and everything, those samples mix what's up at twice the carrier frequency with what's down here at baseband. And you really have to do some filtering to get rid of what's at twice the carrier frequency. This really isn't much of a practical issue, because you can hardly avoid doing that much filtering just in doing the sampling. I mean, you can't build a sampler that doesn't filter the waveform a little bit too. But you have to be conscious of the fact that you have to get rid of those double frequency terms before you go into the digital domain. So you have to think of it as a little bit of filtering, followed by a rapid sampler, followed by careful filtering, which does this function q(t) here.

OK. I want to talk a little bit about the number of degrees of freedom we have when we do this.
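The aliasing hazard just described can be shown with a small numerical sketch. The frequencies here are invented for illustration (a 100 Hz baseband tone, a residual tone at 9900 Hz playing the role of the double-frequency term, a 1000 Hz sampler): without a pre-filter, the high tone folds down exactly on top of the baseband signal.

```python
import numpy as np

# Simulate a fast "analog" waveform, then sample it slowly with no
# anti-alias filtering. The 9900 Hz residual aliases to |9900 - 10*1000|
# = 100 Hz and lands right on the wanted signal.
fs_fast, fs = 100000.0, 1000.0
t_fast = np.arange(0, 1.0, 1 / fs_fast)

baseband = np.cos(2 * np.pi * 100 * t_fast)
residual = np.cos(2 * np.pi * 9900 * t_fast)   # unremoved 2*fc leakage

step = int(fs_fast / fs)
samples = (baseband + residual)[::step]        # naive 1000 Hz sampler

t_slow = t_fast[::step]
aliased = np.cos(2 * np.pi * 100 * t_slow)     # what 9900 Hz looks like now
assert np.allclose(samples, baseband[::step] + aliased)
```

This is why the receiver chain is a little filtering, then the rapid sampler, then the careful digital filter q(t).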
If we look at quadrature amplitude modulation: any time you get confused about bandwidths, it's probably better to look at the sample time, T, which is something that never changes. That always remains the same. When you look at the sample spacing, the nominal baseband bandwidth, as we've seen many times, is 1 over 2T. Namely, that's what's called the Nyquist bandwidth. It's what you get if you use sinc functions, which strictly puts everything in this baseband and nothing outside of it. And we've seen that by using the Nyquist criterion and building sharp filters, we can make the actual bandwidth as close to this as we want to make it. When we modulate this up to passband, we're going to have a passband bandwidth which is 1 over T. Because, in fact, this baseband function is going from minus 1 over 2T to plus 1 over 2T. When we modulate that up, we get something that goes from fc minus 1 over 2T to fc plus 1 over 2T, which is really an overall bandwidth of 1 over T. If we look at this over a large time interval, T0, we have two degrees of freedom -- namely, the real part and the imaginary part.
Namely, we have two different real number signals coming into this transmitter each T seconds. So that over a period of T0, some much larger period, we have T0 over capital T different signals that we're transmitting. Each of those signals has two degrees of freedom in it. So we wind up with 2 T0 W real degrees of freedom -- namely, 2 T0 over T, which is 2 T0 times W, where W is the passband bandwidth.

What I want to do now, to show you that actually nothing is being lost and nothing is being gained, is to look at a baseband system where you use pulse amplitude modulation, but, in fact, you use a humongously wide bandwidth to do this. So it becomes something like these ultra-wideband systems that people are talking about now. You use this very, very wide bandwidth, and you see how many signals you can pack into this very wide bandwidth using PAM. And what's the answer? There are 2 T0 W0 real degrees of freedom in a baseband bandwidth W0. This is what we determined when we were dealing with PAM. It simply corresponded to the fact that the sample spacing was 1 over 2 W0.
Therefore we got two signals coming in each 1 over W0 seconds. So this was the number of real degrees of freedom we have. Now, the thing we want to do is to take this humongous bandwidth and say: what happens if, instead of using this wide bandwidth system, we use a lot of little narrow bands? So we make narrow bands and send QAM in the narrow bands, where in fact we're packing signals into the real part and the imaginary part, which is what we decided we ought to do. And how do these two systems compare? Well, we take a very wide bandwidth W0, with lots of little bandwidths, W, up at various passbands, all stacked on top of each other. Like, here's zero. Here's W0, which is up at five gigahertz or whatever you want to make it. We put lots of little passbands in here, each of width W. And the question is, how many different frequency intervals do we get? How many different signals can we multiplex together here by using different frequency bands? Which is exactly the way people who use QAM normally transmit data. Well, the answer is, you get W0 over W different bands here.
So how many degrees of freedom do you have? Well, we said we had 2 T0 W real degrees of freedom with each QAM system. We have W0 over W different QAM systems all stacked on top of each other. So, when you take W equals W0 over m, you wind up with 2 T0 W0 real degrees of freedom overall using QAM. Exactly the same thing as if you use PAM. This corresponds to these orthonormal series we were thinking about, where we were thinking in terms of segmenting time and then using a Fourier series within each segment of time, and looking at the number of different coefficients that we got. And we then looked at these sinc-weighted expansions of the same type. We got the same number of degrees of freedom. This is simply saying that PAM and QAM use every one of those degrees of freedom. And there aren't any more degrees of freedom available. If you're stuck with using some very large time interval T0 and some very large frequency interval W0, you're stuck with that many real degrees of freedom, in a real waveform.
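The bookkeeping in this comparison is worth writing out once. A minimal sketch with made-up numbers for T0, W0, and the band count m: wideband PAM sends one real symbol every 1/(2 W0) seconds, while each narrow QAM band of width W = W0/m sends two real numbers every T = 1/W seconds, and the totals agree.

```python
# Degrees-of-freedom count: wideband PAM versus m stacked narrow QAM bands.
# Values are illustrative only.
T0, W0, m = 2.0, 1.0e6, 100

# PAM over the whole band W0: symbol spacing 1/(2*W0), one real dof each.
pam_dof = T0 / (1 / (2 * W0))

# QAM in m bands of width W = W0/m: symbol period T = 1/W, two real dof each.
W = W0 / m
T = 1 / W
qam_dof = m * 2 * (T0 / T)

# Both ways of packing the time-frequency region give 2*T0*W0 real dof.
assert abs(pam_dof - 2 * T0 * W0) < 1
assert abs(qam_dof - pam_dof) < 1
```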
There's nothing else available, and PAM and QAM are perfectly efficient as far as their use of spectrum is concerned. They both do the same thing. How about single sideband? That does the same thing too. I mean, with single sideband, you're sending real signals but you cut off half the bandwidth at passband. So, again, you get the same number of signals per unit time and per unit bandwidth. And most systems, except double sideband quadrature carrier, which is just obviously wasting bandwidth, all do the same thing. So, it's not hard to use all of these available degrees of freedom. And the only thing that makes these degrees of freedom arguments complicated is, there's no way you can find a pulse which is perfectly limited to 1 over W in time and to W in bandwidth. If you make it limited in time, it spills out in bandwidth. If you make it limited in bandwidth, it spills out in time. So we always have to fudge a little bit as far as seeing how these things go together.

Let's talk about one sort of added practical detail that you always need in any QAM system.
In fact, you always need it in a PAM system, also, and you need to do it somehow or other. There's this question of: how do you make sure that the receiver is working on the same clock as the transmitter is working on? How do you make sure that when we talk about a cosine wave at the receiver, the phase of that cosine wave is, quote, the same as the phase at the transmitter? So, the transmitted waveform at the upper passband -- at positive frequencies -- is u(t) times e to the 2 pi i fc t. This is one of the reasons why you want to think in terms of complex notation. If you try to analyze QAM phase lock using cosines and sines -- I mean, you're welcome to do it, but it's just a terrible mess. If you look at it in terms of complex signals, it's a very easy problem. But it's easiest when you look at it in terms of just what's going on at positive frequencies. So the thing we're doing here is, when we modulate, we're taking u(t) and multiplying by e to the 2 pi i fc t. What happens to any arbitrary phase that we have in the transmitter?
Well, nothing happens to it. We just define time in such a way that it's not there. At the receiver, we're going to be multiplying again by e to the minus 2 pi i fc t. There's going to be some phase in there, because the receiver's clock and the transmitter's clock are not the same. Now, the thing which is complicated about this is that at the receiver there's some propagation delay from what there is at the transmitter. So what we're really asking here is: when the waveform u(t) gets to the receiver -- in other words, after we demodulate -- what do we get? That's the practical question to ask. I mean, you have an arbitrary complex sinusoid at the transmitter. You have an arbitrary complex sinusoid at the receiver. When you multiply by e to the 2 pi i fc t, and then at the receiver you demodulate by multiplying by e to the minus 2 pi i fc t, there's some added phase in there, just because you're multiplying by some sinusoid with an arbitrary phase. What do you wind up with when you get done with that? You get the received waveform, u(t) times e to the i phi. Now, suppose you're sending analog data.
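The effect of that residual phase phi is a rigid rotation of the whole constellation, which is easy to see numerically. A minimal sketch, assuming a standard 4-QAM constellation and an invented 10-degree phase error:

```python
import numpy as np

# Multiplying the recovered baseband signal by e^{i*phi} rotates every
# constellation point by the same angle; magnitudes are untouched.
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
phi = np.deg2rad(10.0)                      # illustrative phase error

received = constellation * np.exp(1j * phi)

assert np.allclose(np.abs(received), np.abs(constellation))
assert np.allclose(np.angle(received / constellation), phi)
```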
What happens here? Well, with analog data, you're absolutely up a creek without an oar. There's nothing you can do. But with digital data there is something you can do. And I think this diagram makes it clear, if you can see the little arrows on it. What you're transmitting is QAM signals out of a QAM constellation. So you're either transmitting this, or this, or this, or this, and so forth. Now, at the receiver -- we haven't analyzed noise yet; we're going to start doing that later today. But if you visualize noise being added up at passband and then coming down to baseband again, we can visualize what's happening as the noise just adding something to each one of these points. I mean, assuming that we know exactly what the timing is for the time being, so that we sample at exactly the right time at the receiver. And that's a separate issue. But let's assume that we do know what the timing is. We do sample at the correct time.
435 00:26:18,890 --> 00:26:21,900 And the only question we're trying to find out now is, how 436 00:26:21,900 --> 00:26:29,510 do we choose this phase right on this sinusoid that we're 437 00:26:29,510 --> 00:26:32,980 using at the demodulator. 438 00:26:32,980 --> 00:26:36,890 Because at the demodulator, there's no way to know whether 439 00:26:36,890 --> 00:26:39,620 phi is zero or whether phi is something else. 440 00:26:39,620 --> 00:26:42,520 But after we got all done doing this, if we sample at 441 00:26:42,520 --> 00:26:45,630 the right time, what we're going to receive, if there's 442 00:26:45,630 --> 00:26:51,280 some small error in phase, is that this diagram is all 443 00:26:51,280 --> 00:26:53,210 rotated around a little bit. 444 00:26:53,210 --> 00:26:57,220 Any time you send this point, what you receive is going to 445 00:26:57,220 --> 00:26:59,150 be a point up here. 446 00:26:59,150 --> 00:27:02,040 If you send this point it's going to be a point here. 447 00:27:02,040 --> 00:27:06,680 If phi is negative, everything will go the other way. How do 448 00:27:06,680 --> 00:27:09,870 I know whether it's positive phi or negative phi? 449 00:27:09,870 --> 00:27:10,480 I don't. 450 00:27:10,480 --> 00:27:13,070 I just do the same thing either place. 451 00:27:13,070 --> 00:27:16,510 Everybody I know who worries about phase locked loops 452 00:27:16,510 --> 00:27:20,650 ignores the sign and simply makes sure that they're using 453 00:27:20,650 --> 00:27:23,960 the same sign at the transmitter and the receiver. 454 00:27:23,960 --> 00:27:26,860 If they don't, when they build it, the thing goes out of lock 455 00:27:26,860 --> 00:27:27,690 immediately. 456 00:27:27,690 --> 00:27:31,710 And they try switching the sign, and if it works then, 457 00:27:31,710 --> 00:27:34,090 they say uh-huh, I now have the right sign. 458 00:27:34,090 --> 00:27:36,730 I mean, it's like these factors of two.
459 00:27:36,730 --> 00:27:40,220 You always put them in after you're done. 460 00:27:40,220 --> 00:27:42,640 Anyway, I think I've done it right here. 461 00:27:42,640 --> 00:27:46,010 So the received waveform is now going to be u of t times e 462 00:27:46,010 --> 00:27:47,020 to the i phi. 463 00:27:47,020 --> 00:27:49,970 It will have a little bit of noise added to it. 464 00:27:49,970 --> 00:27:55,280 So what we're getting, then, if you look at a scope, or 465 00:27:55,280 --> 00:27:58,460 whatever people use as a scope these days. 466 00:27:58,460 --> 00:28:02,190 And people have done this for so many years that they call 467 00:28:02,190 --> 00:28:04,330 this an eye pattern. 468 00:28:04,330 --> 00:28:07,690 And what they see, after looking at many, many received 469 00:28:07,690 --> 00:28:11,410 signals, is little blobs around each of these points 470 00:28:11,410 --> 00:28:12,710 due to noise. 471 00:28:12,710 --> 00:28:15,010 And they also see the set of points rotated 472 00:28:15,010 --> 00:28:16,850 around a little bit. 473 00:28:16,850 --> 00:28:20,830 And if the noise is small, and if the phase error is small, 474 00:28:20,830 --> 00:28:25,700 you can actually see this diagram rotated around sitting 475 00:28:25,700 --> 00:28:28,100 at some orientation. 476 00:28:28,100 --> 00:28:32,060 And the question is, if you know that's what's happening, 477 00:28:32,060 --> 00:28:33,940 what do you do about it? 478 00:28:33,940 --> 00:28:37,800 Well, if you have this diagram rotated around a little bit, 479 00:28:37,800 --> 00:28:43,820 you want to change the phase of your demodulating carrier. 480 00:28:43,820 --> 00:28:49,530 So the thing you do is, you measure the errors that you're 481 00:28:49,530 --> 00:28:52,840 making when you go from these received signals. 482 00:28:52,840 --> 00:28:55,410 And you make decisions on them. 483 00:28:55,410 --> 00:28:58,120 You then look at the phase of these errors.
484 00:28:58,120 --> 00:29:02,960 If all the phases are positive, then you know that 485 00:29:02,960 --> 00:29:08,190 you have to change the phase of this carrier one way. 486 00:29:08,190 --> 00:29:10,730 And if all the phases are negative, you change 487 00:29:10,730 --> 00:29:11,900 it the other way. 488 00:29:11,900 --> 00:29:13,610 Now, why does this work? 489 00:29:13,610 --> 00:29:17,060 It works because these clocks are very stable clocks. 490 00:29:17,060 --> 00:29:19,170 They wander in phase slowly. 491 00:29:19,170 --> 00:29:21,580 I mean, you can't lock them perfectly. 492 00:29:21,580 --> 00:29:24,890 And since they wander in phase slowly, 493 00:29:24,890 --> 00:29:29,070 and since you're receiving data quickly, you can average 494 00:29:29,070 --> 00:29:33,200 this carrier recovery over many, many data symbols. 495 00:29:33,200 --> 00:29:34,750 Which is what people do. 496 00:29:34,750 --> 00:29:36,700 So, you average it over many symbols. 497 00:29:36,700 --> 00:29:39,930 And you gradually change the phase as the 498 00:29:39,930 --> 00:29:43,260 phase is getting off. 499 00:29:43,260 --> 00:29:46,990 How much trouble does that cause in trying to actually 500 00:29:46,990 --> 00:29:49,680 receive the data? 501 00:29:49,680 --> 00:29:51,040 Hardly any. 502 00:29:51,040 --> 00:29:53,720 Because if the phase changes are slow enough, then you 503 00:29:53,720 --> 00:29:58,880 average over a long enough interval, the actual phase 504 00:29:58,880 --> 00:30:02,130 errors are going to be extraordinarily small. 505 00:30:02,130 --> 00:30:06,800 So it's the noise errors that look big and the phase errors 506 00:30:06,800 --> 00:30:08,400 that look very small. 507 00:30:08,400 --> 00:30:13,670 When you average over a long period of time, the noise is 508 00:30:13,670 --> 00:30:16,470 sort of independent from one period to another. 509 00:30:16,470 --> 00:30:18,410 So the noise averages out.
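The decision-directed averaging described above can be sketched in a few lines of code. Everything here is illustrative -- a hypothetical 4-point QAM constellation and made-up noise and loop-gain values -- a toy version of the idea, not a real phase-locked-loop design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-point QAM constellation; all parameter values are made up.
constellation = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

n_symbols = 2000
tx = rng.choice(constellation, n_symbols)
true_phase = 0.15        # slowly wandering phase, held constant for the sketch
noise_std = 0.05
rx = tx * np.exp(1j * true_phase) + (
    rng.normal(scale=noise_std, size=n_symbols)
    + 1j * rng.normal(scale=noise_std, size=n_symbols))

phase_est = 0.0
gain = 0.01              # small loop gain: averages over many, many symbols
for r in rx:
    r_corr = r * np.exp(-1j * phase_est)          # demodulate with the estimate
    decision = constellation[np.argmin(np.abs(constellation - r_corr))]
    error = np.angle(r_corr * np.conj(decision))  # phase of the decision error
    phase_est += gain * error                     # nudge the estimate slowly

print(phase_est)  # settles near true_phase
```

Because the gain is small, the estimate moves slowly and occasional wrong decisions average out, which is exactly the slow-phase, fast-data argument in the lecture.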
510 00:30:18,410 --> 00:30:21,270 And the thing you see, then, is the phase errors. 511 00:30:21,270 --> 00:30:26,090 So, so long as the phase is slow and the noise is quick, 512 00:30:26,090 --> 00:30:28,560 you first decode the data. 513 00:30:28,560 --> 00:30:32,820 After you decode the data, you average over what the phase 514 00:30:32,820 --> 00:30:33,890 errors are. 515 00:30:33,890 --> 00:30:37,330 And then you go and you change the clock. 516 00:30:37,330 --> 00:30:41,110 And people spend their lives designing how to make these 517 00:30:41,110 --> 00:30:42,190 phase lock loops. 518 00:30:42,190 --> 00:30:47,340 And there's a lot of interesting technology there. 519 00:30:47,340 --> 00:30:51,280 But the point is, the idea is almost trivially simple. 520 00:30:51,280 --> 00:30:53,390 And this is what it is. 521 00:30:53,390 --> 00:30:57,760 Since the phase is changing slowly, you can change the 522 00:30:57,760 --> 00:30:58,970 phase slowly. 523 00:30:58,970 --> 00:31:01,800 You can keep the phase error very, very small. 524 00:31:01,800 --> 00:31:06,350 And therefore, the data recovery due to the noise 525 00:31:06,350 --> 00:31:11,290 works almost as well as if you had no phase error at all. 526 00:31:11,290 --> 00:31:14,790 How do you recover from frequency errors? 527 00:31:14,790 --> 00:31:17,330 Well, if I can recover from phase errors, I'm recovering 528 00:31:17,330 --> 00:31:18,970 from frequency errors also. 529 00:31:18,970 --> 00:31:22,800 Because I had this clock which is running around. 530 00:31:22,800 --> 00:31:26,030 And I'm slaving the clock at the receiver to the clock at 531 00:31:26,030 --> 00:31:27,880 the transmitter. 532 00:31:27,880 --> 00:31:30,900 So, so long as I have very little frequency error to 533 00:31:30,900 --> 00:31:37,250 start with, simply by keeping this phase right, I can also 534 00:31:37,250 --> 00:31:38,950 keep the carrier frequency right. 
535 00:31:38,950 --> 00:31:42,170 Because the phase is just going around at the speed it's 536 00:31:42,170 --> 00:31:44,010 supposed to be going around. 537 00:31:44,010 --> 00:31:48,985 So this will also serve as a frequency lock as well as a 538 00:31:48,985 --> 00:31:50,090 phase lock. 539 00:31:50,090 --> 00:31:52,660 When you're trying to do both together, you build the phase 540 00:31:52,660 --> 00:31:57,150 lock loop a little differently, obviously. 541 00:31:57,150 --> 00:31:58,620 What's the problem with this? 542 00:32:01,300 --> 00:32:06,260 You look at this diagram, and you say, OK, suppose there's some 543 00:32:06,260 --> 00:32:10,510 huge error where for a long period of time, things get 544 00:32:10,510 --> 00:32:12,010 very noisy. 545 00:32:12,010 --> 00:32:15,030 What's going to happen? 546 00:32:15,030 --> 00:32:18,690 Well, if you can't decode the data for a long period of 547 00:32:18,690 --> 00:32:23,990 time, suddenly these phase errors build up. 548 00:32:23,990 --> 00:32:26,020 The channel stops being noisy. 549 00:32:26,020 --> 00:32:30,430 And what could have happened is that your phase could be 550 00:32:30,430 --> 00:32:32,190 off by pi over 2. 551 00:32:32,190 --> 00:32:36,360 Which means what you see, what you should be seeing is this. 552 00:32:36,360 --> 00:32:38,910 What you're actually seeing is this. 553 00:32:38,910 --> 00:32:41,590 You can't tell the difference. 554 00:32:41,590 --> 00:32:45,050 In other words, you decode every symbol wrong. 555 00:32:45,050 --> 00:32:47,710 Because you're making an error of pi over 2 in 556 00:32:47,710 --> 00:32:48,810 every one of them. 557 00:32:48,810 --> 00:32:52,420 You decode this as this, you decode this as this, this as 558 00:32:52,420 --> 00:32:53,640 this, and so forth. 559 00:32:53,640 --> 00:32:55,610 So every point is wrong. 560 00:32:58,500 --> 00:33:02,200 What will you do to recover from this?
561 00:33:02,200 --> 00:33:05,220 I mean, suppose you were designing a system where in 562 00:33:05,220 --> 00:33:08,970 fact you had no feedback to the transmitter at all. 563 00:33:08,970 --> 00:33:12,080 If you have feedback to the transmitter, you just, 564 00:33:12,080 --> 00:33:14,580 obviously, every once in a while you stop and you 565 00:33:14,580 --> 00:33:16,430 resynchronize. 566 00:33:16,430 --> 00:33:18,220 If you don't have any feedback, what do you do? 567 00:33:20,990 --> 00:33:21,310 Yeah? 568 00:33:21,310 --> 00:33:23,950 AUDIENCE: [INAUDIBLE] 569 00:33:23,950 --> 00:33:26,260 PROFESSOR: Differential phase shift keying, yes. 570 00:33:26,260 --> 00:33:30,280 I mean, this isn't quite phase shift keying, but 571 00:33:30,280 --> 00:33:32,070 it's the same idea. 572 00:33:32,070 --> 00:33:36,000 In other words, the thing that you do is, you don't -- 573 00:33:36,000 --> 00:33:39,720 usually when the data is coming in, you don't map it 574 00:33:39,720 --> 00:33:42,010 directly into a signal point. 575 00:33:42,010 --> 00:33:46,770 The thing that you do is map the data into a phase change 576 00:33:46,770 --> 00:33:49,310 from one period of time to the next. 577 00:33:49,310 --> 00:33:51,630 I mean, you map it into an amplitude. 578 00:33:51,630 --> 00:33:54,900 And then you map it into a change of phase 579 00:33:54,900 --> 00:33:56,330 in this set of points. 580 00:33:56,330 --> 00:33:59,580 Or change of phase within this set of points. 581 00:33:59,580 --> 00:34:02,990 Or a change of phase within the outer set of points. 582 00:34:02,990 --> 00:34:06,900 And then if, due to some disaster, you get off by pi 583 00:34:06,900 --> 00:34:10,070 over 2, it won't hurt you. 584 00:34:10,070 --> 00:34:13,520 Because the changes are still the same, except for the 585 00:34:13,520 --> 00:34:16,560 errors that you make when this process starts.
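The differential idea raised in the question can be sketched as follows. The mapping and values are illustrative, not the lecture's constellation: each data symbol selects a change of phase, so a constant rotation of the whole received stream, such as the pi over 2 disaster described above, cancels when successive points are compared:

```python
import numpy as np

# Hypothetical differential mapping: each 2-bit symbol selects a phase *change*.
phase_steps = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def diff_encode(symbols):
    phase = 0.0
    out = []
    for s in symbols:
        phase += phase_steps[s]          # data rides on the change of phase
        out.append(np.exp(1j * phase))
    return np.array(out)

def diff_decode(rx):
    # Each decision compares a received point with the previous one, so any
    # constant rotation e^{i phi} of the whole stream divides out.
    deltas = rx[1:] * np.conj(rx[:-1])
    refs = np.exp(1j * phase_steps)
    return np.argmin(np.abs(deltas[:, None] - refs[None, :]), axis=1)

data = np.array([0, 1, 3, 2, 1, 0, 2])
tx = diff_encode(data)
rx = tx * np.exp(1j * np.pi / 2)   # receiver phase stuck pi/2 off
print(diff_decode(rx))             # recovers data[1:] despite the rotation
```

A real system would send a known reference point first so the initial symbol is decodable too, and it pays a small noise penalty since each decision involves two noisy points.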
586 00:34:19,740 --> 00:34:22,000 So in fact, a lot of these things that people have been 587 00:34:22,000 --> 00:34:25,380 doing for years, you would think of almost immediately. 588 00:34:25,380 --> 00:34:29,220 The only thing you have to be careful about, when you invent 589 00:34:29,220 --> 00:34:32,620 new things to build new systems, is you have to have 590 00:34:32,620 --> 00:34:34,480 somebody to check whether somebody else 591 00:34:34,480 --> 00:34:35,860 has a patent on it. 592 00:34:35,860 --> 00:34:39,140 So, you invent something and then you check whether 593 00:34:39,140 --> 00:34:40,390 somebody else patented it. 594 00:34:45,820 --> 00:34:47,630 So that's what this slide says. 595 00:34:47,630 --> 00:34:48,880 Bingo. 596 00:34:52,250 --> 00:34:58,800 It's time to go on to talk about additive noise and 597 00:34:58,800 --> 00:35:01,170 random processes. 598 00:35:01,170 --> 00:35:04,180 How many of you have seen random processes before? 599 00:35:06,990 --> 00:35:10,030 How many of you have seen it in a graduate context as 600 00:35:10,030 --> 00:35:13,090 opposed to an undergraduate context? 601 00:35:13,090 --> 00:35:14,560 A smaller number. 602 00:35:17,880 --> 00:35:21,620 Many of you, if you got your degrees outside of the US, if 603 00:35:21,620 --> 00:35:24,680 you've studied this as an undergraduate, you probably 604 00:35:24,680 --> 00:35:30,680 know what students here have learned in their various 605 00:35:30,680 --> 00:35:31,280 graduate courses. 606 00:35:31,280 --> 00:35:34,210 So you might know a lot more about it. 607 00:35:34,210 --> 00:35:38,260 What we're going to do here is I'll try to teach you just 608 00:35:38,260 --> 00:35:42,370 enough about random processes that you can figure out what's 609 00:35:42,370 --> 00:35:45,830 going on with the kinds of noise that we're 610 00:35:45,830 --> 00:35:48,430 looking at this term. 611 00:35:48,430 --> 00:35:50,440 Which involves two kinds of noise. 
612 00:35:50,440 --> 00:35:53,300 One, something called additive noise, which is what we're 613 00:35:53,300 --> 00:35:56,060 going to be dealing with now. 614 00:35:56,060 --> 00:35:58,690 And the other is much later, when we start studying 615 00:35:58,690 --> 00:36:02,810 wireless systems, we have to deal with a 616 00:36:02,810 --> 00:36:03,880 different kind of noise. 617 00:36:03,880 --> 00:36:06,890 People sometimes call it multiplicative noise, but 618 00:36:06,890 --> 00:36:11,740 really what it is, is these channels fade in various ways. 619 00:36:11,740 --> 00:36:16,030 And the fading is a random phenomenon. 620 00:36:16,030 --> 00:36:20,350 And therefore we have to learn how to deal with that in some 621 00:36:20,350 --> 00:36:24,710 kind of probabilistic way also. 622 00:36:24,710 --> 00:36:29,480 So the thing we're going to do is we're going to visualize 623 00:36:29,480 --> 00:36:35,290 what gets received as what gets transmitted plus noise. 624 00:36:35,290 --> 00:36:41,600 Now, at this point, I've said OK, we know how to deal with 625 00:36:41,600 --> 00:36:44,200 compensating for phase errors. 626 00:36:44,200 --> 00:36:46,080 So we'll assume that away, it's our 627 00:36:46,080 --> 00:36:49,270 usual layered approach. 628 00:36:49,270 --> 00:36:52,750 What kind of other assumptions are we making here? 629 00:36:52,750 --> 00:36:55,740 When I say the received signal is the transmitted 630 00:36:55,740 --> 00:36:59,320 signal plus the noise. 631 00:36:59,320 --> 00:37:02,820 Well, I mean, I could look at this and say, this doesn't 632 00:37:02,820 --> 00:37:04,770 tell me anything. 633 00:37:04,770 --> 00:37:07,260 This is simply defining the noise. 634 00:37:07,260 --> 00:37:08,810 It's the difference between what I 635 00:37:08,810 --> 00:37:12,830 receive and what I transmit.
636 00:37:12,830 --> 00:37:17,710 So when I say this is additive noise, I'm not saying anything 637 00:37:17,710 --> 00:37:21,590 other than the definition of the noise, is this difference. 638 00:37:24,220 --> 00:37:26,900 But when we look at this, we're really going to be 639 00:37:26,900 --> 00:37:33,000 interested in trying to analyze what this process is. 640 00:37:33,000 --> 00:37:35,230 And we're going to be interested in analyzing what 641 00:37:35,230 --> 00:37:39,840 this is as a process also. 642 00:37:39,840 --> 00:37:43,220 Here, I'm just looking at sample functions. 643 00:37:43,220 --> 00:37:46,900 And what I'd like to be able to do is say this sample 644 00:37:46,900 --> 00:37:51,030 function, namely, what's going into the channel, is one of a 645 00:37:51,030 --> 00:37:53,830 large set of possibilities. 646 00:37:53,830 --> 00:37:56,690 If it weren't one of a large set of possibilities, there 647 00:37:56,690 --> 00:37:59,420 would be no sense at all to transmitting it, because the 648 00:37:59,420 --> 00:38:02,010 receiver would know what it was going to be. 649 00:38:02,010 --> 00:38:04,190 And you wouldn't have to transmit it, it would just 650 00:38:04,190 --> 00:38:07,580 print it out at the output. 651 00:38:07,580 --> 00:38:11,080 So, this is some kind of random process which is 652 00:38:11,080 --> 00:38:18,640 determined by the binary data which is coming into the 653 00:38:18,640 --> 00:38:21,380 encoder and the signals it gets mapped 654 00:38:21,380 --> 00:38:23,610 into, and so forth. 655 00:38:23,610 --> 00:38:28,060 This we're viewing as some kind of noise waveform. 656 00:38:28,060 --> 00:38:32,070 What we're going to assume is that in fact these two random 657 00:38:32,070 --> 00:38:34,760 phenomena are independent of each other. 658 00:38:37,380 --> 00:38:43,030 And when I write this, you would believe that if nobody 659 00:38:43,030 --> 00:38:45,470 told you to think about it.
660 00:38:45,470 --> 00:38:48,050 Because it just looks like that's what it's saying. 661 00:38:48,050 --> 00:38:52,070 Here's some noise phenomenon that's added to the signal 662 00:38:52,070 --> 00:38:53,460 phenomenon. 663 00:38:53,460 --> 00:38:57,470 If in fact I told you, well, this noise here happens to be 664 00:38:57,470 --> 00:39:01,400 twice the input, then you'd say, foul. 665 00:39:01,400 --> 00:39:04,590 What I'm really doing is calling something noise which 666 00:39:04,590 --> 00:39:07,530 is really just a scaling of the input. 667 00:39:07,530 --> 00:39:10,830 Which is really a more trivial problem. 668 00:39:10,830 --> 00:39:14,000 So we don't want to do that. 669 00:39:14,000 --> 00:39:17,240 Another thing I could try to do, and people tried to do for 670 00:39:17,240 --> 00:39:21,650 a long time in the communication field, was to 671 00:39:21,650 --> 00:39:27,500 say, well, studying random processes is too complicated. 672 00:39:27,500 --> 00:39:30,870 Let me just say there's a range of possibilities, 673 00:39:30,870 --> 00:39:33,720 possible things for what I transmit. 674 00:39:33,720 --> 00:39:37,770 There's a range of possible waveforms for the noise. 675 00:39:37,770 --> 00:39:41,340 And I want to make a system that works for both these 676 00:39:41,340 --> 00:39:44,750 ranges of different things. 677 00:39:44,750 --> 00:39:47,960 And it just turns out that that doesn't work. 678 00:39:47,960 --> 00:39:50,610 It works pretty well for the transmitter. 679 00:39:50,610 --> 00:39:52,180 It doesn't work at all for the noise. 680 00:39:52,180 --> 00:39:55,300 Because sometimes the noise is what it should be. 681 00:39:55,300 --> 00:39:58,880 And sometimes the noise is abnormally large.
682 00:39:58,880 --> 00:40:01,310 And when you're trying to decide what kind of coding 683 00:40:01,310 --> 00:40:05,540 devices you want to use, what kind of modulation you want to 684 00:40:05,540 --> 00:40:08,840 use, and all these other questions, you really need to 685 00:40:08,840 --> 00:40:13,440 know something about the probabilistic structure here. 686 00:40:13,440 --> 00:40:15,610 So we're going to view this as a sample 687 00:40:15,610 --> 00:40:17,760 function of a random process. 688 00:40:17,760 --> 00:40:20,110 And that means we're going to have to say what a random 689 00:40:20,110 --> 00:40:21,660 process is. 690 00:40:21,660 --> 00:40:24,140 And so far, the only thing we're going to say is, when we 691 00:40:24,140 --> 00:40:27,620 talk about it as a random process, we'll call it capital 692 00:40:27,620 --> 00:40:30,520 N of t instead of little n of t. 693 00:40:30,520 --> 00:40:35,770 And the idea there is that for every time, for every instant 694 00:40:35,770 --> 00:40:38,560 of time, when you're talking about random processes, you 695 00:40:38,560 --> 00:40:42,170 usually call instants of time epochs. 696 00:40:42,170 --> 00:40:44,680 And you call them epochs because you're using time for 697 00:40:44,680 --> 00:40:46,850 too many different things. 698 00:40:46,850 --> 00:40:49,720 And calling it an epoch helps you keep straight what you're 699 00:40:49,720 --> 00:40:51,120 talking about. 700 00:40:51,120 --> 00:40:57,690 So for each epoch t, N of t is a random variable. 701 00:40:57,690 --> 00:41:02,490 And this little n of t is just some sample value of that 702 00:41:02,490 --> 00:41:05,520 random variable. 703 00:41:05,520 --> 00:41:09,140 So we're going to assume that we have some probabilistic 704 00:41:09,140 --> 00:41:12,340 description of this very large 705 00:41:12,340 --> 00:41:14,910 collection of random variables.
706 00:41:14,910 --> 00:41:17,180 And the thing which makes this a little bit tricky, 707 00:41:17,180 --> 00:41:20,200 mathematically, is that we have an uncountably infinite number 708 00:41:20,200 --> 00:41:23,450 of random variables to worry about here. 709 00:41:25,990 --> 00:41:28,220 x sub t is known at the transmitter but 710 00:41:28,220 --> 00:41:30,880 unknown at the receiver. 711 00:41:30,880 --> 00:41:34,020 And what we're eventually concerned about is, how does 712 00:41:34,020 --> 00:41:36,850 the receiver figure out what was transmitted. 713 00:41:36,850 --> 00:41:41,750 So we'd better view x of t as a sample 714 00:41:41,750 --> 00:41:44,230 function of a random process. 715 00:41:44,230 --> 00:41:47,260 x of t, from the receiver's viewpoint. 716 00:41:47,260 --> 00:41:49,160 Namely, the transmitter knows what it is. 717 00:41:49,160 --> 00:41:54,000 This is this continual problem we run into with probability. 718 00:41:54,000 --> 00:41:59,310 When you define a probability model for something, it always 719 00:41:59,310 --> 00:42:04,020 depends on whose viewpoint you're taking. 720 00:42:04,020 --> 00:42:07,540 I mean you could view my voice waveform as a random process. 721 00:42:07,540 --> 00:42:10,390 And in a sense it is. 722 00:42:10,390 --> 00:42:13,130 As far as I'm concerned it's almost a random process, 723 00:42:13,130 --> 00:42:15,840 because I don't always know what I'm saying. 724 00:42:15,840 --> 00:42:17,750 But as far as you're concerned, it's much more of a 725 00:42:17,750 --> 00:42:22,710 random process than it is from my viewpoint. 726 00:42:22,710 --> 00:42:25,940 If you understand the English language, which all of you do, 727 00:42:25,940 --> 00:42:30,310 fortunately, it's one kind of random process. 728 00:42:30,310 --> 00:42:34,670 If you don't understand that, it's just some noise waveform 729 00:42:34,670 --> 00:42:36,890 where you need a much more complicated model 730 00:42:36,890 --> 00:42:38,070 to deal with it.
731 00:42:38,070 --> 00:42:44,610 So everything we have to do here is based on these models 732 00:42:44,610 --> 00:42:49,020 of these random processes, which are based on reasonable 733 00:42:49,020 --> 00:42:51,650 assumptions of what we're trying to do with the model. 734 00:42:57,780 --> 00:43:01,340 When we do this in terms of random processes, we have the 735 00:43:01,340 --> 00:43:06,400 input random process, and we have the noise random process. 736 00:43:06,400 --> 00:43:09,540 And if we add up two random processes, it doesn't take 737 00:43:09,540 --> 00:43:13,280 much imagination to figure out that what we get is another 738 00:43:13,280 --> 00:43:14,330 random process. 739 00:43:14,330 --> 00:43:19,230 It's random both because of this and because of this. 740 00:43:19,230 --> 00:43:22,540 Now, we implicitly are assuming a whole bunch of 741 00:43:22,540 --> 00:43:25,280 things when we write things out this way. 742 00:43:25,280 --> 00:43:28,820 We're not explicitly assuming them, but what we want to do 743 00:43:28,820 --> 00:43:31,950 is explain the fact that we're implicitly assuming them and 744 00:43:31,950 --> 00:43:34,430 then make it explicit. 745 00:43:34,430 --> 00:43:39,360 One of those things that we're doing here is, we're avoiding 746 00:43:39,360 --> 00:43:43,290 all attenuation in what's been transmitted. 747 00:43:43,290 --> 00:43:44,800 We've been doing that all along. 748 00:43:44,800 --> 00:43:48,270 We've been talking about received waveforms as being 749 00:43:48,270 --> 00:43:50,730 equal to the transmitted waveform. 750 00:43:50,730 --> 00:43:53,220 Which means that we're assuming that the receiver 751 00:43:53,220 --> 00:43:55,540 knows what the attenuation is. 752 00:43:55,540 --> 00:43:58,990 And if it knows what the attenuation is, there's no 753 00:43:58,990 --> 00:44:02,510 point in putting in this scale factor everywhere. 
754 00:44:02,510 --> 00:44:05,340 So we might as well look at what's being transmitted as 755 00:44:05,340 --> 00:44:08,640 being the same as what's being received, subject to this 756 00:44:08,640 --> 00:44:11,940 attenuation, which is just some known factor which we're 757 00:44:11,940 --> 00:44:13,900 not going to consider. 758 00:44:13,900 --> 00:44:16,690 There's also this question of delay. 759 00:44:16,690 --> 00:44:21,770 And we're assuming that the delay is known perfectly. 760 00:44:21,770 --> 00:44:24,480 So that assumption is built into here. 761 00:44:24,480 --> 00:44:30,550 y of t is x of t without delay, plus z of t. 762 00:44:30,550 --> 00:44:33,790 And now, after studying these modulation things, we can add 763 00:44:33,790 --> 00:44:38,180 a few more things which are implicitly put into here. 764 00:44:38,180 --> 00:44:41,580 One of them is that we know what the received phase is. 765 00:44:41,580 --> 00:44:43,600 So that in fact we don't have any phase 766 00:44:43,600 --> 00:44:45,290 error on all of this. 767 00:44:45,290 --> 00:44:48,820 So we're saying, let's get rid of that also. 768 00:44:48,820 --> 00:44:53,260 So we have no timing error, no phase error. 769 00:44:53,260 --> 00:44:56,250 All of these errors are disappearing because what we 770 00:44:56,250 --> 00:45:00,030 want to focus on is just the errors that come from this 771 00:45:00,030 --> 00:45:00,860 additive noise. 772 00:45:00,860 --> 00:45:03,100 Whatever the additive noise is. 773 00:45:03,100 --> 00:45:05,750 But we want to assume that this additive noise is 774 00:45:05,750 --> 00:45:08,100 independent of this process. 775 00:45:08,100 --> 00:45:12,360 So we're saying, when we know all of these things about -- 776 00:45:12,360 --> 00:45:15,690 or when the receiver knows all of these things about what's 777 00:45:15,690 --> 00:45:21,050 being transmitted, except for the actual noise waveform, how 778 00:45:21,050 --> 00:45:22,300 do we deal with the problem? 
779 00:45:27,760 --> 00:45:31,530 We implicitly mean that z of t is independent of x of t, 780 00:45:31,530 --> 00:45:38,230 because otherwise it would be just very misleading to define 781 00:45:38,230 --> 00:45:42,110 z of t as the difference between what you receive and 782 00:45:42,110 --> 00:45:43,570 what you transmit. 783 00:45:43,570 --> 00:45:46,740 So we're going to make that assumption explicit here. 784 00:45:46,740 --> 00:45:50,230 These are standing assumptions until we start to study 785 00:45:50,230 --> 00:45:52,300 wireless systems. 786 00:45:52,300 --> 00:45:56,910 And another standing assumption which we'll make is 787 00:45:56,910 --> 00:45:58,930 the kind of assumption we've been making in 788 00:45:58,930 --> 00:46:00,800 this course all along. 789 00:46:00,800 --> 00:46:05,300 Which some people like, usually theoreticians. 790 00:46:05,300 --> 00:46:11,070 And some people often don't like, namely practitioners. 791 00:46:11,070 --> 00:46:16,140 But which if both the theoreticians and 792 00:46:16,140 --> 00:46:20,580 practitioners understood better what the game was, I 793 00:46:20,580 --> 00:46:23,450 think they would agree to doing things this way. 794 00:46:23,450 --> 00:46:28,080 And what we're going to do here is, we're not going to 795 00:46:28,080 --> 00:46:31,400 think of what happens when people make measurements of 796 00:46:31,400 --> 00:46:35,680 this noise and then try to create a stochastic model of 797 00:46:35,680 --> 00:46:38,480 it from all of these measurements. 798 00:46:38,480 --> 00:46:43,160 What we're going to do is more in the light of what Shannon 799 00:46:43,160 --> 00:46:45,900 did when he invented information theory. 800 00:46:45,900 --> 00:46:49,590 Which is to say, let's try to understand the very simplest 801 00:46:49,590 --> 00:46:51,660 situations first.
802 00:46:51,660 --> 00:46:54,920 If we can understand those very simplest situations -- 803 00:46:54,920 --> 00:46:56,600 namely, we will invent whatever 804 00:46:56,600 --> 00:46:58,540 noise we want to invent, 805 00:46:58,540 --> 00:47:02,100 to start to understand what noise does to communication. 806 00:47:02,100 --> 00:47:05,310 And we will look at the simplest processes we can. 807 00:47:05,310 --> 00:47:08,370 Which seem to make any sense at all to us. 808 00:47:08,370 --> 00:47:12,040 And then after we understand a few of those, then we will 809 00:47:12,040 --> 00:47:16,470 say, OK, what happens if there's some particular 810 00:47:16,470 --> 00:47:19,580 phenomenon going on in this channel. 811 00:47:19,580 --> 00:47:23,940 And when you look at what practitioners do, what in fact 812 00:47:23,940 --> 00:47:26,680 they do is, everything they do is based on 813 00:47:26,680 --> 00:47:29,100 various rules of thumb. 814 00:47:29,100 --> 00:47:31,910 Where do these rules of thumb come from? 815 00:47:31,910 --> 00:47:36,200 They come from the theoretical papers that were written 50 816 00:47:36,200 --> 00:47:39,340 years before, if they're very out of date practitioners. 817 00:47:39,340 --> 00:47:42,670 25 years before if they're relatively up to date 818 00:47:42,670 --> 00:47:44,510 practitioners. 819 00:47:44,510 --> 00:47:48,430 Or one year before if they're really reading all the 820 00:47:48,430 --> 00:47:51,190 literature as it comes out. 821 00:47:51,190 --> 00:47:54,430 But rules of thumb usually aren't based on what's 822 00:47:54,430 --> 00:47:57,320 happening as far as papers are concerned now. 823 00:47:57,320 --> 00:48:00,560 It's usually based on some composite of what people have 824 00:48:00,560 --> 00:48:03,470 understood over a large number of years. 825 00:48:03,470 --> 00:48:06,580 So in fact, the way you understand the subject is to 826 00:48:06,580 --> 00:48:09,320 create these toy models.
827 00:48:09,320 --> 00:48:10,660 And the toy models are what we're going 828 00:48:10,660 --> 00:48:12,360 to be creating now. 829 00:48:12,360 --> 00:48:15,910 Understand the toy models, and then move from the toy models 830 00:48:15,910 --> 00:48:17,670 to trying to understand other things. 831 00:48:17,670 --> 00:48:20,740 This is the process we'll be going through when we look at 832 00:48:20,740 --> 00:48:25,550 this additive noise channel, which doesn't make much sense 833 00:48:25,550 --> 00:48:28,440 for wireless channels, as it turns out. 834 00:48:28,440 --> 00:48:32,370 But it's part of what you deal with on a wireless channel. 835 00:48:32,370 --> 00:48:35,330 And then, on a wireless channel, we'll say, well, are 836 00:48:35,330 --> 00:48:38,800 these other effects of concern also? 837 00:48:38,800 --> 00:48:41,650 And we'll try to deal with these other effects as an 838 00:48:41,650 --> 00:48:44,460 addition to the effects that we know how to deal with. 839 00:48:44,460 --> 00:48:49,460 We've already done that as far as phase noise is concerned. 840 00:48:49,460 --> 00:48:52,720 Namely, we've already said that since phase effects occur 841 00:48:52,720 --> 00:48:56,620 slowly, we can in fact get rid of them almost entirely. 842 00:48:56,620 --> 00:49:00,020 And therefore we don't have to worry about them here. 843 00:49:00,020 --> 00:49:03,930 So, in fact we have done what we've said that we believe 844 00:49:03,930 --> 00:49:04,760 ought to be done. 845 00:49:04,760 --> 00:49:08,460 Namely, since we now know how to get rid of the phase noise, 846 00:49:08,460 --> 00:49:12,360 we're now looking at additive noise and saying, how can we 847 00:49:12,360 --> 00:49:13,410 get rid of that. 848 00:49:13,410 --> 00:49:16,170 Actually, if you look at the argument we went through with 849 00:49:16,170 --> 00:49:19,680 phase noise, we were really assuming that we knew how to 850 00:49:19,680 --> 00:49:24,020 deal with channel noise as part of that.
851 00:49:24,020 --> 00:49:26,670 And in fact when you get done knowing how to deal with 852 00:49:26,670 --> 00:49:30,460 channel noise, you can go back to that 853 00:49:30,460 --> 00:49:32,480 phase locked loop argument. 854 00:49:32,480 --> 00:49:35,100 And you'll see a little more clearly, or a little more 855 00:49:35,100 --> 00:49:36,760 precisely, what's going on there. 856 00:49:47,070 --> 00:49:51,660 When we started talking about waveforms, if you remember 857 00:49:51,660 --> 00:49:55,830 back to when we were talking about compression, we really 858 00:49:55,830 --> 00:50:00,240 faked this issue of random processes. 859 00:50:00,240 --> 00:50:04,140 If you remember the thing we did, everything was random 860 00:50:04,140 --> 00:50:08,980 while we were talking about taking discrete sources and 861 00:50:08,980 --> 00:50:10,870 coding for them. 862 00:50:10,870 --> 00:50:14,810 When we then looked at how do you do quantization on 863 00:50:14,810 --> 00:50:19,370 sequences of real numbers or sequences of complex numbers, 864 00:50:19,370 --> 00:50:22,440 we again took a probabilistic model. 865 00:50:22,440 --> 00:50:25,830 And we determined how to do quantization in terms of that 866 00:50:25,830 --> 00:50:28,280 probabilistic model. 867 00:50:28,280 --> 00:50:30,360 And in terms of that probabilistic model, we 868 00:50:30,360 --> 00:50:32,900 figured out what ought to be done. 869 00:50:32,900 --> 00:50:35,230 And then, if you remember, when we went to the problem of 870 00:50:35,230 --> 00:50:40,750 going from waveforms to sequences, and our pattern 871 00:50:40,750 --> 00:50:42,770 was, take a waveform. 872 00:50:42,770 --> 00:50:44,460 Turn it into a sequence. 873 00:50:44,460 --> 00:50:49,920 Quantize the sequence, then encode the digital sequence. 874 00:50:49,920 --> 00:50:52,020 We cheated. 875 00:50:52,020 --> 00:50:56,980 Because we didn't talk at all about random processes there. 
876 00:50:56,980 --> 00:51:01,630 And the thing we did instead is, we simply converted source 877 00:51:01,630 --> 00:51:06,250 waveforms to sequences and said that only the 878 00:51:06,250 --> 00:51:10,660 probabilistic description of the sequence is relevant. 879 00:51:10,660 --> 00:51:12,400 And, you know, that's actually true. 880 00:51:12,400 --> 00:51:15,450 Because so long as you decide that what you're going to do 881 00:51:15,450 --> 00:51:19,540 is go from waveforms to sequences, and in fact you 882 00:51:19,540 --> 00:51:22,220 have decided on which particular orthonormal 883 00:51:22,220 --> 00:51:25,460 expansion you're going to use, then everything from there on 884 00:51:25,460 --> 00:51:30,120 is determined just by what you know about the probabilistic 885 00:51:30,120 --> 00:51:32,870 description of that sequence. 886 00:51:32,870 --> 00:51:35,150 And everything else is irrelevant. 887 00:51:35,150 --> 00:51:38,450 I mean, each waveform maps into a sequence. 888 00:51:38,450 --> 00:51:41,840 When you're going back, sequences map into waveforms, 889 00:51:41,840 --> 00:51:45,390 at least in this L2 sense, which is what determines 890 00:51:45,390 --> 00:51:46,830 energy difference. 891 00:51:46,830 --> 00:51:49,670 And energy differences are usually the criterion for whether 892 00:51:49,670 --> 00:51:52,440 we've done a good job or not. 893 00:51:52,440 --> 00:51:55,980 So, in fact, we could avoid the problem that way simply by 894 00:51:55,980 --> 00:51:59,570 looking at the sequences. 895 00:51:59,570 --> 00:52:04,290 And it makes some sense to do that because we're looking at 896 00:52:04,290 --> 00:52:05,060 mean square error. 897 00:52:05,060 --> 00:52:09,560 And that was why we looked at mean square error. 898 00:52:12,830 --> 00:52:15,420 But now we can't do that any more. 899 00:52:15,420 --> 00:52:19,780 Because now, in fact, the noise is occurring on the 900 00:52:19,780 --> 00:52:22,160 waveform itself. 
901 00:52:22,160 --> 00:52:26,620 So we can't just say, let's turn this into a sequence and 902 00:52:26,620 --> 00:52:30,870 see what the noise on the waveform does to the sequence. 903 00:52:30,870 --> 00:52:35,690 Because, in fact, that won't work at this point. 904 00:52:35,690 --> 00:52:38,550 So a random process z of t is a collection of random 905 00:52:38,550 --> 00:52:42,950 variables, one for each t in R. 906 00:52:42,950 --> 00:52:45,800 So, in other words, this is really a collection of an 907 00:52:45,800 --> 00:52:50,660 uncountably infinite number of random variables. 908 00:52:50,660 --> 00:52:53,360 And for each epoch t, the random 909 00:52:53,360 --> 00:52:57,760 variable z of t is a function. 910 00:52:57,760 --> 00:52:59,770 Capital Z of t and omega. 911 00:52:59,770 --> 00:53:04,540 Omega is the sample point we have in this sample space. 912 00:53:04,540 --> 00:53:06,460 Where does the sample space come from? 913 00:53:09,110 --> 00:53:11,100 We're imagining it. 914 00:53:11,100 --> 00:53:12,830 We don't know any way to construct a 915 00:53:12,830 --> 00:53:15,270 sample space very well. 916 00:53:15,270 --> 00:53:18,190 But we know if we study probability theory that we 917 00:53:18,190 --> 00:53:21,370 need a sample space to talk about probabilities. 918 00:53:21,370 --> 00:53:25,880 So we're going to imagine different sample spaces and 919 00:53:25,880 --> 00:53:28,480 see what the consequences of those are. 920 00:53:28,480 --> 00:53:32,650 I mean, that's the point of this general point of view 921 00:53:32,650 --> 00:53:33,330 that we have. 922 00:53:33,330 --> 00:53:36,240 That we start out looking at simple models and go from 923 00:53:36,240 --> 00:53:37,780 there to complex models. 924 00:53:37,780 --> 00:53:40,780 Part of the model is the sample space that we have. 
925 00:53:43,650 --> 00:53:48,340 So for each sample point that we're dealing with, the 926 00:53:48,340 --> 00:53:49,760 collection -- 927 00:53:49,760 --> 00:53:52,480 I should have written this differently. 928 00:53:52,480 --> 00:53:55,410 Let me change this a little bit to 929 00:53:55,410 --> 00:53:58,170 make it a little clearer. 930 00:53:58,170 --> 00:54:17,600 z of t, omega, for t in R is a sample function 931 00:54:17,600 --> 00:54:22,430 z of t; t in R. 932 00:54:26,570 --> 00:54:30,810 So if I look at this, as a function of two variables. 933 00:54:30,810 --> 00:54:37,390 One is time and one is this random collection of 934 00:54:37,390 --> 00:54:39,710 things we can't see. 935 00:54:39,710 --> 00:54:42,720 If I hold omega fixed, this thing becomes 936 00:54:42,720 --> 00:54:44,340 a function of time. 937 00:54:44,340 --> 00:54:47,180 And in fact it's a sample function, it's the sample 938 00:54:47,180 --> 00:54:50,910 function corresponding to that particular element in the 939 00:54:50,910 --> 00:54:53,860 sample space. 940 00:54:53,860 --> 00:54:56,460 And don't try to ask what the sample space is. 941 00:54:56,460 --> 00:54:58,225 The sample space -- 942 00:54:58,225 --> 00:55:00,180 I mean, the sample point here -- 943 00:55:00,180 --> 00:55:07,740 really specifies what this sample function is. 944 00:55:07,740 --> 00:55:12,850 It specifies what the input sample function is. 945 00:55:12,850 --> 00:55:16,920 It specifies what everything else you're interested in is. 946 00:55:16,920 --> 00:55:19,950 So it's just an abstract entity to say, if we're 947 00:55:19,950 --> 00:55:22,480 constructing probabilities we need something to 948 00:55:22,480 --> 00:55:23,480 construct them on. 949 00:55:23,480 --> 00:55:25,980 So that's the sample space that we can construct these 950 00:55:25,980 --> 00:55:28,180 probabilities on. 951 00:55:28,180 --> 00:55:33,520 The random processes -- 952 00:55:33,520 --> 00:55:34,720 Oh. 
953 00:55:34,720 --> 00:55:35,080 OK. 954 00:55:35,080 --> 00:55:37,910 We want to look at two different things here. 955 00:55:37,910 --> 00:55:42,230 Here we're looking at a fixed sample point, which means that 956 00:55:42,230 --> 00:55:46,220 this process for a fixed sample point corresponds to a 957 00:55:46,220 --> 00:55:47,710 sample function. 958 00:55:47,710 --> 00:55:53,400 If instead I look at a fixed time, a fixed epoch, t in R, 959 00:55:53,400 --> 00:56:01,420 and I say what does this correspond to as I look at z 960 00:56:01,420 --> 00:56:11,390 of t and omega over all omega in this big 961 00:56:11,390 --> 00:56:13,710 messy thing, for fixed t. 962 00:56:20,180 --> 00:56:22,530 What do I have there? 963 00:56:22,530 --> 00:56:24,330 I have a random variable. 964 00:56:24,330 --> 00:56:27,190 If I look at a fixed t, and I'm looking over the whole 965 00:56:27,190 --> 00:56:31,790 sample space, that's what you mean by a random variable. 966 00:56:31,790 --> 00:56:36,290 If I look at -- if I hold omega fixed and look over the 967 00:56:36,290 --> 00:56:40,810 time axis, what I have is a sample function. 968 00:56:40,810 --> 00:56:43,500 I hope that makes it a little clearer what the game we're 969 00:56:43,500 --> 00:56:44,930 playing is. 970 00:56:44,930 --> 00:56:48,120 Because this random thing, the random process that we're 971 00:56:48,120 --> 00:56:51,690 dealing with, we can best visualize it as a function of 972 00:56:51,690 --> 00:56:55,290 two things, both time and sample point. 973 00:56:55,290 --> 00:56:59,360 The sample point in this big sample space is what controls 974 00:56:59,360 --> 00:57:00,250 everything. 975 00:57:00,250 --> 00:57:03,090 It tells you what every random variable in the world is. 976 00:57:03,090 --> 00:57:05,480 And what the relationship of every random variable 977 00:57:05,480 --> 00:57:07,900 in the world is. 
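The two views just described, fix omega to get a sample function, fix t to get a random variable, can be seen in a small numerical sketch. This is not from the lecture; the particular toy process and the NumPy array layout are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy random process Z(t, omega). Here each "sample point" omega is a
# pair of hypothetical coefficients (a, b), and Z(t) = a*cos(t) + b*sin(t).
n_samples = 1000                              # number of sample points omega
t_axis = np.linspace(0.0, 2 * np.pi, 200)     # a grid of epochs t

a = rng.standard_normal(n_samples)            # one coefficient per omega
b = rng.standard_normal(n_samples)

# Z has shape (n_samples, len(t_axis)): rows index omega, columns index t.
Z = a[:, None] * np.cos(t_axis)[None, :] + b[:, None] * np.sin(t_axis)[None, :]

sample_function = Z[0, :]    # hold omega fixed: a function of time
random_variable = Z[:, 50]   # hold t fixed: a collection over the sample space
```

Holding a row fixed gives one waveform the process could produce; holding a column fixed gives the random variable Z(t) at that epoch.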
978 00:57:07,900 --> 00:57:11,540 So, a random process is defined, then, by some rule 979 00:57:11,540 --> 00:57:17,540 which establishes a joint density for all finite values 980 00:57:17,540 --> 00:57:22,370 of k, all particular sets of time, and all particular 981 00:57:22,370 --> 00:57:23,660 arguments here. 982 00:57:23,660 --> 00:57:26,410 What am I trying to do here? 983 00:57:26,410 --> 00:57:29,390 I'm trying to admit right from the outset that if I try to 984 00:57:29,390 --> 00:57:34,140 define this process for an uncountably infinite number of 985 00:57:34,140 --> 00:57:37,470 times, I'm going to be dead in the water. 986 00:57:37,470 --> 00:57:42,530 So I had better be happy with the idea of, at least in 987 00:57:42,530 --> 00:57:46,270 principle I would like to be able to find out what the 988 00:57:46,270 --> 00:57:50,270 joint probability density is for any finite set 989 00:57:50,270 --> 00:57:52,320 of points in time. 990 00:57:52,320 --> 00:57:55,210 And if I can find that, I will say I have 991 00:57:55,210 --> 00:57:58,060 defined this random process. 992 00:57:58,060 --> 00:58:00,750 And I'll see what I can do with it. 993 00:58:00,750 --> 00:58:04,105 So we need rules that let us find this kind 994 00:58:04,105 --> 00:58:08,030 of probability density. 995 00:58:08,030 --> 00:58:10,500 That's the whole objective here. 996 00:58:10,500 --> 00:58:15,960 Because I don't know how to generate a random process that 997 00:58:15,960 --> 00:58:17,680 does more than that, that talks about 998 00:58:17,680 --> 00:58:19,710 everything all at once. 999 00:58:19,710 --> 00:58:24,700 You can read mathematics texts to try to do this, but I don't 1000 00:58:24,700 --> 00:58:28,310 know any way of getting from that kind of general 1001 00:58:28,310 --> 00:58:32,680 mathematics down to anything that we could make sense about 1002 00:58:32,680 --> 00:58:36,100 in terms of models of reality. 
1003 00:58:36,100 --> 00:58:41,820 The favorite way we're going to generate this kind of rule 1004 00:58:41,820 --> 00:58:48,480 is to think of the random process as being a sum of 1005 00:58:48,480 --> 00:58:54,060 random variables multiplied by orthonormal functions. 1006 00:58:54,060 --> 00:58:59,790 In other words, we will have sample values in time, z of t, 1007 00:58:59,790 --> 00:59:07,010 which are equal to the sum over i of z sub i phi sub i of t. 1008 00:59:07,010 --> 00:59:12,480 And for each sample function in time, we will then have, 1009 00:59:12,480 --> 00:59:19,200 for that particular time, as I vary over capital Omega, I'm 1010 00:59:19,200 --> 00:59:22,420 going to be varying over the values of z sub i. 1011 00:59:22,420 --> 00:59:24,240 These are just fixed functions. 1012 00:59:24,240 --> 00:59:27,020 And what that's going to give me is a set of random 1013 00:59:27,020 --> 00:59:28,270 variables here. 1014 00:59:32,320 --> 00:59:36,240 So in a way we're doing very much the same thing as we did 1015 00:59:36,240 --> 00:59:37,700 with functions. 1016 00:59:37,700 --> 00:59:40,630 With functions we said, when we're dealing with these 1017 00:59:40,630 --> 00:59:45,150 equivalence classes, things get very, very complicated. 1018 00:59:45,150 --> 00:59:49,810 The way we're going to visualize a function is in 1019 00:59:49,810 --> 00:59:52,890 terms of an orthonormal expansion. 1020 00:59:52,890 --> 00:59:55,470 And with orthonormal expansions, if two functions 1021 00:59:55,470 --> 00:59:57,220 are equivalent, they both have the 1022 00:59:57,220 --> 00:59:59,270 same orthonormal expansion. 1023 00:59:59,270 --> 01:00:03,530 So now we're saying the same thing when we're dealing with 1024 01:00:03,530 --> 01:00:07,170 this more general thing, which is stochastic processes. 1025 01:00:07,170 --> 01:00:09,830 This is really what the signal space viewpoint of 1026 01:00:09,830 --> 01:00:11,760 communication is. 
1027 01:00:11,760 --> 01:00:15,730 It's the viewpoint that you view these random processes in 1028 01:00:15,730 --> 01:00:18,810 terms of orthonormal expansions where the 1029 01:00:18,810 --> 01:00:21,820 coefficients are random variables. 1030 01:00:21,820 --> 01:00:24,330 And if I deal with a finite number of random variables 1031 01:00:24,330 --> 01:00:27,890 here, everything is very simple. 1032 01:00:27,890 --> 01:00:32,630 Because you know how to add random variables, right? 1033 01:00:32,630 --> 01:00:34,410 I mean, at least in principle, you all know how 1034 01:00:34,410 --> 01:00:35,660 to add random variables. 1035 01:00:35,660 --> 01:00:37,810 You've been doing that any time you take 1036 01:00:37,810 --> 01:00:40,200 a probability class. 1037 01:00:40,200 --> 01:00:43,180 A quarter of the exercises you deal with are adding random 1038 01:00:43,180 --> 01:00:47,050 variables together and finding various things about the sum 1039 01:00:47,050 --> 01:00:48,680 of those random variables. 1040 01:00:48,680 --> 01:00:51,110 So that's the same thing we're going to be doing here. 1041 01:00:51,110 --> 01:00:55,890 The only thing is, we can do this for any real value of t, 1042 01:00:55,890 --> 01:00:58,940 just by adding up this fixed 1043 01:00:58,940 --> 01:01:01,650 collection of random variables. 1044 01:01:01,650 --> 01:01:04,920 And you add it up at different arguments here 1045 01:01:04,920 --> 01:01:08,180 because as t changes, phi sub i of t changes. 1046 01:01:10,750 --> 01:01:14,020 And this is sort of what we did when we were talking about 1047 01:01:14,020 --> 01:01:16,710 source waveforms. 1048 01:01:16,710 --> 01:01:19,870 It's very much the same idea that we were looking at. 1049 01:01:19,870 --> 01:01:23,040 Because with source waveforms, when we turned the waveform 1050 01:01:23,040 --> 01:01:26,000 into a sequence we were turning the waveform into a 1051 01:01:26,000 --> 01:01:31,540 sequence by in fact using an orthonormal expansion. 
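The expansion idea, fixed orthonormal functions with random coefficients, can be sketched numerically. The sine family below is an assumed choice of orthonormal set on [0, 1], since the lecture leaves the family unspecified.

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthonormal functions on [0, 1]: phi_i(t) = sqrt(2) * sin(i * pi * t),
# a standard orthonormal family (any orthonormal set would do here).
t = np.linspace(0.0, 1.0, 10001)
dt = t[1] - t[0]
phis = np.array([np.sqrt(2) * np.sin(i * np.pi * t) for i in range(1, 6)])

# Orthonormality check: the integral of phi_i * phi_j should be delta_ij.
gram = phis @ phis.T * dt

# One sample function of the process: z(t) = sum over i of Z_i * phi_i(t),
# where the Z_i are IID normal coefficients drawn for this sample point.
Z_i = rng.standard_normal(5)
z_t = Z_i @ phis
```

Drawing a fresh set of coefficients Z_i corresponds to picking a new sample point omega; the functions phi_i stay fixed.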
1052 01:01:31,540 --> 01:01:35,830 And then what was happening is that the coefficients in the 1053 01:01:35,830 --> 01:01:39,990 orthonormal expansion, we were using to actually represent 1054 01:01:39,990 --> 01:01:43,260 the random process. 1055 01:01:43,260 --> 01:01:45,620 So we're doing the same thing here. 1056 01:01:45,620 --> 01:01:49,860 The same argument. 1057 01:01:49,860 --> 01:01:52,460 So, somehow we have to figure out what these 1058 01:01:52,460 --> 01:01:53,710 joint densities are. 1059 01:02:01,200 --> 01:02:04,660 So, as I said before, when we don't know what to do, we make 1060 01:02:04,660 --> 01:02:05,910 a simple assumption. 1061 01:02:08,540 --> 01:02:14,430 And the simple assumption we're going to make is that 1062 01:02:14,430 --> 01:02:18,080 these random variables at different points in time, of 1063 01:02:18,080 --> 01:02:22,550 noise, we're going to assume that they're Gaussian. 1064 01:02:22,550 --> 01:02:25,880 And a normal Gaussian random variable; this, I hope, is 1065 01:02:25,880 --> 01:02:29,550 familiar to you, has the density 1 over the square root 1066 01:02:29,550 --> 01:02:33,990 of 2 pi, e to the minus n squared over 2. 1067 01:02:33,990 --> 01:02:37,580 So it's just density that has this nice bell-shaped curve. 1068 01:02:37,580 --> 01:02:40,600 It's an extraordinarily smooth thing. 1069 01:02:40,600 --> 01:02:42,390 It has a mean of 0. 1070 01:02:42,390 --> 01:02:45,500 It has a variance of 1. 1071 01:02:45,500 --> 01:02:47,240 How many of you could prove that the 1072 01:02:47,240 --> 01:02:48,600 variance of this is 1? 1073 01:02:51,150 --> 01:02:53,930 Well, in a while you'll all be able to prove it. 1074 01:02:53,930 --> 01:02:57,570 Because there are easy ways of doing it. 1075 01:02:57,570 --> 01:03:00,260 Other than looking at a table of integrals, which we'll do. 
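The mean-0, variance-1 claims for the bell-shaped density can be checked numerically. This quick sketch is only an illustration; it is not the integral tricks the lecture alludes to.

```python
import numpy as np

# Standard normal density: f(n) = (1/sqrt(2*pi)) * exp(-n**2 / 2).
# Integrate it numerically on a wide grid (the tails beyond +/-10 are
# negligible) to check it integrates to 1 with mean 0 and variance 1.
n = np.linspace(-10.0, 10.0, 200001)
dn = n[1] - n[0]
f = np.exp(-n**2 / 2) / np.sqrt(2 * np.pi)

total = np.sum(f) * dn             # should be very close to 1
mean = np.sum(n * f) * dn          # should be very close to 0
variance = np.sum(n**2 * f) * dn   # should be very close to 1
```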
1076 01:03:04,340 --> 01:03:08,530 An arbitrary Gaussian random variable is just where you 1077 01:03:08,530 --> 01:03:10,640 take this normal variable. 1078 01:03:10,640 --> 01:03:14,610 These normalized Gaussian random variables are also 1079 01:03:14,610 --> 01:03:18,070 traditionally called normal variables. 1080 01:03:18,070 --> 01:03:20,990 I mean, they have such a central position in all of 1081 01:03:20,990 --> 01:03:22,360 probability. 1082 01:03:22,360 --> 01:03:27,250 And when somebody talks about a normal random variable -- 1083 01:03:27,250 --> 01:03:30,150 I mean, there's sort of the sense that a random variable 1084 01:03:30,150 --> 01:03:31,090 should be normal. 1085 01:03:31,090 --> 01:03:33,800 That's the typical thing. 1086 01:03:33,800 --> 01:03:35,850 It's what should happen. 1087 01:03:35,850 --> 01:03:38,800 And any time you don't have normal random variables, 1088 01:03:38,800 --> 01:03:41,530 you're dealing with something bizarre and strange. 1089 01:03:41,530 --> 01:03:45,220 And that's carrying it a little too far, but in fact, 1090 01:03:45,220 --> 01:03:51,220 when we're looking at noise, that's not a bad idea. 1091 01:03:51,220 --> 01:03:55,510 If you shift this random variable by a mean z bar, and 1092 01:03:55,510 --> 01:03:59,690 you scale it by -- excuse me, scale it by sigma. 1093 01:04:05,670 --> 01:04:11,120 Then the density of the result becomes 1 over the square root 1094 01:04:11,120 --> 01:04:17,120 of 2 pi sigma squared, e to the minus z minus z bar squared over 2 1095 01:04:17,120 --> 01:04:18,570 sigma squared. 1096 01:04:18,570 --> 01:04:20,700 In other words, the shifting just shifts the 1097 01:04:20,700 --> 01:04:23,540 whole thing by z bar. 1098 01:04:23,540 --> 01:04:28,350 So this density peaks up around z bar, rather than 1099 01:04:28,350 --> 01:04:32,240 peaking up around 0, which makes sense. 
1100 01:04:32,240 --> 01:04:37,580 And if you scale it by sigma, then you need a 1101 01:04:37,580 --> 01:04:40,750 sigma in here, too. 1102 01:04:40,750 --> 01:04:43,380 When you have this probability density, you're going to be 1103 01:04:43,380 --> 01:04:45,200 scaling it this way. 1104 01:04:45,200 --> 01:04:49,710 Scaling it that way is what we mean by scaling it. 1105 01:04:49,710 --> 01:04:54,160 If you scale it this way to make sigma squared larger, the 1106 01:04:54,160 --> 01:04:59,530 variance larger, this thing still has to integrate to 1. 1107 01:04:59,530 --> 01:05:02,810 So you have to pop it down a little bit also. 1108 01:05:02,810 --> 01:05:06,845 Which is why this sigma squared appears here in the 1109 01:05:06,845 --> 01:05:07,670 denominator. 1110 01:05:07,670 --> 01:05:12,250 So anytime we scale a random variable, it's got to be 1111 01:05:12,250 --> 01:05:13,480 scaled this way. 1112 01:05:13,480 --> 01:05:15,500 And it's also got to be scaled down. 1113 01:05:15,500 --> 01:05:17,490 So that's why these Gaussian random 1114 01:05:17,490 --> 01:05:19,850 variables look this way. 1115 01:05:19,850 --> 01:05:22,050 We're going to be dealing with these Gaussian random 1116 01:05:22,050 --> 01:05:26,140 variables so much that we'll refer to this random variable 1117 01:05:26,140 --> 01:05:30,890 as being normal with mean z bar and 1118 01:05:30,890 --> 01:05:32,700 variance sigma squared. 1119 01:05:32,700 --> 01:05:36,680 And if we don't say anything other than it's normal, we 1120 01:05:36,680 --> 01:05:40,950 mean it's normal with mean 0 and variance 1. 1121 01:05:40,950 --> 01:05:43,470 Which is the nicest case of all. 1122 01:05:43,470 --> 01:05:46,420 It's this nice bell-shaped curve which everything should 1123 01:05:46,420 --> 01:05:47,670 correspond to. 1124 01:05:53,760 --> 01:05:55,580 They tend to be good models of things 1125 01:05:55,580 --> 01:05:58,580 for a number of reasons. 
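The shift-and-scale story just described can be sketched with samples. The particular values z bar = 3 and sigma = 2, and the sample-based check, are illustrative assumptions, not the lecture's method.

```python
import numpy as np

rng = np.random.default_rng(2)

z_bar, sigma = 3.0, 2.0   # hypothetical mean and scale for illustration

# Shift and scale a standard normal: Z = z_bar + sigma * N.
N = rng.standard_normal(1_000_000)
Z = z_bar + sigma * N

# The empirical mean and variance should match z_bar and sigma**2.
emp_mean = Z.mean()
emp_var = Z.var()

# Stretching the density sideways by sigma lowers its peak by the same
# factor, so it still integrates to 1; that is the sigma squared under
# the square root in the lecture's formula.
peak = 1 / np.sqrt(2 * np.pi * sigma**2)
```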
1126 01:05:58,580 --> 01:06:02,130 One of them is the central limit theorem. 1127 01:06:02,130 --> 01:06:06,070 And the central limit theorem, I'm not going to state it 1128 01:06:06,070 --> 01:06:07,380 precisely here. 1129 01:06:07,380 --> 01:06:09,400 I'm certainly not going to prove it. 1130 01:06:09,400 --> 01:06:12,920 One of the nastiest things to prove in probability theory is 1131 01:06:12,920 --> 01:06:14,890 the central limit theorem. 1132 01:06:14,890 --> 01:06:16,420 It's an absolute bear. 1133 01:06:16,420 --> 01:06:18,220 I mean, it seems so simple. 1134 01:06:18,220 --> 01:06:22,440 And you try to prove it, and it's ugly. 1135 01:06:22,440 --> 01:06:26,470 But, anyway, the central limit theorem says that if you add 1136 01:06:26,470 --> 01:06:29,850 up a large number of Gaussian random variables, of 1137 01:06:29,850 --> 01:06:33,570 independent Gaussian random variables, and you scale the 1138 01:06:33,570 --> 01:06:38,040 sum down to make it variance one, you 1139 01:06:38,040 --> 01:06:39,650 have to scale it down. 1140 01:06:39,650 --> 01:06:43,900 Then when you get all done adding them all up, what you 1141 01:06:43,900 --> 01:06:46,960 wind up with is a Gaussian random variable. 1142 01:06:46,960 --> 01:06:49,140 This is saying something a little more precise than the 1143 01:06:49,140 --> 01:06:50,730 law of large numbers. 1144 01:06:50,730 --> 01:06:53,940 The law of large numbers says you add up a bunch of 1145 01:06:53,940 --> 01:06:57,470 independent or almost independent random variables. 1146 01:06:57,470 --> 01:06:59,190 And you scale them down. 1147 01:06:59,190 --> 01:07:01,340 And you get a mean, which is something. 1148 01:07:01,340 --> 01:07:03,050 This is saying something more. 1149 01:07:03,050 --> 01:07:06,070 It's saying you add them all up, and you scale them down in 1150 01:07:06,070 --> 01:07:07,160 the right way. 
1151 01:07:07,160 --> 01:07:09,950 And you get not only the mean, now, but you also get the 1152 01:07:09,950 --> 01:07:12,620 shape of the density around the mean. 1153 01:07:12,620 --> 01:07:16,580 Which is this Gaussian shape, if you add a lot of them. 1154 01:07:16,580 --> 01:07:19,710 It has a bunch of extremal properties. 1155 01:07:19,710 --> 01:07:21,560 In a sense, it's the most random 1156 01:07:21,560 --> 01:07:25,400 of all random variables. 1157 01:07:25,400 --> 01:07:28,070 Not going to talk about that, but every once in a while, one 1158 01:07:28,070 --> 01:07:30,320 of these things will come up. 1159 01:07:30,320 --> 01:07:33,290 When you start doing an optimization in probability 1160 01:07:33,290 --> 01:07:37,280 theory, over all possible random variables of a given 1161 01:07:37,280 --> 01:07:41,700 mean and variance, you often find that the answer that you 1162 01:07:41,700 --> 01:07:47,360 come up with either when you maximize or when you minimize 1163 01:07:47,360 --> 01:07:49,220 is Gaussian. 1164 01:07:49,220 --> 01:07:52,670 If you get this answer when you maximize, then what you 1165 01:07:52,670 --> 01:07:56,460 get when you minimize is usually a random variable, 1166 01:07:56,460 --> 01:07:57,980 which is zero or something. 1167 01:07:57,980 --> 01:07:59,260 And vice versa. 1168 01:07:59,260 --> 01:08:02,930 So that either the minimum or the maximum of a 1169 01:08:02,930 --> 01:08:04,450 problem makes sense. 1170 01:08:04,450 --> 01:08:05,980 And the one that makes sense, the 1171 01:08:05,980 --> 01:08:07,540 answer is usually Gaussian. 1172 01:08:10,790 --> 01:08:12,370 At least, it's Gaussian for the problems 1173 01:08:12,370 --> 01:08:13,490 for which it's Gaussian. 1174 01:08:13,490 --> 01:08:17,120 But it's Gaussian for an awful lot of problems. 1175 01:08:17,120 --> 01:08:19,650 It's very easy to manipulate analytically. 
1176 01:08:19,650 --> 01:08:28,170 Now, you look at that density, and you say, my God, that 1177 01:08:28,170 --> 01:08:31,190 can't be easy to manipulate analytically. 1178 01:08:31,190 --> 01:08:34,980 But in fact, after you get used to two or three very 1179 01:08:34,980 --> 01:08:39,570 small tricks, in fact you'll find out that it's very easy 1180 01:08:39,570 --> 01:08:41,640 to manipulate analytically. 1181 01:08:41,640 --> 01:08:45,210 It's one of the easiest random variables to work with. 1182 01:08:45,210 --> 01:08:48,500 Far easier than a binary random variable. 1183 01:08:48,500 --> 01:08:50,900 If we add up a bunch of binary random variables, 1184 01:08:50,900 --> 01:08:51,810 what do you get? 1185 01:08:51,810 --> 01:08:54,720 You get a binomial random variable. 1186 01:08:54,720 --> 01:08:57,790 And binomial random variables, when you add up enough of 1187 01:08:57,790 --> 01:08:59,680 them, they get uglier and uglier. 1188 01:08:59,680 --> 01:09:01,490 Except they start to look Gaussian. 1189 01:09:01,490 --> 01:09:03,970 These Gaussian random variables, if you add up a 1190 01:09:03,970 --> 01:09:05,600 bunch of them, what do you get? 1191 01:09:05,600 --> 01:09:07,380 You get a Gaussian random variable. 1192 01:09:07,380 --> 01:09:07,660 Yes? 1193 01:09:07,660 --> 01:09:11,209 AUDIENCE: Central limit theorem applies to random 1194 01:09:11,209 --> 01:09:13,744 variables or Gaussian random variables? 1195 01:09:13,744 --> 01:09:15,265 PROFESSOR: Central limit theorem 1196 01:09:15,265 --> 01:09:17,293 applies to random variables. 1197 01:09:17,293 --> 01:09:17,800 AUDIENCE: [INAUDIBLE] 1198 01:09:17,800 --> 01:09:18,470 PROFESSOR: Oh, I'm sorry. 1199 01:09:18,470 --> 01:09:20,260 I shouldn't have. 1200 01:09:20,260 --> 01:09:20,490 No. 
1201 01:09:20,490 --> 01:09:24,090 I said, when you add a bunch of Gaussian random variables, 1202 01:09:24,090 --> 01:09:26,440 you get exactly a Gaussian random -- 1203 01:09:26,440 --> 01:09:29,410 when you add up a bunch of jointly Gaussian random 1204 01:09:29,410 --> 01:09:32,380 variables, and we'll find out what that means in a while, 1205 01:09:32,380 --> 01:09:35,300 you get exactly a Gaussian random variable. 1206 01:09:35,300 --> 01:09:38,870 When you add up a bunch of random variables which are 1207 01:09:38,870 --> 01:09:41,840 independent, or which are close enough to independent, 1208 01:09:41,840 --> 01:09:44,660 and which have several restrictions on them and you 1209 01:09:44,660 --> 01:09:46,090 scale them the right way. 1210 01:09:46,090 --> 01:09:50,200 What you get as you add more and more of them, tends toward 1211 01:09:50,200 --> 01:09:52,490 a Gaussian distribution. 1212 01:09:52,490 --> 01:09:56,450 And I don't want to state all of those conditions. 1213 01:09:56,450 --> 01:09:59,020 I mean, we don't need all those conditions here. 1214 01:09:59,020 --> 01:10:03,470 Because the only thing we're relying on is when we look at 1215 01:10:03,470 --> 01:10:06,560 noise, we're looking at a collection of a very large 1216 01:10:06,560 --> 01:10:08,620 number of phenomena. 1217 01:10:08,620 --> 01:10:11,020 And because we're looking at a very large number of 1218 01:10:11,020 --> 01:10:17,490 phenomena, we can sort of model it as being an addition 1219 01:10:17,490 --> 01:10:19,700 of a lot of little random variables. 1220 01:10:19,700 --> 01:10:23,510 And because of that, in an enormous number of situations, 1221 01:10:23,510 --> 01:10:27,170 when you look at these things, what you get is fairly close 1222 01:10:27,170 --> 01:10:28,510 to a Gaussian shape. 1223 01:10:31,450 --> 01:10:35,550 And it's a model that everybody uses for noise. 
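The corrected statement of the central limit theorem, add up many independent non-Gaussian random variables and scale the sum the right way, can be sketched with samples. The uniform distribution and the sample sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Add k IID uniform random variables (nothing Gaussian about them) and
# scale the sum by 1/sqrt(k) so the result has variance 1.
k, trials = 50, 100_000
a = np.sqrt(3.0)   # uniform on [-a, a] has variance a**2 / 3 = 1
steps = rng.uniform(-a, a, size=(trials, k))
scaled_sum = steps.sum(axis=1) / np.sqrt(k)

# The scaled sum behaves like a standard normal: mean near 0, variance
# near 1, and P(scaled_sum <= 1) near Phi(1), about 0.8413.
frac_below_1 = np.mean(scaled_sum <= 1.0)
```

Note the scaling by 1 over the square root of k: dividing by k itself, as in the law of large numbers, would collapse everything to the mean instead of revealing the Gaussian shape around it.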
1224 01:10:35,550 --> 01:10:41,670 If you read papers in the communication field, or in the 1225 01:10:41,670 --> 01:10:45,660 control field, or in a bunch of other fields, and people 1226 01:10:45,660 --> 01:10:50,260 start talking about what happens when noise enters the 1227 01:10:50,260 --> 01:10:54,880 picture, sometimes they will talk about their model for the 1228 01:10:54,880 --> 01:10:55,830 noise process. 1229 01:10:55,830 --> 01:10:57,420 Sometimes they won't. 1230 01:10:57,420 --> 01:11:00,610 But if they don't talk about what noise model they're 1231 01:11:00,610 --> 01:11:04,860 using, it's a very safe bet that what they're using is a 1232 01:11:04,860 --> 01:11:07,900 Gaussian process model. 1233 01:11:07,900 --> 01:11:10,760 In other words, this is the kind of model people use 1234 01:11:10,760 --> 01:11:12,730 because it's a model that makes a 1235 01:11:12,730 --> 01:11:14,250 certain amount of sense. 1236 01:11:14,250 --> 01:11:17,200 Now, there's one other thing you ought to be aware of here. 1237 01:11:17,200 --> 01:11:19,520 It's something that we've talked about a little bit in 1238 01:11:19,520 --> 01:11:22,260 other contexts. 1239 01:11:22,260 --> 01:11:28,000 When we're looking at a noise waveform that you have to deal 1240 01:11:28,000 --> 01:11:35,190 with in a modem or something like that, this modem only 1241 01:11:35,190 --> 01:11:40,670 experiences one noise waveform in its life. 1242 01:11:40,670 --> 01:11:44,020 So, what do we mean by saying that it's random? 1243 01:11:44,020 --> 01:11:46,820 Well, it's an act of faith, in a sense. 1244 01:11:46,820 --> 01:11:48,080 But it's not so much. 
1245 01:11:48,080 --> 01:11:54,760 Because if you're designing communication equipment, and 1246 01:11:54,760 --> 01:12:00,580 you're cookie cuttering cellphones or something like 1247 01:12:00,580 --> 01:12:04,550 that, or something else, what you're doing is designing a 1248 01:12:04,550 --> 01:12:08,150 piece of equipment which is supposed to work in an 1249 01:12:08,150 --> 01:12:11,040 enormous number of different places. 1250 01:12:11,040 --> 01:12:14,340 So that each one you're manufacturing is experiencing 1251 01:12:14,340 --> 01:12:15,890 a different waveform. 1252 01:12:15,890 --> 01:12:18,880 So that as far as designing the device is 1253 01:12:18,880 --> 01:12:21,920 concerned, what you're interested in is not that one 1254 01:12:21,920 --> 01:12:25,030 waveform that one of these things experiences, but in 1255 01:12:25,030 --> 01:12:29,310 fact the ensemble of waveforms that all of them experience. 1256 01:12:29,310 --> 01:12:32,370 Now, when you look at the ensemble of waveforms that all 1257 01:12:32,370 --> 01:12:36,970 of these things experience, over all of this large variety 1258 01:12:36,970 --> 01:12:40,920 of different contexts; noise here, noise there, noise 1259 01:12:40,920 --> 01:12:47,960 there, suddenly the model of a large collection of very small 1260 01:12:47,960 --> 01:12:50,430 things makes even better sense. 1261 01:12:50,430 --> 01:12:53,330 Because in each context, in each place, you have some 1262 01:12:53,330 --> 01:12:54,710 forms of noise. 1263 01:12:54,710 --> 01:12:58,590 When you look at this over all the different contexts that 1264 01:12:58,590 --> 01:13:01,670 you're looking at, then suddenly you have many, many 1265 01:13:01,670 --> 01:13:06,450 more things that you, in a sense, are just adding up if 1266 01:13:06,450 --> 01:13:08,650 you want to make a model here. 1267 01:13:08,650 --> 01:13:10,750 So we're going to use this model for a while. 
1268 01:13:10,750 --> 01:13:14,830 We will find out that it leads us to lots of nice things. 1269 01:13:14,830 --> 01:13:18,430 And so forth. 1270 01:13:18,430 --> 01:13:25,810 So, we said that we'd be happy if we could deal with at least 1271 01:13:25,810 --> 01:13:30,780 studying a random process over some finite set of times. 1272 01:13:30,780 --> 01:13:33,770 So it makes sense to look at a finite 1273 01:13:33,770 --> 01:13:36,370 set of random variables. 1274 01:13:36,370 --> 01:13:39,940 So we're going to look at a k-tuple of random variables. 1275 01:13:39,940 --> 01:13:42,120 And we're going to call that -- what do you think we're 1276 01:13:42,120 --> 01:13:44,580 going to call a k-tuple of random variables? 1277 01:13:44,580 --> 01:13:47,590 We're going to call it a random vector. 1278 01:13:47,590 --> 01:13:51,250 In fact, when you look at random vectors, you can 1279 01:13:51,250 --> 01:13:54,700 describe them as being elements of a vector space. 1280 01:13:54,700 --> 01:13:58,500 And we're not going to do that because most of you are 1281 01:13:58,500 --> 01:14:03,090 already sufficiently confused about doing functions as 1282 01:14:03,090 --> 01:14:07,360 vectors, and adding on the idea of a random process as a 1283 01:14:07,360 --> 01:14:13,400 vector would be very unkind. 1284 01:14:13,400 --> 01:14:15,210 And we're not going to do it because we 1285 01:14:15,210 --> 01:14:16,680 don't need to do it. 1286 01:14:16,680 --> 01:14:19,720 We're simply calling these things vectors because we want 1287 01:14:19,720 --> 01:14:23,390 to use the notation of vector space, which will be 1288 01:14:23,390 --> 01:14:26,740 convenient here. 1289 01:14:26,740 --> 01:14:33,140 So, if we have a collection of random variables n 1 to n k, 1290 01:14:33,140 --> 01:14:36,580 and they're all IID, independent identically 1291 01:14:36,580 --> 01:14:38,620 distributed. 
1292 01:14:38,620 --> 01:14:43,930 They're all normal with mean zero and variance one, the 1293 01:14:43,930 --> 01:14:46,230 joint density of them is what? 1294 01:14:46,230 --> 01:14:47,860 Well, you know what the joint density is. 1295 01:14:47,860 --> 01:14:50,690 If they're independent and they're all the same, you just 1296 01:14:50,690 --> 01:14:52,930 multiply their densities together. 1297 01:14:52,930 --> 01:14:56,640 So it's 1 over 2 pi to the k over 2. 1298 01:14:56,640 --> 01:14:59,550 e to the minus n 1 squared minus n 2 1299 01:14:59,550 --> 01:15:01,230 squared, and so forth. 1300 01:15:01,230 --> 01:15:02,970 Divided by 2. 1301 01:15:02,970 --> 01:15:06,500 That's why this density form is a 1302 01:15:06,500 --> 01:15:08,070 particularly convenient thing. 1303 01:15:08,070 --> 01:15:10,050 You have something in the exponent. 1304 01:15:10,050 --> 01:15:12,590 When you have independent random variables, 1305 01:15:12,590 --> 01:15:14,510 you multiply densities. 1306 01:15:14,510 --> 01:15:17,090 Which means you add what's in the exponent. 1307 01:15:17,090 --> 01:15:19,280 Which means everything is very nice. 1308 01:15:19,280 --> 01:15:23,970 So you wind up adding up the squares of the sample values for each 1309 01:15:23,970 --> 01:15:25,630 of the random variables. 1310 01:15:25,630 --> 01:15:29,830 With our notation looking at norm squared, this is just the 1311 01:15:29,830 --> 01:15:35,910 norm squared for this sequence of sample values. 1312 01:15:35,910 --> 01:15:38,540 And looking at the sequence of sample values as an actual 1313 01:15:38,540 --> 01:15:40,700 vector, n 1 up to n sub k, 1314 01:15:40,700 --> 01:15:43,800 we wind up with the norm of the vector n squared 1315 01:15:43,800 --> 01:15:46,210 divided by 2. 1316 01:15:46,210 --> 01:15:49,930 If you draw a picture of that, you have spherical symmetry.
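To make the factorization concrete: for k IID N(0,1) variables, multiplying the k individual densities is the same as computing (1 over 2 pi) to the k over 2 times e to the minus norm-squared over 2. Here is a minimal numerical check in Python; the function names are mine, not from the lecture.

```python
import math

def std_normal_pdf(x):
    # Univariate N(0,1) density.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def joint_density(n):
    # Closed form for k IID N(0,1) variables:
    # (1/(2*pi))^(k/2) * exp(-||n||^2 / 2)
    k = len(n)
    norm_sq = sum(x * x for x in n)
    return (2 * math.pi) ** (-k / 2) * math.exp(-norm_sq / 2)

sample = [0.5, -1.2, 0.3]

# Product of the individual densities equals the closed form.
product = 1.0
for x in sample:
    product *= std_normal_pdf(x)
assert abs(product - joint_density(sample)) < 1e-12

# Spherical symmetry: any sample vector with the same norm
# (here, the same components reordered) has the same density.
reordered = [1.2, 0.3, 0.5]
assert abs(joint_density(sample) - joint_density(reordered)) < 1e-12
```

The assertions pass because the density depends on the sample vector only through its norm squared, which is exactly the spherical symmetry described above.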
1317 01:15:49,930 --> 01:15:58,090 Every set of sample vectors which have the same energy in 1318 01:15:58,090 --> 01:16:01,250 them has the same probability density. 1319 01:16:01,250 --> 01:16:17,640 If you draw it in two dimensions, you get -- 1320 01:16:17,640 --> 01:16:21,090 it looks like a target where the biggest probability 1321 01:16:21,090 --> 01:16:25,070 density is in here, and the probability density goes down 1322 01:16:25,070 --> 01:16:26,150 as you move out. 1323 01:16:26,150 --> 01:16:28,410 If you look at it in three dimensions, that same 1324 01:16:28,410 --> 01:16:30,280 thing looks like a sphere. 1325 01:16:30,280 --> 01:16:33,640 And if you look at it in four dimensions, well, it's the 1326 01:16:33,640 --> 01:16:38,630 same thing, but I can't draw it, I can't imagine it. 1327 01:16:38,630 --> 01:16:40,260 But it's still a spherical symmetry. 1328 01:16:44,980 --> 01:16:50,810 In the one minute that I have remaining, I'm going to -- 1329 01:16:50,810 --> 01:16:52,700 I might derive a Gaussian density. 1330 01:16:55,690 --> 01:16:59,550 I'm going to call a bunch of random variables zero-mean 1331 01:16:59,550 --> 01:17:04,200 jointly Gaussian if they're all derived by linear 1332 01:17:04,200 --> 01:17:09,300 combinations of some set of normal IID 1333 01:17:09,300 --> 01:17:10,640 Gaussian random variables. 1334 01:17:10,640 --> 01:17:14,140 In other words, if each of them is a linear combination 1335 01:17:14,140 --> 01:17:18,130 of these IID random variables, I will call the collection z 1 1336 01:17:18,130 --> 01:17:22,490 up to z k a jointly Gaussian random vector. 1337 01:17:22,490 --> 01:17:26,865 In other words, jointly Gaussian does not mean that z 1338 01:17:26,865 --> 01:17:29,270 1 is Gaussian, z 2 is Gaussian, up 1339 01:17:29,270 --> 01:17:31,150 to z k being Gaussian. 1340 01:17:31,150 --> 01:17:33,100 It means much more than that.
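That definition is directly constructive: draw IID normals, then apply a fixed matrix of coefficients. A minimal sketch in Python follows; the 2x2 matrix A is an illustrative choice of mine, not anything from the lecture.

```python
import random

random.seed(0)

# Hypothetical mixing matrix A (illustrative values only).
A = [[1.0, 0.0],
     [0.8, 0.6]]

def jointly_gaussian_sample():
    # Draw IID N(0,1) components, then form z = A n.
    # Each z_i is a linear combination of the n_j, so
    # (z_1, z_2) is jointly Gaussian by the definition above.
    n = [random.gauss(0, 1) for _ in range(2)]
    return [sum(A[i][j] * n[j] for j in range(2)) for i in range(2)]

samples = [jointly_gaussian_sample() for _ in range(50000)]

# Sanity check: Var(z_2) = 0.8^2 + 0.6^2 = 1.0, since the
# underlying n_j are independent with unit variance.
var_z2 = sum(z[1] ** 2 for z in samples) / len(samples)
assert abs(var_z2 - 1.0) < 0.05
```

Note that z 1 and z 2 here are correlated (their covariance is 0.8, the product of the coefficients on n 1), which is exactly why "jointly Gaussian" says more than "each marginal is Gaussian."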
1341 01:17:33,100 --> 01:17:37,300 It means that all of them are linear combinations of some 1342 01:17:37,300 --> 01:17:40,290 underlying set of independent random variables. 1343 01:17:40,290 --> 01:17:43,090 In the homework that was passed out this time, you will 1344 01:17:43,090 --> 01:17:46,910 look at two different ways of generating a pair of random 1345 01:17:46,910 --> 01:17:50,380 variables which are each Gaussian, but which are not 1346 01:17:50,380 --> 01:17:51,290 jointly Gaussian. 1347 01:17:51,290 --> 01:17:54,290 Which you can't view in this way. 1348 01:17:54,290 --> 01:17:55,650 So this is important. 1349 01:17:55,650 --> 01:18:01,130 Jointly Gaussian makes sense physically because when we 1350 01:18:01,130 --> 01:18:05,220 look at random variables, we say it makes sense to look at 1351 01:18:05,220 --> 01:18:07,980 Gaussian random variables because they're a sum of a 1352 01:18:07,980 --> 01:18:09,550 lot of small noise variables. 1353 01:18:09,550 --> 01:18:13,410 If you look at two different random variables, each of them 1354 01:18:13,410 --> 01:18:15,110 is going to be a sum of many 1355 01:18:15,110 --> 01:18:18,670 different small noise variables. 1356 01:18:18,670 --> 01:18:23,610 And they're each going to be some weighted sum over a 1357 01:18:23,610 --> 01:18:25,240 different set of noise variables. 1358 01:18:25,240 --> 01:18:29,820 So it's reasonable to expect each of them to be expressible 1359 01:18:29,820 --> 01:18:32,820 in this way. 1360 01:18:32,820 --> 01:18:34,850 I guess I'm not going to derive 1361 01:18:34,850 --> 01:18:35,960 what I wanted to derive. 1362 01:18:35,960 --> 01:18:39,080 I'll do it next time. 1363 01:18:39,080 --> 01:18:43,020 It's easier to think of this in matrix form. 1364 01:18:43,020 --> 01:18:48,240 In other words, I'll take the vector of sample values, z 1 1365 01:18:48,240 --> 01:18:50,990 up to z sub k.
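One standard construction of a pair that is Gaussian marginally but not jointly Gaussian (not necessarily the same one as in the homework) takes z 2 to be z 1 with an independent random sign flip. Each marginal is N(0,1), but half the time z 1 + z 2 is exactly zero, which no nondegenerate Gaussian sum can do, so the pair cannot be a linear map of IID normals. A sketch in Python:

```python
import random

random.seed(1)

def pair():
    # z1 is N(0,1); z2 flips the sign of z1 with probability 1/2.
    # Each of z1 and z2 is Gaussian by symmetry, but together
    # they are NOT jointly Gaussian.
    z1 = random.gauss(0, 1)
    s = random.choice([-1, 1])
    return z1, s * z1

n = 100000
zero_count = sum(1 for _ in range(n) if sum(pair()) == 0.0)

# Roughly half the sums are exactly zero (when s = -1), which is
# impossible for a nondegenerate jointly Gaussian pair, whose sum
# would have a continuous density.
assert 0.45 < zero_count / n < 0.55
```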
1366 01:18:50,990 --> 01:18:55,060 Each one of them is a linear combination of these normal 1367 01:18:55,060 --> 01:18:58,200 random variables, n 1 up to n sub k. 1368 01:18:58,200 --> 01:19:01,760 So I can write it as z equal to matrix a times n. 1369 01:19:01,760 --> 01:19:04,140 This is a vector of real numbers. 1370 01:19:04,140 --> 01:19:08,750 This is a vector of real numbers, this is a matrix. 1371 01:19:08,750 --> 01:19:14,850 And if I make this a square matrix, then what we're doing 1372 01:19:14,850 --> 01:19:18,840 is mapping the i'th unit vector, each single noise 1373 01:19:18,840 --> 01:19:21,480 vector, into the i'th column of a. 1374 01:19:21,480 --> 01:19:26,950 In other words, if I have a noise vector where n 1 is 1375 01:19:26,950 --> 01:19:32,010 equal to 0, n 2 is equal to 0, n sub i is equal to 1 and 1376 01:19:32,010 --> 01:19:35,970 everything else is equal to 0, what that generates in terms 1377 01:19:35,970 --> 01:19:41,780 of this random vector z is just the vector a sub i. 1378 01:19:41,780 --> 01:19:42,670 So. 1379 01:19:42,670 --> 01:19:45,450 What I'm going to do when I come back to this next time 1380 01:19:45,450 --> 01:19:51,780 is, if I take a little cube in the n space, and I map it into 1381 01:19:51,780 --> 01:19:58,150 what goes on in the z space, this unit vector here is going 1382 01:19:58,150 --> 01:20:00,970 to map into a vector here. 1383 01:20:00,970 --> 01:20:05,020 The unit vector here is going to map into a vector here. 1384 01:20:05,020 --> 01:20:08,610 So cubes map into parallelograms. 1385 01:20:11,390 --> 01:20:14,810 Next time, the thing we will say is, if you look at the 1386 01:20:14,810 --> 01:20:20,580 determinant of a matrix, the 1387 01:20:20,580 --> 01:20:27,320 most fundamental definition of it, is that it's the volume of 1388 01:20:27,320 --> 01:20:28,360 a parallelepiped. 1389 01:20:28,360 --> 01:20:31,740 In fact, it's the volume of this parallelepiped.
1390 01:20:31,740 --> 01:20:33,890 So we'll do that next time.
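The claim previewed for next time, that a square matrix maps the unit cube to a parallelepiped whose volume is the magnitude of the determinant, can be checked numerically in two dimensions. The area of the image parallelogram is computed independently with the shoelace formula; the matrix A below is an illustrative choice of mine.

```python
def det2(A):
    # Determinant of a 2x2 matrix.
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def apply(A, v):
    # Matrix-vector product A v in 2 dimensions.
    return (A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1])

def shoelace_area(pts):
    # Polygon area from vertices in order (shoelace formula),
    # an independent way to measure the parallelogram.
    s = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
        s += x0 * y1 - x1 * y0
    return abs(s) / 2

# Hypothetical matrix (illustrative values; det = 2.5).
A = [[2.0, 1.0],
     [0.5, 1.5]]

# Corners of the unit square, in order. A maps the square to the
# parallelogram spanned by the columns of A: each unit vector
# goes to the corresponding column, just as in the lecture.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
parallelogram = [apply(A, v) for v in square]

# The parallelogram's area equals |det A|.
assert abs(shoelace_area(parallelogram) - abs(det2(A))) < 1e-12
```

The same statement holds in k dimensions, with the shoelace formula replaced by the k-dimensional volume of the parallelepiped spanned by the columns of A.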