1
00:00:00,000 --> 00:00:01,070
2
00:00:01,070 --> 00:00:03,550
PROFESSOR: There is one handout
being passed out, it's
3
00:00:03,550 --> 00:00:07,360
chapters four and five, so
be sure to pick it up.
4
00:00:07,360 --> 00:00:10,380
And just a reminder that there
is a homework due on next
5
00:00:10,380 --> 00:00:13,500
Wednesday, so we have homeworks
due every week in
6
00:00:13,500 --> 00:00:16,560
this course as well.
7
00:00:16,560 --> 00:00:19,010
So last class, we covered
Chapters One through Three
8
00:00:19,010 --> 00:00:22,240
rather quickly, because it
was mostly a review of
9
00:00:22,240 --> 00:00:27,350
6.450, and one of the key ideas
we covered last class
10
00:00:27,350 --> 00:00:29,810
was the connection between
continuous time and discrete
11
00:00:29,810 --> 00:00:31,060
time systems.
12
00:00:31,060 --> 00:00:33,470
13
00:00:33,470 --> 00:00:40,756
So we have continuous
time, discrete time.
14
00:00:40,756 --> 00:00:46,180
15
00:00:46,180 --> 00:00:49,560
A specific example that
we saw was that of an
16
00:00:49,560 --> 00:00:50,930
orthonormal PAM system.
17
00:00:50,930 --> 00:01:02,480
18
00:01:02,480 --> 00:01:07,660
The architecture for a PAM
system is as follows: you have
19
00:01:07,660 --> 00:01:12,000
Xk, a sequence of symbols
coming in, they
20
00:01:12,000 --> 00:01:13,640
get into a PAM modulator.
21
00:01:13,640 --> 00:01:21,530
22
00:01:21,530 --> 00:01:25,870
What you get out is a waveform,
X of t; that is what the PAM
23
00:01:25,870 --> 00:01:28,960
modulator produces.
24
00:01:28,960 --> 00:01:34,400
You have a little noise on the
channel, and what you receive
25
00:01:34,400 --> 00:01:36,530
out is Y of t.
26
00:01:36,530 --> 00:01:40,430
So Y of t is the waveform that
the receiver receives,
27
00:01:40,430 --> 00:01:45,310
and a canonical receiver
structure is a
28
00:01:45,310 --> 00:01:49,245
matched filter followed
by sampling.
29
00:01:49,245 --> 00:01:52,330
30
00:01:52,330 --> 00:01:57,710
So what you get out is a
sequence of symbols, Y of k.
31
00:01:57,710 --> 00:02:02,610
So this is a structure for an
orthonormal PAM modulator, and
32
00:02:02,610 --> 00:02:06,640
the continuous time version of
the channel is that you have Y
33
00:02:06,640 --> 00:02:09,410
of t, which is received at
the receiver, equals X of
34
00:02:09,410 --> 00:02:10,764
t plus N of t.
35
00:02:10,764 --> 00:02:19,790
36
00:02:19,790 --> 00:02:26,820
The discrete time model is you
have the symbol Y of k as the
37
00:02:26,820 --> 00:02:29,180
output of the sampler.
38
00:02:29,180 --> 00:02:33,310
It equals X of k plus N of k.
39
00:02:33,310 --> 00:02:36,130
40
00:02:36,130 --> 00:02:39,800
Now basically, we say that the
two systems are equivalent in
41
00:02:39,800 --> 00:02:43,415
the following sense: if you want
to make a detection of X
42
00:02:43,415 --> 00:02:47,890
of k here, at the receiver,
Y of k is a sufficient
43
00:02:47,890 --> 00:02:52,100
statistic, given that Y of t is
received at the front end
44
00:02:52,100 --> 00:02:53,670
of the receiver.
45
00:02:53,670 --> 00:02:55,750
So that's the equivalence
between discrete time and
46
00:02:55,750 --> 00:02:57,000
continuous time.
47
00:02:57,000 --> 00:03:20,390
48
00:03:20,390 --> 00:03:23,140
And the way we established this
fact was by using the
49
00:03:23,140 --> 00:03:25,530
theorem of irrelevance.
50
00:03:25,530 --> 00:03:28,860
The noise here is white Gaussian
noise, so if we
51
00:03:28,860 --> 00:03:32,180
project it onto orthonormal
waveforms, the corresponding
52
00:03:32,180 --> 00:03:34,185
noise samples would be IID.
53
00:03:34,185 --> 00:03:36,690
The noise will be independent of
everything which is out of
54
00:03:36,690 --> 00:03:39,980
band, and so there is no
correlation among the noise
55
00:03:39,980 --> 00:03:43,100
samples, and Y of k is a
sufficient statistic
56
00:03:43,100 --> 00:03:44,490
to detect X of k.
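The discrete-time equivalent channel described above can be sketched numerically. This is a minimal simulation under assumed values (a binary PAM alphabet and N_0 = 2 are illustrative choices, not values from the lecture):

```python
import math
import random

# Discrete-time equivalent of the orthonormal PAM channel: Y_k = X_k + N_k,
# with IID Gaussian noise of variance N_0/2 per (real) dimension.
random.seed(0)
N0 = 2.0                                  # assumed noise parameter
K = 100_000                               # number of symbols
Xk = [random.choice([-1.0, 1.0]) for _ in range(K)]   # assumed binary PAM alphabet
Nk = [random.gauss(0.0, math.sqrt(N0 / 2)) for _ in range(K)]
Yk = [x + n for x, n in zip(Xk, Nk)]

# Y_k is a sufficient statistic for detecting X_k, so a per-symbol
# minimum-distance decision on Y_k is all the receiver needs.
Xhat = [1.0 if y >= 0 else -1.0 for y in Yk]

noise_var = sum(n * n for n in Nk) / K    # empirical noise variance
print(noise_var)                          # close to N_0/2 = 1.0
```

The empirical noise variance comes out near N_0/2 per dimension, matching the projection argument above.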
57
00:03:44,490 --> 00:03:52,620
58
00:03:52,620 --> 00:03:55,060
So that is the basic idea.
59
00:03:55,060 --> 00:03:57,520
Now, a similar architecture
also holds when your
60
00:03:57,520 --> 00:04:00,140
continuous time system operates
in passband rather
61
00:04:00,140 --> 00:04:03,560
than baseband, except the main
difference now is instead
62
00:04:03,560 --> 00:04:07,390
of a PAM modulation, you have
a QAM modulator, and your
63
00:04:07,390 --> 00:04:10,960
symbols, X of k and N of k, will
be complex numbers rather
64
00:04:10,960 --> 00:04:13,160
than real numbers.
65
00:04:13,160 --> 00:04:15,870
Now the connection between
continuous time and discrete
66
00:04:15,870 --> 00:04:19,519
time systems can be made more
precise by relating some
67
00:04:19,519 --> 00:04:23,930
parameters in continuous time
with those in discrete time.
68
00:04:23,930 --> 00:04:24,530
OK?
69
00:04:24,530 --> 00:04:33,590
So we have continuous time
parameters, and discrete time
70
00:04:33,590 --> 00:04:34,840
parameters.
71
00:04:34,840 --> 00:04:41,020
72
00:04:41,020 --> 00:04:50,950
So a continuous time parameter
is the bandwidth, W. A discrete
73
00:04:50,950 --> 00:05:00,505
time parameter is given by
the symbol interval T, and the
74
00:05:00,505 --> 00:05:03,170
relation is T equals
1 over 2W.
75
00:05:03,170 --> 00:05:04,420
This is for a PAM system.
76
00:05:04,420 --> 00:05:08,070
77
00:05:08,070 --> 00:05:08,530
OK?
78
00:05:08,530 --> 00:05:12,600
So this is shown in 6.450,
and it's shown by using
79
00:05:12,600 --> 00:05:14,630
Nyquist's ISI criterion.
80
00:05:14,630 --> 00:05:18,040
If we do not want to have ISI in
the system, the maximum symbol
81
00:05:18,040 --> 00:05:20,410
rate, or rather the minimum
symbol interval,
82
00:05:20,410 --> 00:05:21,910
would be 1 over 2W.
83
00:05:21,910 --> 00:05:25,920
You cannot send symbols
faster than this rate.
84
00:05:25,920 --> 00:05:30,630
A second parameter is power,
which is P here,
85
00:05:30,630 --> 00:05:32,600
in continuous time.
86
00:05:32,600 --> 00:05:37,930
In discrete time the equivalent
parameter is energy
87
00:05:37,930 --> 00:05:39,580
per two dimensions.
88
00:05:39,580 --> 00:05:43,800
89
00:05:43,800 --> 00:05:49,420
It's denoted by Es, and Es is
related to P by using Es
90
00:05:49,420 --> 00:05:51,130
equals 2 times PT.
91
00:05:51,130 --> 00:05:55,340
Note that this is for two
dimensions, meaning in PAM we
92
00:05:55,340 --> 00:06:00,040
have one symbol per dimension,
so this is for two symbols.
93
00:06:00,040 --> 00:06:03,690
This is energy for two symbols
for a PAM modulator, and the
94
00:06:03,690 --> 00:06:06,090
relation is Es equals
2 times PT.
95
00:06:06,090 --> 00:06:08,915
T is the time for
one symbol.
96
00:06:08,915 --> 00:06:11,950
We are looking at
energy for two symbols, so we
97
00:06:11,950 --> 00:06:13,200
multiply P by 2T.
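As a quick numeric check of these relations (W and P below are arbitrary illustrative values, not from the lecture):

```python
# PAM parameter relations from the lecture:
#   symbol interval:            T  = 1 / (2W)
#   energy per two dimensions:  Es = 2PT, which equals P / W
W = 4000.0    # assumed bandwidth, Hz
P = 1.0e-3    # assumed power, watts

T = 1 / (2 * W)
Es = 2 * P * T
print(T, Es)          # T = 0.000125 s; Es = 2.5e-07, same as P / W
```
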
98
00:06:13,200 --> 00:06:16,470
99
00:06:16,470 --> 00:06:18,900
OK?
100
00:06:18,900 --> 00:06:25,560
Noise in continuous time is
AWGN, an Additive White
101
00:06:25,560 --> 00:06:30,800
Gaussian Noise process, and the
power spectral density is
102
00:06:30,800 --> 00:06:35,110
flat over the positive part
of the band, and
103
00:06:35,110 --> 00:06:37,890
the height is N_0.
104
00:06:37,890 --> 00:06:40,070
OK, so this is how the
spectral density is.
105
00:06:40,070 --> 00:06:45,740
In 6.450, we looked at
double-sided power spectral
106
00:06:45,740 --> 00:06:48,980
density, where the height was
N_0 over 2, but it was going
107
00:06:48,980 --> 00:06:50,660
over both the positive and
108
00:06:50,660 --> 00:06:53,200
negative part of the bandwidth.
109
00:06:53,200 --> 00:06:55,180
Here, we are only looking at
the positive part of the
110
00:06:55,180 --> 00:06:57,730
bandwidth, and we are going
to scale noise by N_0.
111
00:06:57,730 --> 00:06:58,980
This is just a convention.
112
00:06:58,980 --> 00:07:02,760
113
00:07:02,760 --> 00:07:09,780
In discrete time, your noise
sequence Nk is IID,
114
00:07:09,780 --> 00:07:14,090
and you take them as Gaussians
with zero mean, and variance
115
00:07:14,090 --> 00:07:15,930
of N_0 over 2.
116
00:07:15,930 --> 00:07:17,530
So we have noise which
has a variance of
117
00:07:17,530 --> 00:07:20,220
N_0 over 2 per dimension.
118
00:07:20,220 --> 00:07:23,360
And this was, again, shown by
this theorem that when you
119
00:07:23,360 --> 00:07:28,040
project noise onto each of the
orthonormal waveforms, you
120
00:07:28,040 --> 00:07:32,155
get a variance of
N_0 over 2.
121
00:07:32,155 --> 00:07:33,990
OK?
122
00:07:33,990 --> 00:07:38,990
Instead of a base band system,
if you had a passband system,
123
00:07:38,990 --> 00:07:42,000
then instead of having
a PAM modulator, we
124
00:07:42,000 --> 00:07:44,280
would have a QAM modulator.
125
00:07:44,280 --> 00:07:48,330
So if instead of PAM we had a
QAM then the main difference
126
00:07:48,330 --> 00:07:50,350
now would be that these symbols
are going to be
127
00:07:50,350 --> 00:07:53,310
complex numbers instead
of real.
128
00:07:53,310 --> 00:07:57,540
So what's the symbol interval
now going to be?
129
00:07:57,540 --> 00:07:58,500
AUDIENCE: 1 over W.
130
00:07:58,500 --> 00:08:02,070
PROFESSOR: It's going
to be 1 over W.
131
00:08:02,070 --> 00:08:04,630
Instead of sending a real
symbol, we are sending one
132
00:08:04,630 --> 00:08:08,370
complex symbol, which is
occupying two dimensions, and
133
00:08:08,370 --> 00:08:12,770
our symbol interval is going to be 1
over W. Well, the energy for
134
00:08:12,770 --> 00:08:18,410
two dimensions, is still given
by Es equals PT, or
135
00:08:18,410 --> 00:08:23,570
equivalently, P over W. OK?
136
00:08:23,570 --> 00:08:24,820
Or that's --
137
00:08:24,820 --> 00:08:32,200
138
00:08:32,200 --> 00:08:38,080
The noise samples Nk are still
IID, but now they are complex
139
00:08:38,080 --> 00:08:44,240
Gaussians, with zero mean,
and variance N_0.
140
00:08:44,240 --> 00:08:45,790
Or equivalently, they
have a variance of
141
00:08:45,790 --> 00:08:48,590
N_0 over 2 per dimension.
142
00:08:48,590 --> 00:08:52,060
So what we see here is that we
can have analogous definitions
143
00:08:52,060 --> 00:08:55,590
for discrete time and continuous
time, and one of
144
00:08:55,590 --> 00:08:59,750
the key parameters that comes up
over and over again in the
145
00:08:59,750 --> 00:09:04,220
analysis is this notion of
signal to noise ratio.
146
00:09:04,220 --> 00:09:09,550
The signal to noise ratio is
defined as the energy per two
147
00:09:09,550 --> 00:09:13,335
dimensions, over the noise
variance per two dimensions.
148
00:09:13,335 --> 00:09:17,350
149
00:09:17,350 --> 00:09:19,570
So that's the definition of
signal to noise ratio.
150
00:09:19,570 --> 00:09:22,310
151
00:09:22,310 --> 00:09:28,630
Es, well, it's 2PT, or
equivalently, P over W. The
152
00:09:28,630 --> 00:09:33,040
noise variance for two
dimensions is N_0, so the
153
00:09:33,040 --> 00:09:35,190
definition is the same as --
154
00:09:35,190 --> 00:09:40,510
SNR equals P over N_0 W. OK,
are there any questions?
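A small sketch of the SNR definition, with P, N_0, and W as assumed illustrative values:

```python
# SNR = (energy per two dimensions) / (noise variance per two dimensions)
#     = Es / N_0, and since Es = 2PT = P / W, also SNR = P / (N_0 * W).
P = 1.0e-3     # assumed power, watts
N0 = 1.0e-7    # assumed one-sided noise spectral height
W = 4000.0     # assumed bandwidth, Hz
T = 1 / (2 * W)

Es = 2 * P * T
snr_from_Es = Es / N0
snr_direct = P / (N0 * W)
print(snr_from_Es, snr_direct)   # both 2.5: the two forms agree
```
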
155
00:09:40,510 --> 00:09:44,600
156
00:09:44,600 --> 00:09:49,150
The other notion we talked about
last time is this idea
157
00:09:49,150 --> 00:09:50,400
of the spectral efficiency.
158
00:09:50,400 --> 00:09:58,340
159
00:09:58,340 --> 00:10:02,310
In continuous time, the
definition is quite natural.
160
00:10:02,310 --> 00:10:05,380
It's denoted by the symbol rho.
161
00:10:05,380 --> 00:10:13,060
The units are bits per second
per Hertz, and it's basically
162
00:10:13,060 --> 00:10:20,916
R over W. You have R bits
per second over W hertz.
163
00:10:20,916 --> 00:10:22,600
So it's the number of
information bits that I'm able
164
00:10:22,600 --> 00:10:25,050
to send over the amount
of bandwidth that
165
00:10:25,050 --> 00:10:26,300
I have in my system.
166
00:10:26,300 --> 00:10:29,050
167
00:10:29,050 --> 00:10:40,300
In discrete time, we can also
define the same idea of
168
00:10:40,300 --> 00:10:48,220
spectral efficiency, and a
good way to
169
00:10:48,220 --> 00:10:51,600
think about spectral efficiency
from a point of
170
00:10:51,600 --> 00:10:54,100
view of the design
of an encoder.
171
00:10:54,100 --> 00:11:02,680
What the encoder does, is you
have an encoder here, it takes
172
00:11:02,680 --> 00:11:04,620
a sequence of bits in--
173
00:11:04,620 --> 00:11:08,220
so say you have B
bits coming in--
174
00:11:08,220 --> 00:11:09,470
and it produces N symbols.
175
00:11:09,470 --> 00:11:14,050
176
00:11:14,050 --> 00:11:19,100
So this could be X1, X2, up
to XN, and you have a
177
00:11:19,100 --> 00:11:21,160
sequence of B bits --
178
00:11:21,160 --> 00:11:28,100
b1, b2, up to b sub capital B. This
is how an encoder operates.
179
00:11:28,100 --> 00:11:31,950
It maps a sequence of bits
to a sequence of symbols.
180
00:11:31,950 --> 00:11:34,045
Now where does the encoder fit
into this architecture?
181
00:11:34,045 --> 00:11:38,200
182
00:11:38,200 --> 00:11:40,640
It fits right here at
the front, right?
183
00:11:40,640 --> 00:11:43,220
You have bits coming in, you
encode them, you produce a
184
00:11:43,220 --> 00:11:47,440
sequence of symbols, and you
send them over the channel.
185
00:11:47,440 --> 00:11:51,040
So if I have this encoder,
what's the spectral
186
00:11:51,040 --> 00:11:52,300
efficiency going to be?
187
00:11:52,300 --> 00:12:03,790
188
00:12:03,790 --> 00:12:06,100
Well, you have to ask what
the encoder does, right?
189
00:12:06,100 --> 00:12:08,610
So from here, we have
a PAM modulator.
190
00:12:08,610 --> 00:12:15,210
191
00:12:15,210 --> 00:12:17,315
So basically, from
here on, we
192
00:12:17,315 --> 00:12:19,910
are back to the system.
193
00:12:19,910 --> 00:12:21,560
So what's the spectral
efficiency
194
00:12:21,560 --> 00:12:22,810
now for this system?
195
00:12:22,810 --> 00:12:29,580
196
00:12:29,580 --> 00:12:31,035
How many bits do you
have per symbol?
197
00:12:31,035 --> 00:12:34,823
198
00:12:34,823 --> 00:12:36,820
AUDIENCE: [UNINTELLIGIBLE]
199
00:12:36,820 --> 00:12:39,730
PROFESSOR: You have B over
N bits per symbol.
200
00:12:39,730 --> 00:12:43,620
Now how many symbols do you have
per dimension, if this is
201
00:12:43,620 --> 00:12:44,960
an orthonormal PAM system?
202
00:12:44,960 --> 00:12:51,890
203
00:12:51,890 --> 00:12:52,890
AUDIENCE: One symbol
per dimension.
204
00:12:52,890 --> 00:12:54,900
PROFESSOR: You have one symbol
per dimension, right?
205
00:12:54,900 --> 00:12:59,060
So you have B bits per N
dimensions, in other words.
206
00:12:59,060 --> 00:13:00,310
AUDIENCE: [UNINTELLIGIBLE]
207
00:13:00,310 --> 00:13:04,490
208
00:13:04,490 --> 00:13:07,970
PROFESSOR: So in QAM how many
-- you have usually how many
209
00:13:07,970 --> 00:13:09,300
symbols per dimension?
210
00:13:09,300 --> 00:13:09,740
AUDIENCE: [UNINTELLIGIBLE]
211
00:13:09,740 --> 00:13:12,880
PROFESSOR: Half a symbol
per dimension, right.
212
00:13:12,880 --> 00:13:15,650
So in that case, the spectral
efficiency --
213
00:13:15,650 --> 00:13:23,160
the units are bits per two
dimensions, is 2B over N. You have B
214
00:13:23,160 --> 00:13:26,805
over N bits per dimension, and
because the units are bits per
215
00:13:26,805 --> 00:13:29,890
two dimensions, you get
2B over N bits per two
216
00:13:29,890 --> 00:13:31,640
dimensions.
217
00:13:31,640 --> 00:13:32,600
OK?
218
00:13:32,600 --> 00:13:36,480
Now a natural question to ask
is, how is this definition
219
00:13:36,480 --> 00:13:39,060
related to this definition
here?
220
00:13:39,060 --> 00:13:42,130
This is quite a natural
definition, and here we have
221
00:13:42,130 --> 00:13:44,300
imposed this encoder
structure.
222
00:13:44,300 --> 00:13:47,490
Are the two definitions
equivalent in any way?
223
00:13:47,490 --> 00:13:57,620
And in order to understand
this, let us take
224
00:13:57,620 --> 00:13:58,870
a one-second snapshot.
225
00:13:58,870 --> 00:14:11,370
226
00:14:11,370 --> 00:14:19,250
So now in one second, I can
send N equals 2W symbols.
227
00:14:19,250 --> 00:14:23,010
Because this is an orthonormal
PAM system, I can send 2W
228
00:14:23,010 --> 00:14:25,542
symbols per second, so
in one second, I can
229
00:14:25,542 --> 00:14:28,220
send N equals 2W symbols.
230
00:14:28,220 --> 00:14:33,680
Because my rate is R bits per
second, B equals R, in one
231
00:14:33,680 --> 00:14:35,850
second I can send R bits.
232
00:14:35,850 --> 00:14:39,980
So now my definition of rho,
which I defined to be 2B over
233
00:14:39,980 --> 00:14:47,000
N, is the same as 2R over 2W, which
is R over W. So the
234
00:14:47,000 --> 00:14:49,380
definitions in continuous time
and discrete time are
235
00:14:49,380 --> 00:14:50,630
equivalent.
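The one-second snapshot argument can be written out as a tiny check (W and R are assumed example values):

```python
# One-second snapshot of an orthonormal PAM system:
# N = 2W symbols carry B = R bits, so
#   rho = 2B / N  (bits per two dimensions)  equals  R / W  (bits/s/Hz).
W = 4000.0     # assumed bandwidth, Hz
R = 12000.0    # assumed bit rate, bits per second
N = 2 * W      # symbols sent in one second
B = R          # bits sent in one second

rho_discrete = 2 * B / N
rho_continuous = R / W
print(rho_discrete, rho_continuous)   # both 3.0
```
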
236
00:14:50,630 --> 00:14:53,130
237
00:14:53,130 --> 00:14:54,380
OK?
238
00:14:54,380 --> 00:15:25,540
239
00:15:25,540 --> 00:15:28,950
Now, why is spectral efficiency
very important?
240
00:15:28,950 --> 00:15:32,200
Well, there is a very famous
theorem by Shannon which gives
241
00:15:32,200 --> 00:15:35,100
us a nice upper bound on the
spectral efficiency.
242
00:15:35,100 --> 00:15:38,600
Perhaps the most important
theorem in communications is
243
00:15:38,600 --> 00:15:44,610
Shannon's Theorem. It says
that if you have an AWGN
244
00:15:44,610 --> 00:15:48,830
system with a certain SNR, then
you can immediately bound
245
00:15:48,830 --> 00:15:56,720
the spectral efficiency by log2
of 1 plus SNR bits per
246
00:15:56,720 --> 00:15:59,070
two dimensions.
247
00:15:59,070 --> 00:16:01,410
This is a very powerful
statement.
248
00:16:01,410 --> 00:16:10,460
Equivalently, the capacity of an
AWGN channel is W log2 1 plus
249
00:16:10,460 --> 00:16:15,090
SNR bits per second.
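Shannon's bound is easy to evaluate directly; here is a minimal sketch (the helper names are mine, and SNR = 3 is just an illustrative value):

```python
import math

# Shannon's bound: rho <= log2(1 + SNR) bits per two dimensions,
# and equivalently C = W * log2(1 + SNR) bits per second.
def rho_shannon(snr):
    return math.log2(1 + snr)

def capacity(P, N0, W):
    return W * rho_shannon(P / (N0 * W))

print(rho_shannon(3.0))   # 2.0 bits per two dimensions at SNR = 3
```
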
250
00:16:15,090 --> 00:16:17,750
So one important observation
here is if I have a
251
00:16:17,750 --> 00:16:20,850
communication system, and if
what I care about is the
252
00:16:20,850 --> 00:16:24,370
spectral efficiency or the
capacity, there are only two
253
00:16:24,370 --> 00:16:26,360
terms that are important.
254
00:16:26,360 --> 00:16:30,750
One is the signal to noise
ratio, which is P over N_0 W,
255
00:16:30,750 --> 00:16:33,020
which is defined here.
256
00:16:33,020 --> 00:16:36,230
So the individual values of P
and N_0 don't matter, it's
257
00:16:36,230 --> 00:16:38,710
only the ratio of P over
N_0 that matters.
258
00:16:38,710 --> 00:16:41,240
And the second parameter is the
bandwidth W that we have
259
00:16:41,240 --> 00:16:42,440
in the system.
260
00:16:42,440 --> 00:16:45,930
So signal to noise ratio and
bandwidth are in some sense
261
00:16:45,930 --> 00:16:49,000
fundamental to the system.
262
00:16:49,000 --> 00:16:53,050
An operational meaning of this
theorem is that if I look at
263
00:16:53,050 --> 00:16:56,260
this encoder, then it gives me
an upper bound of how many
264
00:16:56,260 --> 00:16:59,090
bits I can put on
each symbol.
265
00:16:59,090 --> 00:17:02,250
The number of bits that I can
put on each symbol is upper
266
00:17:02,250 --> 00:17:06,060
bounded by this term here,
log2 1 plus SNR.
267
00:17:06,060 --> 00:17:10,500
I cannot put arbitrarily many
bits per each symbol here.
268
00:17:10,500 --> 00:17:12,819
Now in order to make such a
statement, there has to be
269
00:17:12,819 --> 00:17:15,490
some criterion that we
need to satisfy.
270
00:17:15,490 --> 00:17:17,119
And what is the criterion
for Shannon's Theorem?
271
00:17:17,119 --> 00:17:23,609
272
00:17:23,609 --> 00:17:26,220
In order to make such a
statement, I need to say that
273
00:17:26,220 --> 00:17:28,630
some objective function
has to be satisfied.
274
00:17:28,630 --> 00:17:30,800
Because in some sense, I could
just put any number of bits I
275
00:17:30,800 --> 00:17:33,270
want on each
symbol, right?
276
00:17:33,270 --> 00:17:36,130
The encoder could just
put 100 or 200 bits.
277
00:17:36,130 --> 00:17:38,390
What's going to limit me?
278
00:17:38,390 --> 00:17:39,580
AUDIENCE: Probability
of error?
279
00:17:39,580 --> 00:17:40,920
PROFESSOR: The probability
of error.
280
00:17:40,920 --> 00:17:42,280
So this assumes that --
281
00:17:42,280 --> 00:17:53,220
282
00:17:53,220 --> 00:17:54,460
OK?
283
00:17:54,460 --> 00:17:58,256
Now, you're not responsible for
the proof of this theorem;
284
00:17:58,256 --> 00:18:00,435
it is in Chapter Three
of the notes.
285
00:18:00,435 --> 00:18:09,420
286
00:18:09,420 --> 00:18:11,890
Basically, it's just a random
coding argument, which is
287
00:18:11,890 --> 00:18:15,720
quite standard in information
theory.
288
00:18:15,720 --> 00:18:19,330
So if you have already taken
information theory or
289
00:18:19,330 --> 00:18:22,370
otherwise, you have probably
seen that the argument
290
00:18:22,370 --> 00:18:25,440
involves bounding atypical
events that happen.
291
00:18:25,440 --> 00:18:28,190
So the probability of error is
an atypical event, and we use
292
00:18:28,190 --> 00:18:30,230
asymptotic equipartition
property to
293
00:18:30,230 --> 00:18:32,010
bound the error event.
294
00:18:32,010 --> 00:18:35,210
There's a standard proof in
Cover and Thomas, for example.
295
00:18:35,210 --> 00:18:39,310
Now the main difference in
Professor Forney's approach is
296
00:18:39,310 --> 00:18:42,320
that he uses this Theory
of Large Deviations.
297
00:18:42,320 --> 00:18:46,640
Theory of Large Deviations
basically gives you a bound on
298
00:18:46,640 --> 00:18:49,370
the occurrence of rare events,
and it is well-known in
299
00:18:49,370 --> 00:18:51,070
statistical mechanics.
300
00:18:51,070 --> 00:18:53,620
So it's kind of a different
approach to the same problem.
301
00:18:53,620 --> 00:18:55,830
The basic idea is the same as
you would find in a standard
302
00:18:55,830 --> 00:18:57,720
proof, but it uses --
303
00:18:57,720 --> 00:19:00,010
it comes from this idea of
large deviations theory.
304
00:19:00,010 --> 00:19:07,830
305
00:19:07,830 --> 00:19:10,650
So for those of you who are
taking information theory or
306
00:19:10,650 --> 00:19:13,570
already have seen it, I urge
you, go at some point, take a
307
00:19:13,570 --> 00:19:14,950
look at this proof.
308
00:19:14,950 --> 00:19:16,670
It's quite cool.
309
00:19:16,670 --> 00:19:18,950
I already saw somebody reading
this last Friday, and it was
310
00:19:18,950 --> 00:19:19,950
quite impressive.
311
00:19:19,950 --> 00:19:21,800
So I urge more of
you to do that.
312
00:19:21,800 --> 00:19:28,220
313
00:19:28,220 --> 00:19:32,810
So now that we have the spectral
efficiency, a natural
314
00:19:32,810 --> 00:19:34,200
thing is to plot what
the spectral
315
00:19:34,200 --> 00:19:36,220
efficiency looks like.
316
00:19:36,220 --> 00:19:41,080
So what I'm going to plot is
the spectral efficiency as a
317
00:19:41,080 --> 00:19:43,990
function of SNR.
318
00:19:43,990 --> 00:19:46,700
Typically, when you plot SNR
on the x-axis, you almost
319
00:19:46,700 --> 00:19:49,140
always plot it on a dB scale.
320
00:19:49,140 --> 00:19:51,280
But I'm going to make one
exception this time, and I'm
321
00:19:51,280 --> 00:19:55,900
going to plot this on
a linear scale.
322
00:19:55,900 --> 00:20:02,400
So this point is zero, or minus
infinity dB here, and
323
00:20:02,400 --> 00:20:03,590
I'm going to plot --
324
00:20:03,590 --> 00:20:06,580
well I should call this,
actually, rho Shannon, so I
325
00:20:06,580 --> 00:20:08,910
don't confuse the notation.
326
00:20:08,910 --> 00:20:11,650
So we'll define rho Shannon
as log2 1 plus SNR.
327
00:20:11,650 --> 00:20:16,030
328
00:20:16,030 --> 00:20:19,840
So this is rho Shannon here,
and we want to plot rho
329
00:20:19,840 --> 00:20:22,630
Shannon as a function of SNR.
330
00:20:22,630 --> 00:20:26,160
Now if my SNR is really small,
log 1 plus SNR is
331
00:20:26,160 --> 00:20:31,000
approximately linear, so I get
a linear increase here.
332
00:20:31,000 --> 00:20:38,440
If my SNR is large, then the
logarithmic behavior kicks
333
00:20:38,440 --> 00:20:41,550
in for this expression here, so
now the spectral efficiency
334
00:20:41,550 --> 00:20:44,900
grows slower and slower
with SNR.
335
00:20:44,900 --> 00:20:49,630
So this is a basic shape for
my spectral efficiency.
336
00:20:49,630 --> 00:20:52,060
And this immediately suggests
that there are two different
337
00:20:52,060 --> 00:20:54,110
operating regimes we have.
338
00:20:54,110 --> 00:20:57,030
One regime where the spectral
efficiency increases linearly
339
00:20:57,030 --> 00:21:00,320
with SNR, another regime where
the spectral efficiency
340
00:21:00,320 --> 00:21:03,220
increases logarithmically
with SNR.
341
00:21:03,220 --> 00:21:11,250
So if SNR is very small, then
we call this regime the
342
00:21:11,250 --> 00:21:12,500
power-limited regime.
343
00:21:12,500 --> 00:21:21,200
344
00:21:21,200 --> 00:21:26,920
And if SNR is large,
we call this the
345
00:21:26,920 --> 00:21:29,320
bandwidth-limited regime.
346
00:21:29,320 --> 00:21:34,820
347
00:21:34,820 --> 00:21:36,790
These are our definitions.
348
00:21:36,790 --> 00:21:39,230
And let's see what motivates
their names.
349
00:21:39,230 --> 00:21:43,200
Suppose I have a 3 dB increase
in my SNR, and I am in the
350
00:21:43,200 --> 00:21:44,760
power-limited regime.
351
00:21:44,760 --> 00:21:46,025
How does rho Shannon increase?
352
00:21:46,025 --> 00:21:51,588
353
00:21:51,588 --> 00:21:52,838
AUDIENCE: [UNINTELLIGIBLE]
354
00:21:52,838 --> 00:21:56,370
355
00:21:56,370 --> 00:21:58,440
PROFESSOR: A factor
of 2, right?
356
00:21:58,440 --> 00:22:02,220
Basically, if I have a 3 dB
increase in SNR, my SNR
357
00:22:02,220 --> 00:22:04,330
increases by a factor of 2.
358
00:22:04,330 --> 00:22:08,230
In this regime, rho Shannon
increases linearly with SNR,
359
00:22:08,230 --> 00:22:10,590
so I have a factor of 2 increase
in my spectral
360
00:22:10,590 --> 00:22:11,680
efficiency.
361
00:22:11,680 --> 00:22:13,840
What about this regime here?
362
00:22:13,840 --> 00:22:17,760
If SNR increases by 3 dB, how
does rho Shannon increase?
363
00:22:17,760 --> 00:22:31,080
364
00:22:31,080 --> 00:22:34,690
It increases by one bit
per two dimensions.
365
00:22:34,690 --> 00:22:37,200
Units of rho are bits
per two dimensions.
366
00:22:37,200 --> 00:22:40,130
I have a logarithmic behavior
kicking in, so rho is
367
00:22:40,130 --> 00:22:42,060
approximately log SNR.
368
00:22:42,060 --> 00:22:45,120
If I increase SNR by a factor
of 2, I get an additional 1
369
00:22:45,120 --> 00:22:48,030
bit per two dimensions.
370
00:22:48,030 --> 00:22:49,590
OK?
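These two behaviors of a 3 dB (factor-of-2) SNR increase can be checked numerically; the SNR values below are arbitrary illustrative choices:

```python
import math

def rho_shannon(snr):
    return math.log2(1 + snr)

# Power-limited regime (SNR << 1): doubling SNR roughly doubles rho.
small = 0.01
print(rho_shannon(2 * small) / rho_shannon(small))   # close to 2

# Bandwidth-limited regime (SNR >> 1): doubling SNR adds about
# one bit per two dimensions to rho.
large = 1000.0
print(rho_shannon(2 * large) - rho_shannon(large))   # close to 1
```
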
371
00:22:49,590 --> 00:22:53,140
If I increase my bandwidth in
the power-limited regime by a
372
00:22:53,140 --> 00:22:54,650
factor of 2.
373
00:22:54,650 --> 00:22:57,870
So this bandwidth here increases
by a factor of 2,
374
00:22:57,870 --> 00:23:02,750
how does my capacity change,
if I'm in the
375
00:23:02,750 --> 00:23:04,000
power-limited regime?
376
00:23:04,000 --> 00:23:09,820
377
00:23:09,820 --> 00:23:12,980
There's no change, right?
378
00:23:12,980 --> 00:23:16,270
Or is there a change?
379
00:23:16,270 --> 00:23:19,570
I'm in the power-limited regime
here, and I increase my
380
00:23:19,570 --> 00:23:21,185
bandwidth by a factor of 2.
381
00:23:21,185 --> 00:23:22,435
AUDIENCE: [UNINTELLIGIBLE]
382
00:23:22,435 --> 00:23:26,520
383
00:23:26,520 --> 00:23:27,500
PROFESSOR: Right.
384
00:23:27,500 --> 00:23:29,590
What is my SNR?
385
00:23:29,590 --> 00:23:32,960
It's P over W N_0.
386
00:23:32,960 --> 00:23:36,400
So what happens if
I fix P over N_0?
387
00:23:36,400 --> 00:23:38,190
OK, yeah, so you had it.
388
00:23:38,190 --> 00:23:41,370
Basically, if I double my
bandwidth, my SNR decreases by
389
00:23:41,370 --> 00:23:47,350
a factor of 2, so this term
here is like SNR over 2, W
390
00:23:47,350 --> 00:23:49,440
increases by a factor
of 2, and there is
391
00:23:49,440 --> 00:23:51,980
no change in bandwidth.
392
00:23:51,980 --> 00:23:53,230
So let's do it more slowly.
393
00:23:53,230 --> 00:24:00,590
394
00:24:00,590 --> 00:24:01,840
So we have, say, in the --
395
00:24:01,840 --> 00:24:12,680
396
00:24:12,680 --> 00:24:19,390
and say, my C is now
W log2 1 plus SNR.
397
00:24:19,390 --> 00:24:24,970
Instead of SNR, I will write
P over N_0 W. So this
398
00:24:24,970 --> 00:24:31,970
approximately is W times
P over N_0 W times log
399
00:24:31,970 --> 00:24:35,090
e to the base 2.
400
00:24:35,090 --> 00:24:37,330
And this is approximately --
401
00:24:37,330 --> 00:24:44,300
this is basically P over N_0
times log e to the base 2.
402
00:24:44,300 --> 00:24:47,330
So in other words, in the
power-limited regime, changing
403
00:24:47,330 --> 00:24:50,970
bandwidth has no effect
on the capacity.
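The derivation above, C approximately equal to (P/N_0) times log2 of e for small SNR, can be sanity-checked numerically (P = N_0 = 1 is an arbitrary normalization):

```python
import math

def capacity(P, N0, W):
    return W * math.log2(1 + P / (N0 * W))

# Power-limited regime: for large W (hence small SNR),
#   C approaches (P / N0) * log2(e), independent of W.
P, N0 = 1.0, 1.0                 # assumed normalization
limit = (P / N0) * math.log2(math.e)
print(capacity(P, N0, 1e6), capacity(P, N0, 2e6), limit)
# all three agree to several decimal places: doubling W barely moves C
```
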
404
00:24:50,970 --> 00:24:53,060
What happens in the
bandwidth-limited regime?
405
00:24:53,060 --> 00:25:05,860
406
00:25:05,860 --> 00:25:14,130
Well, in this case
C equals W log2.
407
00:25:14,130 --> 00:25:16,680
Well, I can say approximately,
and I can ignore this factor
408
00:25:16,680 --> 00:25:20,750
of 1, because P over N_0 W is
large, because my SNR is large
409
00:25:20,750 --> 00:25:22,730
in the bandwidth-limited
regime.
410
00:25:22,730 --> 00:25:27,580
I'm operating in this
part of the graph.
411
00:25:27,580 --> 00:25:33,720
So this is P over N_0 W. Now
suppose I increase my
412
00:25:33,720 --> 00:25:35,890
bandwidth by a factor of 2.
413
00:25:35,890 --> 00:25:44,110
C prime will be 2W log2
of P over 2W N_0.
414
00:25:44,110 --> 00:25:48,040
415
00:25:48,040 --> 00:25:49,340
All right?
416
00:25:49,340 --> 00:26:00,220
This, I can write it as 2W log2
P over N_0 W minus --
417
00:26:00,220 --> 00:26:04,050
this is to the base 2, so
I can write a 1 here.
418
00:26:04,050 --> 00:26:06,900
And this term is approximately
the same --
419
00:26:06,900 --> 00:26:10,750
if I ignore the 1, if I assume
P over N_0 W is quite large,
420
00:26:10,750 --> 00:26:13,330
then I can just ignore
the subtraction of 1.
421
00:26:13,330 --> 00:26:19,610
So I get 2W log2 P over N_0 W,
or equivalently 2C.
422
00:26:19,610 --> 00:26:23,040
OK, so in other words, in the
bandwidth-limited regime, if I
423
00:26:23,040 --> 00:26:28,480
increase my W by a factor of 2,
the capacity approximately
424
00:26:28,480 --> 00:26:30,330
increases by a factor of 2.
425
00:26:30,330 --> 00:26:31,960
So that's what motivates
the name
426
00:26:31,960 --> 00:26:35,040
bandwidth-limited regime here.
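And the bandwidth-limited calculation can be checked the same way (SNR = 1000 here is an arbitrary large value):

```python
import math

def capacity(P, N0, W):
    return W * math.log2(1 + P / (N0 * W))

# Bandwidth-limited regime: SNR >> 1, so doubling W roughly doubles C.
# More precisely, C' = 2W * (log2(P / (N0 * W)) - 1), i.e. 2C minus
# the small "minus 1" correction from the derivation.
P, N0, W = 1000.0, 1.0, 1.0      # assumed values giving SNR = 1000
C = capacity(P, N0, W)
C2 = capacity(P, N0, 2 * W)
print(C, C2, 2 * C - 2 * W)      # C2 is very close to 2C - 2W
```
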
427
00:26:35,040 --> 00:26:36,290
Are there any questions?
428
00:26:36,290 --> 00:26:40,180
429
00:26:40,180 --> 00:26:41,430
OK.
430
00:26:41,430 --> 00:27:49,370
431
00:27:49,370 --> 00:27:50,010
All right.
432
00:27:50,010 --> 00:27:54,390
So it turns out that there are
two points I wanted to say.
433
00:27:54,390 --> 00:27:57,920
First of all, one might ask:
fine, these seem to be
434
00:27:57,920 --> 00:28:01,580
interesting definitions, with
SNR much smaller than one,
435
00:28:01,580 --> 00:28:04,860
and SNR is much larger than one,
but is there any kind of
436
00:28:04,860 --> 00:28:07,830
hard division between the
bandwidth-limited regime and
437
00:28:07,830 --> 00:28:09,450
power-limited regime?
438
00:28:09,450 --> 00:28:10,900
I mean, the general
answer is no.
439
00:28:10,900 --> 00:28:11,930
It's basically --
440
00:28:11,930 --> 00:28:15,680
because there is some point when
the capacity appears not
441
00:28:15,680 --> 00:28:18,000
strictly logarithmically,
or strictly
442
00:28:18,000 --> 00:28:19,910
linearly in terms of SNR.
443
00:28:19,910 --> 00:28:23,980
But from an engineering point
of view, we take rho equals
444
00:28:23,980 --> 00:28:34,510
two bits per two dimensions as
a dividing point between the
445
00:28:34,510 --> 00:28:35,760
two regimes.
446
00:28:35,760 --> 00:28:44,780
447
00:28:44,780 --> 00:28:48,100
And one motivation to see why
rho equals two bits per two
448
00:28:48,100 --> 00:28:51,180
dimensions is a good choice is
because this is the maximum
449
00:28:51,180 --> 00:28:55,680
spectral efficiency we can get
from binary modulation.
450
00:28:55,680 --> 00:28:58,340
OK, if you want to have spectral
efficiency more than
451
00:28:58,340 --> 00:29:01,160
two bits per two dimensions, you
have to go to multi-level
452
00:29:01,160 --> 00:29:05,830
modulation, and that's one of
the reasons why this choice is
453
00:29:05,830 --> 00:29:09,090
often used in practice.
454
00:29:09,090 --> 00:29:12,180
Another point is that the
bandwidth-limited regime and
455
00:29:12,180 --> 00:29:15,340
power-limited regime, they
behave quite differently in
456
00:29:15,340 --> 00:29:18,390
almost all criteria that
you can think about.
457
00:29:18,390 --> 00:29:20,040
So usually, with
the analysis --
458
00:29:20,040 --> 00:29:22,490
we keep them separate and do
the analysis differently in
459
00:29:22,490 --> 00:29:24,780
the bandwidth and power-limited
regime, and
460
00:29:24,780 --> 00:29:27,130
that's what we will be doing in
the subsequent part of this
461
00:29:27,130 --> 00:29:28,450
lecture and next lecture.
462
00:29:28,450 --> 00:29:34,110
463
00:29:34,110 --> 00:29:36,020
So we start with the
power-limited regime.
464
00:29:36,020 --> 00:29:46,520
465
00:29:46,520 --> 00:30:03,230
Now, we already saw that
doubling the power doubles rho,
466
00:30:03,230 --> 00:30:19,600
and doubling the bandwidth
does not change C.
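These low-SNR scaling facts can be checked directly from the capacity formula C = W log2(1 + P/(N0·W)). The Python sketch below is my own illustration, not part of the lecture; the values of P, W, and N0 are arbitrary.

```python
import math

def capacity(P, W, N0=1.0):
    """AWGN capacity in bits/s: C = W * log2(1 + P/(N0*W))."""
    return W * math.log2(1 + P / (N0 * W))

# Power-limited (SNR << 1): C ~ P/(N0 * ln 2), so doubling P
# roughly doubles C, while doubling W leaves C nearly unchanged.
P, W = 0.01, 1.0
print(capacity(2 * P, W) / capacity(P, W))   # close to 2
print(capacity(P, 2 * W) / capacity(P, W))   # close to 1
```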
467
00:30:19,600 --> 00:30:27,890
OK, the other point I
mentioned was binary
468
00:30:27,890 --> 00:30:37,510
modulation is sufficient in
this regime, because our
469
00:30:37,510 --> 00:30:39,460
spectral efficiency is less
than two bits per two
470
00:30:39,460 --> 00:30:42,950
dimensions, so the idea is to
have a strong code followed by
471
00:30:42,950 --> 00:30:44,200
binary modulation.
472
00:30:44,200 --> 00:30:46,490
473
00:30:46,490 --> 00:30:47,740
What else?
474
00:30:47,740 --> 00:30:51,780
475
00:30:51,780 --> 00:30:52,030
Right.
476
00:30:52,030 --> 00:31:01,100
Typically, the normalization
is done
477
00:31:01,100 --> 00:31:02,350
per information bit.
478
00:31:02,350 --> 00:31:09,840
479
00:31:09,840 --> 00:31:11,300
What does this mean?
480
00:31:11,300 --> 00:31:14,100
When we want to compare
different systems, we will
481
00:31:14,100 --> 00:31:16,220
look at all the parameters
normalized
482
00:31:16,220 --> 00:31:18,150
per information bit.
483
00:31:18,150 --> 00:31:20,570
For example, if we want to look
at the probability of
484
00:31:20,570 --> 00:31:25,450
error, we look at this quantity
here, Pb of E, which
485
00:31:25,450 --> 00:31:29,060
is the probability of error
per information bit, as a
486
00:31:29,060 --> 00:31:30,310
function of Eb/N_0.
487
00:31:30,310 --> 00:31:34,050
488
00:31:34,050 --> 00:31:37,270
This is an important trade-off
that we wish to study in
489
00:31:37,270 --> 00:31:39,020
power-limited regime.
490
00:31:39,020 --> 00:31:43,310
Eb is the energy per bit, P sub
b of E is the probability
491
00:31:43,310 --> 00:31:45,372
of error per information bit.
492
00:31:45,372 --> 00:32:08,510
493
00:32:08,510 --> 00:32:11,360
So I want to spend some time on
this Eb/N_0, because it's
494
00:32:11,360 --> 00:32:13,370
an important concept
that we'll be using
495
00:32:13,370 --> 00:32:15,080
often in the course.
496
00:32:15,080 --> 00:32:16,870
So what is Eb/N_0?
497
00:32:16,870 --> 00:32:21,190
It's sometimes also known as Eb
over N_0, so it depends how
498
00:32:21,190 --> 00:32:23,340
you wish to call it.
499
00:32:23,340 --> 00:32:25,995
What is energy per bit
in terms of Es?
500
00:32:25,995 --> 00:32:36,490
501
00:32:36,490 --> 00:32:37,010
AUDIENCE: [UNINTELLIGIBLE]
502
00:32:37,010 --> 00:32:38,280
PROFESSOR: Well, Es
over rho, right?
503
00:32:38,280 --> 00:32:39,890
Energy per bit.
504
00:32:39,890 --> 00:32:42,240
Well, Es is energy per
two dimensions.
505
00:32:42,240 --> 00:32:45,330
Rho is bits per two dimensions,
so Eb is Es over
506
00:32:45,330 --> 00:32:52,150
rho, and you have N_0 here.
507
00:32:52,150 --> 00:32:57,403
And Es over N_0 is also our SNR,
so this is SNR over rho.
508
00:32:57,403 --> 00:33:00,160
509
00:33:00,160 --> 00:33:04,410
OK, so that's how Eb/N_0
is defined.
510
00:33:04,410 --> 00:33:11,270
Now we know from Shannon that
rho is always less than log2 1
511
00:33:11,270 --> 00:33:16,810
plus SNR, or equivalently,
2 to the rho minus 1
512
00:33:16,810 --> 00:33:19,990
is less than SNR.
513
00:33:19,990 --> 00:33:24,216
If I substitute in here, this means
that SNR over rho is greater than 2 to
514
00:33:24,216 --> 00:33:27,980
the rho minus 1 over
rho -- we're
515
00:33:27,980 --> 00:33:29,370
just dividing both sides by rho.
516
00:33:29,370 --> 00:33:31,590
So Eb/N_0 is always greater
than 2 to the rho
517
00:33:31,590 --> 00:33:33,760
minus 1 over rho.
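The bound just stated is easy to evaluate numerically. A minimal Python sketch of my own (not from the lecture; the sample rho values are arbitrary):

```python
import math

def ebn0_floor(rho):
    """Shannon lower bound on Eb/N_0 for spectral efficiency rho
    (bits per two dimensions): Eb/N_0 > (2**rho - 1)/rho."""
    return (2 ** rho - 1) / rho

def to_db(x):
    return 10 * math.log10(x)

# Any claimed (rho, Eb/N_0) operating point must sit above this floor.
for rho in (0.5, 1, 2, 4):
    print(rho, to_db(ebn0_floor(rho)))
```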
518
00:33:33,760 --> 00:33:36,420
And this is quite an interesting
observation.
519
00:33:36,420 --> 00:33:39,460
For example, if you are
analyzing the feasibility of a
520
00:33:39,460 --> 00:33:42,710
communication system, which
has a certain spectral
521
00:33:42,710 --> 00:33:46,240
efficiency, and a certain
Eb/N_0, then you can
522
00:33:46,240 --> 00:33:48,990
immediately check this relation,
to see if it is a
523
00:33:48,990 --> 00:33:51,840
feasible system in the first place.
524
00:33:51,840 --> 00:33:54,240
What Shannon says is that Eb/N_0
is always going to be
525
00:33:54,240 --> 00:33:57,060
greater than 2 to the rho
minus 1 over rho.
526
00:33:57,060 --> 00:34:00,030
So if you see that this relation
is not satisfied,
527
00:34:00,030 --> 00:34:01,610
you immediately know something
is wrong.
528
00:34:01,610 --> 00:34:04,160
529
00:34:04,160 --> 00:34:06,920
This actually reminds me of an
interesting anecdote that
530
00:34:06,920 --> 00:34:08,480
Professor Forney once
mentioned when I
531
00:34:08,480 --> 00:34:10,090
was taking the class.
532
00:34:10,090 --> 00:34:13,170
Well he has been in this field
since the '60s, and so he has
533
00:34:13,170 --> 00:34:14,989
seen a lot of this stuff.
534
00:34:14,989 --> 00:34:16,610
He was saying when
turbo codes --
535
00:34:16,610 --> 00:34:19,449
which are among the first
capacity-approaching codes in
536
00:34:19,449 --> 00:34:21,730
recent years when they
were proposed --
537
00:34:21,730 --> 00:34:24,420
they presented the results
at ICC, the International
538
00:34:24,420 --> 00:34:27,889
Conference in Communications,
and what they saw was that the
539
00:34:27,889 --> 00:34:33,370
performance was very close to
the limit that we can predict.
540
00:34:33,370 --> 00:34:36,770
Eb/N_0 was very close to
the ultimate limit --
541
00:34:36,770 --> 00:34:39,250
the Eb/N_0 achieved by turbo
codes was very close to the
542
00:34:39,250 --> 00:34:43,500
ultimate limit that one can
predict, at least 3 dB better
543
00:34:43,500 --> 00:34:45,679
than the best codes that
were available at the time.
544
00:34:45,679 --> 00:34:47,789
So most people just thought that
there was something wrong
545
00:34:47,789 --> 00:34:50,360
in the simulations, and they
told them that there was a
546
00:34:50,360 --> 00:34:52,560
factor of 2 missing
somewhere, so they
547
00:34:52,560 --> 00:34:53,929
should just double check.
548
00:34:53,929 --> 00:34:55,980
But it turned out when they went
back and people actually
549
00:34:55,980 --> 00:34:58,470
implemented these codes, they
were actually very close to
550
00:34:58,470 --> 00:35:00,910
the capacity.
551
00:35:00,910 --> 00:35:04,730
So sometimes, you have
to be careful.
552
00:35:04,730 --> 00:35:06,770
If you are not going below this
limit, it could be that
553
00:35:06,770 --> 00:35:08,100
the system is good.
554
00:35:08,100 --> 00:35:10,217
And when it is, it's a really
important breakthrough.
555
00:35:10,217 --> 00:35:14,610
OK.
556
00:35:14,610 --> 00:35:19,830
So one particular concept that
comes here is the idea of
557
00:35:19,830 --> 00:35:21,375
ultimate Shannon limit.
558
00:35:21,375 --> 00:35:30,140
559
00:35:30,140 --> 00:35:32,880
And basically, if we are talking
about the power-limited
560
00:35:32,880 --> 00:35:35,640
regime, our SNR is very small.
561
00:35:35,640 --> 00:35:38,920
So our spectral efficiency is
going to be quite small.
562
00:35:38,920 --> 00:35:40,970
Now notice that this function
here is monotonically
563
00:35:40,970 --> 00:35:43,450
increasing in rho.
564
00:35:43,450 --> 00:35:46,130
It's easy to show that this
function here increases
565
00:35:46,130 --> 00:35:47,430
monotonically in rho.
566
00:35:47,430 --> 00:35:50,380
You could just differentiate
this, or an easier way is
567
00:35:50,380 --> 00:35:53,630
to do a Taylor expansion of 2 to
the rho minus 1 over rho, take the series
568
00:35:53,630 --> 00:35:56,090
expansion and show each
term is positive.
569
00:35:56,090 --> 00:35:58,080
So this term is always
going to be greater
570
00:35:58,080 --> 00:36:01,360
than this term here.
571
00:36:01,360 --> 00:36:04,420
So I'm going from
here to here.
572
00:36:04,420 --> 00:36:10,530
Limit rho tends to zero of 2 to
the rho minus 1 over rho.
573
00:36:10,530 --> 00:36:13,740
This is going to be -- it's a
simple calculus exercise to
574
00:36:13,740 --> 00:36:16,770
show this is the natural
log of 2, or in dB,
575
00:36:16,770 --> 00:36:21,780
it's minus 1.59 dB.
576
00:36:21,780 --> 00:36:25,720
So no matter what system you
design, your Eb/N_0 is always
577
00:36:25,720 --> 00:36:29,230
going to be greater than
minus 1.59 dB.
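The minus 1.59 dB figure can be reproduced numerically; a minimal sketch of my own, not part of the lecture:

```python
import math

# As rho -> 0, (2**rho - 1)/rho -> ln(2), i.e. about -1.59 dB:
# the ultimate Shannon limit on Eb/N_0.
rho = 1e-6
limit = (2 ** rho - 1) / rho
print(limit, math.log(2))               # both about 0.693
print(10 * math.log10(math.log(2)))     # about -1.59 dB
```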
578
00:36:29,230 --> 00:36:32,050
That's basically what this
calculation shows.
579
00:36:32,050 --> 00:36:33,270
And when is it achieved?
580
00:36:33,270 --> 00:36:34,770
Well, it's only achieved
when the spectral
581
00:36:34,770 --> 00:36:37,250
efficiency goes to zero.
582
00:36:37,250 --> 00:36:40,210
So if you have a deep space
communication system, where
583
00:36:40,210 --> 00:36:42,500
you have lots and lots of
bandwidth, and you do not care
584
00:36:42,500 --> 00:36:46,490
about spectral efficiency, then
if your only criterion is
585
00:36:46,490 --> 00:36:49,270
to minimize Eb/N_0, then you
can design your system
586
00:36:49,270 --> 00:36:52,260
accordingly, check how much
Eb/N_0 you require for a
587
00:36:52,260 --> 00:36:55,110
certain probability of error,
and see how far you are from
588
00:36:55,110 --> 00:36:57,300
the ultimate Shannon limit.
589
00:36:57,300 --> 00:36:59,970
So in this way, in the
power-limited regime, you can
590
00:36:59,970 --> 00:37:03,650
quantify your gap to capacity,
if you will, through this
591
00:37:03,650 --> 00:37:07,090
ultimate Shannon limit.
592
00:37:07,090 --> 00:37:08,370
OK, are there any questions?
593
00:37:08,370 --> 00:38:44,890
594
00:38:44,890 --> 00:38:45,710
OK.
595
00:38:45,710 --> 00:38:47,735
So now let's look at the
bandwidth-limited regime.
596
00:38:47,735 --> 00:39:02,740
597
00:39:02,740 --> 00:39:04,650
So we already saw two
things in the
598
00:39:04,650 --> 00:39:06,410
bandwidth-limited regime.
599
00:39:06,410 --> 00:39:15,830
If I double P, my spectral
efficiency increases by one
600
00:39:15,830 --> 00:39:17,080
bit per two dimensions.
601
00:39:17,080 --> 00:39:19,900
602
00:39:19,900 --> 00:39:30,140
Similarly, if I double the
bandwidth, the capacity
603
00:39:30,140 --> 00:39:31,390
approximately doubles.
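Both high-SNR scaling behaviors also follow from C = W log2(1 + P/(N0·W)). Here is an illustrative Python check of my own; the P, W, N0 values are arbitrary, not from the lecture:

```python
import math

def capacity(P, W, N0=1.0):
    """AWGN capacity in bits/s: C = W * log2(1 + P/(N0*W))."""
    return W * math.log2(1 + P / (N0 * W))

# Bandwidth-limited (SNR >> 1): doubling P adds about W bits/s,
# i.e. one extra bit per two dimensions; doubling W roughly
# doubles C (the ratio approaches 2 as SNR grows).
P, W = 1000.0, 1.0
print(capacity(2 * P, W) - capacity(P, W))   # about W = 1 bit/s
print(capacity(P, 2 * W) / capacity(P, W))   # approaching 2
```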
604
00:39:31,390 --> 00:39:37,648
605
00:39:37,648 --> 00:39:40,260
Now, because you want a spectral
efficiency of more
606
00:39:40,260 --> 00:39:43,440
than two bits per two dimension,
in this system you
607
00:39:43,440 --> 00:39:45,815
typically do a multi-level
modulation.
608
00:39:45,815 --> 00:39:53,650
609
00:39:53,650 --> 00:39:56,830
So for those of you who are
familiar, we do things like
610
00:39:56,830 --> 00:39:59,760
trellis-coded modulation, and
bit-interleaved coded
611
00:39:59,760 --> 00:40:01,080
modulation, and so on.
612
00:40:01,080 --> 00:40:03,280
If we have time, we'll be seeing
those things towards
613
00:40:03,280 --> 00:40:05,370
the very end of the course.
614
00:40:05,370 --> 00:40:07,940
This is not a subject of
the course as such.
615
00:40:07,940 --> 00:40:11,470
616
00:40:11,470 --> 00:40:20,420
The normalization in this
regime is done per two
617
00:40:20,420 --> 00:40:21,670
dimensions.
618
00:40:21,670 --> 00:40:25,260
619
00:40:25,260 --> 00:40:28,405
So if you want to normalize
all the quantities, we
620
00:40:28,405 --> 00:40:30,980
normalize them per two
dimensions here.
621
00:40:30,980 --> 00:40:35,740
And particularly, we will be
seeing probability of error as
622
00:40:35,740 --> 00:40:38,850
a function of this quantity
called SNR norm.
623
00:40:38,850 --> 00:40:42,315
624
00:40:42,315 --> 00:40:45,930
So this is the performance
analysis that is done in
625
00:40:45,930 --> 00:40:47,930
bandwidth-limited regimes.
626
00:40:47,930 --> 00:40:59,450
Here Ps of E is the probability
of error for two
627
00:40:59,450 --> 00:41:00,700
dimensions.
628
00:41:00,700 --> 00:41:05,290
629
00:41:05,290 --> 00:41:06,540
What is SNR norm?
630
00:41:06,540 --> 00:41:10,360
631
00:41:10,360 --> 00:41:18,830
It's defined to be SNR over
2 to the rho minus 1.
632
00:41:18,830 --> 00:41:21,810
Why do we divide by 2
to the rho minus 1?
633
00:41:21,810 --> 00:41:25,420
Well, that's the minimum SNR
that you require for the best
634
00:41:25,420 --> 00:41:26,740
possible system.
635
00:41:26,740 --> 00:41:28,970
That's what Shannon
says, right?
636
00:41:28,970 --> 00:41:31,320
So this quantity here is always
going to be greater
637
00:41:31,320 --> 00:41:33,510
than 1, or 0 dB.
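As an illustration of this check (the operating point below is hypothetical, chosen by me, not from the lecture):

```python
import math

def snr_norm_db(snr_db, rho):
    """Gap to capacity in dB: SNR_norm = SNR / (2**rho - 1),
    which Shannon forces to be greater than 1, i.e. 0 dB."""
    snr = 10 ** (snr_db / 10)
    return 10 * math.log10(snr / (2 ** rho - 1))

# Hypothetical operating point: 20 dB SNR at rho = 6 bits per
# two dimensions; the result is that system's gap to capacity.
print(snr_norm_db(20.0, 6))
```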
638
00:41:33,510 --> 00:41:36,140
639
00:41:36,140 --> 00:41:38,640
So OK, well, this is the
ultimate Shannon limit in the
640
00:41:38,640 --> 00:41:40,860
bandwidth-limited regime.
641
00:41:40,860 --> 00:41:43,720
If you have a system that is
designed, that operates at a
642
00:41:43,720 --> 00:41:47,540
certain SNR and a certain
spectral efficiency, you can
643
00:41:47,540 --> 00:41:52,220
calculate SNR norm and see how
far you are from 0 dB.
644
00:41:52,220 --> 00:41:55,170
If you're very close to 0
dB, then that's great.
645
00:41:55,170 --> 00:41:57,890
You have a very good
system in practice.
646
00:41:57,890 --> 00:42:01,310
If not, then you have room
for improvement.
647
00:42:01,310 --> 00:42:07,600
And so, in other
words, SNR norm
648
00:42:07,600 --> 00:42:12,950
defines the gap to capacity.
649
00:42:12,950 --> 00:42:30,685
650
00:42:30,685 --> 00:42:32,155
OK, so let's do an example.
651
00:42:32,155 --> 00:42:38,920
652
00:42:38,920 --> 00:42:40,760
Suppose we have an
M-PAM system.
653
00:42:40,760 --> 00:42:43,750
654
00:42:43,750 --> 00:42:46,620
So we have an M-PAM
constellation.
655
00:42:46,620 --> 00:42:48,000
So what does it look like?
656
00:42:48,000 --> 00:42:52,350
Well, you have points here
on a linear line.
657
00:42:52,350 --> 00:42:58,830
These points are, say, minus alpha,
alpha, three alpha, minus three
658
00:42:58,830 --> 00:43:05,700
alpha, all the way up to M minus
one alpha, and here,
659
00:43:05,700 --> 00:43:08,780
minus M minus one alpha.
660
00:43:08,780 --> 00:43:10,050
Assume M is an even number.
661
00:43:10,050 --> 00:43:12,600
662
00:43:12,600 --> 00:43:16,750
So the distance between any
663
00:43:16,750 --> 00:43:18,450
two adjacent points is two alpha.
664
00:43:18,450 --> 00:43:25,400
Now we want to find the SNR norm
given that we are using
665
00:43:25,400 --> 00:43:26,650
this constellation.
666
00:43:26,650 --> 00:43:34,220
667
00:43:34,220 --> 00:43:37,540
So in other words, if I use
this constellation in my
668
00:43:37,540 --> 00:43:40,800
communication system, how far
am I operating from the
669
00:43:40,800 --> 00:43:43,420
ultimate Shannon limit?
670
00:43:43,420 --> 00:43:46,280
OK, that's the question.
671
00:43:46,280 --> 00:43:51,760
So the first thing we need
to find is the energy
672
00:43:51,760 --> 00:43:53,970
per two dimensions.
673
00:43:53,970 --> 00:43:55,790
Does anybody remember the
formula for M-PAM?
674
00:43:55,790 --> 00:44:00,140
675
00:44:00,140 --> 00:44:04,000
Well, there was a very nice
way of doing it in 6.450.
676
00:44:04,000 --> 00:44:06,890
One natural way is to simply sum
up all of the coordinates
677
00:44:06,890 --> 00:44:11,480
here, square them, and divide by
M. And because it's for two
678
00:44:11,480 --> 00:44:14,810
dimensions, we could do it --
679
00:44:14,810 --> 00:44:16,980
we have to multiply by a factor
of two, because it's per two
680
00:44:16,980 --> 00:44:18,360
dimensions --
681
00:44:18,360 --> 00:44:20,060
two over M times the summation of Xk squared.
682
00:44:20,060 --> 00:44:23,290
683
00:44:23,290 --> 00:44:26,400
And if you work out, you will
get some answer here.
684
00:44:26,400 --> 00:44:29,250
Another way that was
shown in 6.450 --
685
00:44:29,250 --> 00:44:31,740
AUDIENCE: [UNINTELLIGIBLE]
686
00:44:31,740 --> 00:44:33,940
PROFESSOR: With uniform
quantization, exactly.
687
00:44:33,940 --> 00:44:38,260
So the idea here is, you have
a source, say, which is
688
00:44:38,260 --> 00:44:44,460
uniformly distributed between
M alpha and minus M alpha.
689
00:44:44,460 --> 00:44:47,470
So you have a source uniformly
distributed, and it can be
690
00:44:47,470 --> 00:44:52,010
easily seen by inspection that
M-PAM is the best quantizer
691
00:44:52,010 --> 00:44:55,540
for this particular source.
692
00:44:55,540 --> 00:44:55,615
OK.
693
00:44:55,615 --> 00:45:03,510
So the quantization regions are
just of equal width.
694
00:45:03,510 --> 00:45:08,940
Each quantization region has width
two alpha, so your mean square
695
00:45:08,940 --> 00:45:19,250
error, if you will, is 2 alpha
all squared over 12, so it's alpha
696
00:45:19,250 --> 00:45:21,460
squared over 3.
697
00:45:21,460 --> 00:45:24,800
Well, the variance in your
source, which I will denote by
698
00:45:24,800 --> 00:45:32,610
sigma squared s, is 2M alpha
all squared over 12, or it's M
699
00:45:32,610 --> 00:45:35,790
squared alpha squared over 3.
700
00:45:35,790 --> 00:45:40,450
So your energy per symbol --
701
00:45:40,450 --> 00:45:45,090
it's energy in your quantizer,
so I'm denoting it by E of A
702
00:45:45,090 --> 00:45:47,850
in order to differentiate it from
Es, because Es is two times
703
00:45:47,850 --> 00:45:50,080
E of A, OK?
704
00:45:50,080 --> 00:45:54,260
It's going to be M squared
minus 1 times alpha
705
00:45:54,260 --> 00:45:55,530
squared over 3.
706
00:45:55,530 --> 00:46:00,840
So Es is 2 E of A, and so we
get it's 2 times alpha
707
00:46:00,840 --> 00:46:03,600
squared, M squared
minus 1 over 3.
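This Es formula can be verified by brute force over the constellation coordinates. An illustrative check of my own with alpha = 1 (any alpha works, since Es scales as alpha squared):

```python
# Brute-force check of Es = 2*alpha**2*(M**2 - 1)/3 for M-PAM.
alpha = 1.0
for M in (2, 4, 8, 16):
    # Constellation: -(M-1)*alpha, ..., -alpha, alpha, ..., (M-1)*alpha
    points = [alpha * (2 * k - M + 1) for k in range(M)]
    EA = sum(x * x for x in points) / M   # energy per symbol (E of A)
    Es = 2 * EA                           # energy per two dimensions
    assert abs(Es - 2 * alpha**2 * (M**2 - 1) / 3) < 1e-9
print("Es formula verified for M = 2, 4, 8, 16")
```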
708
00:46:03,600 --> 00:46:08,910
709
00:46:08,910 --> 00:46:12,110
OK, so can anybody tell me what
the spectral efficiency
710
00:46:12,110 --> 00:46:16,020
will be for the system, if I
use an M-PAM constellation?
711
00:46:16,020 --> 00:46:29,982
712
00:46:29,982 --> 00:46:32,180
Well, how many bits per
symbol do I have?
713
00:46:32,180 --> 00:46:34,988
714
00:46:34,988 --> 00:46:36,860
AUDIENCE: Log2M.
715
00:46:36,860 --> 00:46:38,190
PROFESSOR: Log2 M
bits per symbol.
716
00:46:38,190 --> 00:46:41,475
So since rho is bits per two
dimensions, rho would be 2
717
00:46:41,475 --> 00:46:44,600
log M to the base 2.
718
00:46:44,600 --> 00:46:49,660
719
00:46:49,660 --> 00:46:49,865
OK?
720
00:46:49,865 --> 00:46:52,160
So right now, we have pretty
much everything we need to
721
00:46:52,160 --> 00:46:54,360
find SNR norm.
722
00:46:54,360 --> 00:47:05,790
The SNR here is Es over N_0,
so it's 2 alpha squared, M
723
00:47:05,790 --> 00:47:09,370
squared minus 1, over 3N_0.
724
00:47:09,370 --> 00:47:13,120
But remember, N_0 by definition
is 2 sigma squared,
725
00:47:13,120 --> 00:47:15,206
because sigma squared
is N_0 over 2.
726
00:47:15,206 --> 00:47:17,772
727
00:47:17,772 --> 00:47:21,140
That's how -- that's the noise
variance per dimension.
728
00:47:21,140 --> 00:47:25,830
So, I had 3 times 2 times sigma
squared, or I will just
729
00:47:25,830 --> 00:47:30,390
write this as 6 times sigma
squared, and that's going to
730
00:47:30,390 --> 00:47:32,280
be -- this is just
multiplication, so I should
731
00:47:32,280 --> 00:47:33,180
not even write it --
732
00:47:33,180 --> 00:47:34,820
six sigma squared.
733
00:47:34,820 --> 00:47:39,650
So this is alpha squared,
M squared minus 1
734
00:47:39,650 --> 00:47:43,370
over 3 sigma squared.
735
00:47:43,370 --> 00:47:56,060
OK, so now SNR norm is SNR over
2 to the rho minus 1.
736
00:47:56,060 --> 00:47:59,150
2 to the rho minus 1 is just
M squared minus 1.
737
00:47:59,150 --> 00:48:04,030
It cancels with this M squared
minus 1, and so I get alpha
738
00:48:04,030 --> 00:48:08,280
squared over 3 sigma squared.
739
00:48:08,280 --> 00:48:11,820
So that's the SNR norm if
I use an M-PAM system.
740
00:48:11,820 --> 00:48:26,730
741
00:48:26,730 --> 00:48:27,980
AUDIENCE: Why is [INAUDIBLE]
742
00:48:27,980 --> 00:48:33,500
743
00:48:33,500 --> 00:48:34,150
PROFESSOR: Well, N_0 --
744
00:48:34,150 --> 00:48:40,830
I've plugged in for SNR here,
so I have 3 N_0, but N_0
745
00:48:40,830 --> 00:48:42,080
is 2 sigma squared.
746
00:48:42,080 --> 00:48:51,015
747
00:48:51,015 --> 00:48:52,265
Did I miss anything?
748
00:48:52,265 --> 00:49:46,202
749
00:49:46,202 --> 00:49:46,401
OK?
750
00:49:46,401 --> 00:49:47,651
Any questions?
751
00:49:47,651 --> 00:49:50,480
752
00:49:50,480 --> 00:49:57,710
OK, so there are two important
remarks from this example.
753
00:49:57,710 --> 00:50:11,900
The first remark is that SNR
norm is independent of M. So I
754
00:50:11,900 --> 00:50:15,120
started with an M-PAM
constellation, so it's a
755
00:50:15,120 --> 00:50:18,570
different constellation for
each value of M, right?
756
00:50:18,570 --> 00:50:21,780
If I look at my spectral
efficiency rho, it's different
757
00:50:21,780 --> 00:50:24,240
for each value of M, because I
can pack more and more bits
758
00:50:24,240 --> 00:50:28,090
per symbol as I increase M. If
I look at my signal to noise
759
00:50:28,090 --> 00:50:33,910
ratio, it's also a function of
M. But when I took SNR norm,
760
00:50:33,910 --> 00:50:36,870
remarkably, the M squared minus
1 term cancelled out in
761
00:50:36,870 --> 00:50:40,030
the numerator and denominator,
and what I was left with was
762
00:50:40,030 --> 00:50:42,770
something independent of M.
763
00:50:42,770 --> 00:50:45,300
So this is actually quite
an interesting result.
764
00:50:45,300 --> 00:50:49,060
What it says is suppose I design
a system, an M-PAM
765
00:50:49,060 --> 00:50:51,350
system, that has this
particular spectral
766
00:50:51,350 --> 00:50:55,620
efficiency, then my gap to
capacity is given by this
767
00:50:55,620 --> 00:50:57,000
expression.
768
00:50:57,000 --> 00:51:01,770
If I use a different value of
M, my gap to the ultimate
769
00:51:01,770 --> 00:51:05,340
Shannon limit is still given
by this expression here.
770
00:51:05,340 --> 00:51:08,630
So by increasing or decreasing
the value of M, my
771
00:51:08,630 --> 00:51:12,330
gap to the Shannon limit is
still going to be the same.
772
00:51:12,330 --> 00:51:14,200
For each value of M, I will
have a different spectral
773
00:51:14,200 --> 00:51:17,920
efficiency, but I'm not getting
any kind of coding
774
00:51:17,920 --> 00:51:20,380
gain, if you will, here.
775
00:51:20,380 --> 00:51:28,900
OK, so this motivates calling M-PAM
an uncoded system.
776
00:51:28,900 --> 00:51:34,240
777
00:51:34,240 --> 00:51:37,500
All of them have the same gap
to the Shannon limit,
778
00:51:37,500 --> 00:51:40,505
regardless of what the value
of M is.
779
00:51:40,505 --> 00:51:46,340
780
00:51:46,340 --> 00:51:51,550
The second point to note is if
I look at the value of alpha
781
00:51:51,550 --> 00:51:54,420
squared over 3 sigma squared,
I can make it
782
00:51:54,420 --> 00:51:56,110
quite small, right?
783
00:51:56,110 --> 00:51:59,770
If I decrease
alpha by a great enough
784
00:51:59,770 --> 00:52:03,620
amount, I can even make this
quantity smaller than 1.
785
00:52:03,620 --> 00:52:04,600
OK?
786
00:52:04,600 --> 00:52:09,890
I told you here that SNR norm
is always greater than 1.
787
00:52:09,890 --> 00:52:11,890
So what happened?
788
00:52:11,890 --> 00:52:14,580
Did I lie to you here, or did
I do something wrong here?
789
00:52:14,580 --> 00:52:18,570
790
00:52:18,570 --> 00:52:20,910
I mean, I can choose any alpha,
right, and make this
791
00:52:20,910 --> 00:52:24,300
quantity as small as I please,
and then I'm doing better than
792
00:52:24,300 --> 00:52:26,561
the Shannon limit.
793
00:52:26,561 --> 00:52:27,811
AUDIENCE: [UNINTELLIGIBLE]
794
00:52:27,811 --> 00:52:29,900
795
00:52:29,900 --> 00:52:30,270
PROFESSOR: Right.
796
00:52:30,270 --> 00:52:33,310
So basically, what's missing
in this calculation is
797
00:52:33,310 --> 00:52:34,940
probability of error.
798
00:52:34,940 --> 00:52:38,240
If I make alpha really small,
what I have is all these
799
00:52:38,240 --> 00:52:41,560
points come closer and closer,
and sure, I seem like I'm
800
00:52:41,560 --> 00:52:42,800
doing very well at
the encoder.
801
00:52:42,800 --> 00:52:44,880
But what happens
at the decoder?
802
00:52:44,880 --> 00:52:48,410
There is noise in the system,
and so I get too many errors.
803
00:52:48,410 --> 00:52:51,070
This lower bound clearly assumes
that you can make your
804
00:52:51,070 --> 00:52:53,000
probability of error
quite small,
805
00:52:53,000 --> 00:52:54,800
arbitrarily small, right?
806
00:52:54,800 --> 00:52:57,410
So in any reasonable system,
I should also look at the
807
00:52:57,410 --> 00:52:59,020
probability of error.
808
00:52:59,020 --> 00:53:01,540
If I make alpha really small,
I'm going to have too much
809
00:53:01,540 --> 00:53:07,270
error at the decoder, and so
I won't be able to have a
810
00:53:07,270 --> 00:53:08,520
practical system.
811
00:53:08,520 --> 00:53:12,000
812
00:53:12,000 --> 00:53:23,870
So the comment is SNR norm
cannot be seen in isolation.
813
00:53:23,870 --> 00:53:28,700
814
00:53:28,700 --> 00:53:31,050
We need to couple SNR norm
with the corresponding
815
00:53:31,050 --> 00:53:32,330
probability of error.
816
00:53:32,330 --> 00:53:32,703
Yes?
817
00:53:32,703 --> 00:53:35,240
AUDIENCE: [UNINTELLIGIBLE]
818
00:53:35,240 --> 00:53:37,450
PROFESSOR: What I mean
by uncoded syst --
819
00:53:37,450 --> 00:53:38,930
that's a good question.
820
00:53:38,930 --> 00:53:41,350
What do I mean by
uncoded system?
821
00:53:41,350 --> 00:53:45,390
All I'm really saying here is
that an M-PAM system is a fairly
822
00:53:45,390 --> 00:53:47,730
simple system to implement,
right?
823
00:53:47,730 --> 00:53:51,420
Regardless of what value of
M I use, I have a certain gap
824
00:53:51,420 --> 00:53:54,940
to the Shannon limit, which is
independent of M. So if I
825
00:53:54,940 --> 00:53:58,230
start with a simple system,
where I have bits coming in,
826
00:53:58,230 --> 00:54:00,840
each bit gets mapped to one
symbol, and I send it over the
827
00:54:00,840 --> 00:54:03,980
channel, I have a fixed gap
to the Shannon capacity.
828
00:54:03,980 --> 00:54:06,360
And this, I will call
an uncoded system.
829
00:54:06,360 --> 00:54:09,930
My only hope will be to improve
upon this system.
830
00:54:09,930 --> 00:54:11,190
OK?
831
00:54:11,190 --> 00:54:13,168
Any other questions?
832
00:54:13,168 --> 00:54:15,144
AUDIENCE: So, if I'm multiplying
[INAUDIBLE]
833
00:54:15,144 --> 00:54:19,100
834
00:54:19,100 --> 00:54:19,790
PROFESSOR: Right, right.
835
00:54:19,790 --> 00:54:22,263
I'm assuming the fixed alpha.
836
00:54:22,263 --> 00:54:24,190
AUDIENCE: [UNINTELLIGIBLE]
837
00:54:24,190 --> 00:54:27,350
PROFESSOR: I mean, basically,
you will see that alpha is a
838
00:54:27,350 --> 00:54:28,895
function of Es, right?
839
00:54:28,895 --> 00:54:30,145
And I want to --
840
00:54:30,145 --> 00:54:36,890
841
00:54:36,890 --> 00:54:38,870
so why do I want -- the question
is, why do I want to
842
00:54:38,870 --> 00:54:41,880
keep alpha fixed, right?
843
00:54:41,880 --> 00:54:42,140
AUDIENCE: Yeah.
844
00:54:42,140 --> 00:54:44,300
PROFESSOR: OK, so in order to
understand that, I have to
845
00:54:44,300 --> 00:54:47,840
look at energy for
two dimensions.
846
00:54:47,840 --> 00:54:50,040
If I normalize by M squared
minus one --
847
00:54:50,040 --> 00:54:52,650
848
00:54:52,650 --> 00:54:53,900
that doesn't work out.
849
00:54:53,900 --> 00:55:01,340
850
00:55:01,340 --> 00:55:02,590
AUDIENCE: [UNINTELLIGIBLE]
851
00:55:02,590 --> 00:55:08,430
852
00:55:08,430 --> 00:55:10,875
so the constellation cannot be
expanding [UNINTELLIGIBLE]
853
00:55:10,875 --> 00:55:13,810
854
00:55:13,810 --> 00:55:15,600
PROFESSOR: Right, so I wanted
to always plot by keeping
855
00:55:15,600 --> 00:55:16,370
alpha fixed.
856
00:55:16,370 --> 00:55:19,610
And I mean, the point is,
typically you want to plot the
857
00:55:19,610 --> 00:55:22,190
probability of symbol error --
858
00:55:22,190 --> 00:55:25,460
do I have that expression
anywhere?
859
00:55:25,460 --> 00:55:26,160
Right there.
860
00:55:26,160 --> 00:55:29,030
Ps of E is a function
of SNR norm, right?
861
00:55:29,030 --> 00:55:31,890
And that will be -- if I
increase SNR norm, so it will
862
00:55:31,890 --> 00:55:33,310
be some kind of a
graph, right?
863
00:55:33,310 --> 00:55:34,370
A trade-off.
864
00:55:34,370 --> 00:55:37,380
Basically, alpha will define the
trade-off between SNR norm
865
00:55:37,380 --> 00:55:39,640
and Ps of E.
866
00:55:39,640 --> 00:55:43,810
So as I sweep over larger and
larger values of alpha, for
867
00:55:43,810 --> 00:55:48,160
different values of M, I
will have a trade-off of Ps of
868
00:55:48,160 --> 00:55:50,840
E as a function of SNR norm.
869
00:55:50,840 --> 00:55:52,990
Does that make sense?
870
00:55:52,990 --> 00:55:56,370
So say I fix a
value of M, OK?
871
00:55:56,370 --> 00:55:58,740
I define SNR norm, which
is alpha squared
872
00:55:58,740 --> 00:56:00,390
over 3 sigma squared.
873
00:56:00,390 --> 00:56:03,450
For this, I will get a certain
probability of error now.
874
00:56:03,450 --> 00:56:05,920
So if I fix this value of SNR
norm, I get a certain
875
00:56:05,920 --> 00:56:07,410
probability of error.
876
00:56:07,410 --> 00:56:09,220
What if I want to increase
my SNR norm?
877
00:56:09,220 --> 00:56:11,430
The only way to do that will
be to increase alpha.
878
00:56:11,430 --> 00:56:15,340
879
00:56:15,340 --> 00:56:18,020
Does that make sense?
880
00:56:18,020 --> 00:56:19,140
Yeah.
881
00:56:19,140 --> 00:56:20,390
AUDIENCE: [UNINTELLIGIBLE]
882
00:56:20,390 --> 00:56:22,605
883
00:56:22,605 --> 00:56:28,980
You're defining SNR norm as the
gap to capacity, but it
884
00:56:28,980 --> 00:56:32,516
seems like, I mean, obviously as
you increase alpha, as you
885
00:56:32,516 --> 00:56:34,350
increase your signal energy,
you're going to do better and
886
00:56:34,350 --> 00:56:35,020
better, right?
887
00:56:35,020 --> 00:56:35,930
PROFESSOR: Right.
888
00:56:35,930 --> 00:56:37,420
AUDIENCE: Well, in
terms of what?
889
00:56:37,420 --> 00:56:39,630
In terms of probability of
error, like achievable
890
00:56:39,630 --> 00:56:41,300
probability of error, or?
891
00:56:41,300 --> 00:56:42,220
PROFESSOR: Right, exactly.
892
00:56:42,220 --> 00:56:48,460
So basically, the point is say
I plot Ps of E as a function
893
00:56:48,460 --> 00:56:50,610
of SNR norm, or as a function
of alpha squared
894
00:56:50,610 --> 00:56:52,700
over 3 sigma squared.
895
00:56:52,700 --> 00:56:54,868
So my curve will look
like this.
896
00:56:54,868 --> 00:56:58,272
AUDIENCE: But the gap to
capacity is defined all the
897
00:56:58,272 --> 00:56:59,760
way over here on the left,
is that what--?
898
00:56:59,760 --> 00:57:01,100
PROFESSOR: Right, that's
a very good question.
899
00:57:01,100 --> 00:57:03,560
You are going much ahead
then what I thought.
900
00:57:03,560 --> 00:57:09,250
So basically, at zero here,
SNR norm is zero, right?
901
00:57:09,250 --> 00:57:11,270
This is in linear scale,
so when I have
902
00:57:11,270 --> 00:57:13,500
one, SNR norm is one.
903
00:57:13,500 --> 00:57:15,940
I have the Shannon system.
904
00:57:15,940 --> 00:57:19,890
Basically, for any SNR norm
greater than 1, what Shannon
905
00:57:19,890 --> 00:57:21,990
says is your probability of
error will be arbitrarily
906
00:57:21,990 --> 00:57:25,290
small, and here, it
will be large.
907
00:57:25,290 --> 00:57:27,500
So this here is the
Shannon limit.
908
00:57:27,500 --> 00:57:30,120
And now, in a practical system,
what I want to do is I
909
00:57:30,120 --> 00:57:32,080
want to fix the probability
of error.
910
00:57:32,080 --> 00:57:33,840
So say I like probability
of error of ten
911
00:57:33,840 --> 00:57:35,620
to the minus five.
912
00:57:35,620 --> 00:57:38,520
So that's something that
the system specifies.
913
00:57:38,520 --> 00:57:41,970
And from that, I know the
gap to the capacity.
914
00:57:41,970 --> 00:57:44,800
So if here, if I require, this
is my SNR norm, then this is
915
00:57:44,800 --> 00:57:45,980
going to be my gap.
916
00:57:45,980 --> 00:57:47,630
I wanted to cover
it, but later.
917
00:57:47,630 --> 00:57:50,372
That's a good point.
918
00:57:50,372 --> 00:57:51,622
AUDIENCE: [UNINTELLIGIBLE]
919
00:57:51,622 --> 00:58:00,725
920
00:58:00,725 --> 00:58:04,150
PROFESSOR: Right, this is a
certain, specific system, the
921
00:58:04,150 --> 00:58:07,110
M-PAM system.
922
00:58:07,110 --> 00:58:10,440
What Shannon says is that, if
you're anywhere here, you
923
00:58:10,440 --> 00:58:12,030
should be able to make your
probability of error
924
00:58:12,030 --> 00:58:14,540
arbitrarily small.
925
00:58:14,540 --> 00:58:17,627
So your code should basically
look something like this, if
926
00:58:17,627 --> 00:58:19,170
you will, or even steeper.
927
00:58:19,170 --> 00:58:21,850
928
00:58:21,850 --> 00:58:24,176
AUDIENCE: What's the difference
between that curve
929
00:58:24,176 --> 00:58:26,030
and the wider curve?
930
00:58:26,030 --> 00:58:28,030
PROFESSOR: This curve
and this curve?
931
00:58:28,030 --> 00:58:28,860
AUDIENCE: Yeah.
932
00:58:28,860 --> 00:58:29,890
PROFESSOR: This is
basically what we
933
00:58:29,890 --> 00:58:32,390
achieved by an M-PAM system.
934
00:58:32,390 --> 00:58:34,240
I don't want to go into too much
of this, because we'll be
935
00:58:34,240 --> 00:58:35,070
doing this later.
936
00:58:35,070 --> 00:58:38,190
This is the next topic.
937
00:58:38,190 --> 00:58:41,370
Ideally, what we would want is
something, basically, this is
938
00:58:41,370 --> 00:58:43,450
my Shannon limit, I want
something such that the
939
00:58:43,450 --> 00:58:46,590
probability of error decreases
right away as soon as I am
940
00:58:46,590 --> 00:58:47,890
just away from the
Shannon limit.
941
00:58:47,890 --> 00:58:52,000
942
00:58:52,000 --> 00:58:52,590
All right.
943
00:58:52,590 --> 00:58:54,680
AUDIENCE: [UNINTELLIGIBLE]
944
00:58:54,680 --> 00:58:55,010
PROFESSOR: Right.
945
00:58:55,010 --> 00:58:56,380
So that's a good point.
946
00:58:56,380 --> 00:58:58,700
So if I did not do coding,
this is what I get.
947
00:58:58,700 --> 00:59:00,630
If I did coding, this
is what I would get.
948
00:59:00,630 --> 00:59:04,238
So this is how much gain I can
expect from coding.
949
00:59:04,238 --> 00:59:06,310
AUDIENCE: So in other words,
basically, this bound is
950
00:59:06,310 --> 00:59:10,550
basically in my channel,
the SNR is such that --
951
00:59:10,550 --> 00:59:14,785
I mean, I'm so energy limited
that my SNR norm is lower than
952
00:59:14,785 --> 00:59:17,670
a certain amount, there's
nothing I can do, basically,
953
00:59:17,670 --> 00:59:17,930
is what it's saying.
954
00:59:17,930 --> 00:59:18,210
PROFESSOR: Right.
955
00:59:18,210 --> 00:59:20,970
Because if you want a certain
spectral efficiency, there's
956
00:59:20,970 --> 00:59:24,010
also a spectral efficiency
constraint, right?
957
00:59:24,010 --> 00:59:26,700
And if your SNR is much lower,
you're out of luck.
958
00:59:26,700 --> 00:59:28,398
AUDIENCE: So basically, you
have to make your spectral
959
00:59:28,398 --> 00:59:29,790
efficiency higher --
960
00:59:29,790 --> 00:59:30,720
or, lower.
961
00:59:30,720 --> 00:59:31,560
PROFESSOR: Lower, right.
962
00:59:31,560 --> 00:59:32,810
Exactly.
963
00:59:32,810 --> 01:00:13,380
964
01:00:13,380 --> 01:00:16,280
So now let us start with the
probability of error analysis,
965
01:00:16,280 --> 01:00:19,200
and we'll start in the
power-limited regime.
966
01:00:19,200 --> 01:00:22,840
So we have Pb of E as a function
of Eb/N_0, we want to
967
01:00:22,840 --> 01:00:24,370
quantify this trade-off.
968
01:00:24,370 --> 01:00:27,110
Basically, what we are doing now
is trying to quantify what
969
01:00:27,110 --> 01:00:29,430
this graph will look like,
at least for the
970
01:00:29,430 --> 01:00:30,970
uncoded systems today.
971
01:00:30,970 --> 01:00:34,270
We'll do coding next class,
or two classes from
972
01:00:34,270 --> 01:00:36,780
now, as we get time.
973
01:00:36,780 --> 01:00:41,440
So say I have a constellation,
A, as binary PAM.
974
01:00:41,440 --> 01:00:45,230
So it takes only two values,
minus alpha and alpha.
975
01:00:45,230 --> 01:00:49,430
So I have these two points here,
alpha and minus alpha.
976
01:00:49,430 --> 01:00:56,030
My received symbol Y is
X plus N, where X belongs
977
01:00:56,030 --> 01:01:02,570
to A, and N is Gaussian zero
mean, variance sigma squared,
978
01:01:02,570 --> 01:01:07,990
where sigma squared
is N_0 over 2.
979
01:01:07,990 --> 01:01:11,130
And now what we want to do at the
receiver is given Y, you want
980
01:01:11,130 --> 01:01:14,610
to decide whether X was
alpha or minus alpha.
981
01:01:14,610 --> 01:01:17,110
That's a standard detection
problem.
982
01:01:17,110 --> 01:01:20,590
So what will the probability
of error look like?
983
01:01:20,590 --> 01:01:24,070
So suppose I transmit X equals
alpha, my conditional
984
01:01:24,070 --> 01:01:26,140
density of Y --
985
01:01:26,140 --> 01:01:27,390
let me make some room here --
986
01:01:27,390 --> 01:01:32,270
987
01:01:32,270 --> 01:01:36,180
will be a bell shaped curve,
because of the Gaussian noise.
988
01:01:36,180 --> 01:01:40,670
So this is P of Y given
X equals alpha.
989
01:01:40,670 --> 01:01:43,830
Similarly, if I have set X
equals minus alpha, what I get
990
01:01:43,830 --> 01:01:45,700
is something like this.
991
01:01:45,700 --> 01:01:50,170
This is p of Y given X
equals minus alpha.
992
01:01:50,170 --> 01:01:52,132
Excuse my handwriting there.
993
01:01:52,132 --> 01:01:57,350
And the decision region is at
the midpoint, Y equals zero.
994
01:01:57,350 --> 01:01:59,880
This is a standard binary
detection problem.
995
01:01:59,880 --> 01:02:02,880
If Y is positive, you will say
X equals alpha was sent.
996
01:02:02,880 --> 01:02:07,330
If Y is negative, you will say X
equals minus alpha was sent.
997
01:02:07,330 --> 01:02:10,570
And your probability of error
is simply the probability of
998
01:02:10,570 --> 01:02:12,840
error under this --
999
01:02:12,840 --> 01:02:15,420
the area under these
two curves here.
1000
01:02:15,420 --> 01:02:19,080
So we want to find what the
probability of error is.
1001
01:02:19,080 --> 01:02:21,980
Let me just quickly do the
calculation, in order to
1002
01:02:21,980 --> 01:02:22,890
remind you.
1003
01:02:22,890 --> 01:02:24,575
We'll be doing this over
and over again.
1004
01:02:24,575 --> 01:02:25,825
We'll just do it once.
1005
01:02:25,825 --> 01:02:28,860
1006
01:02:28,860 --> 01:02:32,260
Without loss of generality, I
can say this is probability of
1007
01:02:32,260 --> 01:02:35,900
error, given X equals minus
alpha, where both the points
1008
01:02:35,900 --> 01:02:37,220
are equally likely.
1009
01:02:37,220 --> 01:02:39,490
So by symmetry, I
can say that.
1010
01:02:39,490 --> 01:02:44,100
So this is the same as
probability that Y is greater
1011
01:02:44,100 --> 01:02:46,820
than zero, given X
is minus alpha.
1012
01:02:46,820 --> 01:02:52,320
Now Y is X plus N, so this is
same as probability that N --
1013
01:02:52,320 --> 01:02:55,610
1014
01:02:55,610 --> 01:02:58,350
yeah capital N --
1015
01:02:58,350 --> 01:02:59,310
is greater than alpha.
1016
01:02:59,310 --> 01:03:02,170
Since N has zero mean variance
sigma squared, this is a
1017
01:03:02,170 --> 01:03:06,618
standard Q function of
alpha over sigma.
1018
01:03:06,618 --> 01:03:09,230
OK, so that's the probability
of error.
1019
01:03:09,230 --> 01:03:12,310
Now, there is one bit per symbol
-- each X carries one
1020
01:03:12,310 --> 01:03:15,200
bit, so probability of error is
also same as probability of
1021
01:03:15,200 --> 01:03:23,490
bit error, so this is also Pb of
E. So I have Pb of E equals
1022
01:03:23,490 --> 01:03:34,720
Q of alpha over sigma, and
now, I want to basically
1023
01:03:34,720 --> 01:03:37,400
express it as a function
of Eb/N_0, right.
1024
01:03:37,400 --> 01:03:39,200
So what is Eb for the system?
1025
01:03:39,200 --> 01:03:42,540
1026
01:03:42,540 --> 01:03:45,890
It's going to be
alpha squared.
1027
01:03:45,890 --> 01:03:50,540
Sigma squared is now N_0 over 2,
so alpha squared over sigma
1028
01:03:50,540 --> 01:03:55,500
squared is 2 Eb over N_0.
1029
01:03:55,500 --> 01:04:00,920
So now my probability of bit
error is this Q function of
1030
01:04:00,920 --> 01:04:02,170
root 2 Eb over N_0.
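The 2-PAM result just derived is easy to check numerically; here is a minimal Python sketch (the Q function is written with the standard erfc identity, and the 9.6 dB operating point is the one quoted later in the lecture):

```python
import math

def Q(x):
    # Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def pb_2pam(ebn0_db):
    # Pb(E) = Q(sqrt(2 Eb/N0)) for binary antipodal (2-PAM) signaling
    ebn0 = 10 ** (ebn0_db / 10)
    return Q(math.sqrt(2 * ebn0))

# At Eb/N0 = 9.6 dB the bit-error probability comes out near 1e-5
pb = pb_2pam(9.6)
```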
1031
01:04:02,170 --> 01:04:07,820
1032
01:04:07,820 --> 01:04:08,580
OK?
1033
01:04:08,580 --> 01:04:09,850
So let us plot this.
1034
01:04:09,850 --> 01:04:12,570
1035
01:04:12,570 --> 01:04:16,680
So on the x-axis,
I'm plotting Eb/N_0,
1036
01:04:16,680 --> 01:04:19,640
which is in dB scale.
1037
01:04:19,640 --> 01:04:21,920
On the y-axis, I'm
going to plot the
1038
01:04:21,920 --> 01:04:26,070
probability of bit error.
1039
01:04:26,070 --> 01:04:28,820
Typically, what you do is
you plot the y-axis
1040
01:04:28,820 --> 01:04:30,690
on a semi-log scale.
1041
01:04:30,690 --> 01:04:35,410
So this is ten to the minus six,
ten to the minus five.
1042
01:04:35,410 --> 01:04:37,390
If you want to do this in
MATLAB, you can use this
1043
01:04:37,390 --> 01:04:39,920
command semilogy,
and that does it.
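For anyone following along in Python rather than MATLAB, `matplotlib.pyplot.semilogy` plays the same role; a sketch that generates the 2-PAM waterfall-curve data (the plotting calls are left commented so the snippet stays self-contained):

```python
import math

def Q(x):
    # Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

# 2-PAM waterfall data: Pb(E) = Q(sqrt(2 Eb/N0)) over 0 .. 12 dB
ebn0_db = [i / 2 for i in range(25)]
pb = [Q(math.sqrt(2 * 10 ** (x / 10))) for x in ebn0_db]

# To plot on a semi-log y-axis (the Python analogue of MATLAB's semilogy):
# import matplotlib.pyplot as plt
# plt.semilogy(ebn0_db, pb)
```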
1044
01:04:39,920 --> 01:04:43,130
1045
01:04:43,130 --> 01:04:44,570
And so on.
1046
01:04:44,570 --> 01:04:49,070
So can anybody say what will
be a good candidate for the
1047
01:04:49,070 --> 01:04:50,570
x-value at this point here?
1048
01:04:50,570 --> 01:04:53,840
1049
01:04:53,840 --> 01:04:55,590
So what should be the Eb/N_0
at this point?
1050
01:04:55,590 --> 01:05:00,430
1051
01:05:00,430 --> 01:05:01,890
AUDIENCE: [UNINTELLIGIBLE]
1052
01:05:01,890 --> 01:05:04,210
PROFESSOR: It should be the
Shannon limit, right?
1053
01:05:04,210 --> 01:05:07,350
You cannot hope to go below
the Shannon limit.
1054
01:05:07,350 --> 01:05:12,050
So now, what Shannon
says is that --
1055
01:05:12,050 --> 01:05:14,580
so I'm going to plot this as
my Shannon limit here.
1056
01:05:14,580 --> 01:05:19,650
1057
01:05:19,650 --> 01:05:22,460
This probability of error will
basically follow the standard
1058
01:05:22,460 --> 01:05:25,100
waterfall curve.
1059
01:05:25,100 --> 01:05:28,670
This is 2-PAM.
1060
01:05:28,670 --> 01:05:31,970
1061
01:05:31,970 --> 01:05:34,460
And if you look at the
x-coordinates, then the value
1062
01:05:34,460 --> 01:05:38,350
at ten to the minus
five is 9.6 dB.
1063
01:05:38,350 --> 01:05:47,840
1064
01:05:47,840 --> 01:05:51,370
So as a system designer, if you
care about probability of
1065
01:05:51,370 --> 01:05:54,780
error at ten to the negative
five, then your gap to
1066
01:05:54,780 --> 01:05:57,800
capacity, or gap to the ultimate
limit, if you will,
1067
01:05:57,800 --> 01:06:02,665
will be this, 9.6 minus
(minus 1.59) dB.
1068
01:06:02,665 --> 01:06:04,260
OK, I should not erase this.
1069
01:06:04,260 --> 01:07:02,980
1070
01:07:02,980 --> 01:07:21,770
OK, so the first thing is, at ten
to the minus five, our gap to
1071
01:07:21,770 --> 01:07:31,590
the ultimate limit is 9.6
plus 1.59 dB, and that's
1072
01:07:31,590 --> 01:07:37,570
approximately 11.2 dB.
1073
01:07:37,570 --> 01:07:38,650
OK?
1074
01:07:38,650 --> 01:07:40,990
But there is one
catch to this.
1075
01:07:40,990 --> 01:07:44,025
This particular system has a
spectral efficiency of two
1076
01:07:44,025 --> 01:07:47,480
bits for two dimensions, whereas
if you want to achieve
1077
01:07:47,480 --> 01:07:50,150
something close to the Shannon
limit, you have to drive the
1078
01:07:50,150 --> 01:07:52,630
spectral efficiency
down to zero.
1079
01:07:52,630 --> 01:07:55,750
So you might say that this
is not a fair comparison.
1080
01:07:55,750 --> 01:07:58,770
So if you do want to make a fair
comparison, you want to
1081
01:07:58,770 --> 01:08:02,030
fix rho to be two bits
for two dimensions.
1082
01:08:02,030 --> 01:08:06,540
So if you fix rho for two bits
per two dimensions, you will
1083
01:08:06,540 --> 01:08:10,460
get a limit somewhere,
not at 1.59 dB, but
1084
01:08:10,460 --> 01:08:12,190
at some other point.
1085
01:08:12,190 --> 01:08:18,895
This is if you fix rho to two
bits per two dimensions.
1086
01:08:18,895 --> 01:08:20,970
Can anybody say what
that point will be?
1087
01:08:20,970 --> 01:08:29,934
1088
01:08:29,934 --> 01:08:31,939
AUDIENCE: [UNINTELLIGIBLE]
1089
01:08:31,939 --> 01:08:32,990
PROFESSOR: 3 over 2.
1090
01:08:32,990 --> 01:08:33,939
1.5.
1091
01:08:33,939 --> 01:08:37,304
What will it be in
the log scale?
1092
01:08:37,304 --> 01:08:38,290
AUDIENCE: [UNINTELLIGIBLE]
1093
01:08:38,290 --> 01:08:39,450
PROFESSOR: 1.76.
1094
01:08:39,450 --> 01:08:40,380
Good.
1095
01:08:40,380 --> 01:08:41,810
So what we do know --
1096
01:08:41,810 --> 01:08:43,074
if we do the calculation here.
1097
01:08:43,074 --> 01:08:45,819
1098
01:08:45,819 --> 01:08:54,160
If you fix rho to two bits per
two dimensions, your Eb/N_0,
1099
01:08:54,160 --> 01:08:58,670
we know, is greater than 2 to
the rho minus 1 over rho,
1100
01:08:58,670 --> 01:09:00,840
which is 3 over 2.
1101
01:09:00,840 --> 01:09:06,260
And that is, if you remember
your dB tables, 1.76 dB.
1102
01:09:06,260 --> 01:09:16,430
10 log of 3 is about 4.8 dB, 10 log
of 2 is about 3 dB, so it's 1.76 dB.
1103
01:09:16,430 --> 01:09:31,470
So in this case, your gap to the
ultimate limit is going to
1104
01:09:31,470 --> 01:09:39,950
be 9.6 minus 1.76, which
comes out to be 7.8 dB.
1105
01:09:39,950 --> 01:09:42,896
1106
01:09:42,896 --> 01:09:45,359
OK?
1107
01:09:45,359 --> 01:09:48,159
So now, let us do the
bandwidth-limited regime.
1108
01:09:48,159 --> 01:10:07,160
1109
01:10:07,160 --> 01:10:10,380
Now in bandwidth-limited regime,
the trade-off is Ps of
1110
01:10:10,380 --> 01:10:13,290
E as a function of SNR norm.
1111
01:10:13,290 --> 01:10:17,950
1112
01:10:17,950 --> 01:10:20,010
OK, so that's the trade-off
we seek.
1113
01:10:20,010 --> 01:10:21,320
And the baseline system
we will be
1114
01:10:21,320 --> 01:10:22,660
using is an M-PAM system.
1115
01:10:22,660 --> 01:10:28,810
1116
01:10:28,810 --> 01:10:38,890
So now, your constellation is
going to be minus alpha,
1117
01:10:38,890 --> 01:10:43,420
alpha, 3 alpha, minus 3 alpha,
and so on, up to M
1118
01:10:43,420 --> 01:10:45,720
minus 1 alpha --
1119
01:10:45,720 --> 01:10:48,170
I always run out
of room here --
1120
01:10:48,170 --> 01:10:50,810
minus M minus 1 alpha.
1121
01:10:50,810 --> 01:10:53,170
This is your constellation.
1122
01:10:53,170 --> 01:10:55,990
Now what's the probability of
error going to be for this
1123
01:10:55,990 --> 01:10:57,240
constellation?
1124
01:10:57,240 --> 01:11:01,280
1125
01:11:01,280 --> 01:11:03,450
Well, what's the probability
of error for each
1126
01:11:03,450 --> 01:11:05,710
intermediate point?
1127
01:11:05,710 --> 01:11:07,440
There are two ways you
can make an error.
1128
01:11:07,440 --> 01:11:11,300
Say we send alpha: your noise
is either too negative, so it
1129
01:11:11,300 --> 01:11:13,330
takes you to minus alpha.
1130
01:11:13,330 --> 01:11:17,710
Or your noise is too high, so
it takes you to 3 alpha.
1131
01:11:17,710 --> 01:11:21,410
For each one, the probability
will be Q of --
1132
01:11:21,410 --> 01:11:24,030
this distance is 2 alpha, it
would be Q of alpha over
1133
01:11:24,030 --> 01:11:26,820
sigma, because you will be
having your decision regions
1134
01:11:26,820 --> 01:11:27,870
right here.
1135
01:11:27,870 --> 01:11:30,260
So in other words, for each
intermediate point -- and
1136
01:11:30,260 --> 01:11:32,170
there are M minus
two of this --
1137
01:11:32,170 --> 01:11:35,520
your probability of error would
be 2 times Q of alpha
1138
01:11:35,520 --> 01:11:37,710
over sigma.
1139
01:11:37,710 --> 01:11:38,160
OK?
1140
01:11:38,160 --> 01:11:39,670
And how many of them
are there?
1141
01:11:39,670 --> 01:11:42,230
There are M minus 2 of these.
1142
01:11:42,230 --> 01:11:44,860
And if all the points are
equally likely, and you want
1143
01:11:44,860 --> 01:11:47,450
to find the probability of
error, you divide by M.
1144
01:11:47,450 --> 01:11:50,420
For the two end points, the
noise can only make an error
1145
01:11:50,420 --> 01:11:52,080
in one direction.
1146
01:11:52,080 --> 01:11:55,820
So you get two over M times
Q of alpha over sigma.
1147
01:11:55,820 --> 01:11:59,340
You work this out, and you have
two times M minus 1 over
1148
01:11:59,340 --> 01:12:02,465
M, times Q of alpha
over sigma.
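The two times (M minus 1) over M, times Q of alpha over sigma expression can be sanity-checked by simulation; a minimal Monte Carlo sketch (the values M = 4, alpha = 1, sigma = 0.5 are arbitrary illustrative choices, not from the lecture):

```python
import math
import random

def Q(x):
    # Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

# Illustrative parameters for an M-PAM constellation
M, alpha, sigma = 4, 1.0, 0.5
points = [alpha * (2 * m - M + 1) for m in range(M)]  # [-3a, -a, a, 3a]

random.seed(1)
trials, errors = 200_000, 0
for _ in range(trials):
    x = random.choice(points)
    y = x + random.gauss(0.0, sigma)              # AWGN channel
    xhat = min(points, key=lambda p: abs(y - p))  # minimum-distance decision
    errors += (xhat != x)

simulated = errors / trials
predicted = 2 * (M - 1) / M * Q(alpha / sigma)    # formula from the lecture
```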
1149
01:12:02,465 --> 01:12:05,260
1150
01:12:05,260 --> 01:12:08,310
Now, we want to find probability
of error for two
1151
01:12:08,310 --> 01:12:12,830
symbols, because that's
what Ps of E is, OK?
1152
01:12:12,830 --> 01:12:17,435
So what's Ps of E going to be?
1153
01:12:17,435 --> 01:12:20,980
Well, it's -- in terms of Pr
of E, it's going to be one
1154
01:12:20,980 --> 01:12:25,290
minus (one minus Pr
of E) squared.
1155
01:12:25,290 --> 01:12:28,977
This is the probability you make
error in none of the two
1156
01:12:28,977 --> 01:12:31,310
symbols, and 1 minus that will
be the probability of error
1157
01:12:31,310 --> 01:12:33,730
you make in at least
one symbol.
1158
01:12:33,730 --> 01:12:39,890
And that's approximately equal
to two Pr of E, or it's four
1159
01:12:39,890 --> 01:12:45,270
times M minus 1 over M,
Q of alpha over sigma.
1160
01:12:45,270 --> 01:12:51,880
1161
01:12:51,880 --> 01:12:53,395
Good, I did not write
on this board.
1162
01:12:53,395 --> 01:13:01,500
1163
01:13:01,500 --> 01:13:04,130
So now what remains to do
is to relate alpha over
1164
01:13:04,130 --> 01:13:06,980
sigma to SNR norm.
1165
01:13:06,980 --> 01:13:10,360
And I had it on this
board here.
1166
01:13:10,360 --> 01:13:16,370
SNR norm, we just did in the
previous example, for an M-PAM
1167
01:13:16,370 --> 01:13:21,940
system is alpha squared
over 3 sigma squared.
1168
01:13:21,940 --> 01:13:22,770
OK?
1169
01:13:22,770 --> 01:13:29,120
So I plug that in here, so I get
Ps of E is 4 times M minus
1170
01:13:29,120 --> 01:13:35,475
1 over M times Q of
root 3 SNR norm.
1171
01:13:35,475 --> 01:13:40,430
1172
01:13:40,430 --> 01:13:46,320
And if M is large, this is
approximately 4 times Q of
1173
01:13:46,320 --> 01:13:48,302
root 3 SNR norm.
1174
01:13:48,302 --> 01:13:51,440
1175
01:13:51,440 --> 01:13:53,720
So we are plotting now
probability of error as a
1176
01:13:53,720 --> 01:13:58,600
function of SNR norm, similar to
what we did in that part in
1177
01:13:58,600 --> 01:14:01,590
the power-limited
regime there.
1178
01:14:01,590 --> 01:14:05,810
So this is Ps of E as a
function of SNR norm.
1179
01:14:05,810 --> 01:14:12,000
1180
01:14:12,000 --> 01:14:16,260
Now my Shannon limit would be at
0 dB, so this is my Shannon
1181
01:14:16,260 --> 01:14:17,510
limit point.
1182
01:14:17,510 --> 01:14:20,610
1183
01:14:20,610 --> 01:14:24,912
This is ten to the minus six,
ten to the minus five, ten to
1184
01:14:24,912 --> 01:14:28,630
the minus four, ten to the minus
three, ten to the minus
1185
01:14:28,630 --> 01:14:29,880
two, and so on.
1186
01:14:29,880 --> 01:14:33,010
1187
01:14:33,010 --> 01:14:36,490
This is going to be the
performance of M-PAM.
1188
01:14:36,490 --> 01:14:40,020
And again, at ten to the minus
five, which will be the kind
1189
01:14:40,020 --> 01:14:41,840
of performance criterion
we'll be using
1190
01:14:41,840 --> 01:14:43,090
throughout this course.
1191
01:14:43,090 --> 01:14:45,690
1192
01:14:45,690 --> 01:14:51,340
You'll see that this
x-value is 8.4 dB.
1193
01:14:51,340 --> 01:15:01,730
So in this case, the
gap to capacity is
1194
01:15:01,730 --> 01:15:06,000
going to be 8.4 dB.
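The 8.4 dB figure can be verified against the large-M approximation above; a quick Python sketch:

```python
import math

def Q(x):
    # Gaussian tail probability via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def ps_mpam(snr_norm_db):
    # Large-M approximation: Ps(E) ~ 4 Q(sqrt(3 SNR_norm))
    snr_norm = 10 ** (snr_norm_db / 10)
    return 4 * Q(math.sqrt(3 * snr_norm))

# At SNR_norm = 8.4 dB the per-two-dimension error rate comes out near 1e-5
ps = ps_mpam(8.4)
```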
1195
01:15:06,000 --> 01:15:08,400
So if I want a certain spectral
efficiency and I use
1196
01:15:08,400 --> 01:15:13,540
M-PAM, I'm 8.4 dB away from
the Shannon limit.
1197
01:15:13,540 --> 01:15:17,490
The idea behind coding, as
someone pointed out, was to
1198
01:15:17,490 --> 01:15:21,170
start from here and do coding
to come closer and closer to
1199
01:15:21,170 --> 01:15:23,420
bridge this gap.
1200
01:15:23,420 --> 01:15:24,670
OK?
1201
01:15:24,670 --> 01:15:27,010
1202
01:15:27,010 --> 01:15:28,330
So are there any questions?
1203
01:15:28,330 --> 01:15:29,580
We are almost out of time.
1204
01:15:29,580 --> 01:15:34,390
1205
01:15:34,390 --> 01:15:34,960
OK.
1206
01:15:34,960 --> 01:15:36,022
I think -- yes?
1207
01:15:36,022 --> 01:15:37,870
AUDIENCE: A perfect code would
just be literally like a step
1208
01:15:37,870 --> 01:15:38,170
function --
1209
01:15:38,170 --> 01:15:38,750
PROFESSOR: Right.
1210
01:15:38,750 --> 01:15:40,025
AUDIENCE: -- all the way down.
1211
01:15:40,025 --> 01:15:43,200
PROFESSOR: That's what
Shannon showed.
1212
01:15:43,200 --> 01:15:46,070
OK, I think this is a natural
point to stop.
1213
01:15:46,070 --> 01:15:47,320
We'll continue next class.
1214
01:15:47,320 --> 01:16:04,996