1
00:00:00,000 --> 00:00:13,560
PROFESSOR: So the idea behind
the encoder is to increase the
2
00:00:13,560 --> 00:00:23,760
minimum distance at the cost
of spectral efficiency.
3
00:00:23,760 --> 00:00:32,170
4
00:00:32,170 --> 00:00:38,070
The idea behind the decoder
was the following.
5
00:00:38,070 --> 00:00:41,710
You have a received signal, y.
6
00:00:41,710 --> 00:00:49,930
And the job of the decoder is
to find x hat, which belongs
7
00:00:49,930 --> 00:00:54,660
to the signal set so that
you minimize your
8
00:00:54,660 --> 00:00:55,920
probability of error.
9
00:00:55,920 --> 00:01:01,840
And the probability of error is
that x is not equal to x-hat.
10
00:01:01,840 --> 00:01:05,780
That's the probability of
error for your decoder.
11
00:01:05,780 --> 00:01:09,090
And we saw a bunch of
equivalence rules -- we said
12
00:01:09,090 --> 00:01:12,080
that the minimum probability of
error rule, which is this
13
00:01:12,080 --> 00:01:18,900
rule here, is the same as the
maximum a posteriori rule.
14
00:01:18,900 --> 00:01:22,410
And this just follows
by the definition.
15
00:01:22,410 --> 00:01:28,160
This was the same as the maximum
likelihood rule.
16
00:01:28,160 --> 00:01:30,350
And here we made one assumption
that all the
17
00:01:30,350 --> 00:01:35,840
signals in your signal set
A are equally likely.
18
00:01:35,840 --> 00:01:39,660
And this was shown to be
equivalent to your minimum
19
00:01:39,660 --> 00:01:42,130
distance decision rule.
20
00:01:42,130 --> 00:01:45,580
And in order to show this
equivalence, we use the fact
21
00:01:45,580 --> 00:01:48,630
that the noise was white
and Gaussian.
22
00:01:48,630 --> 00:01:52,570
So this assumes equi-probable
signals.
23
00:01:52,570 --> 00:01:59,290
24
00:01:59,290 --> 00:02:01,265
And this assumes AWGN channel.
25
00:02:01,265 --> 00:02:02,515
OK.
26
00:02:02,515 --> 00:02:08,259
27
00:02:08,259 --> 00:02:11,840
In fact, the minimum distance
decision rule has very nice
28
00:02:11,840 --> 00:02:13,090
geometrical properties.
29
00:02:13,090 --> 00:02:17,720
30
00:02:17,720 --> 00:02:20,190
We basically said that the
decision regions had a
31
00:02:20,190 --> 00:02:21,250
particular shape.
32
00:02:21,250 --> 00:02:23,100
They were convex polytopes.
33
00:02:23,100 --> 00:02:27,720
And they were also known
as Voronoi regions.
34
00:02:27,720 --> 00:02:30,370
However, finding the exact
expression for the probability
35
00:02:30,370 --> 00:02:33,850
of error seemed quite tedious
so we decided
36
00:02:33,850 --> 00:02:35,640
to settle with bounds.
37
00:02:35,640 --> 00:02:38,200
And we developed some bounds for
the probability of error.
38
00:02:38,200 --> 00:02:42,530
39
00:02:42,530 --> 00:02:46,140
And the main bounds were -- we
had a strict upper bound,
40
00:02:46,140 --> 00:02:46,937
which was the summation.
41
00:02:46,937 --> 00:02:48,187
OK.
42
00:02:48,187 --> 00:03:01,220
43
00:03:01,220 --> 00:03:03,230
This is a strict upper bound.
44
00:03:03,230 --> 00:03:06,890
Well, the inequality here, say
it's a strict upper bound.
45
00:03:06,890 --> 00:03:10,450
We also had a union
bound estimate.
46
00:03:10,450 --> 00:03:12,910
Because this is a sum of various
Q functions, this is
47
00:03:12,910 --> 00:03:16,270
still a very cumbersome
expression to evaluate.
48
00:03:16,270 --> 00:03:18,440
So we wanted a simpler
expression.
49
00:03:18,440 --> 00:03:28,650
So we then approximated this
by K_min of a_j times Q of
50
00:03:28,650 --> 00:03:33,410
d_min over 2 sigma.
51
00:03:33,410 --> 00:03:37,750
OK, and now if we take the
average on both sides, what we
52
00:03:37,750 --> 00:03:43,650
get is the probability of error
is approximately equal
53
00:03:43,650 --> 00:03:47,980
to K_min of the constellation,
which is the average number of
54
00:03:47,980 --> 00:03:55,890
nearest neighbors, times Q
of d_min over 2 sigma.
55
00:03:55,890 --> 00:03:58,690
56
00:03:58,690 --> 00:04:02,600
And this expression is known as
the union bound estimate.
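[A minimal Python sketch, not part of the lecture: the union bound estimate Pr(E) ≈ K_min(A) Q(d_min / 2σ) evaluated numerically, with Q written via the complementary error function. The numbers K_min = 2, d_min = 2, σ = 0.3 are illustrative assumptions, not values from the board.]

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail probability: Q(x) = P(N(0,1) > x)
    return 0.5 * erfc(x / sqrt(2))

def union_bound_estimate(k_min, d_min, sigma):
    # Pr(E) ~= K_min * Q(d_min / (2 sigma))
    return k_min * Q(d_min / (2 * sigma))

# Hypothetical constellation: K_min = 2 nearest neighbors on
# average, d_min = 2, noise standard deviation sigma = 0.3
p_est = union_bound_estimate(k_min=2, d_min=2.0, sigma=0.3)
```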
57
00:04:02,600 --> 00:04:10,280
58
00:04:10,280 --> 00:04:13,120
We call it an estimate because
it's not a bound anymore.
59
00:04:13,120 --> 00:04:16,370
We have made an approximation
going from here to here.
60
00:04:16,370 --> 00:04:18,910
So this is what we came
up with last time.
61
00:04:18,910 --> 00:04:23,076
Throughout the analysis today,
whenever we have a probability
62
00:04:23,076 --> 00:04:25,970
of error expression, we will be
estimating it by the union
63
00:04:25,970 --> 00:04:27,170
bound estimate.
64
00:04:27,170 --> 00:04:31,310
So this union bound estimate
is quite important.
65
00:04:31,310 --> 00:04:34,420
Are there any questions
on this?
66
00:04:34,420 --> 00:04:40,700
OK, so today we will start by
introducing the notion of
67
00:04:40,700 --> 00:04:41,950
effective coding gain.
68
00:04:41,950 --> 00:04:58,240
69
00:04:58,240 --> 00:05:03,330
And we'll use the symbol
gamma sub f for that.
70
00:05:03,330 --> 00:05:05,790
What do we mean by effective
coding gain?
71
00:05:05,790 --> 00:05:08,630
First you have to decide what
region you're operating in.
72
00:05:08,630 --> 00:05:11,300
It's defined for both
bandwidth-limited regime and
73
00:05:11,300 --> 00:05:12,800
power-limited regimes.
74
00:05:12,800 --> 00:05:15,240
So let us start with the
power-limited regime.
75
00:05:15,240 --> 00:05:24,610
76
00:05:24,610 --> 00:05:27,600
Remember in the power-limited
regime, the trade-off we care
77
00:05:27,600 --> 00:05:35,740
about is the probability of bit
error versus Eb over N_0.
78
00:05:35,740 --> 00:05:37,110
The baseline scheme is 2-PAM.
79
00:05:37,110 --> 00:05:45,100
80
00:05:45,100 --> 00:05:49,630
And for 2-PAM, we showed two
lectures ago that the
81
00:05:49,630 --> 00:05:56,650
probability of bit error is
given by Q of root 2 Eb over N_0.
82
00:05:56,650 --> 00:06:01,160
83
00:06:01,160 --> 00:06:04,310
OK, it should be --
84
00:06:04,310 --> 00:06:10,870
so now the idea is to plot your
probability of bit error
85
00:06:10,870 --> 00:06:12,120
as a function of Eb N_0.
86
00:06:12,120 --> 00:06:16,150
87
00:06:16,150 --> 00:06:19,760
Eb N_0 is always plotted
in dB on the x-axis.
88
00:06:19,760 --> 00:06:23,790
The probability of bit
error we also plot on
89
00:06:23,790 --> 00:06:25,750
a logarithmic scale.
90
00:06:25,750 --> 00:06:27,590
So this is ten to
the minus six.
91
00:06:27,590 --> 00:06:30,390
This is ten to the minus
five, ten to the
92
00:06:30,390 --> 00:06:33,840
minus four and so on.
93
00:06:33,840 --> 00:06:37,420
The Shannon limit is at --
94
00:06:37,420 --> 00:06:39,412
this is the Shannon limit --
95
00:06:39,412 --> 00:06:45,830
is at minus 1.59 dB.
96
00:06:45,830 --> 00:06:46,580
OK?
97
00:06:46,580 --> 00:06:52,750
For the power-limited regime,
the performance of 2-PAM,
98
00:06:52,750 --> 00:06:57,350
which is given by this
expression, is here.
99
00:06:57,350 --> 00:07:01,260
And we basically went on to say
when the probability of
100
00:07:01,260 --> 00:07:05,600
bit error is ten to the minus
five, the Eb N_0 that you
101
00:07:05,600 --> 00:07:11,310
require is 9.6 dB.
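[A quick numeric check in Python, not part of the lecture: evaluating Pb = Q(sqrt(2 Eb/N_0)) for 2-PAM at Eb/N_0 = 9.6 dB gives a probability of bit error close to the quoted ten to the minus five.]

```python
from math import erfc, sqrt

def Q(x):
    # Gaussian tail probability
    return 0.5 * erfc(x / sqrt(2))

def pb_2pam(ebn0_db):
    # 2-PAM bit error probability Q(sqrt(2 Eb/N_0)), Eb/N_0 in dB
    ebn0 = 10 ** (ebn0_db / 10)
    return Q(sqrt(2 * ebn0))

pb_at_9p6 = pb_2pam(9.6)   # close to 1e-5
```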
102
00:07:11,310 --> 00:07:12,560
This is a nine.
103
00:07:12,560 --> 00:07:15,510
104
00:07:15,510 --> 00:07:19,540
So what's the idea behind the
effective coding gain?
105
00:07:19,540 --> 00:07:23,440
Well, suppose I give you a
certain constellation, OK, a
106
00:07:23,440 --> 00:07:24,940
certain signal set.
107
00:07:24,940 --> 00:07:28,370
And you find the probability
of bit error for that,
108
00:07:28,370 --> 00:07:29,070
and you plot it.
109
00:07:29,070 --> 00:07:31,970
And it comes out to be
something like this.
110
00:07:31,970 --> 00:07:37,470
So this is the performance for
some given signal set A. This
111
00:07:37,470 --> 00:07:38,720
is your 2-PAM.
112
00:07:38,720 --> 00:07:43,150
113
00:07:43,150 --> 00:07:46,470
And now you want to quantify how
much better is the signal
114
00:07:46,470 --> 00:07:49,390
set over the baseline system.
115
00:07:49,390 --> 00:07:51,510
Now, so in other words you
want to look at the gap
116
00:07:51,510 --> 00:07:53,490
between the two curves.
117
00:07:53,490 --> 00:07:55,580
Now clearly the gap is going
to be a function of the
118
00:07:55,580 --> 00:07:58,100
probability of bit
error, right?
119
00:07:58,100 --> 00:08:01,990
But the basic idea now is you
want to fix a certain
120
00:08:01,990 --> 00:08:04,530
probability of bit error because
if you're designing a
121
00:08:04,530 --> 00:08:07,070
system, the probability of bit
error is something that the
122
00:08:07,070 --> 00:08:08,920
system tolerates and is
going to be fixed
123
00:08:08,920 --> 00:08:10,480
throughout the analysis.
124
00:08:10,480 --> 00:08:12,320
So in this course, we'll
be fixing it at ten
125
00:08:12,320 --> 00:08:13,920
to the minus five.
126
00:08:13,920 --> 00:08:16,510
You fix this probability of bit
error to ten to the minus
127
00:08:16,510 --> 00:08:18,610
five, and you look at how
much of a gap you have
128
00:08:18,610 --> 00:08:21,520
between the two curves.
129
00:08:21,520 --> 00:08:23,630
And this gap is going
to be your effective
130
00:08:23,630 --> 00:08:26,520
coding gain in dB.
131
00:08:26,520 --> 00:08:33,049
132
00:08:33,049 --> 00:08:37,010
So in other words, effective
coding gain, I can write here,
133
00:08:37,010 --> 00:08:54,340
is the gap between a given
signal set and the baseline
134
00:08:54,340 --> 00:09:01,415
system, 2-PAM, at a fixed
probability of bit error.
135
00:09:01,415 --> 00:09:05,300
136
00:09:05,300 --> 00:09:06,060
OK?
137
00:09:06,060 --> 00:09:09,470
So now let us try to quantify
this effective coding gain
138
00:09:09,470 --> 00:09:12,690
using the union bound estimate
that we have here.
139
00:09:12,690 --> 00:09:18,120
So if I have a certain signal
set, A, then I know from the
140
00:09:18,120 --> 00:09:23,370
union bound estimate that the
probability of error is given
141
00:09:23,370 --> 00:09:33,060
by K_min of A times
approximately Q of
142
00:09:33,060 --> 00:09:37,160
d_min over 2 sigma.
143
00:09:37,160 --> 00:09:39,200
So since I want that expression
in terms of
144
00:09:39,200 --> 00:09:43,360
probability of bit error,
I will use the fact that
145
00:09:43,360 --> 00:09:46,620
probability of bit error is
probability of error per
146
00:09:46,620 --> 00:09:51,130
symbol over the number
of bits per symbol.
147
00:09:51,130 --> 00:09:55,730
Since I have M points in my
signal set, I have log M bits
148
00:09:55,730 --> 00:09:56,980
per symbol.
149
00:09:56,980 --> 00:09:59,450
150
00:09:59,450 --> 00:10:15,980
And this is then K_min of A over
logM base 2 times Q of
151
00:10:15,980 --> 00:10:17,780
d_min over 2 sigma.
152
00:10:17,780 --> 00:10:33,910
153
00:10:33,910 --> 00:10:37,100
So I'm going to rewrite my
probability of bit error
154
00:10:37,100 --> 00:10:40,620
expression now in this --
155
00:10:40,620 --> 00:10:44,080
in fact, on the right side,
in this form.
156
00:10:44,080 --> 00:10:56,390
K_b of A times Q of root
2 gamma_c of A
157
00:10:56,390 --> 00:10:58,870
times Eb over N_0.
158
00:10:58,870 --> 00:11:00,520
So I'm just going to define
the right hand
159
00:11:00,520 --> 00:11:01,880
side in this way.
160
00:11:01,880 --> 00:11:05,290
And I'm going to relate gamma_c
of A and K_b of A to
161
00:11:05,290 --> 00:11:08,620
the parameters I have on the
right hand side there.
162
00:11:08,620 --> 00:11:10,690
And why do I want to
do it this way?
163
00:11:10,690 --> 00:11:13,620
Because I want to get an
expression as close as
164
00:11:13,620 --> 00:11:17,330
possible to the performance
of an uncoded 2-PAM system
165
00:11:17,330 --> 00:11:20,900
because that way things will be
easier to do if I want to
166
00:11:20,900 --> 00:11:24,530
calculate the effective
coding gain, OK?
167
00:11:24,530 --> 00:11:27,240
So how do the parameters
relate?
168
00:11:27,240 --> 00:11:40,810
Well, K_b of A is K_min of A
over logM to the base 2.
169
00:11:40,810 --> 00:11:56,430
And this is the average number
of nearest neighbors per
170
00:11:56,430 --> 00:12:04,110
information bit.
171
00:12:04,110 --> 00:12:08,060
If I compare the arguments
inside the Q function, what I
172
00:12:08,060 --> 00:12:20,570
have is 2 gamma_c of A times Eb
over N_0 equals d_min
173
00:12:20,570 --> 00:12:25,350
over 2 sigma, all squared, which I
will write as d_min squared
174
00:12:25,350 --> 00:12:28,960
over 4 sigma squared.
175
00:12:28,960 --> 00:12:32,950
So remember sigma squared
is N_0 over 2.
176
00:12:32,950 --> 00:12:36,397
So if I simplify this
expression, what I will end up
177
00:12:36,397 --> 00:12:45,525
getting is gamma_c of A is
d_min squared over 4 Eb.
178
00:12:45,525 --> 00:12:48,860
179
00:12:48,860 --> 00:12:51,500
Gamma_c of A is known as the
nominal coding gain.
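[A small Python sketch, not part of the lecture, of the two parameters just derived: the nominal coding gain gamma_c(A) = d_min^2 / 4 Eb and K_b(A) = K_min(A) / log2(M). The 2-PAM check (points at plus and minus alpha) recovers gamma_c = 1, i.e. 0 dB.]

```python
from math import log2, log10

def nominal_coding_gain(d_min_sq, eb):
    # gamma_c(A) = d_min^2 / (4 Eb)
    return d_min_sq / (4 * eb)

def neighbors_per_bit(k_min, m):
    # K_b(A) = K_min(A) / log2(M): nearest neighbors per info bit
    return k_min / log2(m)

# Baseline check: 2-PAM at +/- alpha has d_min^2 = 4 alpha^2
# and Eb = alpha^2, so gamma_c = 1 (0 dB)
alpha = 1.0
gc = nominal_coding_gain(4 * alpha ** 2, alpha ** 2)
gc_db = 10 * log10(gc)
```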
180
00:12:51,500 --> 00:13:03,680
181
00:13:03,680 --> 00:13:06,060
It's not the same as the
effective coding gain, which
182
00:13:06,060 --> 00:13:08,880
is the distance between
the two curves.
183
00:13:08,880 --> 00:13:13,730
OK, are there any questions
so far?
184
00:13:13,730 --> 00:13:18,440
So far what I have done is given
a constellation A, I can
185
00:13:18,440 --> 00:13:20,870
find two parameters.
186
00:13:20,870 --> 00:13:25,490
I can find the nominal coding
gain, and I can find the
187
00:13:25,490 --> 00:13:30,000
average number of nearest
neighbors per information bit.
188
00:13:30,000 --> 00:13:32,630
And using these two parameters,
we will try to get
189
00:13:32,630 --> 00:13:35,360
a handle over the effective
coding gain.
190
00:13:35,360 --> 00:13:36,990
So that's the idea.
191
00:13:36,990 --> 00:13:38,490
So how can we plot --
192
00:13:38,490 --> 00:13:41,420
so given this constellation
A, we want to plot the
193
00:13:41,420 --> 00:13:44,050
probability of bit error
curve like this.
194
00:13:44,050 --> 00:13:46,430
But instead of plotting the
exact curve we will be
195
00:13:46,430 --> 00:13:49,600
plotting this curve here,
which is the union bound
196
00:13:49,600 --> 00:13:52,270
estimate of your probability
of bit error.
197
00:13:52,270 --> 00:13:57,290
So we will always be plotting
the union bound estimate.
198
00:13:57,290 --> 00:13:59,200
So how do we do that?
199
00:13:59,200 --> 00:14:02,270
So we'll again start with
a curve like that.
200
00:14:02,270 --> 00:14:07,930
Probability of bit error as
a function of Eb N_0.
201
00:14:07,930 --> 00:14:11,020
This is in dB.
202
00:14:11,020 --> 00:14:13,980
The y-axis is also in dB.
203
00:14:13,980 --> 00:14:15,870
Sorry, not in dB but
on the log scale.
204
00:14:15,870 --> 00:14:22,995
205
00:14:22,995 --> 00:14:24,760
And so on.
206
00:14:24,760 --> 00:14:28,870
This is the Shannon
limit at minus
207
00:14:28,870 --> 00:14:30,120
1.59 dB.
208
00:14:30,120 --> 00:14:34,270
209
00:14:34,270 --> 00:14:36,000
This is the 2-PAM performance.
210
00:14:36,000 --> 00:14:39,420
211
00:14:39,420 --> 00:14:43,770
So if we want to plot this union
bound estimate, you will
212
00:14:43,770 --> 00:14:45,250
be doing two steps.
213
00:14:45,250 --> 00:14:48,170
First, we'll be plotting this
Q function, a curve
214
00:14:48,170 --> 00:14:49,780
corresponding to
the Q function.
215
00:14:49,780 --> 00:14:52,110
And then we'll be scaling it.
216
00:14:52,110 --> 00:14:56,400
So if I just want to plot this
Q function, how will it
217
00:14:56,400 --> 00:14:58,070
compare to this curve here?
218
00:14:58,070 --> 00:15:05,620
219
00:15:05,620 --> 00:15:08,660
Well, you're just scaling the
argument inside the Q
220
00:15:08,660 --> 00:15:10,130
function, right?
221
00:15:10,130 --> 00:15:13,280
But now because we are plotting
the x-axis on a dB
222
00:15:13,280 --> 00:15:17,770
scale, it simply means that we
are translating to the left by
223
00:15:17,770 --> 00:15:20,670
an amount of gamma_c in dB.
224
00:15:20,670 --> 00:15:22,920
Is that clear?
225
00:15:22,920 --> 00:15:23,850
Well, let's see.
226
00:15:23,850 --> 00:15:28,500
What we are doing is we are
going to scale the x-axis by a
227
00:15:28,500 --> 00:15:29,790
certain constant.
228
00:15:29,790 --> 00:15:33,210
So this means that we will have
to scale x-axis here.
229
00:15:33,210 --> 00:15:37,480
But, our Eb N_0 is being plotted
on a dB scale, right?
230
00:15:37,480 --> 00:15:41,610
So multiplying in the linear
scale corresponds to an
231
00:15:41,610 --> 00:15:43,680
addition on the logarithmic
scale.
232
00:15:43,680 --> 00:15:47,240
An addition on the logarithmic
scale simply implies a shift
233
00:15:47,240 --> 00:15:52,170
on the x-axis so that's the
constant shift on the x-axis.
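[A numeric sanity check in Python, not part of the lecture: multiplying Eb/N_0 by gamma_c inside the Q function is exactly a leftward translation of the curve by 10 log10(gamma_c) dB on the x-axis. The value gamma_c = 2 and the 6 dB test point are illustrative assumptions.]

```python
from math import erfc, sqrt, log10

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def pb(ebn0_db, gamma_c=1.0):
    # Q(sqrt(2 * gamma_c * Eb/N_0)) with Eb/N_0 given in dB
    return Q(sqrt(2 * gamma_c * 10 ** (ebn0_db / 10)))

gamma_c = 2.0
shift_db = 10 * log10(gamma_c)   # about 3.01 dB
# the coded curve at x dB equals the uncoded curve at x + shift dB
lhs = pb(6.0, gamma_c)
rhs = pb(6.0 + shift_db)
```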
234
00:15:52,170 --> 00:15:53,800
I'm going to plot it here.
235
00:15:53,800 --> 00:15:58,020
This curve is going to be
parallel to the original 2-PAM
236
00:15:58,020 --> 00:16:01,310
curve to the best
of my abilities.
237
00:16:01,310 --> 00:16:05,010
OK, so this is what we get
as your Q function.
238
00:16:05,010 --> 00:16:08,800
Now we are going to scale the
y-axis by this factor here,
239
00:16:08,800 --> 00:16:14,300
K_b of A. But remember the
y-axis is also on a
240
00:16:14,300 --> 00:16:16,970
logarithmic scale, rather
than the linear scale.
241
00:16:16,970 --> 00:16:21,010
So multiplying it by a
factor of K_b of A implies a
242
00:16:21,010 --> 00:16:24,590
vertical shift by a
certain factor.
243
00:16:24,590 --> 00:16:27,760
So I'm going to do a vertical
shift here.
244
00:16:27,760 --> 00:16:34,600
245
00:16:34,600 --> 00:16:38,510
So let me make that
darker now.
246
00:16:38,510 --> 00:16:40,060
So this is what I
end up getting.
247
00:16:40,060 --> 00:16:43,700
248
00:16:43,700 --> 00:16:46,250
OK, now note that the distance
between these two curves is
249
00:16:46,250 --> 00:16:47,960
not fixed anymore.
250
00:16:47,960 --> 00:16:51,460
If the slope is steeper, then
this distance is small,
251
00:16:51,460 --> 00:16:53,790
meaning that this distance
here is large.
252
00:16:53,790 --> 00:16:57,830
On the other hand, if the slope
is going to be smaller
253
00:16:57,830 --> 00:17:01,330
here, then this distance
is large.
254
00:17:01,330 --> 00:17:03,770
So this means that a distance
between these two curves is no
255
00:17:03,770 --> 00:17:05,099
longer fixed.
256
00:17:05,099 --> 00:17:07,859
And basically what we're saying
here is that if the
257
00:17:07,859 --> 00:17:11,109
probability of error is large,
then the number of nearest
258
00:17:11,109 --> 00:17:13,069
neighbors matters much more.
259
00:17:13,069 --> 00:17:15,280
But if the probability of error
is going to be quite
260
00:17:15,280 --> 00:17:19,400
small, then the more important
factor is your exponent, how
261
00:17:19,400 --> 00:17:21,750
fast the slope decays as opposed
to the number of
262
00:17:21,750 --> 00:17:26,349
nearest neighbors, OK?
263
00:17:26,349 --> 00:17:31,630
This distance here is your
effective coding gain.
264
00:17:31,630 --> 00:17:37,420
And this constant horizontal
distance here is your nominal
265
00:17:37,420 --> 00:17:39,364
coding gain in dB.
266
00:17:39,364 --> 00:17:45,570
267
00:17:45,570 --> 00:17:49,280
Are there any questions
on this curve?
268
00:17:49,280 --> 00:17:51,842
AUDIENCE: [INAUDIBLE]
269
00:17:51,842 --> 00:17:57,800
on the right of four 2-PAM
because these are the logM.
270
00:17:57,800 --> 00:18:01,340
PROFESSOR: So what I'm
really assuming is that
271
00:18:01,340 --> 00:18:03,350
gamma_c of A is greater
than 1.
272
00:18:03,350 --> 00:18:05,250
So if I'm adding it, then
I'm going to shift
273
00:18:05,250 --> 00:18:06,500
to the left, right?
274
00:18:06,500 --> 00:18:19,600
275
00:18:19,600 --> 00:18:21,035
OK, so a couple of remarks.
276
00:18:21,035 --> 00:18:31,980
277
00:18:31,980 --> 00:18:34,970
I'm just summarizing a few
remarks I made while
278
00:18:34,970 --> 00:18:36,430
plotting the curve.
279
00:18:36,430 --> 00:18:53,430
First is the log-log scale is
very convenient because any
280
00:18:53,430 --> 00:18:56,280
multiplicative factor simply
involves a translation along
281
00:18:56,280 --> 00:19:00,040
the x and y directions.
282
00:19:00,040 --> 00:19:07,510
The second point is that the
accuracy we have in the
283
00:19:07,510 --> 00:19:21,330
effective coding gain in this
way is determined by the union
284
00:19:21,330 --> 00:19:26,020
bound estimate because that's
all we are plotting.
285
00:19:26,020 --> 00:19:28,160
We are not really plotting
the exact curve.
286
00:19:28,160 --> 00:19:32,210
287
00:19:32,210 --> 00:19:35,380
And the third point is that
we have a rule of thumb.
288
00:19:35,380 --> 00:19:41,770
289
00:19:41,770 --> 00:19:45,190
Because the distance between the
curves is not fixed, it's
290
00:19:45,190 --> 00:19:47,720
still too tedious to find the
exact numerical value of
291
00:19:47,720 --> 00:19:49,610
effective coding gain.
292
00:19:49,610 --> 00:19:52,750
So what we can find is the value
of the nominal coding
293
00:19:52,750 --> 00:19:57,390
gain and K_b of A exactly,
given a constellation.
294
00:19:57,390 --> 00:20:00,320
So we want to have a rule
of thumb for the
295
00:20:00,320 --> 00:20:02,200
effective coding gain.
296
00:20:02,200 --> 00:20:06,280
And the rule of thumb is that
the effective coding gain is
297
00:20:06,280 --> 00:20:16,050
given by the nominal coding gain
in dB minus 0.2 log2 of
298
00:20:16,050 --> 00:20:20,620
K_b of A. This is in dB.
299
00:20:20,620 --> 00:20:25,450
What this means is every factor
of two increase is K_b
300
00:20:25,450 --> 00:20:29,590
results in a loss of 0.2 dB in
the effective coding gain.
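[The rule of thumb as a one-line Python function, a sketch rather than anything from the lecture notes: gamma_eff in dB is approximately gamma_c in dB minus 0.2 log2 K_b, valid around Pb = ten to the minus five.]

```python
from math import log2, log10

def effective_gain_db(gamma_c, k_b):
    # Rule of thumb near Pb = 1e-5:
    # gamma_eff [dB] ~= gamma_c [dB] - 0.2 * log2(K_b)
    return 10 * log10(gamma_c) - 0.2 * log2(k_b)

# every doubling of K_b costs about 0.2 dB
loss = effective_gain_db(1.0, 1) - effective_gain_db(1.0, 2)
```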
301
00:20:29,590 --> 00:20:35,520
OK, and this is our probability
of bit error of
302
00:20:35,520 --> 00:20:37,600
ten to the minus five.
303
00:20:37,600 --> 00:20:42,060
So that's the region that we
will be doing our analysis in.
304
00:20:42,060 --> 00:20:44,610
So this rule of thumb
is quite helpful.
305
00:20:44,610 --> 00:20:46,594
Are there any questions?
306
00:20:46,594 --> 00:20:49,340
AUDIENCE: [INAUDIBLE]
307
00:20:49,340 --> 00:20:50,530
PROFESSOR: This rule of thumb?
308
00:20:50,530 --> 00:20:53,360
Well, it works quite well
in practice so you
309
00:20:53,360 --> 00:20:54,860
want to use that rule.
310
00:20:54,860 --> 00:20:57,360
AUDIENCE: [INAUDIBLE]
311
00:20:57,360 --> 00:20:58,610
PROFESSOR: Right.
312
00:20:58,610 --> 00:21:06,360
313
00:21:06,360 --> 00:21:09,150
If things are clear, let
us do some examples.
314
00:21:09,150 --> 00:21:17,090
315
00:21:17,090 --> 00:21:19,515
So let us start with
2-PAM for our examples.
316
00:21:19,515 --> 00:21:24,810
317
00:21:24,810 --> 00:21:26,480
The first is 2-PAM system.
318
00:21:26,480 --> 00:21:29,760
319
00:21:29,760 --> 00:21:32,950
What's the effective coding
gain going to be for this
320
00:21:32,950 --> 00:21:35,790
constellation?
321
00:21:35,790 --> 00:21:38,250
It's going to be 0 dB, right?
322
00:21:38,250 --> 00:21:39,500
It just follows from
the definition.
323
00:21:39,500 --> 00:21:42,718
324
00:21:42,718 --> 00:21:46,790
I mean the effective coding gain
tells us how much gap we
325
00:21:46,790 --> 00:21:50,000
have between this new
constellation and 2-PAM.
326
00:21:50,000 --> 00:21:52,990
If your new constellation is
2-PAM itself, we don't have
327
00:21:52,990 --> 00:21:56,100
any effective coding gain.
328
00:21:56,100 --> 00:21:57,350
What about the 4-QAM?
329
00:21:57,350 --> 00:22:01,120
330
00:22:01,120 --> 00:22:04,230
It's going to be still 0,
because a 4-QAM is a Cartesian
331
00:22:04,230 --> 00:22:06,820
product of two 2-PAM
constellations.
332
00:22:06,820 --> 00:22:09,290
And we argued last time that
there is no coding gain if you
333
00:22:09,290 --> 00:22:11,970
use a Cartesian product.
334
00:22:11,970 --> 00:22:16,290
If you want to verify it using
the framework that we have
335
00:22:16,290 --> 00:22:21,420
just developed, say we
have four points.
336
00:22:21,420 --> 00:22:24,070
This point is alpha, alpha.
337
00:22:24,070 --> 00:22:30,580
This point is minus alpha,
alpha, minus alpha, minus
338
00:22:30,580 --> 00:22:33,600
alpha, and alpha minus alpha.
339
00:22:33,600 --> 00:22:39,010
The nominal coding gain
is given by d_min
340
00:22:39,010 --> 00:22:40,580
squared over 4 Eb.
341
00:22:40,580 --> 00:22:45,080
342
00:22:45,080 --> 00:22:49,820
The minimum distance is 2 alpha
between any two points.
343
00:22:49,820 --> 00:22:52,250
So we have 4 alpha squared.
344
00:22:52,250 --> 00:22:54,040
Then what's the energy
per bit?
345
00:22:54,040 --> 00:22:57,210
Well, the energy per symbol
is 2 alpha squared.
346
00:22:57,210 --> 00:23:00,240
Then we have two bits per
symbol, so the energy per bit
347
00:23:00,240 --> 00:23:01,520
is alpha squared.
348
00:23:01,520 --> 00:23:05,080
So we have 4 alpha squared,
and so that's 1.
349
00:23:05,080 --> 00:23:08,060
So we do not have any
nominal coding gain.
350
00:23:08,060 --> 00:23:10,600
K_b of A, what's that
going to be?
351
00:23:10,600 --> 00:23:13,870
Well, we have two nearest
neighbors per symbol, but we
352
00:23:13,870 --> 00:23:15,830
also have two bits per symbol.
353
00:23:15,830 --> 00:23:17,910
So the number of nearest
neighbors per bit
354
00:23:17,910 --> 00:23:18,940
is going to be one.
355
00:23:18,940 --> 00:23:23,960
So your nominal coding gain is
1, K_b of A is going to be 1.
356
00:23:23,960 --> 00:23:28,220
And so we do not see
any coding gain
357
00:23:28,220 --> 00:23:30,880
for the 4-QAM system.
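[The 4-QAM verification just done on the board, redone numerically as a Python sketch (not part of the lecture): from the four points (+/- alpha, +/- alpha) we recover d_min^2 = 4 alpha^2, Eb = alpha^2, so gamma_c = 1 and K_b = 1, i.e. no coding gain. The helper `params` is a hypothetical utility, not lecture notation.]

```python
from itertools import combinations
from math import dist, log2

def params(points):
    # d_min^2, average nearest-neighbor count K_min, and Eb
    m = len(points)
    d2 = [dist(p, q) ** 2 for p, q in combinations(points, 2)]
    d_min_sq = min(d2)
    # each minimum-distance pair contributes a neighbor to 2 points
    k_min = 2 * sum(1 for d in d2 if abs(d - d_min_sq) < 1e-9) / m
    es = sum(sum(x * x for x in p) for p in points) / m
    return d_min_sq, k_min, es / log2(m)

a = 1.0
qam4 = [(a, a), (-a, a), (-a, -a), (a, -a)]
d_min_sq, k_min, eb = params(qam4)
gamma_c = d_min_sq / (4 * eb)     # 1.0, i.e. no nominal gain
k_b = k_min / log2(len(qam4))     # 1.0 nearest neighbor per bit
```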
358
00:23:30,880 --> 00:23:33,660
Now let me do a slightly
different case.
359
00:23:33,660 --> 00:23:37,770
Let me start with a 4-QAM,
but I remove two points.
360
00:23:37,770 --> 00:23:40,860
I remove this point, and
I remove this point.
361
00:23:40,860 --> 00:23:43,610
And I only keep the
two points here.
362
00:23:43,610 --> 00:23:48,710
So I have a constellation, say,
A prime, which only has
363
00:23:48,710 --> 00:23:52,490
two points, one point here
and one point here.
364
00:23:52,490 --> 00:23:55,750
So this point is alpha, alpha,
and this point is minus alpha,
365
00:23:55,750 --> 00:23:58,180
minus alpha.
366
00:23:58,180 --> 00:24:00,480
Any idea what the effective
coding gain
367
00:24:00,480 --> 00:24:01,730
will be for this case?
368
00:24:01,730 --> 00:24:06,000
369
00:24:06,000 --> 00:24:07,920
AUDIENCE: I think
it's the same.
370
00:24:07,920 --> 00:24:08,900
PROFESSOR: It's still
zero, right?
371
00:24:08,900 --> 00:24:12,130
AUDIENCE: Yeah, because it's
saying that's 2-PAM.
372
00:24:12,130 --> 00:24:12,820
PROFESSOR: Exactly.
373
00:24:12,820 --> 00:24:17,600
All this is a 2-PAM
constellation embedded in two
374
00:24:17,600 --> 00:24:18,910
dimensions.
375
00:24:18,910 --> 00:24:22,130
It's 2-PAM along this
line, right?
376
00:24:22,130 --> 00:24:24,780
So I cannot hope to have a
higher coding gain for this
377
00:24:24,780 --> 00:24:27,620
constellation over
the original one.
378
00:24:27,620 --> 00:24:30,340
I mean I told you the story that
we can take a subset of
379
00:24:30,340 --> 00:24:33,860
points, and we can increase
the minimum distance by
380
00:24:33,860 --> 00:24:35,360
throwing away some points.
381
00:24:35,360 --> 00:24:38,700
But one has to be careful
in that analysis.
382
00:24:38,700 --> 00:24:41,280
There could be examples where
you're strictly improving the
383
00:24:41,280 --> 00:24:42,760
minimum distance.
384
00:24:42,760 --> 00:24:49,460
Note that the minimum distance
here is going to be 8 alpha
385
00:24:49,460 --> 00:24:52,230
squared now.
386
00:24:52,230 --> 00:24:54,970
OK, so the minimum distance
is strictly improved.
387
00:24:54,970 --> 00:24:58,530
But what happens to be
your energy per bit?
388
00:24:58,530 --> 00:25:01,780
Well, the energy per symbol
is 2 alpha squared.
389
00:25:01,780 --> 00:25:05,650
But you only have one bit per
symbol, so the energy per bit
390
00:25:05,650 --> 00:25:07,760
is 2 alpha squared.
391
00:25:07,760 --> 00:25:13,710
So your nominal coding gain is
d_min squared over 4 Eb, which
392
00:25:13,710 --> 00:25:17,130
follows from the relation
we have here.
393
00:25:17,130 --> 00:25:19,310
And d_min squared is
8 alpha squared.
394
00:25:19,310 --> 00:25:23,040
4 Eb is also 8 alpha squared, so
the nominal coding gain is
395
00:25:23,040 --> 00:25:24,410
1, or 0 dB.
396
00:25:24,410 --> 00:25:27,890
397
00:25:27,890 --> 00:25:30,810
The average number of nearest
neighbors, well, we only have
398
00:25:30,810 --> 00:25:34,120
one nearest neighbor, and we
have one bit per symbol, so
399
00:25:34,120 --> 00:25:37,950
that's -- and this is A
prime by the way --
400
00:25:37,950 --> 00:25:39,270
is going to be 1.
401
00:25:39,270 --> 00:25:42,627
And so the effective coding
gain is still 0 dB.
402
00:25:42,627 --> 00:25:46,920
403
00:25:46,920 --> 00:25:48,170
AUDIENCE: [INAUDIBLE]
404
00:25:48,170 --> 00:25:51,220
405
00:25:51,220 --> 00:25:54,170
PROFESSOR: So probability of
symbol error as a function of
406
00:25:54,170 --> 00:25:56,250
alpha has definitely
decreased, right.
407
00:25:56,250 --> 00:25:58,530
But the trade-off we care about
is probability of bit
408
00:25:58,530 --> 00:26:00,740
error as a function of Eb N_0.
409
00:26:00,740 --> 00:26:02,620
In other words, this curve
is still here.
410
00:26:02,620 --> 00:26:04,370
It's not changed.
411
00:26:04,370 --> 00:26:07,450
I just shifted right by
reducing my spectral
412
00:26:07,450 --> 00:26:07,622
efficiency.
413
00:26:07,622 --> 00:26:08,872
OK?
414
00:26:08,872 --> 00:26:15,560
415
00:26:15,560 --> 00:26:19,810
So let's do one last example
just to prove that it's not
416
00:26:19,810 --> 00:26:23,290
always the case that you get
nothing by removing points.
417
00:26:23,290 --> 00:27:35,990
418
00:27:35,990 --> 00:27:42,770
OK, so the constellation that
I have in mind is the one I
419
00:27:42,770 --> 00:27:45,420
introduced last time.
420
00:27:45,420 --> 00:27:50,170
We'll start with a three-fold
Cartesian product of 2-PAM.
421
00:27:50,170 --> 00:27:54,980
OK, so in three dimensions
it's a cube, and the three-fold
422
00:27:54,980 --> 00:27:57,840
Cartesian product will
have a point on each
423
00:27:57,840 --> 00:27:59,260
vertex of the cube.
424
00:27:59,260 --> 00:28:01,150
And we only keep four points.
425
00:28:01,150 --> 00:28:03,377
So the points we keep are
these four points.
426
00:28:03,377 --> 00:28:04,627
OK?
427
00:28:04,627 --> 00:28:09,800
428
00:28:09,800 --> 00:28:19,300
So I write B, the coordinates
are alpha, alpha, alpha, minus
429
00:28:19,300 --> 00:28:24,540
alpha, minus alpha, alpha,
minus alpha, alpha, minus
430
00:28:24,540 --> 00:28:31,770
alpha, and alpha, minus
alpha, minus alpha.
431
00:28:31,770 --> 00:28:33,650
So these are the vertices.
432
00:28:33,650 --> 00:28:37,575
These are the coordinates of the
four points here where we
433
00:28:37,575 --> 00:28:40,190
have the standard x, y and
z-axis, which I'm not drawing
434
00:28:40,190 --> 00:28:42,360
because it will just
look confusing.
435
00:28:42,360 --> 00:28:46,320
So now we'll see that for this
particular signal set, we do
436
00:28:46,320 --> 00:28:49,170
have a coding gain, OK?
437
00:28:49,170 --> 00:28:50,595
Let's find the minimum
distance.
438
00:28:50,595 --> 00:28:54,160
439
00:28:54,160 --> 00:28:56,740
The minimum distance
here is what?
440
00:28:56,740 --> 00:29:00,010
It's the distance along
a diagonal.
441
00:29:00,010 --> 00:29:04,390
And the diagonal -- each
side of the cube is
442
00:29:04,390 --> 00:29:05,640
of length 2 alpha.
443
00:29:05,640 --> 00:29:08,790
444
00:29:08,790 --> 00:29:13,930
So the diagonal is of
length 2 root 2 alpha.
445
00:29:13,930 --> 00:29:16,275
So d_min squared is
8 alpha squared.
446
00:29:16,275 --> 00:29:19,450
447
00:29:19,450 --> 00:29:21,950
What is the energy per bit
going to be in this case?
448
00:29:21,950 --> 00:29:33,030
449
00:29:33,030 --> 00:29:35,226
Well, what's the energy
per symbol now?
450
00:29:35,226 --> 00:29:37,520
AUDIENCE: [INAUDIBLE]
451
00:29:37,520 --> 00:29:40,140
PROFESSOR: 3 alpha squared
over 2 is the Eb, right?
452
00:29:40,140 --> 00:29:43,890
We have 3 alpha squared units
of energy per symbol.
453
00:29:43,890 --> 00:29:48,230
We have four points, so we
have two bits per symbol.
454
00:29:48,230 --> 00:29:54,900
So the energy per bit is
3 alpha squared over 2.
455
00:29:54,900 --> 00:30:01,470
So now if I look at my nominal
coding gain, that's going to
456
00:30:01,470 --> 00:30:07,210
be d_min squared over 4 Eb.
457
00:30:07,210 --> 00:30:10,640
d_min squared is 8
alpha squared.
458
00:30:10,640 --> 00:30:14,170
4 Eb is 6 alpha squared.
459
00:30:14,170 --> 00:30:17,080
So that's 4/3.
460
00:30:17,080 --> 00:30:21,780
On a linear scale and on a dB
scale, this is going to be?
461
00:30:21,780 --> 00:30:32,680
462
00:30:32,680 --> 00:30:34,666
You guys should remember
your dB tables.
463
00:30:34,666 --> 00:30:37,534
464
00:30:37,534 --> 00:30:39,446
AUDIENCE: 0.2?
465
00:30:39,446 --> 00:30:40,410
PROFESSOR: 12.
466
00:30:40,410 --> 00:30:42,510
You have to multiply by ten.
467
00:30:42,510 --> 00:30:46,150
So actually I think it's more
like 1.25 dB if you look
468
00:30:46,150 --> 00:30:47,880
at the last significant digit.
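As a check on the arithmetic above, here is a small sketch (with alpha set to 1 for convenience) that recomputes the minimum squared distance, the energy per bit, and the nominal coding gain for these four cube vertices.

```python
import math

# The four vertices listed on the board (alpha = 1 for convenience):
# an inscribed tetrahedron of the cube.
alpha = 1.0
B = [( alpha,  alpha,  alpha),
     (-alpha, -alpha,  alpha),
     (-alpha,  alpha, -alpha),
     ( alpha, -alpha, -alpha)]

def dist2(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

# Minimum squared distance over all pairs: the face diagonal, 8 alpha^2.
d_min2 = min(dist2(p, q) for i, p in enumerate(B) for q in B[i + 1:])

E_symbol = sum(c ** 2 for c in B[0])   # 3 alpha^2 of energy per symbol
Eb = E_symbol / math.log2(len(B))      # 2 bits/symbol -> 3 alpha^2 / 2

gamma_c = d_min2 / (4 * Eb)            # nominal coding gain, = 4/3
gamma_c_dB = 10 * math.log10(gamma_c)  # about 1.25 dB
```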
469
00:30:47,880 --> 00:30:50,730
470
00:30:50,730 --> 00:30:56,760
OK, so what's K_b of A going
to be in this case?
471
00:30:56,760 --> 00:30:59,055
Or K_b of B I should say.
472
00:30:59,055 --> 00:31:01,650
I'm calling the signal
set B here.
473
00:31:01,650 --> 00:31:06,140
So what is K_b of
B going to be?
474
00:31:06,140 --> 00:31:13,820
475
00:31:13,820 --> 00:31:15,260
AUDIENCE: [INAUDIBLE]
476
00:31:15,260 --> 00:31:17,180
PROFESSOR: 3 by 2, right?
477
00:31:17,180 --> 00:31:19,760
You have three nearest neighbors
per point -- each
478
00:31:19,760 --> 00:31:21,150
point has three nearest
neighbors.
479
00:31:21,150 --> 00:31:22,840
They're all equidistant.
480
00:31:22,840 --> 00:31:23,906
And you have two bits
per symbol.
481
00:31:23,906 --> 00:31:25,156
OK?
482
00:31:25,156 --> 00:31:26,680
483
00:31:26,680 --> 00:31:29,650
So if you work out your
effective coding gain using
484
00:31:29,650 --> 00:31:33,745
this rule of thumb here, take
gamma_c of A in dB and
485
00:31:33,745 --> 00:31:37,460
subtract 0.2 times log2 of K_b, you will see
that the effective coding gain
486
00:31:37,460 --> 00:31:39,220
is approximately 1 dB.
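A quick sketch of that rule of thumb. The specific form assumed here -- a penalty of 0.2 dB per factor of 2 in K_b, relative to a baseline of K_b equal to 1 -- is one common reading of the rule; the exact constant and baseline are assumptions for illustration.

```python
import math

gamma_c_dB = 10 * math.log10(4 / 3)   # nominal coding gain of the signal set
K_b = 3 / 2                           # nearest neighbors per information bit

# Assumed rule of thumb: each factor of 2 in K_b costs about 0.2 dB.
gamma_eff_dB = gamma_c_dB - 0.2 * math.log2(K_b)   # roughly 1.1 dB, i.e. ~1 dB
```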
487
00:31:39,220 --> 00:31:55,730
488
00:31:55,730 --> 00:31:56,980
Are there any questions?
489
00:31:56,980 --> 00:32:00,970
490
00:32:00,970 --> 00:32:02,800
OK, so it might seem that
this is still not a
491
00:32:02,800 --> 00:32:03,760
very impressive number.
492
00:32:03,760 --> 00:32:05,100
Yes, you had a question.
493
00:32:05,100 --> 00:32:07,810
AUDIENCE: What was it about this
constellation that gave
494
00:32:07,810 --> 00:32:09,082
us coding gain?
495
00:32:09,082 --> 00:32:12,390
Was it the fact that we went
to higher dimensions?
496
00:32:12,390 --> 00:32:15,240
PROFESSOR: Right, so usually
you do get a coding gain if
497
00:32:15,240 --> 00:32:20,400
you trade-off minimum distance
with spectral efficiency.
498
00:32:20,400 --> 00:32:22,820
So that's usually the case.
499
00:32:22,820 --> 00:32:26,200
So that's why I presented you
an example of that case.
500
00:32:26,200 --> 00:32:28,180
It's just that this was a very
special case where you end up
501
00:32:28,180 --> 00:32:30,766
with a 2-PAM constellation
to start with.
502
00:32:30,766 --> 00:32:36,010
503
00:32:36,010 --> 00:32:42,440
OK, so the comment I was going
to make is that 1.2 dB still
504
00:32:42,440 --> 00:32:44,510
might not be like a very
impressive number.
505
00:32:44,510 --> 00:32:47,230
I mean after all the hard
work, you just get 1 dB
506
00:32:47,230 --> 00:32:48,660
effective coding gain.
507
00:32:48,660 --> 00:32:51,290
So the question to ask at this
point is are there any
508
00:32:51,290 --> 00:32:54,980
constellations that have much
higher coding gains?
509
00:32:54,980 --> 00:32:58,440
In particular, other
constellations that come close
510
00:32:58,440 --> 00:33:01,340
to the Shannon limit at all.
511
00:33:01,340 --> 00:33:04,280
And we'll look at one
interesting class of signal
512
00:33:04,280 --> 00:33:06,640
sets, which are known as
orthogonal signal sets.
513
00:33:06,640 --> 00:33:18,540
514
00:33:18,540 --> 00:33:22,140
And you probably saw these
examples back in 6.450, but
515
00:33:22,140 --> 00:33:24,070
we're just reviewing
it again here.
516
00:33:24,070 --> 00:33:27,620
And these signal sets have a
property that as you increase
517
00:33:27,620 --> 00:33:30,960
the number of dimensions, you
come arbitrarily close to the
518
00:33:30,960 --> 00:33:32,770
Shannon limit.
519
00:33:32,770 --> 00:33:37,110
So the definition of the signal
sets is your A --
520
00:33:37,110 --> 00:33:43,450
it's a collection of signals aj,
where j goes from 1 to M
521
00:33:43,450 --> 00:33:52,450
such that the inner product
between ai and aj is E(A) times
522
00:33:52,450 --> 00:33:56,030
delta_i,j, meaning that if i
is not equal to j, the two
523
00:33:56,030 --> 00:33:56,920
signals are orthogonal.
524
00:33:56,920 --> 00:34:01,560
And if i equals j, the
energy is E(A).
525
00:34:01,560 --> 00:34:05,420
Geometrically, we
have two points.
526
00:34:05,420 --> 00:34:07,220
The two points can be
thought of as two
527
00:34:07,220 --> 00:34:09,870
points on the two axes.
528
00:34:09,870 --> 00:34:12,650
So this is a case
when M equals 2.
529
00:34:12,650 --> 00:34:16,139
When M equals 3, the three
points will lie on those three
530
00:34:16,139 --> 00:34:17,389
corresponding axes.
531
00:34:17,389 --> 00:34:19,650
532
00:34:19,650 --> 00:34:22,870
So this is the case when
M equals 3 and so on.
533
00:34:22,870 --> 00:34:25,120
So generally when you have
M points, your signal
534
00:34:25,120 --> 00:34:28,020
set lies in M dimensions, and
each point can be thought
535
00:34:28,020 --> 00:34:31,860
of as a point on one
of the axes.
536
00:34:31,860 --> 00:34:36,760
So if you look at the distance
between any two points, it's
537
00:34:36,760 --> 00:34:41,850
the norm of ai minus
aj squared.
538
00:34:41,850 --> 00:34:46,210
I can simplify this as
the norm of ai squared plus
539
00:34:46,210 --> 00:34:50,050
the norm of aj squared because
the inner product
540
00:34:50,050 --> 00:34:52,630
is going to be zero.
541
00:34:52,630 --> 00:34:55,760
OK, now ai squared is
going to be E(A).
542
00:34:55,760 --> 00:34:57,810
aj squared is going
to be E(A).
543
00:34:57,810 --> 00:35:01,280
So this is 2E(A).
544
00:35:01,280 --> 00:35:03,440
So what we're saying here
is that every point is
545
00:35:03,440 --> 00:35:06,880
equidistant from every other
point, and the square of the
546
00:35:06,880 --> 00:35:09,940
distance is 2 E(A).
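One concrete way to see this is to realize the orthogonal signal set as scaled standard basis vectors -- a choice of representation assumed here for illustration, with M and E(A) picked arbitrarily -- and check that distinct points are orthogonal and at squared distance 2 E(A).

```python
import math

# Sketch: M-ary orthogonal signal set as scaled standard basis vectors,
# each with energy E_A, so <a_i, a_j> = E_A * delta_ij.
M, E_A = 8, 4.0
A = [[math.sqrt(E_A) if k == j else 0.0 for k in range(M)] for j in range(M)]

def inner(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

def dist2(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

# Distinct points are orthogonal, and every pair is at squared distance 2 E_A.
cross = [inner(A[i], A[j]) for i in range(M) for j in range(M) if i != j]
d2 = [dist2(A[i], A[j]) for i in range(M) for j in range(M) if i != j]
```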
547
00:35:09,940 --> 00:35:11,190
OK?
548
00:35:11,190 --> 00:35:13,440
549
00:35:13,440 --> 00:35:18,230
So in other words, the average
number of nearest neighbors is
550
00:35:18,230 --> 00:35:20,160
always going to be
M minus 1.
551
00:35:20,160 --> 00:35:24,310
552
00:35:24,310 --> 00:35:27,700
So let us calculate the nominal
coding gain for this
553
00:35:27,700 --> 00:35:28,950
constellation.
554
00:35:28,950 --> 00:35:32,190
555
00:35:32,190 --> 00:35:38,460
That's going to be d_min
squared over 4 Eb.
556
00:35:38,460 --> 00:35:41,590
557
00:35:41,590 --> 00:35:49,160
d_min squared is 2E(A)
over 4 Eb.
558
00:35:49,160 --> 00:35:53,090
Well, the energy per bit is the
energy per symbol, which
559
00:35:53,090 --> 00:35:56,585
is E(A) over the number of bits
per symbol, which is log
560
00:35:56,585 --> 00:35:59,710
M to the base 2.
561
00:35:59,710 --> 00:36:09,520
And this I'm going to write
here is 1/2 log
562
00:36:09,520 --> 00:36:11,890
M to the base 2.
563
00:36:11,890 --> 00:36:15,622
So as I increase my M, my
number of dimensions, my
564
00:36:15,622 --> 00:36:17,155
nominal coding gain
goes to infinity.
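The calculation on the board can be packaged as a one-line formula -- note that E(A) cancels, leaving (1/2) log2 M, which grows without bound as M increases. A minimal sketch:

```python
import math

def gamma_c_orthogonal(M, E_A=1.0):
    """Nominal coding gain of an M-ary orthogonal signal set."""
    d_min2 = 2 * E_A          # every pair is at squared distance 2 E_A
    Eb = E_A / math.log2(M)   # energy per symbol spread over log2(M) bits
    return d_min2 / (4 * Eb)  # E_A cancels: equals (1/2) * log2(M)

# The gain keeps growing as M grows.
gains = [gamma_c_orthogonal(M) for M in (2, 4, 16, 256)]
```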
565
00:36:17,155 --> 00:36:18,405
OK?
566
00:36:18,405 --> 00:36:21,240
567
00:36:21,240 --> 00:36:23,310
What do we expect for the
effective coding gain?
568
00:36:23,310 --> 00:36:24,560
Will it also go to infinity?
569
00:36:24,560 --> 00:36:30,974
570
00:36:30,974 --> 00:36:32,408
AUDIENCE: It approaches
the Shannon limit.
571
00:36:32,408 --> 00:36:34,300
PROFESSOR: Well, we don't know
whether it approaches the
572
00:36:34,300 --> 00:36:36,115
Shannon limit as of yet,
but we know it
573
00:36:36,115 --> 00:36:37,250
cannot go to infinity.
574
00:36:37,250 --> 00:36:38,546
It's upper bounded by
the Shannon limit.
575
00:36:38,546 --> 00:36:40,220
Right?
576
00:36:40,220 --> 00:36:44,090
So the effective coding gain
cannot really become
577
00:36:44,090 --> 00:36:48,220
unbounded, and what's really
happening here?
578
00:36:48,220 --> 00:36:49,470
AUDIENCE: [INAUDIBLE]
579
00:36:49,470 --> 00:36:51,890
580
00:36:51,890 --> 00:36:54,280
PROFESSOR: Right, but why does
the effective coding gain stay
581
00:36:54,280 --> 00:36:56,610
bounded and the nominal
coding gain blow up?
582
00:36:56,610 --> 00:36:58,680
AUDIENCE: [INAUDIBLE]
583
00:36:58,680 --> 00:37:01,010
PROFESSOR: Right, the number
of nearest neighbors also
584
00:37:01,010 --> 00:37:02,440
increases with M, right?
585
00:37:02,440 --> 00:37:04,359
Not just the nominal
coding gain.
586
00:37:04,359 --> 00:38:32,330
587
00:38:32,330 --> 00:38:34,570
So let's write that down.
588
00:38:34,570 --> 00:38:37,460
If I look at the number of
nearest neighbors per
589
00:38:37,460 --> 00:38:40,690
information bit --
590
00:38:40,690 --> 00:38:41,510
yes?
591
00:38:41,510 --> 00:38:44,480
AUDIENCE: These don't
have 0 mean, right?
592
00:38:44,480 --> 00:38:45,790
PROFESSOR: Right, they
don't have 0 mean.
593
00:38:45,790 --> 00:38:47,040
That's a good point.
594
00:38:47,040 --> 00:38:51,420
595
00:38:51,420 --> 00:38:56,170
PROFESSOR: Let's see, what
happens to the mean as M goes
596
00:38:56,170 --> 00:38:57,420
to infinity?
597
00:38:57,420 --> 00:39:00,140
598
00:39:00,140 --> 00:39:02,470
If we look at what happens to
the mean, it will be 1 over M.
599
00:39:02,470 --> 00:39:05,410
You find the mean of this guy,
it's 1 over 2 times the
600
00:39:05,410 --> 00:39:09,300
coordinates -- 1 over 2 times
root E(A), root E(A).
601
00:39:09,300 --> 00:39:10,260
OK?
602
00:39:10,260 --> 00:39:12,340
If you look at three dimensions,
it's 1 over 3
603
00:39:12,340 --> 00:39:14,050
times root E(A), root E(A), root E(A).
604
00:39:14,050 --> 00:39:16,560
So in M dimensions, it's going
to be 1 over M times all these
605
00:39:16,560 --> 00:39:17,180
coordinates.
606
00:39:17,180 --> 00:39:19,930
If we look at the mean squared,
it goes to 0 as M
607
00:39:19,930 --> 00:39:21,042
goes to infinity.
608
00:39:21,042 --> 00:39:24,390
So that's a good point.
609
00:39:24,390 --> 00:39:26,276
So the mean does go
to 0, and so we
610
00:39:26,276 --> 00:39:27,910
approach the Shannon limit.
611
00:39:27,910 --> 00:39:30,410
612
00:39:30,410 --> 00:39:38,260
So we have K_b of A. What's the
613
00:39:38,260 --> 00:39:39,610
number of nearest neighbors?
614
00:39:39,610 --> 00:39:47,990
We saw it was M minus 1 over
log M. And that goes to
615
00:39:47,990 --> 00:39:51,990
infinity as M goes
to infinity.
616
00:39:51,990 --> 00:39:55,980
OK, so the average number
of nearest neighbors per
617
00:39:55,980 --> 00:40:00,590
information bit also
go to infinity
618
00:40:00,590 --> 00:40:02,930
as M goes to infinity.
619
00:40:02,930 --> 00:40:07,060
And if we look at the mean --
just to write down the comment
620
00:40:07,060 --> 00:40:09,100
that was made.
621
00:40:09,100 --> 00:40:13,120
If we look at the mean of this
constellation, that's going to
622
00:40:13,120 --> 00:40:21,960
be 1 over the number of points
times root E(A) root
623
00:40:21,960 --> 00:40:26,590
E(A) and root E(A).
624
00:40:26,590 --> 00:40:28,170
So there are M coordinates.
625
00:40:28,170 --> 00:40:30,760
So that's going to be the mean
if you believe this is the
626
00:40:30,760 --> 00:40:33,300
coordinate axis here.
627
00:40:33,300 --> 00:40:39,550
And if we look at the norm
of m(A) squared, then it's
628
00:40:39,550 --> 00:40:46,310
going to be E(A) over M. And
as M goes to infinity,
629
00:40:46,310 --> 00:40:47,560
this goes to 0.
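The comment about the mean can be checked numerically. A minimal sketch, using the same basis-vector representation of the orthogonal set as above (an assumption for illustration): each of the M coordinates of the mean is root E(A) over M, so the squared norm is E(A) over M, which vanishes as M grows.

```python
import math

def mean_norm2(M, E_A=1.0):
    """Squared norm of the mean of an M-ary orthogonal signal set."""
    # m(A) has M coordinates, each sqrt(E_A)/M, so ||m(A)||^2 = E_A / M.
    coord = math.sqrt(E_A) / M
    return M * coord ** 2

# E_A / M shrinks toward 0 as M goes to infinity.
vals = [mean_norm2(M) for M in (2, 4, 100)]
```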
630
00:40:47,560 --> 00:40:54,508
631
00:40:54,508 --> 00:40:57,590
And because the mean goes to
0, it's not unreasonable to
632
00:40:57,590 --> 00:41:00,825
expect that the performance does
approach the ultimate
633
00:41:00,825 --> 00:41:03,200
Shannon limit.
634
00:41:03,200 --> 00:41:09,060
OK, so so far we have looked at
K_b of A and gamma_c of A.
635
00:41:09,060 --> 00:41:10,620
If you look at the
effective coding
636
00:41:10,620 --> 00:41:13,640
gain, it is still bounded.
637
00:41:13,640 --> 00:41:16,930
638
00:41:16,930 --> 00:41:18,360
It's always bounded.
639
00:41:18,360 --> 00:41:22,280
It is not clear at this point
what really happens to it.
640
00:41:22,280 --> 00:41:25,750
And in fact, it's not quite
straightforward to actually
641
00:41:25,750 --> 00:41:28,440
show what happens to the
effective coding gain.
642
00:41:28,440 --> 00:41:32,260
If you have taken 6.450, then
Professor Bob Gallager really
643
00:41:32,260 --> 00:41:35,470
goes into the details of proving
how the effective
644
00:41:35,470 --> 00:41:38,490
coding gain for this signal set
indeed does approach the
645
00:41:38,490 --> 00:41:40,560
Shannon limit.
646
00:41:40,560 --> 00:41:50,690
But it can be shown that the
effective coding gain does
647
00:41:50,690 --> 00:41:56,790
approach 11.2 dB as M
goes to infinity.
648
00:41:56,790 --> 00:41:59,560
The main trick here is that
union bound is usually quite
649
00:41:59,560 --> 00:42:01,770
weak if the probability
of error is small.
650
00:42:01,770 --> 00:42:04,450
So we replace the probability of
error just by 1, instead of
651
00:42:04,450 --> 00:42:06,240
the union bound,
at some point.
652
00:42:06,240 --> 00:42:10,300
And if you do that change, then
things work out nicely.
653
00:42:10,300 --> 00:42:13,520
But in this course, we will
just assert that the
654
00:42:13,520 --> 00:42:16,040
orthogonal signal sets
do achieve the
655
00:42:16,040 --> 00:42:17,580
ultimate Shannon limit.
656
00:42:17,580 --> 00:42:22,180
And you can see 6.450
notes for the proof.
657
00:42:22,180 --> 00:42:30,560
658
00:42:30,560 --> 00:42:33,570
So that's the assertion that
we are making right now.
659
00:42:33,570 --> 00:42:36,910
So let's step back and
take a look at what's
660
00:42:36,910 --> 00:42:38,740
happening so far.
661
00:42:38,740 --> 00:42:41,990
We have a class of signal sets
which are conceptually quite
662
00:42:41,990 --> 00:42:43,290
easy to describe.
663
00:42:43,290 --> 00:42:45,430
They are just orthogonal
signal sets.
664
00:42:45,430 --> 00:42:47,870
They are not too hard
to analyze, and they
665
00:42:47,870 --> 00:42:51,460
asymptotically approach the
ultimate Shannon limit.
666
00:42:51,460 --> 00:42:52,730
So this seems quite nice.
667
00:42:52,730 --> 00:42:56,010
So why not we just
implement them?
668
00:42:56,010 --> 00:42:59,180
Now it turns out that these are not
very common in practice.
669
00:42:59,180 --> 00:43:01,850
They are not implemented as
often as you might think when
670
00:43:01,850 --> 00:43:03,740
you first look at
them, because
671
00:43:03,740 --> 00:43:05,840
they have some drawbacks.
672
00:43:05,840 --> 00:43:09,610
And there are a couple of
drawbacks which make them much
673
00:43:09,610 --> 00:43:12,530
less appealing than what you
might otherwise think.
674
00:43:12,530 --> 00:43:15,450
675
00:43:15,450 --> 00:43:18,820
And first of all, what's the
spectral efficiency for these
676
00:43:18,820 --> 00:43:20,070
signal sets?
677
00:43:20,070 --> 00:43:23,316
678
00:43:23,316 --> 00:43:24,230
AUDIENCE: 0, right?
679
00:43:24,230 --> 00:43:25,150
PROFESSOR: It goes to 0.
680
00:43:25,150 --> 00:43:27,050
And how does it go to 0?
681
00:43:27,050 --> 00:43:33,370
It's 2 log M to the base 2 over
M. As M goes to infinity,
682
00:43:33,370 --> 00:43:34,960
this goes to 0.
683
00:43:34,960 --> 00:43:39,150
And in fact, it goes
to 0 quite sharply.
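How sharply it goes to 0 is easy to see numerically; a quick sketch of the spectral efficiency 2 log2(M) / M at a few sizes of M:

```python
import math

def rho_orthogonal(M):
    """Spectral efficiency (bits per two dimensions) of M-ary orthogonal signaling."""
    return 2 * math.log2(M) / M

# The log grows slowly while the dimension count grows linearly,
# so rho collapses quickly as M increases.
rhos = [rho_orthogonal(M) for M in (2, 16, 1024)]
```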
684
00:43:39,150 --> 00:43:41,740
Now that's fine, but if you
want to approach Shannon
685
00:43:41,740 --> 00:43:45,300
limit, our spectral efficiency
should indeed go to 0.
686
00:43:45,300 --> 00:43:48,520
But if you're looking at the
system design problem, what do
687
00:43:48,520 --> 00:43:50,050
you really want?
688
00:43:50,050 --> 00:43:52,330
You have a certain effective
coding gain that you have in
689
00:43:52,330 --> 00:43:55,030
mind, and suppose you are
presented with a list of
690
00:43:55,030 --> 00:43:57,960
several different signal sets
that achieve this effective
691
00:43:57,960 --> 00:43:59,290
coding gain.
692
00:43:59,290 --> 00:44:02,700
And your job is to pick one of
these possible signal sets.
693
00:44:02,700 --> 00:44:04,320
Which one will you pick?
694
00:44:04,320 --> 00:44:06,430
Well, there are two things
you will look for, right?
695
00:44:06,430 --> 00:44:08,220
You will look for the spectral
efficiency that these
696
00:44:08,220 --> 00:44:12,020
different signal sets require
or achieve in order to get
697
00:44:12,020 --> 00:44:13,800
this effective coding gain.
698
00:44:13,800 --> 00:44:16,650
And the higher the spectral
efficiency the better.
699
00:44:16,650 --> 00:44:18,660
And secondly, you will also
want to look at the
700
00:44:18,660 --> 00:44:20,600
implementation complexity.
701
00:44:20,600 --> 00:44:22,760
You do not want something which
really takes a lot of
702
00:44:22,760 --> 00:44:25,110
time to decode.
703
00:44:25,110 --> 00:44:25,195
OK?
704
00:44:25,195 --> 00:44:27,770
So it turns out that because the
spectral efficiency goes
705
00:44:27,770 --> 00:44:31,180
to 0 so sharply, it's not very
surprising that there are
706
00:44:31,180 --> 00:44:34,380
other codes that have the same
effective coding gain but
707
00:44:34,380 --> 00:44:36,480
higher spectral efficiency.
708
00:44:36,480 --> 00:44:55,460
So there are binary codes that
have a higher spectral
709
00:44:55,460 --> 00:45:03,050
efficiency for same effective
coding gain.
710
00:45:03,050 --> 00:45:05,860
And so they would be a natural
choice over the orthogonal
711
00:45:05,860 --> 00:45:06,960
signal set.
712
00:45:06,960 --> 00:45:08,130
Yes?
713
00:45:08,130 --> 00:45:09,740
AUDIENCE: [INAUDIBLE]
high-spectrum efficiency to
714
00:45:09,740 --> 00:45:11,350
mean bandwidth?
715
00:45:11,350 --> 00:45:12,990
PROFESSOR: Right, because in
practice, bandwidth is never
716
00:45:12,990 --> 00:45:14,320
really infinite, right?
717
00:45:14,320 --> 00:45:16,850
So if you are achieving
the same --
718
00:45:16,850 --> 00:45:18,440
the first objective is
to have a certain
719
00:45:18,440 --> 00:45:19,530
effective coding gain.
720
00:45:19,530 --> 00:45:21,680
If you do have that, you want to
pick something which has a
721
00:45:21,680 --> 00:45:24,090
higher spectral efficiency.
722
00:45:24,090 --> 00:45:28,120
And indeed, the subject of
chapters six and eight is to
723
00:45:28,120 --> 00:45:30,850
come up with binary codes that
have a much better performance
724
00:45:30,850 --> 00:45:32,970
than the orthogonal signal
set for a given
725
00:45:32,970 --> 00:45:34,430
effective coding gain.
726
00:45:34,430 --> 00:45:37,240
Chapter six deals with the
block codes, whereas chapter
727
00:45:37,240 --> 00:45:39,880
eight deals with convolutional
codes.
728
00:45:39,880 --> 00:45:43,220
And chapter seven basically
develops all the finite field
729
00:45:43,220 --> 00:45:46,410
theory that you require to study
convolutional codes.
730
00:45:46,410 --> 00:45:49,430
So the point is, in the
subsequent lectures we will be
731
00:45:49,430 --> 00:45:52,510
focusing a lot on improving
the performance over
732
00:45:52,510 --> 00:45:54,940
orthogonal signal sets.
733
00:45:54,940 --> 00:45:57,775
The second point I mentioned was
implementation complexity.
734
00:45:57,775 --> 00:46:07,600
735
00:46:07,600 --> 00:46:10,670
Now one particular way of
implementing orthogonal codes,
736
00:46:10,670 --> 00:46:15,100
which you saw in the first
homework, was using this idea
737
00:46:15,100 --> 00:46:17,100
of Hadamard transforms, right?
738
00:46:17,100 --> 00:46:19,940
You have a Hadamard matrix,
which is an M by M matrix.
739
00:46:19,940 --> 00:46:22,260
The rows of the Hadamard matrix
are your orthogonal
740
00:46:22,260 --> 00:46:25,400
signal sets, which
are elements aj.
741
00:46:25,400 --> 00:46:27,950
And at the receiver, what you
would do is when you get the
742
00:46:27,950 --> 00:46:31,730
signal y, you simply multiply
by the Hadamard matrix and
743
00:46:31,730 --> 00:46:34,050
look at the coordinate
which is the largest.
744
00:46:34,050 --> 00:46:35,180
So this is something
you did in the
745
00:46:35,180 --> 00:46:37,120
first homework exercise.
746
00:46:37,120 --> 00:46:40,680
And how many computations did
you require for an orthogonal
747
00:46:40,680 --> 00:46:41,960
signal set of size M?
748
00:46:41,960 --> 00:46:46,010
749
00:46:46,010 --> 00:46:47,260
AUDIENCE: [INAUDIBLE]
750
00:46:47,260 --> 00:46:51,420
751
00:46:51,420 --> 00:46:51,506
PROFESSOR: Sorry?
752
00:46:51,506 --> 00:46:54,350
OK, so I meant M being
the size of the
753
00:46:54,350 --> 00:46:55,792
orthogonal signal set.
754
00:46:55,792 --> 00:46:59,030
AUDIENCE: Yeah, [INAUDIBLE]
755
00:46:59,030 --> 00:47:01,778
PROFESSOR: Right.
756
00:47:01,778 --> 00:47:03,590
AUDIENCE: 2 to the
M by 2 to the M.
757
00:47:03,590 --> 00:47:05,330
PROFESSOR: If it is 2 to the M
by 2 to the M, it is M times 2
758
00:47:05,330 --> 00:47:09,470
to the M. But if there are M
orthogonal signals, then
759
00:47:09,470 --> 00:47:17,950
it will be M log M, where M is
the size of the signal set.
760
00:47:17,950 --> 00:47:19,440
So that's the number
of computations
761
00:47:19,440 --> 00:47:20,970
that you would require.
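One way to see that M log M count is the fast Walsh-Hadamard transform. The sketch below -- a noiseless channel and a Sylvester-ordered Hadamard matrix, both assumptions for illustration -- correlates the received word against all M rows using about M log2(M) additions and picks the coordinate with the largest correlation.

```python
def fwht(v):
    """Fast Walsh-Hadamard transform: about M * log2(M) additions/subtractions."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

def hadamard_row(M, j):
    # Sylvester construction: H[j][k] = (-1)^popcount(j & k), M a power of 2.
    return [(-1) ** bin(j & k).count("1") for k in range(M)]

M = 8
j_sent = 5
y = [float(s) for s in hadamard_row(M, j_sent)]  # received word, no noise
corr = fwht(y)                                   # correlations with all M rows
j_hat = max(range(M), key=lambda j: corr[j])     # largest coordinate decides
```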
762
00:47:20,970 --> 00:47:26,640
Now, the question to ask is, is
this fast or is this slow?
763
00:47:26,640 --> 00:47:28,670
Is this too many computations
or is this too little
764
00:47:28,670 --> 00:47:30,090
computations?
765
00:47:30,090 --> 00:47:32,790
And if you're looking in the
power-limited regime, what we
766
00:47:32,790 --> 00:47:36,270
really want is to see the
complexity per information bit
767
00:47:36,270 --> 00:47:38,810
because we normalize things
per information bit.
768
00:47:38,810 --> 00:47:41,730
Now, how many bits are sent
when you have orthogonal
769
00:47:41,730 --> 00:47:43,510
signal set of size M?
770
00:47:43,510 --> 00:47:45,050
We have log M bits, right?
771
00:47:45,050 --> 00:47:47,290
So this quantity is actually
exponential in the number of
772
00:47:47,290 --> 00:47:50,360
transmitted bits that we are
sending, and so in fact this
773
00:47:50,360 --> 00:47:52,385
is quite slow.
774
00:47:52,385 --> 00:47:54,480
Or in other words, we are saying
we require M
775
00:47:54,480 --> 00:47:57,640
times 2 to the M computations,
where M is the number of
776
00:47:57,640 --> 00:48:01,295
information bits that
we are sending.
777
00:48:01,295 --> 00:48:04,220
AUDIENCE: So if M is the size
of your [INAUDIBLE]
778
00:48:04,220 --> 00:48:06,127
then you have log M bits?
779
00:48:06,127 --> 00:48:07,558
PROFESSOR: Right.
780
00:48:07,558 --> 00:48:09,470
AUDIENCE: So M log M
over log M is M.
781
00:48:09,470 --> 00:48:11,195
PROFESSOR: Why are
you adding by M?
782
00:48:11,195 --> 00:48:13,690
AUDIENCE: You need M log M
operations to decode a symbol.
783
00:48:13,690 --> 00:48:15,190
PROFESSOR: To decode a symbol.
784
00:48:15,190 --> 00:48:18,020
AUDIENCE: And each symbol
gives you log M bits.
785
00:48:18,020 --> 00:48:21,650
So you have M operations
per bit.
786
00:48:21,650 --> 00:48:24,080
PROFESSOR: Well, when you send,
say, log M bits, right,
787
00:48:24,080 --> 00:48:26,290
you can only send one of
the M possible symbols.
788
00:48:26,290 --> 00:48:29,860
789
00:48:29,860 --> 00:48:32,100
And when you want to decode the
symbol, you would end up
790
00:48:32,100 --> 00:48:33,350
recovering log M bits.
791
00:48:33,350 --> 00:48:35,630
792
00:48:35,630 --> 00:48:38,928
So you should not be
dividing by M here.
793
00:48:38,928 --> 00:48:42,030
AUDIENCE: You're dividing by log
M because you want to do
794
00:48:42,030 --> 00:48:43,730
it per bit.
795
00:48:43,730 --> 00:48:44,710
PROFESSOR: All right.
796
00:48:44,710 --> 00:48:44,960
You're right.
797
00:48:44,960 --> 00:48:47,330
So you're dividing by log M.
You have M log M operations.
798
00:48:47,330 --> 00:48:51,276
You're dividing by log M,
so you still have --
799
00:48:51,276 --> 00:48:55,240
AUDIENCE: You have M
calculations per bit.
800
00:48:55,240 --> 00:48:56,490
[INAUDIBLE]
801
00:48:56,490 --> 00:49:02,870
802
00:49:02,870 --> 00:49:06,040
PROFESSOR: So we are sending
-- so I need this many
803
00:49:06,040 --> 00:49:09,990
computations to get
log M bits.
804
00:49:09,990 --> 00:49:12,590
805
00:49:12,590 --> 00:49:17,359
So is this expression
exponential in log M or not?
806
00:49:17,359 --> 00:49:20,670
807
00:49:20,670 --> 00:49:23,310
So that's all I was saying.
808
00:49:23,310 --> 00:49:25,350
AUDIENCE: I just put it another
way that you could
809
00:49:25,350 --> 00:49:27,170
divide by log M.
810
00:49:27,170 --> 00:49:28,440
PROFESSOR: M per information
bit, right.
811
00:49:28,440 --> 00:49:32,900
812
00:49:32,900 --> 00:49:35,050
OK, are there any questions?
813
00:49:35,050 --> 00:49:37,510
I mean we are both agreeing to
the fact that these are more
814
00:49:37,510 --> 00:49:39,030
computations than
you would want.
815
00:49:39,030 --> 00:49:43,730
And there exist other decoding
algorithms for which the
816
00:49:43,730 --> 00:49:46,605
number of computations is much
smaller than exponential.
817
00:49:46,605 --> 00:49:54,290
818
00:49:54,290 --> 00:49:58,280
OK, so let's next look at the
bandwidth-limited regime.
819
00:49:58,280 --> 00:50:12,110
820
00:50:12,110 --> 00:50:15,500
Well, actually before that I
wanted to mention a couple of
821
00:50:15,500 --> 00:50:18,370
other signal sets as well.
822
00:50:18,370 --> 00:50:20,780
So there are two other important
class of signal sets
823
00:50:20,780 --> 00:50:24,050
which are related to the
orthogonal signal sets.
824
00:50:24,050 --> 00:50:28,090
The first is this class of
simplex signal sets.
825
00:50:28,090 --> 00:50:31,580
826
00:50:31,580 --> 00:50:41,065
And the other class is
bi-orthogonal signal sets.
827
00:50:41,065 --> 00:50:46,960
828
00:50:46,960 --> 00:50:49,910
So the idea behind simplex
signal sets goes back to the
829
00:50:49,910 --> 00:50:53,300
observation that was made that
we have a non-zero mean for
830
00:50:53,300 --> 00:50:54,960
this signal set here.
831
00:50:54,960 --> 00:50:57,850
So if we want, we can indeed
subtract off the mean and get a
832
00:50:57,850 --> 00:51:00,430
performance which is
better than the
833
00:51:00,430 --> 00:51:02,070
orthogonal signal set.
834
00:51:02,070 --> 00:51:05,450
So in particular if I subtract
off the mean here, I get a
835
00:51:05,450 --> 00:51:09,560
signal set, which
is like this.
836
00:51:09,560 --> 00:51:14,090
This is the simplex signal
set when M equals 2.
837
00:51:14,090 --> 00:51:18,730
When M equals 3, I'm going to
subtract of the mean from this
838
00:51:18,730 --> 00:51:19,390
plane here.
839
00:51:19,390 --> 00:51:21,980
The points lie on this
particular plane.
840
00:51:21,980 --> 00:51:25,570
And what I end up with is
an equilateral triangle.
841
00:51:25,570 --> 00:51:29,580
So this is the case
when M equals 3.
842
00:51:29,580 --> 00:51:32,960
What do you think M equals
4 would look like?
843
00:51:32,960 --> 00:51:36,020
A tetrahedron right here.
844
00:51:36,020 --> 00:51:39,930
That's a simplex signal set.
845
00:51:39,930 --> 00:51:45,410
So basically, it's also not too
hard to write an algebraic
846
00:51:45,410 --> 00:51:47,820
expression for these
signal sets.
847
00:51:47,820 --> 00:51:55,720
A prime is going to be
A minus m(A). You
848
00:51:55,720 --> 00:52:00,590
subtract off the mean from your
orthogonal signal set.
849
00:52:00,590 --> 00:52:03,460
If I want to write it in terms
of inner products, the inner
850
00:52:03,460 --> 00:52:13,450
product between ai and aj is
given by E(A) if i equals j.
851
00:52:13,450 --> 00:52:21,610
And it's I believe minus of 1
over M minus 1 times E(A) if i
852
00:52:21,610 --> 00:52:23,220
is not equal to j.
853
00:52:23,220 --> 00:52:26,340
So this is the inner product
between ai and aj.
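That inner-product claim can be verified numerically. A minimal sketch, again assuming the basis-vector representation of the orthogonal set for illustration: subtracting the mean shrinks the per-point energy to (M-1)/M times E(A), and the cross inner product between distinct points comes out to minus that new energy over M minus 1 -- so the E(A) in the formula is the simplex set's own energy.

```python
import math

# Build the simplex set by subtracting the mean of an orthogonal set.
M, E_A = 4, 1.0
A = [[math.sqrt(E_A) if k == j else 0.0 for k in range(M)] for j in range(M)]
mean = [sum(col) / M for col in zip(*A)]
A_prime = [[c - m for c, m in zip(a, mean)] for a in A]

def inner(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

E_prime = inner(A_prime[0], A_prime[0])  # energy of each simplex point
cross = inner(A_prime[0], A_prime[1])    # same value for every distinct pair
new_mean2 = sum(m ** 2 for m in [sum(col) / M for col in zip(*A_prime)])
```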
854
00:52:26,340 --> 00:52:28,850
And I mean there are also many
properties of this simplex
855
00:52:28,850 --> 00:52:31,652
signal set, but I believe that
that's going to be in your
856
00:52:31,652 --> 00:52:33,960
next homework exercise
so I'm not going
857
00:52:33,960 --> 00:52:35,780
into too many details.
858
00:52:35,780 --> 00:52:38,360
What's the spectral efficiency
in this case going to be?
859
00:52:38,360 --> 00:52:45,346
860
00:52:45,346 --> 00:52:46,850
AUDIENCE: I have a question.
861
00:52:46,850 --> 00:52:49,040
So what is a simplex?
862
00:52:49,040 --> 00:52:52,270
Is a simplex just a reference to
anything with [INAUDIBLE].
863
00:52:52,270 --> 00:52:54,990
PROFESSOR: Simplex signal set is
derived from the orthogonal
864
00:52:54,990 --> 00:52:57,560
signal set by subtracting
off its mean.
865
00:52:57,560 --> 00:52:59,935
AUDIENCE: Right, but in general
simplex means anything
866
00:52:59,935 --> 00:53:01,840
[INAUDIBLE]?
867
00:53:01,840 --> 00:53:03,170
PROFESSOR: I'm not
sure about that.
868
00:53:03,170 --> 00:53:08,940
869
00:53:08,940 --> 00:53:12,188
What's the spectral efficiency
going to be?
870
00:53:12,188 --> 00:53:13,438
AUDIENCE: [INAUDIBLE]
871
00:53:13,438 --> 00:53:15,840
872
00:53:15,840 --> 00:53:18,536
PROFESSOR: Well, almost.
873
00:53:18,536 --> 00:53:19,030
AUDIENCE: You use
one dimension.
874
00:53:19,030 --> 00:53:21,285
PROFESSOR: Right, you just
use one dimension.
875
00:53:21,285 --> 00:53:27,230
876
00:53:27,230 --> 00:53:30,650
OK, and there are other
properties that you'll be
877
00:53:30,650 --> 00:53:32,420
looking at it in the homework.
878
00:53:32,420 --> 00:53:34,692
Now the idea behind this --
879
00:53:34,692 --> 00:53:35,942
AUDIENCE: [INAUDIBLE]
880
00:53:35,942 --> 00:53:39,660
881
00:53:39,660 --> 00:53:44,390
PROFESSOR: This is 2 log
M over [INAUDIBLE].
882
00:53:44,390 --> 00:53:48,510
So the idea behind the
bi-orthogonal signal set is
883
00:53:48,510 --> 00:53:51,870
you start with an orthogonal
signal set, and for each
884
00:53:51,870 --> 00:53:54,240
point, you also put the
negative signal of it.
885
00:53:54,240 --> 00:53:56,830
So in addition to this, you will
have a point here, and
886
00:53:56,830 --> 00:53:58,520
you will have a point here.
887
00:53:58,520 --> 00:54:03,770
So the case in two dimensions, M
equals 2, you will have four
888
00:54:03,770 --> 00:54:09,270
points like this.
889
00:54:09,270 --> 00:54:16,510
So in this case you are
increasing the number of
890
00:54:16,510 --> 00:54:25,830
signal points by a
factor of two.
891
00:54:25,830 --> 00:54:30,440
892
00:54:30,440 --> 00:54:32,380
But this does come at
a price, right?
893
00:54:32,380 --> 00:54:35,280
Because if you're increasing the
number of signal points,
894
00:54:35,280 --> 00:54:37,450
the number of nearest neighbors
is also going to
895
00:54:37,450 --> 00:54:39,300
increase by a factor of two.
896
00:54:39,300 --> 00:54:41,410
In this case, you have two
nearest neighbors now.
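A minimal sketch of that doubling, with M equal to 2 and the basis-vector representation assumed as before: appending the negatives gives 2M points, the minimum squared distance stays 2 E(A), and each point picks up two nearest neighbors in this case (in general, 2 times M minus 1).

```python
import math

# Biorthogonal set: an orthogonal set plus the negatives of its points.
M, E_A = 2, 1.0
A = [[math.sqrt(E_A) if k == j else 0.0 for k in range(M)] for j in range(M)]
B = A + [[-c for c in a] for a in A]      # 2M points in M dimensions

def dist2(p, q):
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

# Distances from the first point to everything else: the antipodal point
# is farther away (4 E_A), so d_min^2 is still 2 E_A.
d2_from_first = sorted(dist2(B[0], q) for q in B[1:])
d_min2 = d2_from_first[0]
n_nearest = sum(1 for d in d2_from_first if abs(d - d_min2) < 1e-9)
```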
897
00:54:41,410 --> 00:55:07,210
898
00:55:07,210 --> 00:55:09,540
So you have both of the
effects going on.
899
00:55:09,540 --> 00:55:12,680
I mean ultimately, all the
signal sets, as M goes to
900
00:55:12,680 --> 00:55:15,570
infinity, are going to approach
the Shannon limit.
901
00:55:15,570 --> 00:55:18,510
So again, but they do suffer
from the same kind of
902
00:55:18,510 --> 00:55:21,270
drawbacks that we saw for the
orthogonal signal sets.
903
00:55:21,270 --> 00:55:23,830
So these are more of theoretical
interest as
904
00:55:23,830 --> 00:55:28,130
opposed to real, practical
implementation points.
905
00:55:28,130 --> 00:55:29,380
Any questions?
906
00:55:29,380 --> 00:55:33,540
907
00:55:33,540 --> 00:55:34,790
AUDIENCE: Minus [INAUDIBLE]
908
00:55:34,790 --> 00:55:37,310
909
00:55:37,310 --> 00:55:38,570
PROFESSOR: It's a bit
involved to show.
910
00:55:38,570 --> 00:55:42,760
I think you'll be showing it
in the homework exercise or
911
00:55:42,760 --> 00:55:46,530
for an exam problem
at one point.
912
00:55:46,530 --> 00:55:50,710
You basically start with the
orthogonal signal sets here,
913
00:55:50,710 --> 00:55:54,590
look at the inner product here,
and use the relation
914
00:55:54,590 --> 00:55:57,110
between the simplex signal sets
and the orthogonal signal
915
00:55:57,110 --> 00:55:59,770
sets to do a subtraction
of the mean.
916
00:55:59,770 --> 00:56:01,445
And then you can derive an
inner product expression.
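[Editor's sketch of that mean-subtraction step, not from the lecture; normalization alpha = 1 is my own. Subtracting the mean of the orthogonal set yields a simplex set whose pairwise inner products all come out equal.]

```python
import math

def simplex_from_orthogonal(M, alpha=1.0):
    """Subtract the mean of the orthogonal signal set from each point."""
    ortho = [[alpha if j == i else 0.0 for j in range(M)] for i in range(M)]
    mean = [sum(col) / M for col in zip(*ortho)]
    return [[c - m for c, m in zip(p, mean)] for p in ortho]

def inner(p, q):
    return sum(a * b for a, b in zip(p, q))

M = 4
s = simplex_from_orthogonal(M)
E = inner(s[0], s[0])             # energy per signal: alpha^2 * (M-1)/M
rho01 = inner(s[0], s[1]) / E     # normalized correlation: -1/(M-1)
print(E, rho01)
```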
917
00:56:01,445 --> 00:56:59,690
918
00:56:59,690 --> 00:57:02,265
OK, so let's now move on to the
bandwidth-limited regime.
919
00:57:02,265 --> 00:57:18,548
920
00:57:18,548 --> 00:57:20,990
OK, so the main difference
between --
921
00:57:20,990 --> 00:57:22,946
yes?
922
00:57:22,946 --> 00:57:24,196
AUDIENCE: [INAUDIBLE]
923
00:57:24,196 --> 00:57:29,220
924
00:57:29,220 --> 00:57:31,040
PROFESSOR: You mean here?
925
00:57:31,040 --> 00:57:32,120
For each --
926
00:57:32,120 --> 00:57:33,503
AUDIENCE: How far can
you [INAUDIBLE]?
927
00:57:33,503 --> 00:57:36,270
PROFESSOR: A factor of two.
928
00:57:36,270 --> 00:57:37,550
I said a factor of two, right?
929
00:57:37,550 --> 00:57:39,850
So if you have M points in the
orthogonal signal set, you
930
00:57:39,850 --> 00:57:44,690
have 2M points in the
bi-orthogonal signal set
931
00:57:44,690 --> 00:57:45,940
because for each --
932
00:57:45,940 --> 00:57:48,460
933
00:57:48,460 --> 00:57:52,470
AUDIENCE: In the orthogonal
sets you have 3.
934
00:57:52,470 --> 00:57:56,050
PROFESSOR: And in
this you have 6.
935
00:57:56,050 --> 00:58:00,290
So let's look at the
bandwidth-limited regime.
936
00:58:00,290 --> 00:58:03,480
The trade-off we care about is
the probability of error for
937
00:58:03,480 --> 00:58:06,260
two dimensions as a function
of SNR norm.
938
00:58:06,260 --> 00:58:09,410
939
00:58:09,410 --> 00:58:10,960
OK?
940
00:58:10,960 --> 00:58:15,530
The baseline scheme here
is your M-PAM scheme.
941
00:58:15,530 --> 00:58:18,210
And we showed a couple of
lectures ago that the
942
00:58:18,210 --> 00:58:22,460
probability of error for two
dimensions is approximately 4
943
00:58:22,460 --> 00:58:24,542
Q times root 3 SNR norm.
944
00:58:24,542 --> 00:58:25,792
OK?
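[Editor's sketch, not from the lecture: the baseline expression is easy to evaluate using Q(x) = erfc(x / sqrt(2)) / 2; the function names are my own.]

```python
import math

def Q(x):
    """Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ps_mpam(snr_norm):
    """Union-bound estimate of Ps(E) per two dimensions for M-PAM."""
    return 4 * Q(math.sqrt(3 * snr_norm))

# SNR_norm = 1 (0 dB) is the Shannon-limit intercept in this regime
print(ps_mpam(1.0))
```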
945
00:58:25,792 --> 00:58:30,420
946
00:58:30,420 --> 00:58:32,450
So let's plot the performance
graph.
947
00:58:32,450 --> 00:58:40,576
948
00:58:40,576 --> 00:58:43,810
So on the x-axis I
plot SNR norm.
949
00:58:43,810 --> 00:58:47,980
On the y-axis I plot Ps of E
again on a log-log scale.
950
00:58:47,980 --> 00:58:53,310
951
00:58:53,310 --> 00:58:55,770
This is ten to the negative six,
ten to the negative five,
952
00:58:55,770 --> 00:58:57,010
ten to the negative four,
ten to the negative
953
00:58:57,010 --> 00:58:59,140
three and so on.
954
00:58:59,140 --> 00:59:02,900
The intercept is at 0
dB, this point here.
955
00:59:02,900 --> 00:59:05,420
So this is your Shannon
limit in the
956
00:59:05,420 --> 00:59:08,320
bandwidth-limited regime.
957
00:59:08,320 --> 00:59:12,560
Now your performance curve is
going to be something like
958
00:59:12,560 --> 00:59:14,010
this for the M-PAM system.
959
00:59:14,010 --> 00:59:22,710
960
00:59:22,710 --> 00:59:28,480
And we want to basically
quantify now the effective
961
00:59:28,480 --> 00:59:30,650
coding gain in this regime.
962
00:59:30,650 --> 00:59:33,900
So it's the same story as in
the power-limited regime.
963
00:59:33,900 --> 00:59:39,610
We will start off with the
probability of error using the
964
00:59:39,610 --> 00:59:51,000
union bound estimate, which is
K_min of A times Q of d_min
965
00:59:51,000 --> 00:59:52,575
over 2 sigma.
966
00:59:52,575 --> 00:59:55,620
967
00:59:55,620 --> 01:00:00,460
Now Ps of E is the probability
of error for two dimensions.
968
01:00:00,460 --> 01:00:04,740
Assuming we have N dimensions
here, it is two times the
969
01:00:04,740 --> 01:00:10,400
probability of error over N.
So this is probability of
970
01:00:10,400 --> 01:00:12,140
error per symbol.
971
01:00:12,140 --> 01:00:15,120
We are dividing by the number of
dimensions and multiplying
972
01:00:15,120 --> 01:00:18,770
by two because it is
for two dimensions.
973
01:00:18,770 --> 01:00:28,980
And this is going to be 2 times
K_min of A over N, times
974
01:00:28,980 --> 01:00:32,230
Q of d_min over 2 sigma.
975
01:00:32,230 --> 01:00:39,530
976
01:00:39,530 --> 01:00:41,900
So now let's do the same trick
that we did in the
977
01:00:41,900 --> 01:00:43,150
power-limited regime.
978
01:00:43,150 --> 01:00:46,110
979
01:00:46,110 --> 01:00:49,860
We will write Ps of E
to be approximately --
980
01:00:49,860 --> 01:00:53,980
and instead of the right hand
side, I will write it as Ks of
981
01:00:53,980 --> 01:01:01,950
A times Q of root 3 SNR norm.
982
01:01:01,950 --> 01:01:06,390
983
01:01:06,390 --> 01:01:08,900
But because I have a factor of
four up there, I will also
984
01:01:08,900 --> 01:01:12,790
multiply and divide by 4.
985
01:01:12,790 --> 01:01:16,024
That's not going to
change anything.
986
01:01:16,024 --> 01:01:17,888
AUDIENCE: [INAUDIBLE]
987
01:01:17,888 --> 01:01:19,470
per two dimensions
or by symbol.
988
01:01:19,470 --> 01:01:20,925
PROFESSOR: Per two dimensions.
989
01:01:20,925 --> 01:01:23,510
And for bandwidth-limited
regime, we normalize
990
01:01:23,510 --> 01:01:24,600
everything by two dimensions.
991
01:01:24,600 --> 01:01:28,222
AUDIENCE: But for M-PAM we used
it for symbols, right?
992
01:01:28,222 --> 01:01:30,790
The calculations represent --
993
01:01:30,790 --> 01:01:32,040
PROFESSOR: No, it was
for two dimensions.
994
01:01:32,040 --> 01:01:37,720
995
01:01:37,720 --> 01:01:42,120
OK? So now let's compare
the two expressions.
996
01:01:42,120 --> 01:01:50,960
We have Ks of A, which we'll
define as two times K_min of A
997
01:01:50,960 --> 01:02:05,810
over N. And this is the number
of nearest neighbors per two
998
01:02:05,810 --> 01:02:08,740
dimensions.
999
01:02:08,740 --> 01:02:19,780
And I can write 3 times SNR norm
is d_min squared over 4
1000
01:02:19,780 --> 01:02:27,970
sigma squared, OK?
1001
01:02:27,970 --> 01:02:30,550
Actually, I was missing
this factor of gamma.
1002
01:02:30,550 --> 01:02:32,070
I knew I was missing
something.
1003
01:02:32,070 --> 01:02:34,970
It's not just 3 SNR norm.
1004
01:02:34,970 --> 01:02:39,090
It's 3 times this nominal coding
gain times SNR norm.
1005
01:02:39,090 --> 01:02:42,080
1006
01:02:42,080 --> 01:02:47,550
So 3 times SNR norm times
gamma_c of A here.
1007
01:02:47,550 --> 01:02:50,020
So this is d_min squared
over 2N_0.
1008
01:02:50,020 --> 01:02:54,850
1009
01:02:54,850 --> 01:03:02,470
So what I have for gamma_c
of A is d_min squared
1010
01:03:02,470 --> 01:03:06,070
over 6 N_0 times SNR norm.
1011
01:03:06,070 --> 01:03:08,740
1012
01:03:08,740 --> 01:03:12,480
Remember SNR norm is SNR over
2 to the rho minus 1.
1013
01:03:12,480 --> 01:03:15,730
It's normalized signal-to-noise
ratio.
1014
01:03:15,730 --> 01:03:20,890
So this is the d_min squared
times 2 to the rho minus 1
1015
01:03:20,890 --> 01:03:24,300
over 6 times N_0 times SNR.
1016
01:03:24,300 --> 01:03:27,390
But N_0 times SNR is just Es.
1017
01:03:27,390 --> 01:03:30,180
So the denominator is just 6 times Es.
1018
01:03:30,180 --> 01:03:31,740
So this is the expression
you have for the
1019
01:03:31,740 --> 01:03:32,990
nominal coding gain.
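[Editor's sanity check, not from the lecture: evaluating the two parameters for an uncoded M-PAM system (my own choice of M and spacing) should recover the baseline, gamma_c = 1 and Ks = 4.]

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def nominal_coding_gain(d_min_sq, rho, Es):
    """gamma_c(A) = d_min^2 * (2^rho - 1) / (6 Es), with Es per two dimensions."""
    return d_min_sq * (2 ** rho - 1) / (6 * Es)

def ube(Ks, gamma_c, snr_norm):
    """Union-bound estimate: Ps(E) ~ Ks(A) * Q(sqrt(3 * gamma_c * SNR_norm))."""
    return Ks * Q(math.sqrt(3 * gamma_c * snr_norm))

# uncoded M-PAM, M = 4, spacing d: one dimension per symbol
M, d = 4, 2.0
rho = 2 * math.log2(M)                   # bits per two dimensions
Es = 2 * d * d * (M * M - 1) / 12        # average energy per two dimensions
gc = nominal_coding_gain(d * d, rho, Es)
print(gc)                                # 1.0 -- the baseline has no coding gain
```

With gamma_c = 1 and Ks = 4, ube() reduces to the baseline 4 Q(sqrt(3 SNR_norm)) above.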
1020
01:03:32,990 --> 01:03:47,760
1021
01:03:47,760 --> 01:03:52,140
So now again, given a signal
set A, I can find those two
1022
01:03:52,140 --> 01:03:59,370
parameters, the nominal coding
gain and the number of nearest
1023
01:03:59,370 --> 01:04:01,440
neighbors per two dimensions.
1024
01:04:01,440 --> 01:04:05,686
And I can use those to
plot on the Ps of E
1025
01:04:05,686 --> 01:04:07,990
versus SNR norm curve.
1026
01:04:07,990 --> 01:04:09,405
Again, the exact same story.
1027
01:04:09,405 --> 01:04:12,940
1028
01:04:12,940 --> 01:04:17,920
If I have a certain nominal
coding gain, I will simply
1029
01:04:17,920 --> 01:04:23,460
shift my curve around the x-axis
by that factor as a
1030
01:04:23,460 --> 01:04:25,550
pure translation.
1031
01:04:25,550 --> 01:04:32,120
And then because I have a factor
of Ks of A over 4, I'm
1032
01:04:32,120 --> 01:04:33,396
going to plot --
1033
01:04:33,396 --> 01:04:35,340
how did that expression go?
1034
01:04:35,340 --> 01:04:37,590
This expression on the top.
1035
01:04:37,590 --> 01:04:41,440
Remember this curve here is 4
times root 3 SNR norm, right?
1036
01:04:41,440 --> 01:04:44,740
So the multiplicative factor
I have is Ks(A) over 4.
1037
01:04:44,740 --> 01:04:46,880
So I will shift my curve
up by that factor.
1038
01:04:46,880 --> 01:04:50,180
1039
01:04:50,180 --> 01:04:52,360
And I will get something
which is like this.
1040
01:04:52,360 --> 01:04:57,110
1041
01:04:57,110 --> 01:05:00,430
And that's my union bound
estimate for this new
1042
01:05:00,430 --> 01:05:04,470
constellation A.
1043
01:05:04,470 --> 01:05:06,340
The distance here is
gamma effective.
1044
01:05:06,340 --> 01:05:08,990
1045
01:05:08,990 --> 01:05:19,400
This distance here is gamma_c of
A. Are there any questions?
1046
01:05:19,400 --> 01:05:21,870
So now we also have a similar
rule of thumb as in the
1047
01:05:21,870 --> 01:05:24,450
power-limited regime.
1048
01:05:24,450 --> 01:05:27,140
I will write that rule
of thumb here.
1049
01:05:27,140 --> 01:05:34,430
We have that gamma effective is
going to be gamma_c in dB
1050
01:05:34,430 --> 01:05:43,020
minus 0.2 times log2 of
Ks of A over 4 in dB.
1051
01:05:43,020 --> 01:05:46,670
1052
01:05:46,670 --> 01:05:51,400
OK, so again, a factor of two in
Ks will decrease your effective
1053
01:05:51,400 --> 01:05:54,830
coding gain by 0.2 dB.
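[Editor's sketch of that rule of thumb as a one-liner, not from the lecture; the function name is my own.]

```python
import math

def effective_gain_db(gamma_c_db, Ks):
    """Rule of thumb: gamma_eff ~ gamma_c - 0.2 * log2(Ks / 4), all in dB."""
    return gamma_c_db - 0.2 * math.log2(Ks / 4)

print(effective_gain_db(3.0, 4))   # Ks = 4 is the baseline: no loss
print(effective_gain_db(3.0, 8))   # each doubling of Ks costs about 0.2 dB
```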
1054
01:05:54,830 --> 01:05:58,620
1055
01:05:58,620 --> 01:05:59,870
Are there any questions?
1056
01:05:59,870 --> 01:06:03,260
1057
01:06:03,260 --> 01:06:06,260
Well, so I mean this is the
end of chapter five.
1058
01:06:06,260 --> 01:06:08,830
We still have like ten minutes
remaining, so I thought I
1059
01:06:08,830 --> 01:06:11,480
would give you a preview
of the next chapter.
1060
01:06:11,480 --> 01:06:13,510
Professor Forney will be coming
next class, and he will
1061
01:06:13,510 --> 01:06:15,970
be starting chapter six
all over I believe.
1062
01:06:15,970 --> 01:06:19,030
1063
01:06:19,030 --> 01:06:21,303
This chalk is much nicer to
erase than that other one.
1064
01:06:21,303 --> 01:06:43,000
1065
01:06:43,000 --> 01:06:48,650
So in chapter six, we'll
be looking at the
1066
01:06:48,650 --> 01:06:49,900
binary block codes.
1067
01:06:49,900 --> 01:07:00,130
1068
01:07:00,130 --> 01:07:04,540
OK, and what is the main
architecture of these codes?
1069
01:07:04,540 --> 01:07:09,700
Well, for an encoder, remember
we have K bits coming in.
1070
01:07:09,700 --> 01:07:12,315
And we pass it now through
a binary block code.
1071
01:07:12,315 --> 01:07:16,070
1072
01:07:16,070 --> 01:07:23,220
And what we get out is
a binary sequence
1073
01:07:23,220 --> 01:07:25,425
of length N. OK?
1074
01:07:25,425 --> 01:07:31,750
So x belongs to c, where c is
a subset of binary sequences
1075
01:07:31,750 --> 01:07:33,550
of length N.
1076
01:07:33,550 --> 01:07:37,240
So we start with K bits, and
we convert them to N bits,
1077
01:07:37,240 --> 01:07:42,610
where N is going to be greater
than or equal to K. Once we
1078
01:07:42,610 --> 01:07:46,590
have this sequence of N bits,
what we end up doing is we
1079
01:07:46,590 --> 01:07:49,820
pass it through a binary
signal constellation.
1080
01:07:49,820 --> 01:07:54,710
1081
01:07:54,710 --> 01:07:56,690
So I'm writing this in as
binary signaling.
1082
01:07:56,690 --> 01:08:02,200
And what we get out is
S of x, which is a
1083
01:08:02,200 --> 01:08:04,940
sequence of N symbols.
1084
01:08:04,940 --> 01:08:07,110
Each symbol can either take
value minus alpha or alpha.
1085
01:08:07,110 --> 01:08:12,130
So this binary signalling is
actually quite a trivial step.
1086
01:08:12,130 --> 01:08:16,560
Basically, the relation is S of
0 is going to be alpha, S
1087
01:08:16,560 --> 01:08:18,500
of 1 is going to
be minus alpha.
1088
01:08:18,500 --> 01:08:21,520
If I have a 0 coming in, I
will produce an alpha.
1089
01:08:21,520 --> 01:08:24,520
if I have a 1 coming in, I will
produce a minus alpha.
1090
01:08:24,520 --> 01:08:26,689
So this part here is trivial.
1091
01:08:26,689 --> 01:08:29,253
All our energy will be
focused on the effort
1092
01:08:29,253 --> 01:08:31,340
to find a good code.
1093
01:08:31,340 --> 01:08:33,970
Given a sequence of input bits,
we want to find a good
1094
01:08:33,970 --> 01:08:39,080
binary code in order to
optimize the minimum
1095
01:08:39,080 --> 01:08:40,880
distance and so on.
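[Editor's toy sketch of the two-stage architecture, not from the lecture: the (3, 1) repetition code here is purely a stand-in for a real binary block code; only the 0 -> +alpha, 1 -> -alpha map is from the lecture.]

```python
def encode_repetition(bits, n=3):
    """A toy (n, 1) repetition code standing in for a generic binary block code."""
    return [b for b in bits for _ in range(n)]

def binary_signaling(codeword, alpha=1):
    """The trivial modulation step: s(0) = +alpha, s(1) = -alpha."""
    return [alpha if b == 0 else -alpha for b in codeword]

x = encode_repetition([0, 1])        # K = 2 bits in, N = 6 bits out
print(x, binary_signaling(x))        # coding and modulation are separate steps
```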
1096
01:08:40,880 --> 01:08:43,540
So that's the architecture that
we will be using for the
1097
01:08:43,540 --> 01:08:45,210
binary linear codes.
1098
01:08:45,210 --> 01:08:46,800
Both chapters six and
eight will be only
1099
01:08:46,800 --> 01:08:48,250
focusing on this part.
1100
01:08:48,250 --> 01:08:50,550
And this binary signalling
scheme, for the most part,
1101
01:08:50,550 --> 01:08:52,265
will be implicit in
our architecture.
1102
01:08:52,265 --> 01:08:55,069
1103
01:08:55,069 --> 01:08:58,399
So at this point, one might ask
is there anything to lose
1104
01:08:58,399 --> 01:09:01,359
in having imposed this
type of architecture?
1105
01:09:01,359 --> 01:09:03,899
Remember, the nice thing about
this architecture is we have
1106
01:09:03,899 --> 01:09:06,910
coding, which is going on here,
and we have modulation
1107
01:09:06,910 --> 01:09:08,460
that is happening here.
1108
01:09:08,460 --> 01:09:10,960
And the modulation step
is quite simple.
1109
01:09:10,960 --> 01:09:23,156
So we have a separation between
coding and modulation.
1110
01:09:23,156 --> 01:09:24,406
OK?
1111
01:09:24,406 --> 01:09:29,689
1112
01:09:29,689 --> 01:09:32,600
And the question is, is this
the right thing to do?
1113
01:09:32,600 --> 01:09:34,520
Now, it turns out that if we
are going to live in this
1114
01:09:34,520 --> 01:09:36,984
binary world where we only
constrain ourselves to
1115
01:09:36,984 --> 01:09:40,134
sending binary signals over the
channel, this is in fact
1116
01:09:40,134 --> 01:09:41,800
an economical structure.
1117
01:09:41,800 --> 01:09:44,310
There isn't much improvement we
can get, and that is quite
1118
01:09:44,310 --> 01:09:45,424
obvious, right?
1119
01:09:45,424 --> 01:09:48,410
But it turns out that if you are
going to go for non-binary
1120
01:09:48,410 --> 01:09:51,870
signaling, particularly in the
bandwidth-limited regime,
1121
01:09:51,870 --> 01:09:55,460
there is a cost to be paid for
this kind of separation.
1122
01:09:55,460 --> 01:09:57,950
In fact, in the 1980's there
was this whole idea of
1123
01:09:57,950 --> 01:10:01,060
trellis-coded modulation.
1124
01:10:01,060 --> 01:10:03,680
And the idea there was to
combine coding and modulation
1125
01:10:03,680 --> 01:10:05,110
in one step.
1126
01:10:05,110 --> 01:10:19,440
So this is not optimal for
non-binary signaling, but
1127
01:10:19,440 --> 01:10:25,270
it's OK for binary.
1128
01:10:25,270 --> 01:10:30,570
1129
01:10:30,570 --> 01:10:33,820
So we will be focusing on this
type of an architecture.
1130
01:10:33,820 --> 01:10:37,130
The other issue is OK, nobody
told us that you can only send
1131
01:10:37,130 --> 01:10:39,080
binary signals over
the channel.
1132
01:10:39,080 --> 01:10:42,270
If you want to achieve the
Shannon limit, when we derived
1133
01:10:42,270 --> 01:10:44,580
the capacity expression --
1134
01:10:44,580 --> 01:10:46,970
or we just stated it but
you can derive it.
1135
01:10:46,970 --> 01:10:49,660
We said that the Shannon
limit is always less
1136
01:10:49,660 --> 01:10:52,830
than log2 1 plus SNR.
1137
01:10:52,830 --> 01:10:55,980
And we never said that we can
only send binary signals over
1138
01:10:55,980 --> 01:10:56,840
the channel.
1139
01:10:56,840 --> 01:10:59,070
So it could be that there is some
inherent loss of optimality
1140
01:10:59,070 --> 01:11:00,770
in this type of architecture.
1141
01:11:00,770 --> 01:11:03,730
Why send binary signals
in the first place?
1142
01:11:03,730 --> 01:11:06,940
Again, it turns out that the
regime that we are interested
1143
01:11:06,940 --> 01:11:10,130
in here is the power-limited
regime because the spectral
1144
01:11:10,130 --> 01:11:12,220
efficiency can never be more
than two bits per two
1145
01:11:12,220 --> 01:11:13,810
dimensions, right?
1146
01:11:13,810 --> 01:11:16,340
Remember, rho is 2K over N here,
and K is going to be
1147
01:11:16,340 --> 01:11:17,290
less than or equal to N.
1148
01:11:17,290 --> 01:11:22,130
So we are inherently in the
power-limited regime.
1149
01:11:22,130 --> 01:11:24,770
And so a natural thing to do
in order to understand how
1150
01:11:24,770 --> 01:11:28,120
much fundamental loss we have
in imposing this type of
1151
01:11:28,120 --> 01:11:32,360
binary constraint over the
signals is to not look at this
1152
01:11:32,360 --> 01:11:36,470
expression, but to find the best
possible performance we
1153
01:11:36,470 --> 01:11:39,390
can have if we constrain
ourselves to binary signals
1154
01:11:39,390 --> 01:11:41,090
over the channel.
1155
01:11:41,090 --> 01:11:43,910
If we impose a certain signaling
constraint, we can
1156
01:11:43,910 --> 01:11:49,330
find another fundamental limit
on the spectral efficiency.
1157
01:11:49,330 --> 01:11:53,070
So the point being, the best
possible binary code can only
1158
01:11:53,070 --> 01:11:55,450
achieve this upper bound here.
1159
01:11:55,450 --> 01:11:57,050
Now, this upper bound
has to be less than
1160
01:11:57,050 --> 01:11:58,300
log2 1 plus SNR, right?
1161
01:11:58,300 --> 01:12:03,170
1162
01:12:03,170 --> 01:12:06,070
Because the Shannon capacity
includes binary signaling
1163
01:12:06,070 --> 01:12:07,640
as a special case.
1164
01:12:07,640 --> 01:12:10,700
So it turns out that this
expression is actually quite
1165
01:12:10,700 --> 01:12:12,310
tedious to write out.
1166
01:12:12,310 --> 01:12:15,090
I'm not even sure if the
closed-form expression exists,
1167
01:12:15,090 --> 01:12:16,820
but you can calculate
this numerically.
1168
01:12:16,820 --> 01:12:19,760
It's just some kind of a convex
optimization problem.
1169
01:12:19,760 --> 01:12:22,350
And now, to understand how
much loss we have, we can
1170
01:12:22,350 --> 01:12:26,090
compare the two expressions in
the power-limited regime.
1171
01:12:26,090 --> 01:12:30,690
So I'm going to plot the curves
for the two cases.
1172
01:12:30,690 --> 01:12:35,350
I'm going to plot the spectral
efficiency on the y-axis as a
1173
01:12:35,350 --> 01:12:36,500
function of Eb N_0.
1174
01:12:36,500 --> 01:12:39,000
You can easily convert from
SNR to Eb N_0 using the
1175
01:12:39,000 --> 01:12:42,210
relations we discussed
in chapter four.
1176
01:12:42,210 --> 01:12:45,030
Now we first plot the
Shannon limit.
1177
01:12:45,030 --> 01:12:48,920
The ultimate Shannon limit
is minus 1.59 dB.
1178
01:12:48,920 --> 01:12:52,000
This is going to be 0 dB here.
1179
01:12:52,000 --> 01:12:54,100
And if I plot the Shannon
limit, it will be
1180
01:12:54,100 --> 01:12:56,260
some curve like this.
1181
01:12:56,260 --> 01:12:58,080
It increases with Eb N_0.
1182
01:12:58,080 --> 01:13:01,210
And minus 1.59 dB is 0.
1183
01:13:01,210 --> 01:13:03,130
This point here is 1.
1184
01:13:03,130 --> 01:13:05,070
Some point here is 2.
1185
01:13:05,070 --> 01:13:09,200
So actually I want to draw this
horizontal asymptote at
1186
01:13:09,200 --> 01:13:13,030
two when rho Shannon equals 2.
1187
01:13:13,030 --> 01:13:15,720
So this is my rho Shannon.
1188
01:13:15,720 --> 01:13:19,200
Now, I can now also plot
rho binary from
1189
01:13:19,200 --> 01:13:21,020
this expression here.
1190
01:13:21,020 --> 01:13:23,220
I can never exceed two bits
per two dimensions.
1191
01:13:23,220 --> 01:13:25,720
So I had plotted this
horizontal line.
1192
01:13:25,720 --> 01:13:28,720
So as Eb N_0 goes to infinity,
the most I can get is two bits
1193
01:13:28,720 --> 01:13:30,250
per two dimensions.
1194
01:13:30,250 --> 01:13:35,380
If Eb N_0 is smaller, I cannot
do better so I will be doing
1195
01:13:35,380 --> 01:13:38,210
worse and worse, and I will
always be below this line.
1196
01:13:38,210 --> 01:13:41,330
Ultimately, it can be shown
that I will have the same
1197
01:13:41,330 --> 01:13:43,210
Shannon limit here.
1198
01:13:43,210 --> 01:13:45,020
So this is rho binary.
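[Editor's sketch of the rho-Shannon curve being drawn here, not from the lecture: on this normalization the minimum Eb/N0 at spectral efficiency rho is (2^rho - 1)/rho, which is 0 dB at rho = 1 and tends to ln 2, about -1.59 dB, as rho goes to zero.]

```python
import math

def ebn0_db_shannon(rho):
    """Minimum Eb/N0 in dB at spectral efficiency rho (bits per two dimensions)."""
    return 10 * math.log10((2 ** rho - 1) / rho)

print(ebn0_db_shannon(1.0))    # 0 dB at one bit per two dimensions
print(ebn0_db_shannon(1e-6))   # approaches ln 2, about -1.59 dB
```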
1199
01:13:45,020 --> 01:13:48,380
1200
01:13:48,380 --> 01:13:51,830
And the main observation here is
if I'm in the power-limited
1201
01:13:51,830 --> 01:13:55,150
regime, which is say, in this
part of the curve --
1202
01:13:55,150 --> 01:13:57,650
like, say around one bit per two
dimension -- the gap here
1203
01:13:57,650 --> 01:13:59,390
is quite small.
1204
01:13:59,390 --> 01:14:02,620
In fact, this gap can be shown
to be at most 0.2 dB.
1205
01:14:02,620 --> 01:14:05,190
1206
01:14:05,190 --> 01:14:08,130
So if I want to achieve a
certain spectral efficiency,
1207
01:14:08,130 --> 01:14:10,855
if I impose a binary signaling
constraint, the additional Eb
1208
01:14:10,855 --> 01:14:15,300
N_0 that I require when the
spectral efficiency is
1209
01:14:15,300 --> 01:14:18,360
one bit per two dimensions,
is at most 0.2 dB.
1210
01:14:18,360 --> 01:14:22,280
So there is not much to lose by
imposing a binary signaling
1211
01:14:22,280 --> 01:14:23,870
constraint.
1212
01:14:23,870 --> 01:14:26,220
Said in different words,
if you had the best possible
1213
01:14:26,220 --> 01:14:29,150
binary linear code, followed
by a binary signaling
1214
01:14:29,150 --> 01:14:33,210
constellation, the most you
can lose is 0.2 dB.
1215
01:14:33,210 --> 01:14:36,070
And so this type of an
architecture does make sense.
1216
01:14:36,070 --> 01:14:39,783
There are a lot of details; the
point of the next lectures will
1217
01:14:39,783 --> 01:14:44,680
be to kind of formalize the
notion of a binary block code,
1218
01:14:44,680 --> 01:14:46,850
then specialize it to the
case of binary linear
1219
01:14:46,850 --> 01:14:48,450
codes and so on.
1220
01:14:48,450 --> 01:14:51,370
Unfortunately, the first thing
to do is to start with some
1221
01:14:51,370 --> 01:14:54,970
finite field theory because
there's a lot of finite field
1222
01:14:54,970 --> 01:14:57,780
algebra involved in these
algebraic block codes.
1223
01:14:57,780 --> 01:15:02,580
So how many of you have seen
finite field theory before?
1224
01:15:02,580 --> 01:15:05,810
OK, if you haven't seen it,
don't worry about it.
1225
01:15:05,810 --> 01:15:08,570
We'll be starting right
from the basics.
1226
01:15:08,570 --> 01:15:11,580
But I think I will be stopping
here for today because I don't
1227
01:15:11,580 --> 01:15:13,390
want to go into those
details today.
1228
01:15:13,390 --> 01:15:36,017