1
00:00:00,000 --> 00:00:01,620
2
00:00:01,620 --> 00:00:04,730
PROFESSOR: Today we finish
up chapter 13.
3
00:00:04,730 --> 00:00:07,700
I think we are finished, but
I'll take questions.
4
00:00:07,700 --> 00:00:12,520
We get into chapter 14, which
starts to get into coding for
5
00:00:12,520 --> 00:00:15,440
bandwidth-limited channels.
6
00:00:15,440 --> 00:00:18,910
Here we're coding directly in
terms of Euclidean space
7
00:00:18,910 --> 00:00:21,200
rather than in terms
of Hamming space.
8
00:00:21,200 --> 00:00:24,150
And there's this wonderful
quote from Neil Sloane.
9
00:00:24,150 --> 00:00:27,110
"Euclidean space coding is to
Hamming space coding as
10
00:00:27,110 --> 00:00:31,660
classical music is to
rock'n'roll." Meaning that, in
11
00:00:31,660 --> 00:00:34,740
the Euclidean space, things
are continuous, whereas we
12
00:00:34,740 --> 00:00:36,950
were discrete back
in Hamming space.
13
00:00:36,950 --> 00:00:42,110
Nonetheless, there are strong
connections between the two,
14
00:00:42,110 --> 00:00:46,420
which in two lectures I'll
basically only have a chance
15
00:00:46,420 --> 00:00:48,830
to hint at.
16
00:00:48,830 --> 00:00:52,860
You were supposed to hand in
problem set nine last Friday,
17
00:00:52,860 --> 00:00:56,090
or today if you haven't
already.
18
00:00:56,090 --> 00:00:59,220
Today, we'll hand out the
solutions for nine, and then
19
00:00:59,220 --> 00:01:02,060
chapter 14 and its problems,
which are problem set 10.
20
00:01:02,060 --> 00:01:04,610
These are not due.
21
00:01:04,610 --> 00:01:08,060
As always, I recommend
you do them.
22
00:01:08,060 --> 00:01:11,650
The solutions will be handed
out Wednesday.
23
00:01:11,650 --> 00:01:16,000
However, the exam will
not cover chapter 14.
24
00:01:16,000 --> 00:01:21,040
So everything we do this
week is gravy.
25
00:01:21,040 --> 00:01:25,180
The exam is a week from tomorrow
in the morning.
26
00:01:25,180 --> 00:01:28,960
Ashish will administer it
because I won't be here.
27
00:01:28,960 --> 00:01:32,620
For this exam, it's closed book,
except you're allowed to
28
00:01:32,620 --> 00:01:36,120
make five pages of notes,
as you did three
29
00:01:36,120 --> 00:01:38,010
pages on the midterm.
30
00:01:38,010 --> 00:01:42,660
And I do recommend you bring a
calculator, because for one of
31
00:01:42,660 --> 00:01:46,490
the problems the calculator
will be helpful.
32
00:01:46,490 --> 00:01:49,900
And I was told that erasable
memory should be cleared on
33
00:01:49,900 --> 00:01:55,410
the calculator, so it's a point
of honor to do that.
34
00:01:55,410 --> 00:02:02,710
Any questions about the
exam or chapter 13?
35
00:02:02,710 --> 00:02:03,100
Yes?
36
00:02:03,100 --> 00:02:06,010
AUDIENCE: Are you including the
midterm [UNINTELLIGIBLE]?
37
00:02:06,010 --> 00:02:08,750
PROFESSOR: Yeah, oh, everything
in the course up to
38
00:02:08,750 --> 00:02:11,730
chapter 13.
39
00:02:11,730 --> 00:02:14,660
The rule is everything that
we covered in class.
40
00:02:14,660 --> 00:02:16,330
If we didn't cover it in
class, then you're not
41
00:02:16,330 --> 00:02:19,520
responsible for it.
42
00:02:19,520 --> 00:02:20,770
Other questions?
43
00:02:20,770 --> 00:02:23,140
44
00:02:23,140 --> 00:02:27,020
Any last questions
on chapter 13?
45
00:02:27,020 --> 00:02:33,590
This is kind of the finale
for binary codes.
46
00:02:33,590 --> 00:02:37,990
We found that, at least on the
binary erasure channel, we had
47
00:02:37,990 --> 00:02:42,460
techniques that would allow us
to design LDPC codes that
48
00:02:42,460 --> 00:02:46,060
would get us as close to
capacity as we liked, or at
49
00:02:46,060 --> 00:02:48,930
least I made that assertion,
and directed you to where you
50
00:02:48,930 --> 00:02:50,600
could find that done
in the literature.
51
00:02:50,600 --> 00:02:54,840
And I further asserted that, at
least for binary symmetric
52
00:02:54,840 --> 00:02:57,820
input channels, like the binary
additive white Gaussian
53
00:02:57,820 --> 00:03:03,320
noise channels, you could do
much the same thing using
54
00:03:03,320 --> 00:03:07,350
techniques like density
evolution or EXIT charts.
55
00:03:07,350 --> 00:03:10,280
Again, the issue is to design
codes such that iterative
56
00:03:10,280 --> 00:03:14,850
decoding, after many, many, many
iterations, will, with
57
00:03:14,850 --> 00:03:19,180
probability one, converge to the
correct solution (or with
58
00:03:19,180 --> 00:03:22,090
probability one minus epsilon).
59
00:03:22,090 --> 00:03:28,920
So that's the great conclusion
to the story for binary codes,
60
00:03:28,920 --> 00:03:31,800
channels in which we want
to have binary inputs.
61
00:03:31,800 --> 00:03:36,290
And way back in the beginning,
we said that that was
62
00:03:36,290 --> 00:03:40,740
perfectly sufficient, really,
for the general additive white
63
00:03:40,740 --> 00:03:42,630
Gaussian channel.
64
00:03:42,630 --> 00:03:46,560
When we were talking about
binary code rates less than
65
00:03:46,560 --> 00:03:51,250
1/2, or nominal spectral
efficiencies less than one bit
66
00:03:51,250 --> 00:03:55,690
per second per Hertz, which
are equivalent statements.
67
00:03:55,690 --> 00:03:59,070
So what happens if we want to go
over channels where we want
68
00:03:59,070 --> 00:04:03,000
to send at higher spectral
efficiencies?
69
00:04:03,000 --> 00:04:07,630
We want to send four bits per
second per Hertz, as we
70
00:04:07,630 --> 00:04:11,230
certainly do over, say,
telephone line channels, which
71
00:04:11,230 --> 00:04:15,190
are 3,000, or 4,000-Hertz
channels, over which we want
72
00:04:15,190 --> 00:04:18,610
to send tens of thousands
of bits per second.
73
00:04:18,610 --> 00:04:22,780
Or many radio channels will
support high numbers of bits
74
00:04:22,780 --> 00:04:25,560
per second per Hertz,
and so forth.
75
00:04:25,560 --> 00:04:30,440
Well, obviously, you can't do
that with binary codes.
76
00:04:30,440 --> 00:04:38,330
So, today and Wednesday in this
chapter, we very quickly,
77
00:04:38,330 --> 00:04:42,390
of course, go through the series
of techniques that have
78
00:04:42,390 --> 00:04:44,590
been developed to go over
bandwidth-limited channels
79
00:04:44,590 --> 00:04:46,650
such as the bandwidth-limited
additive white
80
00:04:46,650 --> 00:04:48,430
Gaussian noise channel.
81
00:04:48,430 --> 00:04:51,032
Clearly, you're going to need
a non-binary constellation,
82
00:04:51,032 --> 00:04:53,820
and somehow code on that
constellation.
83
00:04:53,820 --> 00:04:57,330
84
00:04:57,330 --> 00:05:01,140
But what you'll see is that both
the techniques and much
85
00:05:01,140 --> 00:05:05,880
of the development are
very reminiscent
86
00:05:05,880 --> 00:05:07,660
of the binary case.
87
00:05:07,660 --> 00:05:11,290
Historically, the binary case
was always done first, and
88
00:05:11,290 --> 00:05:15,820
then the generalizations
to non-binary followed.
89
00:05:15,820 --> 00:05:17,450
And so we'll see --
90
00:05:17,450 --> 00:05:20,630
you can start off with very
simple, hand-designed
91
00:05:20,630 --> 00:05:24,160
constellations, as we were
doing back in the early
92
00:05:24,160 --> 00:05:26,270
chapters, four and five.
93
00:05:26,270 --> 00:05:29,780
We played around with little m
equals eight constellations
94
00:05:29,780 --> 00:05:32,790
and tried to optimize them from
the point of view of,
95
00:05:32,790 --> 00:05:37,070
say, minimizing power for a
fixed minimum distance between
96
00:05:37,070 --> 00:05:41,740
points in a certain number of
dimensions in Euclidean space.
97
00:05:41,740 --> 00:05:44,000
Now, we want to go up to
considerably longer
98
00:05:44,000 --> 00:05:45,310
constellations.
99
00:05:45,310 --> 00:05:50,080
So that was kind of at the level
of Hamming codes and
100
00:05:50,080 --> 00:05:54,760
single parity-check codes and
very elementary binary coding
101
00:05:54,760 --> 00:05:57,590
techniques like that.
102
00:05:57,590 --> 00:06:00,620
So we want more complicated
codes, let's say moderate
103
00:06:00,620 --> 00:06:02,200
complexity codes.
104
00:06:02,200 --> 00:06:08,290
We will find things that are
analogous to block codes,
105
00:06:08,290 --> 00:06:11,650
linear block codes,
namely lattices.
106
00:06:11,650 --> 00:06:14,940
These are structures in
Euclidean space that have the
107
00:06:14,940 --> 00:06:19,340
group property, as linear
block codes do.
108
00:06:19,340 --> 00:06:22,960
And then we'll find trellis
codes, which are the Euclidean
109
00:06:22,960 --> 00:06:26,450
space analog of convolutional
codes.
110
00:06:26,450 --> 00:06:31,130
And basically, our goal in the
next two lectures is to get to
111
00:06:31,130 --> 00:06:34,890
the level of a union-bound
estimate analysis of the
112
00:06:34,890 --> 00:06:39,620
performance of these kinds of
coding techniques, which, like
113
00:06:39,620 --> 00:06:44,460
their binary analogs, sort of
sit in a moderate complexity,
114
00:06:44,460 --> 00:06:48,300
pretty good performance, get up
to 6 dB coding gain, can't
115
00:06:48,300 --> 00:06:50,540
get to capacity.
116
00:06:50,540 --> 00:06:54,560
So that's as far as we'll be
able to go in chapter 14.
117
00:06:54,560 --> 00:06:59,500
In the last page or two of
chapter 14, I briefly mention
118
00:06:59,500 --> 00:07:01,130
higher performance techniques.
119
00:07:01,130 --> 00:07:04,160
And it's generally accepted that
you can get as close to
120
00:07:04,160 --> 00:07:06,910
capacity on these
bandwidth-limited channels,
121
00:07:06,910 --> 00:07:09,430
additive white Gaussian noise
channels as you can on the
122
00:07:09,430 --> 00:07:13,030
binary channels, in some cases
just by adapting binary
123
00:07:13,030 --> 00:07:17,190
techniques in a kind of
force-fit way, and others that
124
00:07:17,190 --> 00:07:19,840
are somewhat more principled,
I might say.
125
00:07:19,840 --> 00:07:25,290
But in any case, by adapting the
binary techniques, you can
126
00:07:25,290 --> 00:07:29,330
get turbo codes, low-density
parity-check codes, capacity
127
00:07:29,330 --> 00:07:32,540
approaching codes of various
types for the
128
00:07:32,540 --> 00:07:34,670
bandwidth-limited channel.
129
00:07:34,670 --> 00:07:39,090
But I just hint how that can
be done in one page, and I
130
00:07:39,090 --> 00:07:42,610
can't include it
in the course.
131
00:07:42,610 --> 00:07:45,870
So that's where we're going.
132
00:07:45,870 --> 00:07:55,510
So basically, we're trying to
design a constellation, a set of
133
00:07:55,510 --> 00:07:56,760
signal points.
134
00:07:56,760 --> 00:07:59,940
135
00:07:59,940 --> 00:08:05,000
Constellation c or signal
set or alphabet,
136
00:08:05,000 --> 00:08:07,950
script c, if you like.
137
00:08:07,950 --> 00:08:10,320
And we're doing the kind of
things we were doing back in
138
00:08:10,320 --> 00:08:11,680
chapter four and five.
139
00:08:11,680 --> 00:08:14,600
140
00:08:14,600 --> 00:08:18,510
What are some of the
parameters we have--
141
00:08:18,510 --> 00:08:21,990
we're going to do this in
n-dimensional space.
142
00:08:21,990 --> 00:08:26,020
We're going to be concerned
with the size of the
143
00:08:26,020 --> 00:08:29,920
constellation, the log of the
size -- log 2 of
144
00:08:29,920 --> 00:08:34,169
the size, the number of
bits we can send.
145
00:08:34,169 --> 00:08:38,010
We call this m at one point.
146
00:08:38,010 --> 00:08:42,010
That will determine the number
of bits we can send per n
147
00:08:42,010 --> 00:08:44,510
dimensions, or we can
normalize it per two
148
00:08:44,510 --> 00:08:45,210
dimensions.
149
00:08:45,210 --> 00:08:47,260
Typically, I said, in
bandwidth-limited regime,
150
00:08:47,260 --> 00:08:50,500
we'll normalize everything
per two dimensions.
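As a quick sketch of the normalization just described (the helper name is mine, not from the lecture): an n-dimensional constellation with M points carries log 2 of M bits per n dimensions, which we can normalize to (2/n) log2(M) bits per two dimensions.

```python
from math import log2

# Hypothetical helper; the per-two-dimensions normalization is the
# one used in the lecture for the bandwidth-limited regime.
def bits_per_two_dims(M, n):
    """Bits per two dimensions of an M-point, n-dimensional constellation."""
    return (2 / n) * log2(M)

# A 16-point two-dimensional constellation carries 4 bits per two dimensions.
print(bits_per_two_dims(16, 2))  # 4.0
```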
151
00:08:50,500 --> 00:08:54,370
We're going to have an average
power of the constellation,
152
00:08:54,370 --> 00:08:56,840
which I'm just going
to write like that.
153
00:08:56,840 --> 00:09:01,040
We're going to have a minimum
squared distance between
154
00:09:01,040 --> 00:09:04,690
points in the constellation,
which I'm going
155
00:09:04,690 --> 00:09:06,580
to write like that.
156
00:09:06,580 --> 00:09:10,190
And the idea is to optimize
the trade-off
157
00:09:10,190 --> 00:09:12,210
between this and this.
158
00:09:12,210 --> 00:09:15,970
Clearly, if I take a given
constellation and I scale it
159
00:09:15,970 --> 00:09:18,960
up, I'm going to get better
minimum squared distance
160
00:09:18,960 --> 00:09:21,480
without the cost of
increasing power.
161
00:09:21,480 --> 00:09:25,380
So I either fix the minimum
squared distance between
162
00:09:25,380 --> 00:09:31,380
points and try to minimize the
power, the average energy over
163
00:09:31,380 --> 00:09:39,470
the centroids of a bunch of
spheres, or vice versa.
164
00:09:39,470 --> 00:09:44,370
So when we do this, this is kind
of the game, we get into
165
00:09:44,370 --> 00:09:46,880
a classical problem called
sphere-packing.
166
00:09:46,880 --> 00:09:52,230
167
00:09:52,230 --> 00:09:57,250
If we say, hold the minimum
squared distance constant,
168
00:09:57,250 --> 00:10:02,660
that means we want to have a
certain minimum distance.
169
00:10:02,660 --> 00:10:03,910
Here I'll make it
d_min over 2.
170
00:10:03,910 --> 00:10:06,180
171
00:10:06,180 --> 00:10:08,980
That means that there's going to
be a sphere of radius d_min
172
00:10:08,980 --> 00:10:14,170
over 2 around each point in this
space, which must not be
173
00:10:14,170 --> 00:10:17,580
violated by the sphere
of any other point.
174
00:10:17,580 --> 00:10:21,190
So basically, we're trying to
pack spheres as closely
175
00:10:21,190 --> 00:10:23,200
together as we can.
176
00:10:23,200 --> 00:10:25,070
And we're maybe given
the dimension.
177
00:10:25,070 --> 00:10:27,400
Here I have two dimensions
to play with.
178
00:10:27,400 --> 00:10:30,520
We may be given the number, m.
179
00:10:30,520 --> 00:10:33,760
In general now, we're going
to be thinking of m
180
00:10:33,760 --> 00:10:35,090
becoming very large.
181
00:10:35,090 --> 00:10:38,290
We want to go to arbitrarily
large number of bits per
182
00:10:38,290 --> 00:10:39,890
second per Hertz.
183
00:10:39,890 --> 00:10:43,080
So think of schemes that will
continue to work as we add
184
00:10:43,080 --> 00:10:46,390
more and more points.
185
00:10:46,390 --> 00:10:51,670
And if I do that in two
dimensions, we get
186
00:10:51,670 --> 00:10:52,750
penny-packing.
187
00:10:52,750 --> 00:10:56,340
Take a bunch of seven pennies
out of your pocket, and try to
188
00:10:56,340 --> 00:10:58,565
squeeze them together as
tightly as you can.
189
00:10:58,565 --> 00:11:01,110
That's effectively what
we're trying to do.
190
00:11:01,110 --> 00:11:05,800
They maintain their fixed
radius, and you get something
191
00:11:05,800 --> 00:11:08,930
that quickly starts to look like
what this is called, a
192
00:11:08,930 --> 00:11:13,910
hexagonal sphere-packing,
because the centers line up on
193
00:11:13,910 --> 00:11:15,270
a hexagonal grid.
194
00:11:15,270 --> 00:11:18,870
195
00:11:18,870 --> 00:11:21,910
And in fact, I forget
exactly what we did.
196
00:11:21,910 --> 00:11:33,110
We did some of these back
in the early chapters.
197
00:11:33,110 --> 00:11:37,860
And let me draw a 16-point
constellation, which I
198
00:11:37,860 --> 00:11:41,440
remember we did draw.
199
00:11:41,440 --> 00:11:46,470
Here's a constellation
consisting of 16 pennies.
200
00:11:46,470 --> 00:11:49,100
And subject to my drawing
ability, it's packed as
201
00:11:49,100 --> 00:11:54,120
closely as possible for a fixed
minimum distance between
202
00:11:54,120 --> 00:11:55,510
the points.
203
00:11:55,510 --> 00:12:01,470
And as far as anyone knows, this
constellation, consisting of
204
00:12:01,470 --> 00:12:06,350
a set of points on a hexagonal
grid, is the most dense
205
00:12:06,350 --> 00:12:10,930
packing of 16 points in two
dimensions, as measured by the
206
00:12:10,930 --> 00:12:13,860
average power of the centers.
207
00:12:13,860 --> 00:12:17,800
And it's not very symmetrical,
but you can see it's based on
208
00:12:17,800 --> 00:12:21,380
a symmetrical packing.
209
00:12:21,380 --> 00:12:26,540
This hexagonal, this is
basically a subset of 16
210
00:12:26,540 --> 00:12:30,690
points from the hexagonal
lattice.
211
00:12:30,690 --> 00:12:34,210
If we extend these points
in all directions--
212
00:12:34,210 --> 00:12:36,020
again, I haven't done
it very well--
213
00:12:36,020 --> 00:12:42,040
but the hexagonal lattice
is simply the
214
00:12:42,040 --> 00:12:43,440
underlying lattice here.
215
00:12:43,440 --> 00:12:50,185
Let me just draw a set of points
that look like this.
216
00:12:50,185 --> 00:12:53,786
217
00:12:53,786 --> 00:12:55,036
I'm going to fail again.
218
00:12:55,036 --> 00:12:58,660
219
00:12:58,660 --> 00:13:00,560
Sorry, I'm better at rock'n'roll
than I am at
220
00:13:00,560 --> 00:13:03,470
classical music.
221
00:13:03,470 --> 00:13:07,970
Anyway, this is the lattice,
and it's called hexagonal
222
00:13:07,970 --> 00:13:10,960
because, if we draw the decision
regions for this
223
00:13:10,960 --> 00:13:16,610
lattice, if we consider this
as a set of points in a
224
00:13:16,610 --> 00:13:20,150
constellation and draw the
decision regions, it looks
225
00:13:20,150 --> 00:13:28,080
like that, this little hexagon,
defined by the six
226
00:13:28,080 --> 00:13:28,940
nearest neighbors.
227
00:13:28,940 --> 00:13:32,630
They define six sides
of a hexagon.
228
00:13:32,630 --> 00:13:38,120
And so each lattice point has a
little region of space that
229
00:13:38,120 --> 00:13:40,735
belongs to it, you can think
of as its decision region.
230
00:13:40,735 --> 00:13:44,330
231
00:13:44,330 --> 00:13:47,940
Now, what makes this
a lattice?
232
00:13:47,940 --> 00:13:49,340
What's the definition
of a lattice?
233
00:13:49,340 --> 00:13:52,200
234
00:13:52,200 --> 00:13:53,560
We'll say an n-dimensional
lattice.
235
00:13:53,560 --> 00:13:57,010
236
00:13:57,010 --> 00:14:00,520
Very simple and elegant
definition for a lattice.
237
00:14:00,520 --> 00:14:03,700
It's a discrete subset.
238
00:14:03,700 --> 00:14:11,570
That means a set of discrete
points of Rn, sitting in
239
00:14:11,570 --> 00:14:15,925
Euclidean n-space, that has
the group property.
240
00:14:15,925 --> 00:14:29,130
241
00:14:29,130 --> 00:14:31,850
And you now all remember
what that means.
242
00:14:31,850 --> 00:14:39,110
It means the group property,
under ordinary addition of
243
00:14:39,110 --> 00:14:45,330
vectors in Euclidean n-space,
addition or
244
00:14:45,330 --> 00:14:48,670
subtraction, of course.
245
00:14:48,670 --> 00:14:49,830
What are some of the
implications of
246
00:14:49,830 --> 00:14:51,060
having a group property?
247
00:14:51,060 --> 00:14:54,880
That means that the sum of any
two vectors is another vector
248
00:14:54,880 --> 00:14:57,470
in the lattice.
249
00:14:57,470 --> 00:14:59,470
Does the lattice have
to include the
250
00:14:59,470 --> 00:15:00,910
0, the origin point?
251
00:15:00,910 --> 00:15:06,670
252
00:15:06,670 --> 00:15:07,980
Anybody?
253
00:15:07,980 --> 00:15:08,530
Oh, good.
254
00:15:08,530 --> 00:15:09,820
I'm going to ask this
on the final.
255
00:15:09,820 --> 00:15:14,980
256
00:15:14,980 --> 00:15:15,270
Come on.
257
00:15:15,270 --> 00:15:16,260
Does it or doesn't it?
258
00:15:16,260 --> 00:15:17,800
AUDIENCE: It does.
259
00:15:17,800 --> 00:15:18,840
PROFESSOR: It does.
260
00:15:18,840 --> 00:15:19,580
Why?
261
00:15:19,580 --> 00:15:20,830
AUDIENCE: [INAUDIBLE].
262
00:15:20,830 --> 00:15:25,900
263
00:15:25,900 --> 00:15:29,050
PROFESSOR: OK, that's
one good answer.
264
00:15:29,050 --> 00:15:32,050
It has to be closed under
addition and subtraction.
265
00:15:32,050 --> 00:15:35,766
Subtraction of the point from
itself is going to give you 0.
266
00:15:35,766 --> 00:15:38,770
A more mathematically trained
person would probably, say,
267
00:15:38,770 --> 00:15:41,560
well, it obviously has
to have an identity.
268
00:15:41,560 --> 00:15:43,180
Every group has an
identity element.
269
00:15:43,180 --> 00:15:45,890
270
00:15:45,890 --> 00:15:49,330
And that means, if lambda is
in the lattice, does minus
271
00:15:49,330 --> 00:15:52,200
lambda have to be
in the lattice?
272
00:15:52,200 --> 00:15:54,520
Yeah, every element has
to have an additive
273
00:15:54,520 --> 00:15:58,850
inverse, and so forth.
274
00:15:58,850 --> 00:16:06,480
Well, so if I have a lattice
like this, that means that one
275
00:16:06,480 --> 00:16:10,150
of these points has to be the
origin, say this one.
276
00:16:10,150 --> 00:16:15,740
277
00:16:15,740 --> 00:16:18,210
Now, of course, you can have a
translate of a lattice, and
278
00:16:18,210 --> 00:16:21,060
geometrically it has all the
same properties as the lattice
279
00:16:21,060 --> 00:16:25,180
itself; add any fixed t to
all the lattice points.
280
00:16:25,180 --> 00:16:27,730
In other words, make the
origin somewhere else.
281
00:16:27,730 --> 00:16:29,520
But that's not itself a group.
282
00:16:29,520 --> 00:16:30,800
That's a coset of a group.
283
00:16:30,800 --> 00:16:33,680
It would translate as
a coset operation.
284
00:16:33,680 --> 00:16:42,100
So lambda must be a group, and
lambda plus t, any lambda
285
00:16:42,100 --> 00:16:49,320
translate is a coset
of lambda.
286
00:16:49,320 --> 00:16:51,880
So most of the things you say
about lambda are also true for
287
00:16:51,880 --> 00:16:55,330
lambda plus t, but in order to
fix a group, we need the
288
00:16:55,330 --> 00:16:58,400
origin to be part of
the point set.
289
00:16:58,400 --> 00:17:00,340
OK, if it's a group, what
does that mean?
290
00:17:00,340 --> 00:17:03,420
That means, if we take any
particular point, say this
291
00:17:03,420 --> 00:17:14,159
one, regard it as a vector,
then call this lambda_1 --
292
00:17:14,159 --> 00:17:18,000
or let me call it g1, more
suggestively.
293
00:17:18,000 --> 00:17:23,310
Does g1 plus g1 have
to be in the group?
294
00:17:23,310 --> 00:17:24,119
Yes.
295
00:17:24,119 --> 00:17:26,760
So that forces this point
to be in there,
296
00:17:26,760 --> 00:17:31,310
2g1, 3g1, so forth.
297
00:17:31,310 --> 00:17:37,860
All of these have to be in
there, as well as minus g1,
298
00:17:37,860 --> 00:17:41,450
minus 2g1, and so forth.
299
00:17:41,450 --> 00:17:44,930
So basically, once we found a
generator, all integers times
300
00:17:44,930 --> 00:17:47,650
that generator have to
be in the lattice.
301
00:17:47,650 --> 00:17:52,170
So that means there's a
cross-section along the axis
302
00:17:52,170 --> 00:17:56,460
defined by g1, such that these
points are simply the
303
00:17:56,460 --> 00:18:02,050
integers, scaled by whatever
the norm of g1 is.
304
00:18:02,050 --> 00:18:06,730
305
00:18:06,730 --> 00:18:11,770
And of course, that holds for
any of these neighbors here.
306
00:18:11,770 --> 00:18:14,190
You want to look for nearest
neighbors of 0's.
307
00:18:14,190 --> 00:18:21,790
Take g2 and then, of course, all
of its integer multiples
308
00:18:21,790 --> 00:18:22,940
have to be in the lattice.
309
00:18:22,940 --> 00:18:33,800
So we've got a set here that, in
some sense, spans n-space.
310
00:18:33,800 --> 00:18:36,290
We have now two vectors,
g1, g2, which are
311
00:18:36,290 --> 00:18:37,570
not linearly dependent.
312
00:18:37,570 --> 00:18:39,220
They're sitting in 2-space.
313
00:18:39,220 --> 00:18:43,080
So if we take all real linear
combinations of these two
314
00:18:43,080 --> 00:18:46,490
vectors, we're going to
get all of 2-space.
315
00:18:46,490 --> 00:18:49,460
If we take all integer linear
combinations of these two
316
00:18:49,460 --> 00:18:53,640
vectors, they all must be
in the lattice, right?
317
00:18:53,640 --> 00:18:55,860
By the group property?
318
00:18:55,860 --> 00:18:58,420
We've already proved that all
the integer multiples of
319
00:18:58,420 --> 00:19:01,000
either of them is in the
lattice, so any thing plus or
320
00:19:01,000 --> 00:19:03,010
minus those two.
321
00:19:03,010 --> 00:19:11,750
So certainly, the lattice
includes the set of all
322
00:19:11,750 --> 00:19:19,410
integer linear combinations
a1 g1 plus a2 g2,
323
00:19:19,410 --> 00:19:25,410
where (a1, a2) is in the
two-dimensional integer
324
00:19:25,410 --> 00:19:26,660
lattice Z-squared.
325
00:19:26,660 --> 00:19:32,590
326
00:19:32,590 --> 00:19:33,840
Could there be any
other points?
327
00:19:33,840 --> 00:19:38,720
328
00:19:38,720 --> 00:19:41,500
There actually could, if you
made a mistake and picked your
329
00:19:41,500 --> 00:19:45,810
initial vector to be 2g1,
because then you wouldn't have
330
00:19:45,810 --> 00:19:47,290
taken account of g1.
331
00:19:47,290 --> 00:19:51,150
So you've got to take your two
initial vectors to be two of
332
00:19:51,150 --> 00:19:53,380
the minimum norm points
in the lattice.
333
00:19:53,380 --> 00:19:58,480
But if you do that, then you can
easily prove that this is
334
00:19:58,480 --> 00:19:59,680
all of the lattice points.
335
00:19:59,680 --> 00:20:07,545
So in fact, a two-dimensional
lattice is equal to a set of
336
00:20:07,545 --> 00:20:09,660
all integer linear combinations
of two
337
00:20:09,660 --> 00:20:12,630
generators, which we can write
as a little two-by-two
338
00:20:12,630 --> 00:20:18,270
generator matrix, G. Or very
briefly, we could write lambda
339
00:20:18,270 --> 00:20:22,260
as G times Z-squared, and we
can view it as just some
340
00:20:22,260 --> 00:20:26,040
linear transformation
of Z-squared.
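The description of a lattice as G times Z-squared can be sketched numerically. The particular generator matrix below is an assumed standard choice for the hexagonal lattice A2 (scaled to minimum distance 1), not one given on the board.

```python
import numpy as np

# Assumed generator matrix for the hexagonal lattice A2; rows are the
# generators g1, g2, scaled so the minimum distance between points is 1.
G = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3.0) / 2.0]])

def lattice_points(G, r):
    # All integer combinations a1*g1 + a2*g2 with |a1|, |a2| <= r,
    # i.e. the image under G of a finite chunk of Z^2.
    a = range(-r, r + 1)
    A = np.array([(a1, a2) for a1 in a for a2 in a])
    return A @ G

pts = lattice_points(G, 3)
```

By the group property, the sum of any two generated points that stays inside the chunk (e.g. g1 + g2) appears again in `pts`, and the nearest neighbors of the origin all sit at distance 1.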
341
00:20:26,040 --> 00:20:30,660
So in this hexagonal lattice, we
can view it as starting out
342
00:20:30,660 --> 00:20:32,820
with a square grid.
343
00:20:32,820 --> 00:20:37,090
Of course, Z-squared itself
is a lattice.
344
00:20:37,090 --> 00:20:39,840
Z-squared is a group.
345
00:20:39,840 --> 00:20:43,320
This is just the square lattice
in two dimensions.
346
00:20:43,320 --> 00:20:48,350
We have Z_n as a lattice in
n dimensions, in general.
347
00:20:48,350 --> 00:20:53,300
So I take Z-squared, Z_n, more
generally, and I distort it.
348
00:20:53,300 --> 00:20:59,620
I just run it through some
g, which is a linear
349
00:20:59,620 --> 00:21:06,130
transformation, and slants some
of the generator points.
350
00:21:06,130 --> 00:21:08,270
And that's a general way
of getting a lattice.
351
00:21:08,270 --> 00:21:10,330
That's not the way we
usually visualize
352
00:21:10,330 --> 00:21:11,400
the hexagonal lattice.
353
00:21:11,400 --> 00:21:14,660
We usually visualize it in such
a way as to recognize
354
00:21:14,660 --> 00:21:18,000
that it has six-fold symmetry.
355
00:21:18,000 --> 00:21:21,220
But that's certainly
a way to get it.
356
00:21:21,220 --> 00:21:24,740
357
00:21:24,740 --> 00:21:28,426
And this has one interesting
consequence.
358
00:21:28,426 --> 00:21:31,530
One of the measures we want for
a lattice is the volume
359
00:21:31,530 --> 00:21:34,500
per lattice point.
360
00:21:34,500 --> 00:21:41,210
In here, the volume, you can
divide this up into volumes
361
00:21:41,210 --> 00:21:43,770
that are all of equal size,
and we call that
362
00:21:43,770 --> 00:21:45,180
the volume of A2.
363
00:21:45,180 --> 00:21:46,280
This lattice --
364
00:21:46,280 --> 00:21:50,450
hexagonal lattice, is called
A2, and it's the volume of
365
00:21:50,450 --> 00:21:53,200
that hexagon.
366
00:21:53,200 --> 00:21:57,410
1 over the volume of that
hexagon is the density of
367
00:21:57,410 --> 00:21:59,600
points in 2-space,
if you like.
368
00:21:59,600 --> 00:22:03,490
So density is 1/volume.
369
00:22:03,490 --> 00:22:06,930
I mean, think of every point as
having a unique volume, and
370
00:22:06,930 --> 00:22:12,270
if we basically consider the
cells around each lattice
371
00:22:12,270 --> 00:22:17,070
point, they fill up 2-space --
a perfect tessellation or
372
00:22:17,070 --> 00:22:19,920
tiling of 2-space.
373
00:22:19,920 --> 00:22:22,360
These hexagons will.
374
00:22:22,360 --> 00:22:25,310
One easy consequence of the
group property is that all the
375
00:22:25,310 --> 00:22:29,190
decision regions have to be
congruent, in fact, just
376
00:22:29,190 --> 00:22:32,170
translates of one another
by lattice points.
377
00:22:32,170 --> 00:22:35,920
The decision region around this
point is the same as the
378
00:22:35,920 --> 00:22:39,510
decision region about 0,
translated by a lattice point,
379
00:22:39,510 --> 00:22:41,360
namely the center of that
decision region.
380
00:22:41,360 --> 00:22:45,950
I won't prove that, but it's
true and easy to prove.
381
00:22:45,950 --> 00:22:48,435
So that's one way of finding
what's the volume.
382
00:22:48,435 --> 00:22:50,080
The volume will mean
the volume of
383
00:22:50,080 --> 00:22:51,590
2-space per lattice point.
384
00:22:51,590 --> 00:22:53,700
But what's another
way to do that?
385
00:22:53,700 --> 00:22:57,550
Another way is just to take
this little parallelotope,
386
00:22:57,550 --> 00:23:03,410
whose volume is easier
to calculate.
387
00:23:03,410 --> 00:23:07,810
And we can see that we can
equally well tile 2-space with
388
00:23:07,810 --> 00:23:11,550
these little parallelograms,
just anchor --
389
00:23:11,550 --> 00:23:14,280
in this case, this is the one
that's anchored in the lower
390
00:23:14,280 --> 00:23:16,830
left corner by the origin.
391
00:23:16,830 --> 00:23:19,120
Then we take another one that's
anchored by this point.
392
00:23:19,120 --> 00:23:21,830
We take another one that's
anchored by this point,
393
00:23:21,830 --> 00:23:23,760
another one that's anchored
by this point.
394
00:23:23,760 --> 00:23:28,590
And it's clear that's another
tiling of 2-space by regions,
395
00:23:28,590 --> 00:23:31,880
one for each lattice point that
completely fills 2-space
396
00:23:31,880 --> 00:23:36,570
with overlaps only on the
boundary, which don't count.
397
00:23:36,570 --> 00:23:40,140
So the volume of A2 is also
the volume of that, what's
398
00:23:40,140 --> 00:23:42,780
called a fundamental
parallelotope.
399
00:23:42,780 --> 00:23:46,690
And what's the volume of that
fundamental parallelotope?
400
00:23:46,690 --> 00:23:52,290
Well, one way of calculating
it is to take
401
00:23:52,290 --> 00:23:55,420
the volume of Z-squared.
402
00:23:55,420 --> 00:23:59,840
We can view this as the
distortion of Z-squared, which
403
00:23:59,840 --> 00:24:01,090
looks like this.
404
00:24:01,090 --> 00:24:03,690
405
00:24:03,690 --> 00:24:07,490
So if this is Z-squared, what's
the fundamental volume
406
00:24:07,490 --> 00:24:09,320
of Z-squared?
407
00:24:09,320 --> 00:24:11,640
The volume of Z-squared
per lattice point.
408
00:24:11,640 --> 00:24:19,150
This is (0,1), (-1,1),
and so forth, (1,1).
409
00:24:19,150 --> 00:24:21,520
What's the volume of
Z-squared per--
410
00:24:21,520 --> 00:24:22,720
AUDIENCE: 1
411
00:24:22,720 --> 00:24:23,370
PROFESSOR: --1.
412
00:24:23,370 --> 00:24:25,176
Clearly.
413
00:24:25,176 --> 00:24:26,850
And we have this decision
region.
414
00:24:26,850 --> 00:24:30,300
This is a little square of side 1,
and this parallelotope is also
415
00:24:30,300 --> 00:24:31,930
a little square of side 1.
416
00:24:31,930 --> 00:24:33,190
Its volume is 1.
417
00:24:33,190 --> 00:24:36,220
418
00:24:36,220 --> 00:24:40,070
General geometric principle, if
we take this and distort it
419
00:24:40,070 --> 00:24:47,570
by G, then what's the volume
of, say, this parallelotope
420
00:24:47,570 --> 00:24:48,610
going to be?
421
00:24:48,610 --> 00:24:54,080
This parallelotope goes by G
into this one, and its volume
422
00:24:54,080 --> 00:24:59,010
is the Jacobian, which is
the determinant of G.
423
00:24:59,010 --> 00:25:04,950
So the volume of a lattice
is the absolute value of the
determinant of any
424
00:25:04,950 --> 00:25:06,730
generator matrix for
that lattice.
425
00:25:06,730 --> 00:25:13,330
Just a nice little side fact,
but it's an example of the
426
00:25:13,330 --> 00:25:15,890
kind of things you get in
Euclidean space that you don't
427
00:25:15,890 --> 00:25:18,980
get in Hamming space.
428
00:25:18,980 --> 00:25:23,110
Of course, there are various
generator matrices that you
429
00:25:23,110 --> 00:25:24,900
could use to generate
this lattice.
430
00:25:24,900 --> 00:25:29,340
But whichever one you choose,
this is going to be true.
431
00:25:29,340 --> 00:25:32,200
So this is an invariant among
all the generating matrices.
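The invariance he just stated can be checked numerically. Here is a minimal Python sketch, not part of the lecture: two different generator matrices for the hexagonal lattice A2 (the second obtained from the first by an integer row operation, so it generates the same lattice) give the same fundamental volume |det G|.

```python
import math

def det2(g):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return g[0][0] * g[1][1] - g[0][1] * g[1][0]

# Two generator matrices for the hexagonal lattice A2; G2 is G1 after
# the row operation row1 <- row1 + row2, so both generate the same lattice.
G1 = [[1.0, 0.0], [0.5, math.sqrt(3) / 2]]
G2 = [[1.5, math.sqrt(3) / 2], [0.5, math.sqrt(3) / 2]]

V1 = abs(det2(G1))  # fundamental volume V(A2) = sqrt(3)/2
V2 = abs(det2(G2))  # same value: the volume is invariant to the choice of G
```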
432
00:25:32,200 --> 00:25:34,880
433
00:25:34,880 --> 00:25:37,504
So that's a lattice.
434
00:25:37,504 --> 00:25:41,490
I haven't mentioned what's the
most important thing for us
435
00:25:41,490 --> 00:25:42,065
about a lattice.
436
00:25:42,065 --> 00:25:45,610
A lattice is a group.
437
00:25:45,610 --> 00:25:57,340
What's the fundamental property
of a group, which I
438
00:25:57,340 --> 00:26:00,580
guess was embodied in
the alternate set of
439
00:26:00,580 --> 00:26:03,620
axioms for the group?
440
00:26:03,620 --> 00:26:09,480
If I take the group plus
any element of the
441
00:26:09,480 --> 00:26:13,300
group, what do I get?
442
00:26:13,300 --> 00:26:14,510
AUDIENCE: [INAUDIBLE].
443
00:26:14,510 --> 00:26:15,760
PROFESSOR: The group again.
444
00:26:15,760 --> 00:26:19,000
445
00:26:19,000 --> 00:26:20,870
Let's interpret that
geometrically.
446
00:26:20,870 --> 00:26:22,690
One of the nice things about
this subject is you're
447
00:26:22,690 --> 00:26:24,010
constantly going back and forth
448
00:26:24,010 --> 00:26:27,100
between algebra and geometry.
449
00:26:27,100 --> 00:26:28,600
Geometrically, what
does that mean?
450
00:26:28,600 --> 00:26:32,100
That means, if we take this
lattice, say the origin is
451
00:26:32,100 --> 00:26:39,960
here, and we shift it, say, so
that the origin is up here,
452
00:26:39,960 --> 00:26:41,350
then nothing changes.
453
00:26:41,350 --> 00:26:45,280
Everything is invariant
to this shift.
454
00:26:45,280 --> 00:26:48,260
We can do this for any
lattice point.
455
00:26:48,260 --> 00:26:50,580
That shows you that
all the decision
456
00:26:50,580 --> 00:26:52,990
regions have to be congruent.
457
00:26:52,990 --> 00:27:04,000
It shows you that the number
of nearest neighbors, the
458
00:27:04,000 --> 00:27:09,840
minimum distance from any point
to its nearest neighbors
459
00:27:09,840 --> 00:27:12,470
is always the same for
any lattice point.
460
00:27:12,470 --> 00:27:14,240
Basically --
461
00:27:14,240 --> 00:27:18,030
a lattice is geometrically
uniform.
462
00:27:18,030 --> 00:27:21,720
That means you can stand on
any of the points and the
463
00:27:21,720 --> 00:27:24,630
world around you looks the same,
as if you stood on any
464
00:27:24,630 --> 00:27:26,310
of the other points.
465
00:27:26,310 --> 00:27:30,160
Think of these as stars in the
universe, and somebody
466
00:27:30,160 --> 00:27:33,770
blindfolds you and drops you
on one of these points.
467
00:27:33,770 --> 00:27:35,510
Can you tell which one of
those points you've been
468
00:27:35,510 --> 00:27:40,330
dropped on if nobody's written
anything on these points?
469
00:27:40,330 --> 00:27:41,270
No, you can't tell.
470
00:27:41,270 --> 00:27:44,350
The world looks the same from
whichever one you drop on.
471
00:27:44,350 --> 00:27:47,870
So that means the minimum square
distance from any point
472
00:27:47,870 --> 00:27:49,150
is the same as every other.
473
00:27:49,150 --> 00:27:53,040
So the number of nearest
neighbors at minimum square
474
00:27:53,040 --> 00:27:54,640
distance is the same.
475
00:27:54,640 --> 00:27:57,390
It means the total distance
profile, starting from any
476
00:27:57,390 --> 00:27:58,950
point, is the same.
477
00:27:58,950 --> 00:28:02,230
It means if we were to take one
of these points and send
478
00:28:02,230 --> 00:28:06,390
it over a Gaussian channel from
a very large set, and we
479
00:28:06,390 --> 00:28:09,250
take one of the points from the
interior and send it over
480
00:28:09,250 --> 00:28:12,570
an additive white Gaussian noise
channel, the probability
481
00:28:12,570 --> 00:28:15,030
of error is going to be the
same, regardless of which
482
00:28:15,030 --> 00:28:17,610
point we send it from, because
the probability of error is
483
00:28:17,610 --> 00:28:22,050
the probability of falling
outside your decision region.
484
00:28:22,050 --> 00:28:25,260
And since the decision region
always has the same shape,
485
00:28:25,260 --> 00:28:28,410
it's invariant to which
point you send.
486
00:28:28,410 --> 00:28:35,580
So this geometrical uniformity
property, which that's the
487
00:28:35,580 --> 00:28:38,200
general name for it and it
has all these specific
488
00:28:38,200 --> 00:28:45,780
consequences that I just listed,
is the principal
489
00:28:45,780 --> 00:28:49,140
geometrical property
of a lattice.
490
00:28:49,140 --> 00:28:51,650
This just follows from
its group property.
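The "world looks the same from every point" claim can be made concrete with a small finite check. A Python sketch, not from the lecture: the sorted list of squared distances out to some radius is identical whether computed from the origin of Z-squared or from any other lattice point.

```python
def profile(center, rmax2=8, span=10):
    """Sorted squared distances from `center` to other nearby Z^2 points."""
    cx, cy = center
    out = []
    for x in range(cx - span, cx + span + 1):
        for y in range(cy - span, cy + span + 1):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if 0 < d2 <= rmax2:
                out.append(d2)
    return sorted(out)

p_origin = profile((0, 0))
p_other = profile((3, -2))   # an arbitrary other lattice point
# Identical distance profiles; Z^2 has 4 nearest neighbors at squared distance 1.
```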
491
00:28:51,650 --> 00:28:54,170
492
00:28:54,170 --> 00:28:57,680
You don't have to be a lattice
to have this property.
493
00:28:57,680 --> 00:29:01,350
For instance, the translate of
a lattice is not a group,
494
00:29:01,350 --> 00:29:04,700
technically, but it obviously
has the same geometrical
495
00:29:04,700 --> 00:29:06,360
uniformity property.
496
00:29:06,360 --> 00:29:09,850
And there are more elaborate
examples of things that look
497
00:29:09,850 --> 00:29:11,730
less like lattices that
are nonetheless
498
00:29:11,730 --> 00:29:12,740
geometrically uniform.
499
00:29:12,740 --> 00:29:14,350
On the other hand, lattices
definitely are
500
00:29:14,350 --> 00:29:16,130
geometrically uniform.
501
00:29:16,130 --> 00:29:18,640
So this seems to be a good thing
to do, because what's
502
00:29:18,640 --> 00:29:19,290
our objective?
503
00:29:19,290 --> 00:29:22,900
We're trying to have
d_min-squared be the same for
504
00:29:22,900 --> 00:29:24,150
every point in our
constellation.
505
00:29:24,150 --> 00:29:26,990
506
00:29:26,990 --> 00:29:30,420
We want to pack these pennies
as close together as we can.
507
00:29:30,420 --> 00:29:32,860
That means we want to hold
d_min-squared constant for
508
00:29:32,860 --> 00:29:36,080
every point that we're
going to use.
509
00:29:36,080 --> 00:29:39,650
And so lattices have
that property.
510
00:29:39,650 --> 00:29:43,430
So that seems like a good way
to do sphere-packing.
511
00:29:43,430 --> 00:29:48,800
And in fact, sphere-packing, in
n-dimensions, is a very old
512
00:29:48,800 --> 00:29:50,050
mathematical subject.
513
00:29:50,050 --> 00:29:53,130
514
00:29:53,130 --> 00:29:55,650
How many spheres can you
pack around a sphere in three
515
00:29:55,650 --> 00:29:57,835
dimensions -- is it 12 or 13?
516
00:29:57,835 --> 00:30:01,400
That was debated in the 18th
and 19th centuries, and I
517
00:30:01,400 --> 00:30:04,910
think it's only finally just
been resolved a few years ago.
518
00:30:04,910 --> 00:30:09,780
This turns out to be not
an easy problem.
519
00:30:09,780 --> 00:30:12,920
The densest sphere-packing in
four dimensions was found in
520
00:30:12,920 --> 00:30:15,230
the mid-19th century.
521
00:30:15,230 --> 00:30:17,060
In eight dimensions,
I think around the
522
00:30:17,060 --> 00:30:19,080
turn of the 20th century.
523
00:30:19,080 --> 00:30:22,620
In 16 dimensions, it's the
Barnes-Wall lattice, which
524
00:30:22,620 --> 00:30:25,635
we'll see a little later, which
was found in '54, about
525
00:30:25,635 --> 00:30:29,620
the same time as Reed-Muller
codes, or maybe it was '59,
526
00:30:29,620 --> 00:30:33,150
maybe a little later than
Reed-Muller codes.
527
00:30:33,150 --> 00:30:37,600
The Leech lattice was certainly
found a little
528
00:30:37,600 --> 00:30:40,980
earlier than that, but around
the same time as the Golay
529
00:30:40,980 --> 00:30:43,320
code, to which it's a
very close cousin.
530
00:30:43,320 --> 00:30:43,920
And so forth.
531
00:30:43,920 --> 00:30:47,220
So you might think that these
questions had all been
532
00:30:47,220 --> 00:30:50,010
settled, but in fact they're
still open for most specific
533
00:30:50,010 --> 00:30:53,300
dimensions, n.
534
00:30:53,300 --> 00:30:56,680
Conway and Sloane have the
authoritative book on this,
535
00:30:56,680 --> 00:30:59,690
which basically lists what
we know about the densest
536
00:30:59,690 --> 00:31:02,600
sphere-packings in
all dimensions.
537
00:31:02,600 --> 00:31:06,930
So here's our idea.
538
00:31:06,930 --> 00:31:10,520
Our idea is we're going to go
consult the mathematicians and
539
00:31:10,520 --> 00:31:13,160
find out what the densest
sphere-packings that they
540
00:31:13,160 --> 00:31:15,280
found are in various
dimensions.
541
00:31:15,280 --> 00:31:17,910
In two dimensions, it's
the hexagonal lattice.
542
00:31:17,910 --> 00:31:21,300
In one dimension, it's the
integers, and basically the
543
00:31:21,300 --> 00:31:24,900
integers are the only packing
with regularity properties in
544
00:31:24,900 --> 00:31:25,690
one dimension.
545
00:31:25,690 --> 00:31:29,380
In two dimensions, you could
consider Z-squared, or you can
546
00:31:29,380 --> 00:31:31,260
consider A2.
547
00:31:31,260 --> 00:31:33,720
These are the two obvious
candidates, the square and the
548
00:31:33,720 --> 00:31:35,560
hexagonal packings.
549
00:31:35,560 --> 00:31:38,840
In three dimensions, you get
face-centered cubic packing.
550
00:31:38,840 --> 00:31:41,110
And what's the BCC?
551
00:31:41,110 --> 00:31:43,410
The base-centered cubic
packing, I guess.
552
00:31:43,410 --> 00:31:44,930
AUDIENCE: Body-centered.
553
00:31:44,930 --> 00:31:45,490
PROFESSOR: Say again.
554
00:31:45,490 --> 00:31:46,700
AUDIENCE: Body center.
555
00:31:46,700 --> 00:31:48,250
PROFESSOR: Body-centered,
thank you.
556
00:31:48,250 --> 00:31:50,510
I'm sure you're right.
557
00:31:50,510 --> 00:31:54,570
So anyway, there are some things
that are known about this from
558
00:31:54,570 --> 00:31:59,250
classical mathematics and
current mathematics.
559
00:31:59,250 --> 00:32:02,950
So our idea is we're going to
try to find the densest
560
00:32:02,950 --> 00:32:08,930
possible lattice in
n dimensions.
561
00:32:08,930 --> 00:32:12,970
But for digital communications,
for coding, we
562
00:32:12,970 --> 00:32:16,480
only want a finite number
of points from that
563
00:32:16,480 --> 00:32:18,970
constellation.
564
00:32:18,970 --> 00:32:22,240
So what we're going to do is
we're going to create our
565
00:32:22,240 --> 00:32:26,560
constellation based
on a lattice and
566
00:32:26,560 --> 00:32:30,130
on a region of n-space.
567
00:32:30,130 --> 00:32:31,860
And the idea is simple.
568
00:32:31,860 --> 00:32:35,060
We're just going to intersect
the lattice.
569
00:32:35,060 --> 00:32:40,710
Or maybe let me allow a
translate of a lattice with
570
00:32:40,710 --> 00:32:43,510
the region.
571
00:32:43,510 --> 00:32:48,580
And make this a finite,
compact region.
572
00:32:48,580 --> 00:32:52,750
And this will give us a certain
finite subset of the
573
00:32:52,750 --> 00:32:54,000
points in the lattice.
574
00:32:54,000 --> 00:32:56,620
575
00:32:56,620 --> 00:32:58,980
Now, when I draw a constellation
like this one,
576
00:32:58,980 --> 00:33:03,620
of a size 16, that's
what I've done.
577
00:33:03,620 --> 00:33:05,540
I've basically taken A2.
578
00:33:05,540 --> 00:33:07,740
Or it's actually a
translate of A2.
579
00:33:07,740 --> 00:33:10,800
I think the origin is
in here somewhere.
580
00:33:10,800 --> 00:33:14,870
And I draw a sphere
around the origin.
581
00:33:14,870 --> 00:33:17,330
And I take the 16 points
that happen to
582
00:33:17,330 --> 00:33:18,560
fall within that sphere.
583
00:33:18,560 --> 00:33:21,470
And I call that my
constellation.
584
00:33:21,470 --> 00:33:23,430
So that's the idea in general.
585
00:33:23,430 --> 00:33:27,050
Think of the region, r,
as a cookie cutter.
586
00:33:27,050 --> 00:33:30,610
We're going to take the cookie
cutter and chop out a finite
587
00:33:30,610 --> 00:33:35,900
number of points from this nice,
regular, infinite grid.
588
00:33:35,900 --> 00:33:38,930
And that will give us
a constellation.
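The cookie-cutter construction is easy to carry out directly. A Python sketch under illustrative assumptions (the translate and the radius here are arbitrary choices, not values from the lecture): intersect a translate of the hexagonal lattice A2 with a disc, and the surviving points still keep the lattice's minimum squared distance of 1.

```python
import math

# Constellation = (translate of lattice) ∩ region, with A2 generators
# as in the lecture.  The shift and radius below are illustrative.
g1 = (1.0, 0.0)
g2 = (0.5, math.sqrt(3) / 2)
shift = (0.5, math.sqrt(3) / 4)
radius = 2.0

points = []
for a in range(-6, 7):
    for b in range(-6, 7):
        x = a * g1[0] + b * g2[0] + shift[0]
        y = a * g1[1] + b * g2[1] + shift[1]
        if x * x + y * y <= radius ** 2:
            points.append((x, y))
# A finite set of points, all inside the disc, still at squared distance >= 1.
```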
589
00:33:38,930 --> 00:33:42,060
So especially for large
constellations, we'll be able
590
00:33:42,060 --> 00:33:45,950
to make some approximations that
will allow us to analyze
591
00:33:45,950 --> 00:33:48,660
that kind of constellation
very easily.
592
00:33:48,660 --> 00:33:52,060
And actually, the best
constellations that we know
593
00:33:52,060 --> 00:33:53,310
are of this type.
594
00:33:53,310 --> 00:33:55,610
595
00:33:55,610 --> 00:33:58,810
We've taken this baseline,
m-PAM.
596
00:33:58,810 --> 00:34:00,340
Well, what's m-PAM?
597
00:34:00,340 --> 00:34:08,790
We take the lattice plus t,
let's say to be Z plus 1/2.
598
00:34:08,790 --> 00:34:12,199
599
00:34:12,199 --> 00:34:16,980
So that means we don't take the
origin, but we take the
600
00:34:16,980 --> 00:34:22,199
points 1/2, minus 1/2, 3/2.
601
00:34:22,199 --> 00:34:26,120
They're equally spaced points,
but we've shifted them so as
602
00:34:26,120 --> 00:34:28,790
to be symmetric about
the origin.
603
00:34:28,790 --> 00:34:32,460
5/2, minus 5/2, and so forth.
604
00:34:32,460 --> 00:34:37,230
So this is Z plus 1/2
is our lattice.
605
00:34:37,230 --> 00:34:41,219
And then if we want 4-PAM,
we take a cookie cutter
606
00:34:41,219 --> 00:34:44,070
consisting of this region.
607
00:34:44,070 --> 00:34:48,900
So this is the region going
from minus 2 to 2.
608
00:34:48,900 --> 00:34:51,219
It's not unreasonable to expect
that, if we take a
609
00:34:51,219 --> 00:34:56,670
region of length 4, we're going
to get 4 points, if,
610
00:34:56,670 --> 00:34:59,940
again, it's symmetric around
the origin, because each of
611
00:34:59,940 --> 00:35:03,800
these points takes up about one
unit of space over here.
612
00:35:03,800 --> 00:35:08,650
So if we made it from minus 4 to
plus 4, we'd have a region
613
00:35:08,650 --> 00:35:10,950
of length 8, and we'd
get 8 points.
614
00:35:10,950 --> 00:35:13,510
And so forth.
615
00:35:13,510 --> 00:35:20,000
So that's how we do m-PAM, and
then scale it if we want to.
616
00:35:20,000 --> 00:35:21,610
But this is the basic idea.
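The m-PAM construction just described can be written out in a few lines. A Python sketch, not from the lecture: intersect the translate Z + 1/2 with the interval [-m/2, m/2], and a length-m region cuts out exactly m points.

```python
def pam(m):
    """Points of the translate Z + 1/2 inside the interval [-m/2, m/2]."""
    candidates = [k + 0.5 for k in range(-m, m)]
    return sorted(p for p in candidates if -m / 2 <= p <= m / 2)

four_pam = pam(4)    # the points -3/2, -1/2, 1/2, 3/2
eight_pam = pam(8)   # a length-8 interval cuts out 8 points
```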
617
00:35:21,610 --> 00:35:28,040
Similarly, for m by m QAM, like
everything else, it's
618
00:35:28,040 --> 00:35:30,420
just doing the same thing
independently in two
619
00:35:30,420 --> 00:35:31,890
dimensions.
620
00:35:31,890 --> 00:35:39,780
Here we take our lattice to be
(Z plus 1/2)-squared, which is
621
00:35:39,780 --> 00:35:47,120
just the offset Z-squared,
offset so as to be four-way,
622
00:35:47,120 --> 00:35:50,200
to be roughly symmetrical
to four-way
623
00:35:50,200 --> 00:35:53,620
rotations, 90-degree rotations.
624
00:35:53,620 --> 00:35:56,190
So here is Z-squared
plus (1/2, 1/2).
625
00:35:56,190 --> 00:35:59,690
626
00:35:59,690 --> 00:36:05,930
And then, again, here to get
16-QAM, for instance, we might
627
00:36:05,930 --> 00:36:12,860
take the region to be what?
628
00:36:12,860 --> 00:36:20,280
[-2, 2]-squared, which,
again, has area 16.
629
00:36:20,280 --> 00:36:23,370
You might think of this as just
taking the little square
630
00:36:23,370 --> 00:36:24,890
around each of these.
631
00:36:24,890 --> 00:36:27,746
Each of these squares
has area 1.
632
00:36:27,746 --> 00:36:31,710
So when we take the
union of all those
633
00:36:31,710 --> 00:36:33,160
squares, we get the region.
634
00:36:33,160 --> 00:36:35,480
That's another way of
characterizing this region.
635
00:36:35,480 --> 00:36:39,530
It's just the union of all the
decision regions of the points
636
00:36:39,530 --> 00:36:40,210
that we want.
637
00:36:40,210 --> 00:36:43,020
If you had started from the
points, you could certainly
638
00:36:43,020 --> 00:36:45,200
invent a region that
is the union of
639
00:36:45,200 --> 00:36:46,430
those decision regions.
640
00:36:46,430 --> 00:36:52,640
It would have an area equal to
the number of points times the
641
00:36:52,640 --> 00:36:57,290
volume of 2-space
per each point.
642
00:36:57,290 --> 00:36:59,380
That's the kind of
approximation
643
00:36:59,380 --> 00:37:00,800
we're going to use.
644
00:37:00,800 --> 00:37:05,270
The bounding region is clearly
going to have something like
645
00:37:05,270 --> 00:37:08,240
the number of points in the
constellation, times the
646
00:37:08,240 --> 00:37:10,590
fundamental volume
as its volume.
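For 16-QAM this approximation is exact, which a short Python sketch (not from the lecture) confirms: intersecting (Z + 1/2)-squared with [-2, 2]-squared gives a point count equal to the region's area divided by the volume per lattice point.

```python
# 16-QAM as (Z + 1/2)^2 ∩ [-2, 2]^2: the point count exactly matches
# area / (volume per lattice point).
points = [(x + 0.5, y + 0.5)
          for x in range(-4, 4)
          for y in range(-4, 4)
          if abs(x + 0.5) <= 2 and abs(y + 0.5) <= 2]

area = 4 * 4          # area of the region [-2, 2]^2
vol_per_point = 1     # fundamental volume of Z^2 and its translates
```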
647
00:37:10,590 --> 00:37:16,720
648
00:37:16,720 --> 00:37:18,610
So this is our idea.
649
00:37:18,610 --> 00:37:22,660
This is what we're going to call
a lattice constellation.
650
00:37:22,660 --> 00:37:25,940
And we're going to investigate
the properties of lattice
651
00:37:25,940 --> 00:37:29,860
constellations, not the most
general thing we could think
652
00:37:29,860 --> 00:37:33,270
of but good enough.
653
00:37:33,270 --> 00:37:34,955
Seem to have the right
sort of property.
654
00:37:34,955 --> 00:37:38,490
655
00:37:38,490 --> 00:37:41,250
And we're going to take it
at least as far as the
656
00:37:41,250 --> 00:37:45,090
union-bound estimate, so we
can get a good estimate of
657
00:37:45,090 --> 00:37:48,010
probability of error if
we take one of these
658
00:37:48,010 --> 00:37:51,660
constellations, like m-by-m QAM,
and use it to transmit
659
00:37:51,660 --> 00:37:56,690
over a Gaussian noise channel.
660
00:37:56,690 --> 00:38:03,760
And furthermore, for large
constellations, meaning as we
661
00:38:03,760 --> 00:38:06,790
get well up into the
bandwidth-limited regions of
662
00:38:06,790 --> 00:38:10,730
large nominal spectral
efficiencies, we're going to be
663
00:38:10,730 --> 00:38:18,880
able to separate the analysis
into lattice properties and
664
00:38:18,880 --> 00:38:20,130
region properties.
665
00:38:20,130 --> 00:38:23,955
666
00:38:23,955 --> 00:38:28,600
And we'll basically be able to
just add these up to get the
667
00:38:28,600 --> 00:38:37,980
overall analysis, by making some
judicious approximations
668
00:38:37,980 --> 00:38:41,170
that become exact in the large
constellation limit.
669
00:38:41,170 --> 00:38:44,180
670
00:38:44,180 --> 00:38:50,970
So let's now restrict ourselves
to lattices, or
671
00:38:50,970 --> 00:38:56,840
translates of lattices, and
start to analyze them.
672
00:38:56,840 --> 00:39:07,630
What are their key parameters
for our application, which is
673
00:39:07,630 --> 00:39:09,770
digital communications?
674
00:39:09,770 --> 00:39:13,090
Well, one is the minimum squared
distance between
675
00:39:13,090 --> 00:39:15,050
lattice points.
676
00:39:15,050 --> 00:39:19,060
We've seen that doesn't depend
on the particular point.
677
00:39:19,060 --> 00:39:20,940
So this is the minimum
squared distance
678
00:39:20,940 --> 00:39:24,130
from any lattice point.
679
00:39:24,130 --> 00:39:25,680
The number of nearest
neighbors.
680
00:39:25,680 --> 00:39:31,320
681
00:39:31,320 --> 00:39:38,745
The volume per lattice point.
682
00:39:38,745 --> 00:39:42,130
683
00:39:42,130 --> 00:39:45,140
And is there anything else?
684
00:39:45,140 --> 00:39:50,020
685
00:39:50,020 --> 00:39:51,790
It seems to me there's
a fourth one.
686
00:39:51,790 --> 00:40:02,360
687
00:40:02,360 --> 00:40:04,010
No.
688
00:40:04,010 --> 00:40:05,610
I guess that's all
we need to know.
689
00:40:05,610 --> 00:40:11,370
690
00:40:11,370 --> 00:40:15,290
So I want to combine these
into a key figure
691
00:40:15,290 --> 00:40:17,600
of merit for coding.
692
00:40:17,600 --> 00:40:29,510
Again, we did this back in
chapter four for a complete
693
00:40:29,510 --> 00:40:30,320
constellation.
694
00:40:30,320 --> 00:40:34,340
We found the figure of merit by
holding the average power
695
00:40:34,340 --> 00:40:38,620
fixed for a fixed number of
points, m, on a fixed
696
00:40:38,620 --> 00:40:42,780
dimension, n, and maximizing the
minimum square distance,
697
00:40:42,780 --> 00:40:46,160
or, conversely, holding the
minimum squared distance fixed
698
00:40:46,160 --> 00:40:48,270
and minimizing the
average power.
699
00:40:48,270 --> 00:40:52,440
So we get some figure of merit
that was kind of normalized,
700
00:40:52,440 --> 00:40:55,050
and we could just optimize
that one thing.
701
00:40:55,050 --> 00:41:01,900
The key figure of merit was
named, after the 19th-century
702
00:41:01,900 --> 00:41:08,300
mathematician Hermite, Hermite's
parameter, but we are going to
703
00:41:08,300 --> 00:41:13,070
call it the nominal coding gain
because that's what it's
704
00:41:13,070 --> 00:41:14,320
going to turn out to be.
705
00:41:14,320 --> 00:41:17,570
706
00:41:17,570 --> 00:41:20,455
And it's just defined
as follows.
707
00:41:20,455 --> 00:41:25,360
The nominal coding gain of a
lattice is just this minimum
708
00:41:25,360 --> 00:41:28,610
squared distance, so
it goes up as the
709
00:41:28,610 --> 00:41:30,470
minimum squared distance.
710
00:41:30,470 --> 00:41:32,840
But then we want to
normalize this.
711
00:41:32,840 --> 00:41:35,960
And the obvious thing to
normalize it by, we want
712
00:41:35,960 --> 00:41:39,070
something that's going
to scale as a
713
00:41:39,070 --> 00:41:41,275
two-dimensional quantity.
714
00:41:41,275 --> 00:41:44,970
If the lattice was scaled by
alpha, then this would go up
715
00:41:44,970 --> 00:41:46,550
as alpha squared.
716
00:41:46,550 --> 00:41:49,670
So we want something that'll
scale in the same way.
717
00:41:49,670 --> 00:41:53,800
And what we choose to normalize
it by is the volume
718
00:41:53,800 --> 00:41:59,530
of lambda per two dimensions,
if you like.
719
00:41:59,530 --> 00:42:02,405
So you can see immediately, this
is invariant to scaling,
720
00:42:02,405 --> 00:42:06,740
that the nominal coding gain of
lambda is the same as the
721
00:42:06,740 --> 00:42:10,610
nominal coding gain
of alpha lambda.
722
00:42:10,610 --> 00:42:13,890
Is alpha lambda a lattice,
by the way?
723
00:42:13,890 --> 00:42:17,360
If lambda's a group, and we have
any scale factor alpha
724
00:42:17,360 --> 00:42:22,550
greater than 0, then alpha times
lambda is also a group.
725
00:42:22,550 --> 00:42:24,350
Everything's just multiplied
by alpha.
726
00:42:24,350 --> 00:42:26,990
So alpha lambda is a lattice.
727
00:42:26,990 --> 00:42:29,620
And this is actually one of
the homework problems.
728
00:42:29,620 --> 00:42:33,010
If you scale everything by
alpha, d_min-squared goes up
729
00:42:33,010 --> 00:42:34,530
by alpha squared.
730
00:42:34,530 --> 00:42:36,970
The volume goes up by alpha
to the n, for an
731
00:42:36,970 --> 00:42:38,690
n-dimensional lattice.
732
00:42:38,690 --> 00:42:41,120
But when we take 2 over n, we're
going to get back to
733
00:42:41,120 --> 00:42:42,360
alpha squared.
734
00:42:42,360 --> 00:42:48,890
So gamma_c of lambda is
gamma_c of alpha lambda.
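This scale invariance is easy to verify numerically. A Python sketch, not from the lecture, using the hexagonal lattice's parameters and an arbitrary scale factor: scaling multiplies d_min-squared by alpha-squared and the volume by alpha to the n, and the nominal coding gain is unchanged.

```python
import math

def gamma_c(d_min2, volume, n):
    """Nominal coding gain (Hermite parameter): d_min^2 / V(Lambda)^(2/n)."""
    return d_min2 / volume ** (2 / n)

n, d_min2, vol = 2, 1.0, math.sqrt(3) / 2   # hexagonal lattice A2
alpha = 3.7                                  # arbitrary scale factor

g_base = gamma_c(d_min2, vol, n)
g_scaled = gamma_c(alpha ** 2 * d_min2, alpha ** n * vol, n)
# Both equal 2/sqrt(3): scaling leaves the nominal coding gain unchanged.
```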
735
00:42:48,890 --> 00:42:51,570
So this is what you want.
736
00:42:51,570 --> 00:42:55,270
You want a kind of scale-free
version of a lattice, because,
737
00:42:55,270 --> 00:42:57,370
obviously, when we use this,
we're going to feel free to
738
00:42:57,370 --> 00:43:00,170
amplify it or compress
it any way we want,
739
00:43:00,170 --> 00:43:02,880
as we do with QAM.
740
00:43:02,880 --> 00:43:07,230
All right, so this is normalized
in that way.
741
00:43:07,230 --> 00:43:09,590
We prove in the homework
it's also
742
00:43:09,590 --> 00:43:10,830
invariant to other things.
743
00:43:10,830 --> 00:43:15,510
If you took, say, an orthogonal
transformation of
744
00:43:15,510 --> 00:43:20,650
this lattice, a rotation,
reflection, multiply it by a
745
00:43:20,650 --> 00:43:28,540
matrix that doesn't change
the scale, think of it
746
00:43:28,540 --> 00:43:32,270
geometrically as an orthogonal
transformation, is that going
747
00:43:32,270 --> 00:43:33,520
to be a lattice?
748
00:43:33,520 --> 00:43:35,560
749
00:43:35,560 --> 00:43:37,020
It is going to be a lattice.
750
00:43:37,020 --> 00:43:41,583
Everything's just going to have
some h out here, times G
751
00:43:41,583 --> 00:43:44,080
times Z-squared, and
that's still of
752
00:43:44,080 --> 00:43:46,830
the form of a lattice.
753
00:43:46,830 --> 00:43:49,680
Or call it U for orthogonal,
unitary.
754
00:43:49,680 --> 00:43:52,650
755
00:43:52,650 --> 00:43:56,090
So we multiply it by some
orthogonal matrix, U. It's
756
00:43:56,090 --> 00:43:57,980
still a lattice.
757
00:43:57,980 --> 00:44:01,080
If we look in here, orthogonal
transformations don't change
758
00:44:01,080 --> 00:44:02,330
squared distances.
759
00:44:02,330 --> 00:44:05,270
760
00:44:05,270 --> 00:44:09,260
We're not going to change this
-- it doesn't matter.
761
00:44:09,260 --> 00:44:13,300
Actually, I might.
762
00:44:13,300 --> 00:44:16,430
763
00:44:16,430 --> 00:44:17,100
No.
764
00:44:17,100 --> 00:44:19,090
It's a rigid rotation,
I'm sorry.
765
00:44:19,090 --> 00:44:20,740
Everything here stays
the same.
766
00:44:20,740 --> 00:44:23,390
Hexagonal lattice still is
hexagonal lattice, because
767
00:44:23,390 --> 00:44:28,330
it's a rigid rotation or
reflection geometrically.
768
00:44:28,330 --> 00:44:32,750
And the volume clearly stays
the same, so none of these
769
00:44:32,750 --> 00:44:33,670
things changes.
770
00:44:33,670 --> 00:44:37,990
And orthogonal transformation
isn't going to change this
771
00:44:37,990 --> 00:44:39,400
quantity either.
772
00:44:39,400 --> 00:44:43,680
So we can rotate this
or do whatever.
773
00:44:43,680 --> 00:44:46,990
And then, finally, there's an
even more interesting one,
774
00:44:46,990 --> 00:44:52,860
which is, if we take the
lattice, lambda to the m, this
775
00:44:52,860 --> 00:44:55,110
is simply the Cartesian
product.
776
00:44:55,110 --> 00:45:02,490
It's the set of all lambda_1,
lambda_2, ..., lambda_m, such that
777
00:45:02,490 --> 00:45:06,580
each of these lambda_k
is in Lambda.
778
00:45:06,580 --> 00:45:08,220
So it's the set of all m-tuples
779
00:45:08,220 --> 00:45:10,460
of elements of Lambda.
780
00:45:10,460 --> 00:45:16,200
This clearly lives in dimension
mn, so it's a much
781
00:45:16,200 --> 00:45:18,770
higher-dimensional lattice.
782
00:45:18,770 --> 00:45:22,745
The simplest case is Z to the m
- it lives in m dimensions,
783
00:45:22,745 --> 00:45:27,190
whereas Z is down in
one dimension.
784
00:45:27,190 --> 00:45:29,230
So it lives in dimension mn.
785
00:45:29,230 --> 00:45:31,870
It's a way of constructing
high-dimensional lattices from
786
00:45:31,870 --> 00:45:34,560
low-dimensional lattices.
787
00:45:34,560 --> 00:45:37,990
It's something that, in
communications, we consider to
788
00:45:37,990 --> 00:45:41,180
be almost a non-event.
789
00:45:41,180 --> 00:45:46,940
Sending a sequence of elements
of lambda is the same as
790
00:45:46,940 --> 00:45:48,360
sending a sequence of elements
of lambda to the m.
791
00:45:48,360 --> 00:45:51,080
792
00:45:51,080 --> 00:45:55,340
In fact, if we just send a
sequence of things from here,
793
00:45:55,340 --> 00:45:58,360
and a sequence of things from
lambda, there's no difference
794
00:45:58,360 --> 00:46:00,010
from a receiver's
point of view.
795
00:46:00,010 --> 00:46:04,630
So we regard these two lattices
as equivalent, from a
796
00:46:04,630 --> 00:46:08,190
communications point of view.
797
00:46:08,190 --> 00:46:10,670
Well, what happens
to d_min-squared?
798
00:46:10,670 --> 00:46:15,140
d_min-squared doesn't change,
because we can always--
799
00:46:15,140 --> 00:46:19,335
d_min-squared is, by the way,
the weight of the smallest--
800
00:46:19,335 --> 00:46:20,585
should have said that.
801
00:46:20,585 --> 00:46:24,310
802
00:46:24,310 --> 00:46:27,110
You can always do things
relative to 0.
803
00:46:27,110 --> 00:46:33,500
d_min-squared, therefore, is
the squared norm or lowest
804
00:46:33,500 --> 00:46:37,230
squared energy of any non-zero
lattice point, same as we did
805
00:46:37,230 --> 00:46:40,430
with Hamming codes, just
by the group property.
806
00:46:40,430 --> 00:46:44,930
K_min is the number
of such lowest
807
00:46:44,930 --> 00:46:48,340
Euclidean weight points.
808
00:46:48,340 --> 00:46:51,320
OK, here, what's the lowest
weight point?
809
00:46:51,320 --> 00:46:54,110
It would look something like
this, where this is one of the
810
00:46:54,110 --> 00:46:55,865
lowest weight points
in lambda.
811
00:46:55,865 --> 00:47:00,010
So d_min-squared
doesn't change.
812
00:47:00,010 --> 00:47:04,150
This changes, but we're
not considering that.
813
00:47:04,150 --> 00:47:05,970
The volume, what
is the volume?
814
00:47:05,970 --> 00:47:10,050
It turns out that it's not hard
to show that V(lambda to
815
00:47:10,050 --> 00:47:13,700
the m) is V(lambda) to the m.
816
00:47:13,700 --> 00:47:19,150
And therefore, you can show that
this figure of merit
817
00:47:19,150 --> 00:47:21,740
doesn't change, even if you
take Cartesian products.
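A quick numerical check of this Cartesian-product claim, as a Python sketch not from the lecture: Lambda to the m has dimension n times m, the same d_min-squared, and volume V(Lambda) to the m, so the nominal coding gain comes out identical.

```python
import math

def gamma_c(d_min2, volume, n):
    """Nominal coding gain: d_min^2 / V(Lambda)^(2/n)."""
    return d_min2 / volume ** (2 / n)

n, d_min2, vol = 2, 1.0, math.sqrt(3) / 2   # hexagonal lattice A2
m = 5                                        # arbitrary number of copies

# Lambda^m: dimension n*m, same d_min^2, volume V(Lambda)^m.
g_product = gamma_c(d_min2, vol ** m, n * m)
g_single = gamma_c(d_min2, vol, n)
# Equal: Cartesian products do not change the nominal coding gain.
```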
818
00:47:21,740 --> 00:47:25,490
819
00:47:25,490 --> 00:47:31,650
So for instance, this shows that
the nominal coding gain of Z to
820
00:47:31,650 --> 00:47:38,020
the m is equal to the nominal
coding gain of Z. Which is
821
00:47:38,020 --> 00:47:38,600
what, by the way?
822
00:47:38,600 --> 00:47:40,595
I should have done this
as the first example.
823
00:47:40,595 --> 00:47:43,490
824
00:47:43,490 --> 00:47:47,230
What's the minimum squared
distance of Z?
825
00:47:47,230 --> 00:47:48,250
1.
826
00:47:48,250 --> 00:47:50,530
What's the volume of Z?
827
00:47:50,530 --> 00:47:51,670
1.
828
00:47:51,670 --> 00:47:54,870
So this is 1, or 0 dB.
829
00:47:54,870 --> 00:48:00,586
We're going to measure nominal
coding gain in dB, which,
830
00:48:00,586 --> 00:48:04,410
well, we're getting us
into communications.
831
00:48:04,410 --> 00:48:07,950
What's the nominal coding gain
of the hexagonal lattice?
832
00:48:07,950 --> 00:48:10,640
833
00:48:10,640 --> 00:48:14,420
By asking this question, I'm
basically asking, is the
834
00:48:14,420 --> 00:48:20,150
hexagonal lattice denser than
Z-squared in two dimensions
835
00:48:20,150 --> 00:48:21,920
than the square lattice
in two dimensions?
836
00:48:21,920 --> 00:48:25,872
And if so, quantitatively,
how much denser is it?
837
00:48:25,872 --> 00:48:27,800
AUDIENCE: [INAUDIBLE].
838
00:48:27,800 --> 00:48:32,790
PROFESSOR: OK, well, there's a
root 3 in it, but that isn't
839
00:48:32,790 --> 00:48:34,490
exactly the answer,
I don't think.
840
00:48:34,490 --> 00:48:38,780
841
00:48:38,780 --> 00:48:42,110
There are various ways that you
can do it, but one of the
842
00:48:42,110 --> 00:48:46,910
ways is to take, as a generator
matrix, (1, 0; 1/2,
843
00:48:46,910 --> 00:48:50,310
root 3 over 2).
844
00:48:50,310 --> 00:48:53,480
You like that as a generator
matrix for the--
845
00:48:53,480 --> 00:48:57,390
That's taking this and this
as the two generators, and
846
00:48:57,390 --> 00:49:03,220
recognizing this as 1/2, and
this as root 3 over 2.
847
00:49:03,220 --> 00:49:07,220
That makes the squared length of
this equal to 1/4 plus 3/4, which
848
00:49:07,220 --> 00:49:09,310
equals 1, and its square root is 1.
849
00:49:09,310 --> 00:49:12,550
So these are my two generators
of length 1 in a hexagonal
850
00:49:12,550 --> 00:49:18,690
lattice of minimum squared
distance 1.
851
00:49:18,690 --> 00:49:24,346
So based on that, I can say that
the volume of A2 is what?
852
00:49:24,346 --> 00:49:25,596
AUDIENCE: [INAUDIBLE].
853
00:49:25,596 --> 00:49:27,820
854
00:49:27,820 --> 00:49:29,000
PROFESSOR: Root 3 over 2.
855
00:49:29,000 --> 00:49:33,490
Not hard to take the determinant
of that, right?
856
00:49:33,490 --> 00:49:40,310
And what's the minimum squared
distance in this scaling?
857
00:49:40,310 --> 00:49:43,480
What's the minimum non-zero
weight --
858
00:49:43,480 --> 00:49:46,610
squared norm, it's 1.
859
00:49:46,610 --> 00:49:53,190
So this is equal to 1 over root
3 over 2, which is equal
860
00:49:53,190 --> 00:49:56,060
to 2 over root 3.
861
00:49:56,060 --> 00:49:57,500
Which is equal to what?
862
00:49:57,500 --> 00:49:59,420
The 2 is 3 dB.
863
00:49:59,420 --> 00:50:02,450
The 3 is 4.8 dB, roughly.
864
00:50:02,450 --> 00:50:05,450
Root 3 is 2.4 dB.
865
00:50:05,450 --> 00:50:07,420
So this is about 0.6 dB.
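That dB arithmetic can be checked directly. A Python sketch, not from the lecture: the direct computation of 10 log10(2/root 3) agrees with the shortcut of 3.0 dB minus 2.4 dB, landing near 0.6 dB.

```python
import math

# gamma_c(A2) = 2/sqrt(3).  In dB: 10*log10(2) ≈ 3.0 and
# 10*log10(sqrt(3)) = 0.5 * 10*log10(3) ≈ 2.4, so the gain is about 0.6 dB.
gain_db = 10 * math.log10(2 / math.sqrt(3))
shortcut_db = 10 * math.log10(2) - 0.5 * 10 * math.log10(3)
```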
866
00:50:07,420 --> 00:50:12,850
867
00:50:12,850 --> 00:50:18,040
That's where it pays to
be facile with dB's.
868
00:50:18,040 --> 00:50:21,720
This area, by the way, if you
tried to compute the area of
869
00:50:21,720 --> 00:50:26,578
this little hexagon here,
what would it be?
870
00:50:26,578 --> 00:50:29,230
If the area of this
parallelotope is the square
871
00:50:29,230 --> 00:50:34,770
root of 3/2, what's the
area of this hexagon?
872
00:50:34,770 --> 00:50:35,840
The same, right?
873
00:50:35,840 --> 00:50:37,920
They're both space filling when
874
00:50:37,920 --> 00:50:42,810
centered on lattice points.
875
00:50:42,810 --> 00:50:46,290
So they have to have
the same area.
876
00:50:46,290 --> 00:50:48,880
So that's an easy way of
computing the area of a
877
00:50:48,880 --> 00:50:54,350
hexagon, or of this particular
hexagon anyway.
878
00:50:54,350 --> 00:50:57,020
But you would get the same
result in either way.
879
00:50:57,020 --> 00:51:03,630
So comparing this with
Z-squared, I now have a
880
00:51:03,630 --> 00:51:06,160
quantitative measure
of how much
881
00:51:06,160 --> 00:51:10,740
denser A2 is than Z-squared.
882
00:51:10,740 --> 00:51:13,870
Put in one way, if I fix the
minimum squared distance of
883
00:51:13,870 --> 00:51:16,440
each of them to be 1, if I'm
doing penny-packing and I say
884
00:51:16,440 --> 00:51:20,160
I want pennies of a certain
radius to be packed into
885
00:51:20,160 --> 00:51:26,530
two-dimensional space, then
this is 1 over the volume,
886
00:51:26,530 --> 00:51:28,450
which is basically
the density.
887
00:51:28,450 --> 00:51:33,140
The density of A2 is 2 over root
3 times as great
888
00:51:33,140 --> 00:51:35,540
as the density of points
in Z-squared.
889
00:51:35,540 --> 00:51:40,480
I get more points per unit area
with the same distance
890
00:51:40,480 --> 00:51:44,210
between them using A2 than
I can with Z-squared.
891
00:51:44,210 --> 00:51:49,510
So that's the significance
of this parameter.
892
00:51:49,510 --> 00:51:52,460
So it's an obvious thing for
Hermite to have come up with.
893
00:51:52,460 --> 00:51:55,380
It's going to turn out to be
equally fundamental for us.
894
00:51:55,380 --> 00:52:00,920
895
00:52:00,920 --> 00:52:04,710
So that may be all we want
to do on lattices
896
00:52:04,710 --> 00:52:07,754
for the time being.
897
00:52:07,754 --> 00:52:09,530
Yeah, seems to be.
898
00:52:09,530 --> 00:52:14,150
899
00:52:14,150 --> 00:52:18,760
So we're talking about lattice
constellations.
900
00:52:18,760 --> 00:52:21,110
The other thing we have
to develop is the
901
00:52:21,110 --> 00:52:22,360
properties of regions.
902
00:52:22,360 --> 00:52:25,490
903
00:52:25,490 --> 00:52:32,050
So let me talk about
regions, R --
904
00:52:32,050 --> 00:52:36,460
I'm thinking of a compact
region in an
905
00:52:36,460 --> 00:52:38,397
n-dimensional space.
906
00:52:38,397 --> 00:52:42,160
So again, let me fix
the dimension to n.
907
00:52:42,160 --> 00:52:44,690
And we're going to have
a certain volume.
908
00:52:44,690 --> 00:52:49,570
That's obviously an important
characteristic of a region.
909
00:52:49,570 --> 00:52:52,520
And I'm going to assume
that this is finite.
910
00:52:52,520 --> 00:52:57,650
And what's the other thing
that we basically want?
911
00:52:57,650 --> 00:52:59,920
Well, when I pick a
constellation like this, by
912
00:52:59,920 --> 00:53:11,020
intersecting a lattice with a
region, eventually I'm going
913
00:53:11,020 --> 00:53:14,695
to let the points in this
constellation be equiprobable,
914
00:53:14,695 --> 00:53:18,170
as I always do in digital
communications.
915
00:53:18,170 --> 00:53:19,190
16 points.
916
00:53:19,190 --> 00:53:22,570
Say they all have probability 1/16,
the uniform probability
917
00:53:22,570 --> 00:53:23,970
over the discrete points.
918
00:53:23,970 --> 00:53:29,750
919
00:53:29,750 --> 00:53:30,260
One of the approximations I am
going to make -- so what's the
920
00:53:30,260 --> 00:53:33,430
average power of that
constellation?
921
00:53:33,430 --> 00:53:41,270
We could figure it out by taking
1/16 times the energy.
922
00:53:41,270 --> 00:53:43,250
I should say the average
energy, I'm sorry.
923
00:53:43,250 --> 00:53:44,300
We're not talking power here.
924
00:53:44,300 --> 00:53:47,450
We're talking energy, but I
read it as P. What's the
925
00:53:47,450 --> 00:53:48,270
average energy?
926
00:53:48,270 --> 00:53:52,980
It's 1/16 times the L2
norm of each of these
927
00:53:52,980 --> 00:53:55,045
points, if you like that.
928
00:53:55,045 --> 00:53:59,940
It's simply the Euclidean
squared norm.
929
00:53:59,940 --> 00:54:02,760
930
00:54:02,760 --> 00:54:05,210
And we get some average.
931
00:54:05,210 --> 00:54:08,090
But here's the key approximation
principle.
932
00:54:08,090 --> 00:54:11,240
I'm going to say the average
power of this constellation,
933
00:54:11,240 --> 00:54:14,280
especially as the constellations
get large, is
934
00:54:14,280 --> 00:54:16,860
simply going to be approximated
as the average
935
00:54:16,860 --> 00:54:20,890
energy of a uniform distribution
over this
936
00:54:20,890 --> 00:54:23,090
bounding region.
937
00:54:23,090 --> 00:54:27,670
Suppose I simply put a uniform
continuous distribution over
938
00:54:27,670 --> 00:54:33,080
this circle, this 2-sphere
that I've used as my
939
00:54:33,080 --> 00:54:34,650
cookie cutter.
940
00:54:34,650 --> 00:54:36,950
I'm going to say that's going to
be a good approximation to
941
00:54:36,950 --> 00:54:41,080
the average energy of the 16
discrete points, which are, by
942
00:54:41,080 --> 00:54:44,560
construction, they're spread
uniformly over this region.
943
00:54:44,560 --> 00:54:49,600
The approximation is, I'm kind
of approximating a unit
944
00:54:49,600 --> 00:54:52,770
impulse of weight there
by distributed
945
00:54:52,770 --> 00:54:55,170
weight across its--
946
00:54:55,170 --> 00:54:58,620
well, actually across its
whole little region.
947
00:54:58,620 --> 00:55:02,280
If we have a hexagon around each
one, uniform distribution
948
00:55:02,280 --> 00:55:04,206
over the hexagon.
949
00:55:04,206 --> 00:55:08,640
And if that's a reasonable
approximation, then it's
950
00:55:08,640 --> 00:55:11,010
reasonable to say that the
average power of the whole
951
00:55:11,010 --> 00:55:14,840
constellation is just the
average power of a uniform
952
00:55:14,840 --> 00:55:16,175
distribution over this region.
953
00:55:16,175 --> 00:55:19,100
954
00:55:19,100 --> 00:55:23,460
It's good exercise to try it for
some moderate size cases.
955
00:55:23,460 --> 00:55:26,730
And you'll find it is a very
good approximation.
956
00:55:26,730 --> 00:55:29,520
In any case, it's the one
we're going to use.
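The continuous approximation is easy to test numerically. The sketch below (added in editing; 16-QAM and its bounding square are my choice of example, not the circle on the board) compares the exact average energy per dimension with the uniform-distribution value:

```python
import itertools

# 16-QAM: the 16 points of 2Z^2 + (1,1) inside the square [-4, 4]^2.
points = list(itertools.product([-3, -1, 1, 3], repeat=2))

# Exact average energy per dimension of the discrete constellation.
p_discrete = sum(x * x + y * y for x, y in points) / (len(points) * 2)

# Continuous approximation: average energy per dimension of a uniform
# distribution over the bounding square [-4, 4]^2 is E[x^2] = 4^2 / 3.
p_uniform = 16 / 3
```

The discrete value 5.0 and the continuous value 16/3 agree to within about six percent, and the match improves as the constellation grows.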
957
00:55:29,520 --> 00:55:39,260
So the other key parameter is
going to be P(R), which is, in
958
00:55:39,260 --> 00:55:53,665
words, the average energy of
uniform distribution over R.
959
00:55:53,665 --> 00:56:02,500
Sorry, script R. Now, when we
write this out, this actually
960
00:56:02,500 --> 00:56:05,730
gets a little formidable.
961
00:56:05,730 --> 00:56:09,160
So I think it's better to
remember what we're actually
962
00:56:09,160 --> 00:56:11,920
trying to compute.
963
00:56:11,920 --> 00:56:14,700
Oh, and one last thing,
we're going to
964
00:56:14,700 --> 00:56:16,200
normalize it per dimension.
965
00:56:16,200 --> 00:56:22,480
966
00:56:22,480 --> 00:56:23,570
So what do I mean?
967
00:56:23,570 --> 00:56:28,630
What's a uniform distribution
over R?
968
00:56:28,630 --> 00:56:33,670
969
00:56:33,670 --> 00:56:44,030
This is p(x) equals 1 over
the volume of R for
970
00:56:44,030 --> 00:56:49,920
x in R and 0 otherwise.
971
00:56:49,920 --> 00:56:56,330
972
00:56:56,330 --> 00:56:58,590
Uniform within, 0 outside.
973
00:56:58,590 --> 00:56:59,540
And what does it
have to equal?
974
00:56:59,540 --> 00:57:03,040
It has to be equal to 1 over
V(R), because its integral has
975
00:57:03,040 --> 00:57:07,260
to be equal to 1 if it's a
probability distribution.
976
00:57:07,260 --> 00:57:17,790
So to compute P(R), we basically
take the integral
977
00:57:17,790 --> 00:57:27,030
over R of this probability
distribution, 1
978
00:57:27,030 --> 00:57:30,680
over V(R), times what?
979
00:57:30,680 --> 00:57:32,640
The energy per dimension.
980
00:57:32,640 --> 00:57:43,070
The total energy of x is
x-squared, the L2 norm of x.
981
00:57:43,070 --> 00:57:44,910
Normalize per dimension --
we're going to put an
982
00:57:44,910 --> 00:57:47,080
n over there --
983
00:57:47,080 --> 00:57:48,330
dx.
984
00:57:48,330 --> 00:57:50,620
985
00:57:50,620 --> 00:57:54,470
So that's an equation for what
it is I wanted to compute.
986
00:57:54,470 --> 00:58:00,380
987
00:58:00,380 --> 00:58:02,770
I think it's much harder to
remember the terms in this
988
00:58:02,770 --> 00:58:09,270
than to remember this verbal
description of what we want.
989
00:58:09,270 --> 00:58:17,220
But you may be more literate
than I. Now, let's define --
990
00:58:17,220 --> 00:58:22,270
we want to normalize quantity
comparable to Hermite or
991
00:58:22,270 --> 00:58:24,550
coding gain over here.
992
00:58:24,550 --> 00:58:27,960
The normalized quantity
we're going to use --
993
00:58:27,960 --> 00:58:35,170
we want to take P(R) -- and what
should we normalize it by
994
00:58:35,170 --> 00:58:39,220
to make it scale-invariant,
let's say again?
995
00:58:39,220 --> 00:58:43,350
Again, if I take the region, I
scale the region by alpha and
996
00:58:43,350 --> 00:58:45,590
get alpha R.
997
00:58:45,590 --> 00:58:48,550
What's going to happen to the
average energy of a uniform
998
00:58:48,550 --> 00:58:49,850
distribution over R?
999
00:58:49,850 --> 00:58:53,070
1000
00:58:53,070 --> 00:58:54,450
AUDIENCE: [INAUDIBLE].
1001
00:58:54,450 --> 00:58:57,760
PROFESSOR: It's going to
go up as alpha squared.
1002
00:58:57,760 --> 00:59:02,430
Energy scales as the
square of the scale factor.
1003
00:59:02,430 --> 00:59:09,850
So again, I want to normalize
it by V(R) to the 2 over n,
1004
00:59:09,850 --> 00:59:12,390
because this is also something
that will go up as alpha
1005
00:59:12,390 --> 00:59:17,130
squared if I scale R by alpha.
1006
00:59:17,130 --> 00:59:20,960
1007
00:59:20,960 --> 00:59:31,830
So this is what's known as the
dimension-less or normalized--
1008
00:59:31,830 --> 00:59:33,380
it has both names--
1009
00:59:33,380 --> 00:59:38,060
1010
00:59:38,060 --> 00:59:39,310
second moment.
1011
00:59:39,310 --> 00:59:44,190
1012
00:59:44,190 --> 00:59:50,970
Another name for this is the
second moment per dimension of
1013
00:59:50,970 --> 00:59:54,660
a uniform distribution over
R. This is a normalized or
1014
00:59:54,660 --> 00:59:58,620
dimension-less second moment.
1015
00:59:58,620 --> 01:00:02,200
I think I call it normalized in
the notes, but it's always
1016
01:00:02,200 --> 01:00:05,100
called G in the literature.
1017
01:00:05,100 --> 01:00:09,420
And let me just introduce
here--
1018
01:00:09,420 --> 01:00:14,810
well, what's the normalized
second moment of an interval?
1019
01:00:14,810 --> 01:00:19,590
Say, a symmetric interval
around the origin.
1020
01:00:19,590 --> 01:00:22,510
G of [-1, 1], which
is like the interval
1021
01:00:22,510 --> 01:00:26,330
that we used to pull out the
m-PAM signal structure.
1022
01:00:26,330 --> 01:00:34,670
So let's take n to be equal to
1, R just to be the interval
1023
01:00:34,670 --> 01:00:37,680
from -1 to +1.
1024
01:00:37,680 --> 01:00:42,206
And V(R) equals what?
1025
01:00:42,206 --> 01:00:44,380
It's the length of
R, which is 2.
1026
01:00:44,380 --> 01:00:49,640
1027
01:00:49,640 --> 01:00:50,970
What's P(R)?
1028
01:00:50,970 --> 01:00:54,860
1029
01:00:54,860 --> 01:01:02,250
The integral from -1 to 1, x
squared, dx, which is 2/3.
1030
01:01:02,250 --> 01:01:07,490
1031
01:01:07,490 --> 01:01:09,716
I think so.
1032
01:01:09,716 --> 01:01:10,966
AUDIENCE: [INAUDIBLE].
1033
01:01:10,966 --> 01:01:14,150
1034
01:01:14,150 --> 01:01:15,540
PROFESSOR: Right, 1/2.
1035
01:01:15,540 --> 01:01:16,790
Thank you.
1036
01:01:16,790 --> 01:01:18,660
1037
01:01:18,660 --> 01:01:19,910
That's 1/3.
1038
01:01:19,910 --> 01:01:24,290
1039
01:01:24,290 --> 01:01:28,290
All right, so what is G(R) going
to be, or G going to be?
1040
01:01:28,290 --> 01:01:33,410
This is going to be P(R)
over V(R)-squared.
1041
01:01:33,410 --> 01:01:36,070
1042
01:01:36,070 --> 01:01:40,840
And so this is going to be
1/3 over 4, or 1/12.
1043
01:01:40,840 --> 01:01:45,050
1044
01:01:45,050 --> 01:01:49,480
OK, that's not as nice as
this over here, but
1045
01:01:49,480 --> 01:01:51,940
it is what it is.
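As a check of the 1/12 result (an editorial sketch, not from the lecture), the normalized second moment of any symmetric interval comes out the same:

```python
# Normalized second moment G(R) = P(R) / V(R)^(2/n) for an interval
# R = [-a, a] in n = 1 dimension.
def g_interval(a):
    vol = 2 * a                  # V(R), the length of the interval
    p = (2 * a**3 / 3) / vol     # P(R) = (1/V) * integral of x^2 dx = a^2/3
    return p / vol**2            # V^(2/n) = V^2 when n = 1, giving 1/12
```

Whatever a you plug in, g_interval(a) returns 1/12, which is the scale-invariance the normalization was designed to give.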
1046
01:01:51,940 --> 01:01:56,000
There's a number that shows up
in quantization, for instance,
1047
01:01:56,000 --> 01:01:58,890
uniform scalar quantizer, you'll
find a 1/12 in there.
1048
01:01:58,890 --> 01:01:59,590
Why?
1049
01:01:59,590 --> 01:02:00,840
It's because of this equation.
1050
01:02:00,840 --> 01:02:07,901
1051
01:02:07,901 --> 01:02:11,830
If we ask again about the
invariances of this--
1052
01:02:11,830 --> 01:02:15,000
this is the last homework
problem for this week--
1053
01:02:15,000 --> 01:02:18,390
it's invariant not only to
scaling, but, again, it's
1054
01:02:18,390 --> 01:02:20,810
invariant to orthogonal
transformations, because
1055
01:02:20,810 --> 01:02:23,360
orthogonal transformations
won't affect
1056
01:02:23,360 --> 01:02:24,420
either of these things.
1057
01:02:24,420 --> 01:02:28,210
A rigid rotation, about
the origin, won't
1058
01:02:28,210 --> 01:02:32,330
affect V(R) or P(R).
1059
01:02:32,330 --> 01:02:35,110
And furthermore, it's invariant
to Cartesian
1060
01:02:35,110 --> 01:02:37,045
products, again.
1061
01:02:37,045 --> 01:02:40,920
If we just have a series of
regions, a region made up
1062
01:02:40,920 --> 01:02:47,370
basically as the Cartesian
product, like an n-cube is a
1063
01:02:47,370 --> 01:02:48,540
Cartesian product.
1064
01:02:48,540 --> 01:02:52,350
We just pick each of the
dimensions independently from
1065
01:02:52,350 --> 01:02:57,360
some set R. Then that's not
going to affect the normalized
1066
01:02:57,360 --> 01:02:58,450
second moment either.
1067
01:02:58,450 --> 01:03:01,600
Exercise for the student.
1068
01:03:01,600 --> 01:03:11,110
So from that result, I can say
what G([-1, 1]^m) is.
1069
01:03:11,110 --> 01:03:12,360
Which is what?
1070
01:03:12,360 --> 01:03:14,640
1071
01:03:14,640 --> 01:03:17,275
This is an m-cube of side 2.
1072
01:03:17,275 --> 01:03:20,570
1073
01:03:20,570 --> 01:03:28,050
If I say each of the
coordinates, x1, x2, x3, up to
1074
01:03:28,050 --> 01:03:35,690
xm, with each xk in [-1, 1],
then what I'm going to
1075
01:03:35,690 --> 01:03:39,360
get, the region defined by that
is an m-cube, centered
1076
01:03:39,360 --> 01:03:41,150
on the origin of side 2.
1077
01:03:41,150 --> 01:03:44,260
1078
01:03:44,260 --> 01:03:47,585
So the normalized second
moment of that
1079
01:03:47,585 --> 01:03:48,835
is going to be 1/12.
1080
01:03:48,835 --> 01:03:52,650
1081
01:03:52,650 --> 01:03:55,770
This is where the shortcut
begins to help you.
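The Cartesian-product invariance can be verified directly (an editorial sketch, not from the lecture):

```python
# G of the m-cube [-1, 1]^m: a Cartesian product of intervals, so the
# normalized second moment stays at 1/12 for every m.
def g_cube(m):
    p = 1 / 3                    # E[x_k^2] per dimension, uniform on [-1, 1]
    vol = 2.0 ** m               # volume of the m-cube of side 2
    return p / vol ** (2 / m)    # V^(2/m) = 4 independent of m
```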
1082
01:03:55,770 --> 01:03:59,340
1083
01:03:59,340 --> 01:04:02,560
I guess I'm trying to put
too much on one board.
1084
01:04:02,560 --> 01:04:05,280
But the last thing I want
to put up on here
1085
01:04:05,280 --> 01:04:07,320
is the shaping gain.
1086
01:04:07,320 --> 01:04:17,760
The shaping gain of a region is
defined as
1087
01:04:17,760 --> 01:04:23,120
1/12 --
1088
01:04:23,120 --> 01:04:27,460
So we're kind of taking as a
baseline, the m-cube, in any
1089
01:04:27,460 --> 01:04:31,610
number of dimensions n,
over G(the region).
1090
01:04:31,610 --> 01:04:36,190
1091
01:04:36,190 --> 01:04:41,460
In other words, how much less
is the dimension-less second
1092
01:04:41,460 --> 01:04:44,830
moment, the normalized second
moment, than what you would
1093
01:04:44,830 --> 01:04:47,450
get for an m-cube?
1094
01:04:47,450 --> 01:04:56,650
In every dimension except for
one, a sphere is going to be a
1095
01:04:56,650 --> 01:05:02,360
better region, from a second
moment point of
1096
01:05:02,360 --> 01:05:05,330
view, than an m-cube.
1097
01:05:05,330 --> 01:05:07,460
An m-cube has corners.
1098
01:05:07,460 --> 01:05:11,540
Really, it's pretty obvious
that, if you want to minimize
1099
01:05:11,540 --> 01:05:14,310
the normalized second moment in
any number of dimensions,
1100
01:05:14,310 --> 01:05:19,270
you would choose an m-sphere,
because if you have any
1101
01:05:19,270 --> 01:05:22,440
wrinkle that comes out of an
m-sphere, anything that's not
1102
01:05:22,440 --> 01:05:24,810
an m-sphere, you should try
to squash it back in it.
1103
01:05:24,810 --> 01:05:26,150
And that'll make
it an m-sphere.
1104
01:05:26,150 --> 01:05:29,430
You'll take higher energy and
make it lower energy for the
1105
01:05:29,430 --> 01:05:35,100
same little unit of Play-Doh
or mass stuff.
1106
01:05:35,100 --> 01:05:41,060
So a sphere will give you the
best normalized second moment
1107
01:05:41,060 --> 01:05:44,160
in any number of dimensions n,
so you're always going to get
1108
01:05:44,160 --> 01:05:46,830
a G(R) that's less than 1/12.
1109
01:05:46,830 --> 01:05:48,220
That's good.
1110
01:05:48,220 --> 01:05:52,000
And this measures the
amount of gain.
1111
01:05:52,000 --> 01:05:55,026
Again, you can measure
this in dB.
1112
01:05:55,026 --> 01:05:57,630
That's going to translate
directly to dB.
1113
01:05:57,630 --> 01:06:03,080
1114
01:06:03,080 --> 01:06:06,190
I suppose I could do an example
for the sphere in two
1115
01:06:06,190 --> 01:06:07,150
dimensions, but I won't.
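For the two-dimensional sphere example the lecture skips, the same definitions give pi/3, about 0.2 dB (an editorial sketch that follows the definitions above; it is not worked on the board):

```python
import math

# Uniform over a disk of radius r has E[||x||^2] = r^2/2, so the
# per-dimension second moment is P = r^2/4; V = pi*r^2, and with
# n = 2, V^(2/n) = V.
def shaping_gain_disk_dB():
    r = 1.0
    g = (r**2 / 4) / (math.pi * r**2)   # G(disk) = 1/(4*pi)
    gamma_s = (1 / 12) / g              # shaping gain = pi/3
    return 10 * math.log10(gamma_s)     # about 0.2 dB
```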
1116
01:06:07,150 --> 01:06:08,400
How are we doing on time?
1117
01:06:08,400 --> 01:06:20,080
1118
01:06:20,080 --> 01:06:24,980
Let me see if I can do the
union-bound estimate.
1119
01:06:24,980 --> 01:06:28,310
1120
01:06:28,310 --> 01:06:33,870
That would be a trick, but I
think it's doable, because
1121
01:06:33,870 --> 01:06:35,120
this is where we're going.
1122
01:06:35,120 --> 01:06:38,160
1123
01:06:38,160 --> 01:06:49,750
So the union-bound estimate,
which is something we did way
1124
01:06:49,750 --> 01:06:58,650
back in chapter five or
something, is basically, given
1125
01:06:58,650 --> 01:07:01,740
a constellation, which we're
going to have a lattice
1126
01:07:01,740 --> 01:07:04,890
constellation based
on some lattice
1127
01:07:04,890 --> 01:07:07,765
lambda and some region.
1128
01:07:07,765 --> 01:07:14,200
1129
01:07:14,200 --> 01:07:19,410
So this is like our m-by-m QAM,
or it might be this guy,
1130
01:07:19,410 --> 01:07:24,930
which is better than 16-QAM,
in some sense.
1131
01:07:24,930 --> 01:07:26,640
In two senses, actually.
1132
01:07:26,640 --> 01:07:29,440
1133
01:07:29,440 --> 01:07:31,670
The union-bound estimate,
what is it?
1134
01:07:31,670 --> 01:07:34,310
1135
01:07:34,310 --> 01:07:37,605
The probability of error is
approximately equal to--
1136
01:07:37,605 --> 01:07:40,280
1137
01:07:40,280 --> 01:07:41,910
let me do it per block.
1138
01:07:41,910 --> 01:07:46,120
So I'll have the number of
nearest neighbors per block of
1139
01:07:46,120 --> 01:07:54,650
our constellation times Q
of the square root of
1140
01:07:54,650 --> 01:07:57,890
d_min-squared of our
constellation,
1141
01:07:57,890 --> 01:07:58,960
over what is it?
1142
01:07:58,960 --> 01:08:00,210
4 sigma squared?
1143
01:08:00,210 --> 01:08:05,930
1144
01:08:05,930 --> 01:08:07,180
4 sigma squared.
1145
01:08:07,180 --> 01:08:15,600
1146
01:08:15,600 --> 01:08:19,859
And if you remember how we got
this, we took the union-bound,
1147
01:08:19,859 --> 01:08:24,460
we took all the pairwise error
probabilities, and from that,
1148
01:08:24,460 --> 01:08:31,830
we get K_d contributions at each
1149
01:08:31,830 --> 01:08:33,380
distance d.
1150
01:08:33,380 --> 01:08:35,340
We added them all up
in the union-bound.
1151
01:08:35,340 --> 01:08:39,779
We said, well, for moderate
complexity and performance, at
1152
01:08:39,779 --> 01:08:41,300
least this is going
to be dominated
1153
01:08:41,300 --> 01:08:42,920
by the minimum distance.
1154
01:08:42,920 --> 01:08:46,109
So this is the minimum
distance terms.
1155
01:08:46,109 --> 01:08:49,310
This is the number of minimum
distance terms.
1156
01:08:49,310 --> 01:08:50,739
And this is error
probability --
1157
01:08:50,739 --> 01:08:53,220
the pairwise error probability
we get for each minimum
1158
01:08:53,220 --> 01:08:54,470
distance term.
1159
01:08:54,470 --> 01:09:00,300
1160
01:09:00,300 --> 01:09:07,200
Now, to compute this, I'm
going to use three
1161
01:09:07,200 --> 01:09:12,390
approximations, but two
important ones, this
1162
01:09:12,390 --> 01:09:15,370
continuous approximation that
I've already referred to.
1163
01:09:15,370 --> 01:09:18,670
1164
01:09:18,670 --> 01:09:25,140
First, I'm going to say that
the average power of my
1165
01:09:25,140 --> 01:09:33,470
constellation is approximately
equal to just the second
1166
01:09:33,470 --> 01:09:46,609
moment of R. The average energy
per dimension of R.
1167
01:09:46,609 --> 01:09:51,080
And then I'm going to say,
furthermore, how many points
1168
01:09:51,080 --> 01:09:52,279
are there in the
constellation?
1169
01:09:52,279 --> 01:09:54,700
This will affect what
rate I can go at.
1170
01:09:54,700 --> 01:09:57,300
1171
01:09:57,300 --> 01:10:01,350
The average number of points
in the constellation --
1172
01:10:01,350 --> 01:10:03,110
also mention this--
1173
01:10:03,110 --> 01:10:08,470
is going to be approximately
equal to simply the volume of
1174
01:10:08,470 --> 01:10:14,800
my bounding region, V(R), over
the volume taken up by each
1175
01:10:14,800 --> 01:10:16,590
point in my lattice.
1176
01:10:16,590 --> 01:10:20,330
1177
01:10:20,330 --> 01:10:23,590
That makes sense, doesn't it?
1178
01:10:23,590 --> 01:10:28,580
What I want to do here to get
16 points is I take a sphere
1179
01:10:28,580 --> 01:10:32,590
whose area is roughly 16 times
the area of one of these
1180
01:10:32,590 --> 01:10:35,910
little hexagons or one of these
little parallelograms.
1181
01:10:35,910 --> 01:10:39,840
And I'll get about 16 points,
because that's
1182
01:10:39,840 --> 01:10:43,200
the density of it.
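The point-count approximation is easy to check numerically (an editorial sketch; the disk radius is an arbitrary choice, not a number from the lecture):

```python
import math

# Check |C| ~ V(R)/V(Lambda): count points of the hexagonal lattice
# (generators (1, 0) and (1/2, sqrt(3)/2)) inside a disk of radius r.
def count_in_disk(r):
    s3 = math.sqrt(3) / 2
    lim = int(2 * r) + 2            # generous bound on integer coefficients
    count = 0
    for a in range(-lim, lim + 1):
        for b in range(-lim, lim + 1):
            x, y = a + 0.5 * b, s3 * b
            if x * x + y * y <= r * r:
                count += 1
    return count

r = 10.0
estimate = math.pi * r * r / (math.sqrt(3) / 2)  # V(R)/V(A2), about 363
actual = count_in_disk(r)                        # within a few percent
```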
1183
01:10:43,200 --> 01:10:48,400
And there's a final little
approximation, which is that
1184
01:10:48,400 --> 01:10:50,710
I'm going to assume that we're
somewhere in the interior of
1185
01:10:50,710 --> 01:10:55,450
the lattice, so that the average
number of nearest
1186
01:10:55,450 --> 01:10:59,760
neighbors in the constellation
is simply equal to the number
1187
01:10:59,760 --> 01:11:01,125
of nearest neighbors
in the lattice.
1188
01:11:01,125 --> 01:11:04,990
So that's a reasonable
approximation, and that's a
1189
01:11:04,990 --> 01:11:06,310
very obvious one.
1190
01:11:06,310 --> 01:11:09,960
If I take any constellation
based on the hexagonal
1191
01:11:09,960 --> 01:11:12,720
lattice, for a large enough
constellation, the average
1192
01:11:12,720 --> 01:11:15,490
number of nearest neighbors
is going to be 6.
1193
01:11:15,490 --> 01:11:18,110
If I take it on the square
lattice, the average number of
1194
01:11:18,110 --> 01:11:19,990
nearest neighbors is
going to be 4.
1195
01:11:19,990 --> 01:11:25,380
But we know we're not terribly
interested in that either.
1196
01:11:25,380 --> 01:11:28,730
1197
01:11:28,730 --> 01:11:33,490
Let me do a little manipulation
here.
1198
01:11:33,490 --> 01:11:45,000
I'm going to write this as Q of
d_min-squared of c, which
1199
01:11:45,000 --> 01:11:49,600
clearly I'm going to also assume
that d_min-squared of c
1200
01:11:49,600 --> 01:11:53,460
is equal to d_min-squared
of the lattice.
1201
01:11:53,460 --> 01:11:55,940
So I'll make that d_min-squared
of the lattice.
1202
01:11:55,940 --> 01:11:58,810
1203
01:11:58,810 --> 01:12:07,670
And I want to normalize that
by V(lambda) to the 2/n,
1204
01:12:07,670 --> 01:12:12,620
whatever dimension n I'm in,
to get something that we're
1205
01:12:12,620 --> 01:12:13,450
familiar with.
1206
01:12:13,450 --> 01:12:17,010
So I need a V(lambda) to
the 2 over n.
1207
01:12:17,010 --> 01:12:23,180
And anticipating a little,
I'm going to put a V(R)
1208
01:12:23,180 --> 01:12:25,864
to the 2 over n.
1209
01:12:25,864 --> 01:12:32,035
And then over here, I'm going
to put V(R) over--
1210
01:12:32,035 --> 01:12:33,450
Is this going to come
out right?--
1211
01:12:33,450 --> 01:12:38,980
to the 2 over n, over--
1212
01:12:38,980 --> 01:12:40,230
this seems upside-down.
1213
01:12:40,230 --> 01:13:00,100
1214
01:13:00,100 --> 01:13:01,570
Ah, no, it isn't upside-down.
1215
01:13:01,570 --> 01:13:05,960
1216
01:13:05,960 --> 01:13:10,560
Because I'm using this
expression here, I
1217
01:13:10,560 --> 01:13:11,810
do want 1 over G(R).
1218
01:13:11,810 --> 01:13:16,340
1219
01:13:16,340 --> 01:13:20,450
So this is 1 over G(R).
1220
01:13:20,450 --> 01:13:29,170
If I multiply this by 1/12, now
I get 1 over G(R), so I
1221
01:13:29,170 --> 01:13:34,440
need to put a 1/12 here.
1222
01:13:34,440 --> 01:13:42,255
And then I have a P(R)
over 4 sigma squared.
1223
01:13:42,255 --> 01:13:44,810
1224
01:13:44,810 --> 01:13:46,810
So does everyone agree I
can write it that way?
1225
01:13:46,810 --> 01:13:50,260
1226
01:13:50,260 --> 01:13:52,280
This'll be good.
1227
01:13:52,280 --> 01:13:53,570
Go out with a bang.
1228
01:13:53,570 --> 01:13:56,200
1229
01:13:56,200 --> 01:13:59,220
So what are all these things?
1230
01:13:59,220 --> 01:14:03,995
This is the coding
gain of lambda.
1231
01:14:03,995 --> 01:14:06,860
1232
01:14:06,860 --> 01:14:13,090
This, according to this, is the
size of the constellation
1233
01:14:13,090 --> 01:14:15,960
to the 2 over n,
approximately.
1234
01:14:15,960 --> 01:14:23,900
The rate of the constellation
is log base 2 of the size of
1235
01:14:23,900 --> 01:14:32,020
C(lambda, R). So the rate
is equal to just
1236
01:14:32,020 --> 01:14:33,750
log 2 of this quantity.
1237
01:14:33,750 --> 01:14:36,450
1238
01:14:36,450 --> 01:14:42,680
So I can say 2 to the R. This
is 2 to the R equals this.
1239
01:14:42,680 --> 01:14:54,780
So this would be 2 to the
2R over n.
1240
01:14:54,780 --> 01:14:56,880
Is that right?
1241
01:14:56,880 --> 01:15:00,280
Again, I'm unsure if
that's what I want.
1242
01:15:00,280 --> 01:15:06,680
This is the shaping gain of the
region, these three things
1243
01:15:06,680 --> 01:15:12,790
together, times 3,
times-- what's
1244
01:15:12,790 --> 01:15:14,440
P(R) over sigma squared?
1245
01:15:14,440 --> 01:15:15,960
That's the SNR.
1246
01:15:15,960 --> 01:15:19,130
That's the average power
per dimension.
1247
01:15:19,130 --> 01:15:21,130
And this is the average noise
power per dimension.
1248
01:15:21,130 --> 01:15:24,230
1249
01:15:24,230 --> 01:15:26,740
I've used up these in that.
1250
01:15:26,740 --> 01:15:28,530
So I've got a 1 over 1/12.
1251
01:15:28,530 --> 01:15:29,890
That's 12 times that.
1252
01:15:29,890 --> 01:15:32,270
So this is 3 times SNR.
1253
01:15:32,270 --> 01:15:35,340
1254
01:15:35,340 --> 01:15:39,200
And let's see.
1255
01:15:39,200 --> 01:15:45,657
This is 2R over n is
just 2 to the--
1256
01:15:45,657 --> 01:15:47,525
AUDIENCE: Isn't that just 2R?
1257
01:15:47,525 --> 01:15:49,753
Because [UNINTELLIGIBLE]
normalized by the dimension.
1258
01:15:49,753 --> 01:15:51,003
PROFESSOR: Yeah.
1259
01:15:51,003 --> 01:15:58,320
1260
01:15:58,320 --> 01:16:05,830
And furthermore, it's minus,
because it's upside-down.
1261
01:16:05,830 --> 01:16:11,480
So it's 2 to the - 2R.
1262
01:16:11,480 --> 01:16:15,360
Actually, what I want is the
spectral efficiency.
1263
01:16:15,360 --> 01:16:21,330
And the spectral efficiency is
the rate per two dimensions.
1264
01:16:21,330 --> 01:16:23,435
So this is just 2 to
the minus rho.
1265
01:16:23,435 --> 01:16:30,060
1266
01:16:30,060 --> 01:16:34,030
So with a little help for
my friends, what's SNR
1267
01:16:34,030 --> 01:16:35,280
over 2 to the rho?
1268
01:16:35,280 --> 01:16:39,320
1269
01:16:39,320 --> 01:16:39,780
AUDIENCE: [INAUDIBLE].
1270
01:16:39,780 --> 01:16:42,210
PROFESSOR: It's approximately
SNR norm.
1271
01:16:42,210 --> 01:16:44,960
Certainly true as rho
becomes large.
1272
01:16:44,960 --> 01:16:48,320
It's actually SNR over
(2 to the rho) minus 1.
1273
01:16:48,320 --> 01:16:51,020
So this is SNR norm.
1274
01:16:51,020 --> 01:16:53,890
1275
01:16:53,890 --> 01:16:58,830
OK, so with a little hand waving
and a few high SNR
1276
01:16:58,830 --> 01:17:09,960
approximations, we get that, in
summary, for the UBE, the
1277
01:17:09,960 --> 01:17:17,930
probability of error is
approximately K_min(lambda)
1278
01:17:17,930 --> 01:17:23,600
times Q of the square root
of the coding gain of the
1279
01:17:23,600 --> 01:17:29,320
lattice, times the shaping
gain of the
1280
01:17:29,320 --> 01:17:36,010
region, times 3SNR norm.
1281
01:17:36,010 --> 01:17:40,860
1282
01:17:40,860 --> 01:17:42,230
And that's a wonderful
approximation.
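Putting the pieces together (an editorial sketch; the SNR value and the gains and neighbor counts fed in are illustrative choices, not numbers from the lecture):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# Union-bound estimate:
# Pr(E) ~ K_min * Q(sqrt(gamma_c * gamma_s * 3 * SNRnorm)).
def ube(snr_norm_db, gamma_c=1.0, gamma_s=1.0, k_min=4):
    snr_norm = 10 ** (snr_norm_db / 10)
    return k_min * Q(math.sqrt(gamma_c * gamma_s * 3 * snr_norm))

baseline = ube(9.0)                      # m-PAM / m-by-m QAM baseline
hexagonal = ube(9.0, gamma_c=2 / math.sqrt(3), k_min=6)   # A2 lattice
```

Even with six nearest neighbors instead of four, the 0.6 dB coding gain of A2 pulls the estimated error probability below the baseline, which is the separation-of-effects point made above.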
1283
01:17:42,230 --> 01:17:45,590
1284
01:17:45,590 --> 01:17:48,380
Let me just call out some of
its properties to you, and
1285
01:17:48,380 --> 01:17:52,550
then we'll talk about
them next time.
1286
01:17:52,550 --> 01:17:55,450
But first of all, there's
nothing here--
1287
01:17:55,450 --> 01:17:59,790
all these terms involve either
lambda or R. So we have
1288
01:17:59,790 --> 01:18:05,190
completely separated, in this
analysis, the effects of the
1289
01:18:05,190 --> 01:18:07,180
region from the effects
of the lattice.
1290
01:18:07,180 --> 01:18:12,240
We can choose the lattice and
the region independently.
1291
01:18:12,240 --> 01:18:18,810
And they independently affect
how good a performance we get.
1292
01:18:18,810 --> 01:18:26,260
For the baseline, which was
either m-PAM or m-by-m QAM, we
1293
01:18:26,260 --> 01:18:33,680
have set it up so that
1294
01:18:33,680 --> 01:18:39,280
both the coding gain of the
lattice and the shaping gain
1295
01:18:39,280 --> 01:18:41,330
of the region are equal to 1.
1296
01:18:41,330 --> 01:18:45,340
So for the baseline, this
gives us what we already
1297
01:18:45,340 --> 01:18:49,250
developed back in the
early stages.
1298
01:18:49,250 --> 01:18:51,130
I won't worry about this --
1299
01:18:51,130 --> 01:18:54,200
Q-hat, but it does give you
the right coefficient out
1300
01:18:54,200 --> 01:18:58,280
here, too, depending on whether
you normalize it per
1301
01:18:58,280 --> 01:18:59,960
two dimensions or what.
1302
01:18:59,960 --> 01:19:04,850
It just gives you Q to the
square root of 3SNR norm,
1303
01:19:04,850 --> 01:19:07,590
which I hope you remember
from chapter five.
1304
01:19:07,590 --> 01:19:15,920
So this is our baseline curve
for m-PAM or m-by-m QAM.
1305
01:19:15,920 --> 01:19:22,100
We just plotted versus SNR norm,
the probability of error
1306
01:19:22,100 --> 01:19:25,860
per two dimensions, which
is what I recommended.
1307
01:19:25,860 --> 01:19:28,410
And we got some curve.
1308
01:19:28,410 --> 01:19:29,880
So this is the baseline.
1309
01:19:29,880 --> 01:19:33,910
1310
01:19:33,910 --> 01:19:37,390
And now we've got everything
separated out in a nice way,
1311
01:19:37,390 --> 01:19:40,350
so that if somebody gives us a
lattice with a certain coding
1312
01:19:40,350 --> 01:19:45,520
gain and a region with a certain
shaping gain, those
1313
01:19:45,520 --> 01:19:47,970
both just add up as gains.
1314
01:19:47,970 --> 01:19:51,550
If that's all we had, forget the
coefficient, we would just
1315
01:19:51,550 --> 01:19:55,570
move to the right here, and
it would give us effective
1316
01:19:55,570 --> 01:19:59,800
reductions and the required SNR
norm by the coding gain of
1317
01:19:59,800 --> 01:20:03,240
the lattice and the shaping
gain of the region.
1318
01:20:03,240 --> 01:20:07,340
And we'd get the same curve
moved out over here.
1319
01:20:07,340 --> 01:20:11,690
Again, as in the binary case,
typically we're going to get a
1320
01:20:11,690 --> 01:20:15,580
larger number of near neighbors,
leading to a larger
1321
01:20:15,580 --> 01:20:17,070
coefficient out here.
1322
01:20:17,070 --> 01:20:19,320
We want to normalize this
per two dimensions.
1323
01:20:19,320 --> 01:20:23,080
Per two dimensions, it's only
4 for the baseline.
1324
01:20:23,080 --> 01:20:28,220
But in general, we're going to
get a KS of lambda, the number
1325
01:20:28,220 --> 01:20:30,810
of nearest neighbors per two
dimensions, which is going to
1326
01:20:30,810 --> 01:20:32,690
raise this a little.
1327
01:20:32,690 --> 01:20:35,140
And that will lower
the effective
1328
01:20:35,140 --> 01:20:37,650
coding gain down here.
1329
01:20:37,650 --> 01:20:42,350
But we've basically, by these
large constellation, high SNR
1330
01:20:42,350 --> 01:20:46,750
approximations, we've really
reduced the analysis of
1331
01:20:46,750 --> 01:20:49,890
lattice constellations to a
very simple task, just to
1332
01:20:49,890 --> 01:20:54,420
analyze what's the coding gain,
what's the shaping gain,
1333
01:20:54,420 --> 01:20:56,990
number of nearest neighbors,
plot the performance curve,
1334
01:20:56,990 --> 01:21:00,020
end of story.
1335
01:21:00,020 --> 01:21:01,430
So we'll start there next time.
1336
01:21:01,430 --> 01:21:12,532