WEBVTT
00:00:00.280 --> 00:00:00.560
PROFESSOR: OK.
00:00:00.560 --> 00:00:01.360
Welcome back.
00:00:01.360 --> 00:00:04.830
I hope you had a nice break.
00:00:04.830 --> 00:00:10.920
The midterms will be handed out
at the end of the class.
00:00:10.920 --> 00:00:15.730
Ashish has just gone to
get them in my office.
00:00:15.730 --> 00:00:17.580
With solutions.
00:00:17.580 --> 00:00:19.430
I hope you look at
the solutions.
00:00:19.430 --> 00:00:23.910
There's more in the solutions
than is strictly necessary.
00:00:23.910 --> 00:00:27.070
The midterm is partly intended
as a learning exercise, and I
00:00:27.070 --> 00:00:30.340
hope you learned something
from it.
00:00:30.340 --> 00:00:34.070
I didn't learn too much from
it about the performance of
00:00:34.070 --> 00:00:37.770
the class, because some of the
questions were easy enough
00:00:37.770 --> 00:00:39.640
that practically everybody
got them, and some of the
00:00:39.640 --> 00:00:42.210
questions were too hard
for anybody to get.
00:00:42.210 --> 00:00:45.290
And so the distribution --
00:00:45.290 --> 00:00:48.720
you know, what I would really
like to get is a nice
00:00:48.720 --> 00:00:52.910
bell-shaped curve going from 0
to 100 with a median at 50 and
00:00:52.910 --> 00:00:55.020
the standard deviation
about 25.
00:00:55.020 --> 00:00:57.740
And I got a much more compressed
curve than that.
00:00:57.740 --> 00:01:04.650
So frankly, I didn't learn
that much about relative
00:01:04.650 --> 00:01:06.030
performance.
00:01:06.030 --> 00:01:08.880
And I gather from Ashish that
you've all been doing the
00:01:08.880 --> 00:01:12.230
homeworks, and have been doing
a fairly steady job on that.
00:01:12.230 --> 00:01:16.760
So I think everybody is
close to being in
00:01:16.760 --> 00:01:17.670
the same boat here.
00:01:17.670 --> 00:01:21.140
If anybody has concerns about
their performance, of course
00:01:21.140 --> 00:01:23.380
they're welcome to talk to me.
00:01:23.380 --> 00:01:26.560
Drop date is a little
while off.
00:01:26.560 --> 00:01:29.995
If you're concerned about what
your grade might be, I'll give
00:01:29.995 --> 00:01:31.660
you the best idea I can.
00:01:31.660 --> 00:01:35.620
But really, it's going
to come down to the
00:01:35.620 --> 00:01:37.330
final for most of you.
00:01:37.330 --> 00:01:39.500
So it's not too much
to say here.
00:01:39.500 --> 00:01:43.950
I give the grade distribution
and the midterm solutions, and
00:01:43.950 --> 00:01:47.180
from that, you could figure out
where you are relatively.
00:01:47.180 --> 00:01:54.440
And my grading philosophy is that at
MIT, in graduate classes,
00:01:54.440 --> 00:01:57.230
roughly they're half
As and half Bs.
00:01:57.230 --> 00:01:59.950
I have that as a guideline
in mind.
00:01:59.950 --> 00:02:02.670
I try to look for where the
natural breaks are.
00:02:02.670 --> 00:02:06.440
I decorate the As and Bs with
pluses and minuses, but I
00:02:06.440 --> 00:02:09.259
don't think anybody pays
attention to those.
00:02:09.259 --> 00:02:12.680
And in other words, I generally
try to follow the
00:02:12.680 --> 00:02:17.770
MIT grading philosophy
as I understand it.
00:02:17.770 --> 00:02:23.330
So are there any questions about
the midterm or anything
00:02:23.330 --> 00:02:28.220
about the class as we go
into the second half?
00:02:28.220 --> 00:02:28.710
No?
00:02:28.710 --> 00:02:30.860
I see some people
aren't back yet.
00:02:30.860 --> 00:02:32.600
It's a somewhat reduced group.
00:02:32.600 --> 00:02:36.010
Maybe a couple people decided
to drop on the
00:02:36.010 --> 00:02:37.210
basis of the midterm.
00:02:37.210 --> 00:02:38.580
That sometimes happens.
00:02:38.580 --> 00:02:42.730
I don't think it necessarily
had to happen this year.
00:02:42.730 --> 00:02:46.420
Or maybe people are just having
an extended break.
00:02:46.420 --> 00:02:47.910
OK.
00:02:47.910 --> 00:02:53.370
We're now going into --
conceptually a break here -- the
00:02:53.370 --> 00:02:55.310
second half of the course.
00:02:55.310 --> 00:02:59.510
The first half of the course we
were talking about trying
00:02:59.510 --> 00:03:02.150
to get to capacity, specifically
on the additive
00:03:02.150 --> 00:03:04.200
white Gaussian noise channel.
00:03:04.200 --> 00:03:09.280
So far all the schemes that
we've talked about have been
00:03:09.280 --> 00:03:14.210
block coding schemes, whose
Euclidean images wind up
00:03:14.210 --> 00:03:20.800
to be constellations in a finite
dimensional space.
00:03:20.800 --> 00:03:25.580
This is the kind of picture that
Shannon had when he wrote
00:03:25.580 --> 00:03:32.290
his original paper on capacity
and information, to start off
00:03:32.290 --> 00:03:33.280
information theory.
00:03:33.280 --> 00:03:38.640
Basically he proved that you
could get to capacity on any
00:03:38.640 --> 00:03:45.390
channel with a randomly
chosen code.
00:03:45.390 --> 00:03:47.400
In the setting where we are,
that would be a randomly
00:03:47.400 --> 00:03:51.250
chosen binary code where you
just pick all the bits of all
00:03:51.250 --> 00:03:54.430
the code words at random
by flipping a coin.
00:03:54.430 --> 00:03:55.150
OK?
00:03:55.150 --> 00:03:58.790
Just construct a code book at
random, and as the code gets
00:03:58.790 --> 00:04:00.460
long enough, that's
going to be good
00:04:00.460 --> 00:04:01.860
enough to get to capacity.
00:04:01.860 --> 00:04:03.280
It's a block code.
00:04:03.280 --> 00:04:08.320
2 to the nR code words of
length n, where R is the
00:04:08.320 --> 00:04:11.350
rate of the code.
00:04:11.350 --> 00:04:14.440
Well, that was a brilliant
achievement, to show that you
00:04:14.440 --> 00:04:16.250
could get to capacity
by doing that.
00:04:16.250 --> 00:04:18.310
If you can do it by choosing
codes at random, then there
00:04:18.310 --> 00:04:20.660
must be some particular
codes that are good.
00:04:20.660 --> 00:04:23.960
Otherwise the average over all
codes wouldn't be good.
00:04:23.960 --> 00:04:28.790
So people started to go out
to find good codes.
00:04:28.790 --> 00:04:34.630
And there was a fairly quick
realization that Shannon
00:04:34.630 --> 00:04:38.550
hadn't addressed some key
parts of the problem.
00:04:38.550 --> 00:04:44.470
Particularly the complexity
of encoding and decoding.
00:04:44.470 --> 00:04:47.720
The complexity of encoding was
pretty quickly recognized to
00:04:47.720 --> 00:04:53.570
be not much of an issue, because
Peter Elias and others
00:04:53.570 --> 00:04:56.340
proved that, at least for the
kinds of channels we're
00:04:56.340 --> 00:04:59.290
talking about, which are
symmetric, linear codes are
00:04:59.290 --> 00:05:00.270
just as good.
00:05:00.270 --> 00:05:03.970
A random linear code will
get to capacity.
00:05:03.970 --> 00:05:07.820
In other words, say, a block
code with a randomly chosen
00:05:07.820 --> 00:05:08.790
generator matrix.
00:05:08.790 --> 00:05:12.740
You pick k generators of length
n, again by flipping a
00:05:12.740 --> 00:05:16.590
coin, that's good enough
to get to capacity.
00:05:16.590 --> 00:05:21.510
So we know we can encode linear
codes with polynomial
00:05:21.510 --> 00:05:22.520
complexity.
00:05:22.520 --> 00:05:24.730
It's just the matrix
multiplication to encode a
00:05:24.730 --> 00:05:26.930
linear block code.
00:05:26.930 --> 00:05:27.300
All right?
00:05:27.300 --> 00:05:29.920
So encoding isn't the problem.
00:05:29.920 --> 00:05:31.520
But decoding.
00:05:31.520 --> 00:05:35.250
What kind of decoding have
we talked about so far?
00:05:35.250 --> 00:05:38.310
I guess particularly with
Reed-Solomon codes, we've talked
00:05:38.310 --> 00:05:39.980
about algebraic methods.
00:05:39.980 --> 00:05:43.000
But our generic method is
to do maximum likelihood
00:05:43.000 --> 00:05:47.850
decoding, which basically
involves computing a distance
00:05:47.850 --> 00:05:51.350
or a metric to each
of the code words.
00:05:51.350 --> 00:05:52.950
Exponential number
of code words.
00:05:52.950 --> 00:05:54.100
2 to the nR code words.
00:05:54.100 --> 00:05:59.660
So we have exponential
complexity if we do maximum
00:05:59.660 --> 00:06:02.730
likelihood or minimum
distance decoding.
00:06:02.730 --> 00:06:05.330
And that, of course, was the
motivation for doing some of
00:06:05.330 --> 00:06:09.690
these algebraic decoding
methods, which were the
00:06:09.690 --> 00:06:14.820
primary focus of coding theory
for several decades.
00:06:14.820 --> 00:06:18.980
A great deal of very good effort
went into finding good
00:06:18.980 --> 00:06:23.490
classes of codes that were
algebraically decodable, like
00:06:23.490 --> 00:06:27.930
Reed-Solomon codes, BCH codes,
Reed-Muller codes.
00:06:27.930 --> 00:06:28.710
[UNINTELLIGIBLE]
00:06:28.710 --> 00:06:30.050
don't have such --
00:06:30.050 --> 00:06:32.430
well, they do, we now realize.
00:06:32.430 --> 00:06:35.800
But that was the focus.
00:06:35.800 --> 00:06:38.940
And with the algebraic coding
methods, there was polynomial
00:06:38.940 --> 00:06:40.040
complexity.
00:06:40.040 --> 00:06:43.530
But it began to be realized that
these were never going to
00:06:43.530 --> 00:06:46.830
get you to capacity.
00:06:46.830 --> 00:06:51.320
Because of their difficulty in
using reliability information,
00:06:51.320 --> 00:06:55.740
because most of them have a
bounded-distance character,
00:06:55.740 --> 00:06:57.100
and in fact, that's
still true.
00:06:57.100 --> 00:07:01.850
You can't get to capacity with
algebraic decoding methods.
00:07:01.850 --> 00:07:05.610
So in parallel with this work
on block codes, there was a
00:07:05.610 --> 00:07:12.370
smaller stream of work that has
broadened, I would say, as
00:07:12.370 --> 00:07:15.810
the ancestor of the
capacity-approaching codes of
00:07:15.810 --> 00:07:20.450
today, on codes with other
kinds of structure.
00:07:20.450 --> 00:07:24.520
And specifically dynamical
structure, which is what we're
00:07:24.520 --> 00:07:25.820
going to talk about
when we talk about
00:07:25.820 --> 00:07:27.520
convolutional codes.
00:07:27.520 --> 00:07:31.250
And this was broadened into the
field of codes on graphs.
00:07:31.250 --> 00:07:34.750
And codes on graphs are nowadays
the way we get to
00:07:34.750 --> 00:07:36.040
channel capacity.
00:07:36.040 --> 00:07:36.790
All right?
00:07:36.790 --> 00:07:39.030
So that's really where we're
going in the second half of
00:07:39.030 --> 00:07:40.520
the course.
00:07:40.520 --> 00:07:42.250
We're going to preserve
linear structure.
00:07:42.250 --> 00:07:45.450
Linear is always good for the
symmetric channels we're
00:07:45.450 --> 00:07:46.580
talking about.
00:07:46.580 --> 00:07:48.790
But in addition, we're going
to look for other kinds of
00:07:48.790 --> 00:07:54.570
structure which are, in
particular, suitable for
00:07:54.570 --> 00:07:59.920
lower-complexity decoding
methods, are suitable for
00:07:59.920 --> 00:08:03.480
maximum likelihood or
quasi-maximum likelihood
00:08:03.480 --> 00:08:04.610
decoding methods.
00:08:04.610 --> 00:08:05.290
All right?
00:08:05.290 --> 00:08:08.940
So that's our motivation.
00:08:08.940 --> 00:08:09.470
OK.
00:08:09.470 --> 00:08:11.356
Convolutional codes.
00:08:11.356 --> 00:08:14.480
Convolutional codes were
actually invented by Peter
00:08:14.480 --> 00:08:17.645
Elias in an effort to find --
00:08:20.640 --> 00:08:24.510
just as you can put a linear
structure on codes without
00:08:24.510 --> 00:08:28.420
harming their performance, he
was trying to find more and
00:08:28.420 --> 00:08:31.980
more structure that you could put
on codes while still being
00:08:31.980 --> 00:08:35.500
able to achieve capacity
with a random ensemble.
00:08:35.500 --> 00:08:38.340
And he invented something called
sliding parity-check
00:08:38.340 --> 00:08:42.289
codes, which, if you let the
block length go to infinity,
00:08:42.289 --> 00:08:46.470
become convolutional codes.
00:08:46.470 --> 00:08:54.630
And there was a trickle
of effort, basically
00:08:54.630 --> 00:08:58.060
motivated by the fact that
a number of people did a
00:08:58.060 --> 00:09:01.410
practical comparison of
performance versus complexity.
00:09:01.410 --> 00:09:05.640
Convolutional codes always
beat block codes.
00:09:05.640 --> 00:09:09.880
And so, for instance, I was at
a small company that was trying to
00:09:09.880 --> 00:09:11.920
apply coding theory.
00:09:11.920 --> 00:09:17.470
We quickly gravitated to
convolutional codes, because
00:09:17.470 --> 00:09:20.190
they always gave us a better
performance-complexity
00:09:20.190 --> 00:09:22.200
tradeoff than block codes.
00:09:22.200 --> 00:09:26.640
And I'm going to try to indicate
why that's so.
00:09:26.640 --> 00:09:27.120
All right.
00:09:27.120 --> 00:09:34.045
So I think the easiest way to
understand convolutional codes
00:09:34.045 --> 00:09:38.910
is by example, by putting
down an encoder.
00:09:38.910 --> 00:09:43.150
And the canonical example
everybody always uses is this
00:09:43.150 --> 00:09:45.060
one, so why should I
be any different?
00:09:53.660 --> 00:09:58.780
Here is a rate 1/2 constraint
length 2
00:09:58.780 --> 00:10:00.030
convolutional encoder.
00:10:04.850 --> 00:10:11.140
And it operates on a stream
of bits coming in.
00:10:11.140 --> 00:10:16.570
So what we have here is a time
index stream of bits, uk.
00:10:19.920 --> 00:10:21.430
And you can think of
this stream as
00:10:21.430 --> 00:10:23.690
being infinite in length.
00:10:23.690 --> 00:10:28.320
We'll talk about how to make
this a little bit more precise
00:10:28.320 --> 00:10:30.320
as we go along.
00:10:30.320 --> 00:10:35.630
And we have a shift register
structure here.
00:10:35.630 --> 00:10:39.160
The incoming bit uk enters a
shift register. D means a memory
00:10:39.160 --> 00:10:41.050
element or a delay element.
00:10:41.050 --> 00:10:43.030
It goes into a flip-flop.
00:10:43.030 --> 00:10:45.610
It's delayed for 1 time unit.
00:10:45.610 --> 00:10:50.560
There's some discrete time base
going on here, which is
00:10:50.560 --> 00:10:51.770
indexed by k.
00:10:51.770 --> 00:10:55.570
So k simply ranges over the set
of all integers.
00:10:55.570 --> 00:10:57.630
It comes from the index set.
00:10:57.630 --> 00:10:58.530
Here is u k.
00:10:58.530 --> 00:10:59.930
Here's u k minus 1.
00:10:59.930 --> 00:11:03.520
Here is u k minus 2.
00:11:03.520 --> 00:11:03.980
All right?
00:11:03.980 --> 00:11:07.470
And constraint length 2
means that we save --
00:11:07.470 --> 00:11:10.230
we have the present
information bit.
00:11:10.230 --> 00:11:12.510
This is called the input
or the information bit.
00:11:15.820 --> 00:11:20.300
And the two past information
bits are saved.
00:11:20.300 --> 00:11:24.600
And from that, we generate
two output bits.
00:11:24.600 --> 00:11:35.470
y1 at time k is given by u k
plus u k minus 2, where this
00:11:35.470 --> 00:11:36.600
is a mod 2 sum.
00:11:36.600 --> 00:11:38.960
Everything is in f2.
00:11:38.960 --> 00:11:45.510
y2k is the sum of u k plus u
k minus 1 plus u k minus 2.
00:11:49.210 --> 00:11:49.760
All right?
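As a concrete illustration -- a sketch, not part of the lecture, with names of my own choosing -- the encoder above takes only a few lines, with the shift-register pair (u_{k-1}, u_{k-2}) held explicitly:

```python
# A sketch, not from the lecture: the rate-1/2, constraint-length-2 encoder
# above, with outputs y1_k = u_k + u_{k-2} and
# y2_k = u_k + u_{k-1} + u_{k-2}, all arithmetic mod 2.

def encode(bits):
    """Encode a list of input bits; returns a list of (y1, y2) pairs."""
    s1, s2 = 0, 0            # shift register (u_{k-1}, u_{k-2}): 4 states
    out = []
    for u in bits:
        out.append(((u + s2) % 2, (u + s1 + s2) % 2))
        s1, s2 = u, s1       # shift: u_k becomes u_{k-1}
    return out
```

For an impulse input, encode([1, 0, 0, 0]) returns [(1, 1), (0, 1), (1, 1), (0, 0)]: an impulse response of Hamming weight 5.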
00:11:49.760 --> 00:11:52.610
So this is where we get
the redundancy.
00:11:52.610 --> 00:11:54.750
What are we generally
doing with coding?
00:11:54.750 --> 00:11:59.300
We're adding redundant bits in
order to get more distance
00:11:59.300 --> 00:12:03.320
between the sequences that we
might send, and thereby
00:12:03.320 --> 00:12:06.940
hopefully to get a coding gain
on the additive white Gaussian
00:12:06.940 --> 00:12:09.650
noise channel.
00:12:09.650 --> 00:12:13.350
Here the redundancy comes from
the fact that you get 2 bits
00:12:13.350 --> 00:12:16.630
out for every bit
that you put in.
00:12:16.630 --> 00:12:20.780
So there ought to be the
possibility of getting some
00:12:20.780 --> 00:12:24.310
distance between possible
code sequences.
00:12:24.310 --> 00:12:28.390
The code sequences here are
infinite or semi-infinite, but
00:12:28.390 --> 00:12:31.880
nonetheless, we can establish
some minimum distance, or as
00:12:31.880 --> 00:12:36.260
it's called in this field,
the free distance.
00:12:36.260 --> 00:12:37.510
OK.
00:12:39.870 --> 00:12:43.280
There are two kinds
of structure here.
00:12:43.280 --> 00:12:49.805
First of all, this is a linear
time invariant system.
00:12:58.350 --> 00:13:01.640
If we were talking about the
real or complex field, we
00:13:01.640 --> 00:13:04.330
might call this a filter.
00:13:04.330 --> 00:13:10.010
Or it's actually a single input,
two output filter.
00:13:10.010 --> 00:13:16.620
So each of the y sequences
is a filtered
00:13:16.620 --> 00:13:18.240
version of the u sequence.
00:13:18.240 --> 00:13:21.020
y2 is a filtered version
of the u sequence.
00:13:24.760 --> 00:13:28.820
And it's linear basically
because it's made up of linear
00:13:28.820 --> 00:13:32.470
elements, delay elements,
and mod 2 adders.
00:13:32.470 --> 00:13:40.220
Over the binary field, it's
time-invariant, because if I
00:13:40.220 --> 00:13:42.010
change the time index,
it doesn't
00:13:42.010 --> 00:13:43.110
really change the equation.
00:13:43.110 --> 00:13:45.750
I'm being very rough and
hand-wavy here.
00:13:45.750 --> 00:13:48.640
But so it's a linear
time-invariant system, and we
00:13:48.640 --> 00:13:57.770
can analyze it the way
we analyze a filter.
00:13:57.770 --> 00:14:04.410
It's redundant, so it has more
outputs than inputs, which we
00:14:04.410 --> 00:14:05.665
want in a coding system.
00:14:08.630 --> 00:14:11.240
But other than that, we're going
to find that the same
00:14:11.240 --> 00:14:14.700
kind of techniques that we use
for analyzing discrete time
00:14:14.700 --> 00:14:17.865
linear filters can be used
to analyze the system.
00:14:17.865 --> 00:14:20.710
It's just over a different
field, F2, rather
00:14:20.710 --> 00:14:24.190
than R or C. All right?
00:14:24.190 --> 00:14:28.380
So that's one kind
of structure.
00:14:28.380 --> 00:14:36.600
And secondly, it's a finite
state system.
00:14:36.600 --> 00:14:38.100
Let's forget about
all the algebra.
00:14:41.120 --> 00:14:45.960
There's a discrete time system
which has two memory elements
00:14:45.960 --> 00:14:49.560
in it, each capable
of storing 1 bit.
00:14:49.560 --> 00:14:54.460
So how many states are
there in the system?
00:14:54.460 --> 00:14:54.920
Four.
00:14:54.920 --> 00:14:58.760
There are four possible states
that reflect everything that
00:14:58.760 --> 00:14:59.770
went on in the past.
00:14:59.770 --> 00:15:03.860
That's all you need to know
about the future, is what the
00:15:03.860 --> 00:15:08.600
state is at time k, to see
what's going to happen from
00:15:08.600 --> 00:15:10.420
time k onwards.
00:15:10.420 --> 00:15:10.770
All right?
00:15:10.770 --> 00:15:14.860
So this is a simple four-state
system, in this case.
00:15:14.860 --> 00:15:17.910
In general, if we build similar
encoders out of shift
00:15:17.910 --> 00:15:21.320
registers, even if we put
feedback in them, it's going
00:15:21.320 --> 00:15:24.190
to be a finite state system.
00:15:24.190 --> 00:15:27.520
So we're going to restrict
ourselves to finite state
00:15:27.520 --> 00:15:31.200
realizations, finite
state encoders.
00:15:31.200 --> 00:15:35.330
And that's going to ultimately
be the basis for the decoding
00:15:35.330 --> 00:15:39.180
algorithm, which is going to be
Viterbi algorithm, which is
00:15:39.180 --> 00:15:44.770
a very efficient decoder for
any finite state system
00:15:44.770 --> 00:15:48.140
observed in memoryless noise,
which is what we have here.
00:15:48.140 --> 00:15:51.810
Hidden Markov model, as
it's sometimes called.
00:15:51.810 --> 00:15:52.310
All right.
00:15:52.310 --> 00:15:58.110
So we're going to interplay
these two types of structure.
00:15:58.110 --> 00:16:09.360
And I feel it's really the fact
that convolutional codes
00:16:09.360 --> 00:16:12.610
have these two kinds of
structure that makes them
00:16:12.610 --> 00:16:13.750
better than block codes.
00:16:13.750 --> 00:16:18.710
They have just the amount of
algebraic structure that we
00:16:18.710 --> 00:16:20.650
want, namely, the linear
structure.
00:16:20.650 --> 00:16:22.440
They don't have too
much algebraic
00:16:22.440 --> 00:16:24.750
structure beyond that.
00:16:24.750 --> 00:16:29.240
We don't have elaborate roots
of polynomials and so forth,
00:16:29.240 --> 00:16:31.290
as we do with Reed-Solomon
codes.
00:16:31.290 --> 00:16:34.310
We have a pretty modest
algebraic structure here.
00:16:34.310 --> 00:16:39.970
And it's sort of different in
character than the algebraic
00:16:39.970 --> 00:16:42.870
structure of Reed-Solomon codes.
00:16:42.870 --> 00:16:45.810
And it's married to this
finite state dynamical
00:16:45.810 --> 00:16:48.740
structure, which is what
allows us to have
00:16:48.740 --> 00:16:51.580
low-complexity decoding.
00:16:51.580 --> 00:16:54.690
So that's kind of the magic
of convolutional codes.
00:16:54.690 --> 00:16:57.400
Not that they're very magic.
00:16:57.400 --> 00:16:57.680
All right.
00:16:57.680 --> 00:16:59.235
This is a convolutional
encoder.
00:17:04.470 --> 00:17:07.530
What is a convolutional code?
00:17:07.530 --> 00:17:20.050
The convolutional code is
roughly the set of all
00:17:20.050 --> 00:17:21.465
possible output sequences.
00:17:27.160 --> 00:17:29.680
And this is where we're going
to measure the minimum
00:17:29.680 --> 00:17:32.010
distance of the code, minimum
free distance.
00:17:32.010 --> 00:17:34.970
What's the minimum Hamming
distance between any two
00:17:34.970 --> 00:17:36.547
possible output sequences?
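That minimum can be brute-forced for the small example encoder; here is a sketch (the function names and the search bound are mine, not the lecture's). By linearity, the minimum Hamming distance between any two output sequences equals the minimum weight of a nonzero output sequence:

```python
# A sketch, not from the lecture: brute-force the free distance of the
# example encoder (y1_k = u_k + u_{k-2}, y2_k = u_k + u_{k-1} + u_{k-2}).
# We encode every short nonzero input, padded with two zeros to drive the
# register back to the all-zero state, and take the minimum output weight.
from itertools import product

def encode_flat(bits):
    """Encode a bit list; returns the output bits y1, y2 interleaved."""
    s1, s2, out = 0, 0, []
    for u in bits:
        out += [(u + s2) % 2, (u + s1 + s2) % 2]
        s1, s2 = u, s1
    return out

def free_distance(max_len=8):
    """Minimum weight over all nonzero terminated inputs up to max_len."""
    best = None
    for bits in product([0, 1], repeat=max_len):
        if any(bits):
            w = sum(encode_flat(list(bits) + [0, 0]))  # tail forces state 0
            best = w if best is None else min(best, w)
    return best
```

For this four-state code the search returns 5, achieved already by the single-impulse input.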
00:17:39.290 --> 00:17:46.810
And in the notes, I go into
considerable detail to explain
00:17:46.810 --> 00:17:51.240
how we choose the particular
definition of "all."
00:17:51.240 --> 00:17:52.060
What do we mean?
00:17:52.060 --> 00:17:55.300
Do we mean the set of all
possible output sequences if
00:17:55.300 --> 00:17:57.750
we put in a finite
input sequence?
00:17:57.750 --> 00:18:00.870
An input sequence that starts
at time 0 and is a
00:18:00.870 --> 00:18:04.410
polynomial, let's say, that
ends at some finite time?
00:18:04.410 --> 00:18:08.800
Are we going to allow
semi-infinite, bi-infinite
00:18:08.800 --> 00:18:10.900
input sequences?
00:18:10.900 --> 00:18:14.650
So you see, there's some choices
to be made here.
00:18:14.650 --> 00:18:19.240
And actually, it's of some
importance that we make the
00:18:19.240 --> 00:18:25.234
right choice to get proper
insight into these codes.
00:18:25.234 --> 00:18:26.484
AUDIENCE: [INAUDIBLE]
00:18:30.440 --> 00:18:33.610
You have to choose the
initial state, then?
00:18:33.610 --> 00:18:34.330
PROFESSOR: Yeah, OK.
00:18:34.330 --> 00:18:38.530
You know from linear systems
that the output isn't even
00:18:38.530 --> 00:18:42.350
well-defined unless you say
what the initial state is.
00:18:42.350 --> 00:18:44.800
And in general, in linear
systems, we say, well, we'll
00:18:44.800 --> 00:18:47.880
assume it starts out in
the all-zero state.
00:18:47.880 --> 00:18:51.620
And that's basically the choice
I'm going to make,
00:18:51.620 --> 00:18:54.731
except I'm going to do
it in a couple steps.
00:18:54.731 --> 00:18:57.370
But that's the right question.
00:18:57.370 --> 00:18:59.300
How do we initialize
the system?
00:18:59.300 --> 00:19:03.260
That's a part of the
specification of this encoder.
00:19:03.260 --> 00:19:07.180
And of course we initialize
it in the all-zero state.
00:19:07.180 --> 00:19:11.640
But for that reason, in a
general system, we can't allow
00:19:11.640 --> 00:19:15.240
bi-infinite inputs.
00:19:15.240 --> 00:19:18.960
Because if we don't know what
the starting time is, we don't
00:19:18.960 --> 00:19:23.730
know how to define the
starting state.
00:19:23.730 --> 00:19:27.770
Again, I'm speaking
very roughly.
00:19:27.770 --> 00:19:29.560
So let me speak a little
bit more precisely.
00:19:32.390 --> 00:19:38.720
We're going to define the
set of all Laurent sequences,
00:19:42.240 --> 00:19:53.630
which means semi-infinite
sequences
00:19:53.630 --> 00:19:59.820
over the binary field.
00:19:59.820 --> 00:20:12.670
And this is called F2 of D. And
this means a sequence --
00:20:12.670 --> 00:20:14.160
U, say --
00:20:14.160 --> 00:20:29.930
which has only a finite
number of non-zero
00:20:29.930 --> 00:20:34.990
terms before time 0.
00:20:40.130 --> 00:20:43.360
In other words, it can start at
any negative time, but it
00:20:43.360 --> 00:20:47.260
has to have a definite
starting time.
00:20:47.260 --> 00:20:52.790
I'm not talking about the set of
all power series or the set
00:20:52.790 --> 00:20:56.650
of all sequences that start
at time 0 or later.
00:20:56.650 --> 00:20:58.840
The sequence can start
at any time.
00:20:58.840 --> 00:21:01.020
But it does have to have a
definite starting time, and
00:21:01.020 --> 00:21:03.250
before that, it has
to be all 0.
00:21:05.960 --> 00:21:14.130
So that for instance,
the sequence that --
00:21:14.130 --> 00:21:16.390
I haven't introduced
D transforms yet.
00:21:16.390 --> 00:21:18.520
Maybe I should do that
immediately.
00:21:18.520 --> 00:21:23.780
But a sequence which, at minus
5, minus 4, minus 3, minus 2,
00:21:23.780 --> 00:21:25.130
minus 1, 0 --
00:21:25.130 --> 00:21:26.870
these are time indices --
00:21:26.870 --> 00:21:34.660
looks like 1 0 1 0 0 1 0, and
goes on indefinitely in the
00:21:34.660 --> 00:21:38.140
future, but has all
zeros back here --
00:21:38.140 --> 00:21:41.500
that is a legitimate
Laurent sequence.
00:21:41.500 --> 00:21:46.600
Whereas a sequence that's dot
dot dot 1 1 1 1 1 1 1 1 1 and
00:21:46.600 --> 00:21:50.560
so forth, all ones forever, is
not, because it doesn't have a
00:21:50.560 --> 00:21:51.811
definite starting time.
00:21:59.960 --> 00:22:00.830
And I sort of --
00:22:00.830 --> 00:22:02.120
all right in this notation --
00:22:02.120 --> 00:22:05.790
I need to introduce
D transforms.
00:22:05.790 --> 00:22:15.370
The D transform of a sequence U
which has components Uk for
00:22:15.370 --> 00:22:23.480
k in Z, is simply the
sum of Uk D to the k.
00:22:23.480 --> 00:22:29.310
Let me write that as U of
D. For all k in Z.
00:22:33.590 --> 00:22:35.640
This is simply a generating
function.
00:22:38.290 --> 00:22:42.500
It looks very much like the Z
transform that you see in
00:22:42.500 --> 00:22:46.110
discrete time linear
filter theory.
00:22:46.110 --> 00:22:52.070
We would write for something
up here, well --
00:22:52.070 --> 00:22:57.180
So a sequence that is 1
at time 0, 0 at time
00:22:57.180 --> 00:22:59.280
1, 1 at time 2 --
00:22:59.280 --> 00:23:03.210
if U is equal to that, and 0 at
all other times, we would
00:23:03.210 --> 00:23:08.550
just write U of D equals
1 plus D squared.
00:23:08.550 --> 00:23:12.430
So it's a generating function.
00:23:12.430 --> 00:23:15.560
The D just gives us a location
of where the ones are.
00:23:15.560 --> 00:23:19.256
It's easier to write this
than this, frankly.
00:23:19.256 --> 00:23:22.750
It's actually hard to write
the sequence which is 1 at
00:23:22.750 --> 00:23:27.750
time k and 0 at all other times
in this notation, but
00:23:27.750 --> 00:23:30.215
it's simply D to the
k in this notation.
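Since the D transform is just a bookkeeping device for where the ones are, a binary sequence with a definite starting time can be stored as the set of exponents with nonzero coefficients. A small sketch (the helper names are illustrative, not from the lecture):

```python
# A sketch, not from the lecture: represent a D transform over F2 as the
# set of exponents k where u_k = 1.

def d_transform(seq, start=0):
    """Map a bit list (first entry at time `start`) to its exponent set."""
    return {start + k for k, u in enumerate(seq) if u}

def delay(u_of_d):
    """The delay d: the smallest exponent with a nonzero coefficient."""
    return min(u_of_d)
```

So d_transform([1, 0, 1]) is {0, 2}, i.e. 1 + D squared, and d_transform([1, 0, 1, 0, 0, 1], start=-5) is {-5, -3, 0}, the example sequence above, with delay -5.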
00:23:32.880 --> 00:23:36.340
D here is algebraically simply
an indeterminate, or a
00:23:36.340 --> 00:23:37.860
placeholder.
00:23:37.860 --> 00:23:40.800
And this is the subtle
difference from Z transforms.
00:23:40.800 --> 00:23:45.070
With Z transforms, we would write
U of z to the minus 1 equals
00:23:45.070 --> 00:23:48.960
1 plus z to the minus 2 for something
like this.
00:23:48.960 --> 00:23:52.860
There's this Z to the minus 1
convention in Z transforms.
00:23:52.860 --> 00:23:54.490
That's not the important
difference.
00:23:54.490 --> 00:23:57.200
We could clearly use D to the
minus 1 rather than D. The
00:23:57.200 --> 00:24:00.382
important difference here is
that Z is usually imagined to
00:24:00.382 --> 00:24:03.610
have values in the
complex plane.
00:24:03.610 --> 00:24:06.340
We look for its poles and
zeros in the complex
00:24:06.340 --> 00:24:08.380
plane and so forth.
00:24:08.380 --> 00:24:11.960
And so the Z transform really
is a transform.
00:24:11.960 --> 00:24:14.510
It takes you into the
frequency domain.
00:24:14.510 --> 00:24:20.540
If you make z equals e to the
j omega t or something, then
00:24:20.540 --> 00:24:22.660
you're in the frequency
domain.
00:24:22.660 --> 00:24:25.950
Here, this is still a time
domain expression.
00:24:25.950 --> 00:24:29.250
It's just a convenient, compact
generating function notation
00:24:29.250 --> 00:24:32.300
for a particular sequence.
00:24:32.300 --> 00:24:36.770
So that's when I say the set
of all semi-infinite
00:24:36.770 --> 00:24:39.290
sequences, now I can write --
00:24:39.290 --> 00:24:43.410
a Laurent sequence always
looks like this.
00:24:43.410 --> 00:24:49.680
It starts off at some
particular time.
00:24:49.680 --> 00:24:57.165
Let's say the first term is Ud D
to the d plus Ud plus 1 D to
00:24:57.165 --> 00:24:59.840
the d plus 1 plus so forth.
00:24:59.840 --> 00:25:01.520
It always looks like that.
00:25:01.520 --> 00:25:04.980
So it has a definite starting
time d, which
00:25:04.980 --> 00:25:08.950
is called the delay.
00:25:08.950 --> 00:25:12.995
The delay of u of D is equal
to d, in this case.
00:25:15.590 --> 00:25:20.070
And d can be any integer.
00:25:20.070 --> 00:25:24.160
It can start at a negative time
or at a positive time.
00:25:24.160 --> 00:25:28.120
We don't require it to start
at time 0 or later.
00:25:28.120 --> 00:25:31.210
But nonetheless, the sequence
has to look like this.
00:25:35.880 --> 00:25:42.190
And we indicate the set of
all such sequences here.
00:25:42.190 --> 00:25:45.270
One of the immediate
observations that you can make
00:25:45.270 --> 00:25:49.490
is that this is a time
invariant set.
00:25:49.490 --> 00:25:52.440
Which the set of all sequences
that start at
00:25:52.440 --> 00:25:55.700
0 or later is not.
00:25:55.700 --> 00:25:58.030
Set of all sequences that start
at 0 or later has a
00:25:58.030 --> 00:26:04.470
definite time in it, and a set
is time-invariant if D times
00:26:04.470 --> 00:26:06.300
the set is equal to the set.
00:26:06.300 --> 00:26:12.410
So what is D times F2 of D?
00:26:12.410 --> 00:26:15.432
I claim it's just F2 of
D all over again.
00:26:15.432 --> 00:26:20.610
And in fact, D to the k times F2 of
D for any k is equal to F2 of D.
00:26:20.610 --> 00:26:24.706
So the set of all Laurent
sequences is time-invariant.
00:26:24.706 --> 00:26:28.400
Whereas the set of all power
series in D, that's sequences
00:26:28.400 --> 00:26:31.770
that have delay 0 or greater,
is not time invariant.
00:26:31.770 --> 00:26:34.390
It does not satisfy this.
00:26:38.570 --> 00:26:41.520
Chew on that.
00:26:41.520 --> 00:26:44.900
An important consequence of this
is that the set of all
00:26:44.900 --> 00:26:47.035
Laurent sequences
forms a field.
00:26:54.150 --> 00:26:57.510
Let's say what we need
to prove for a field.
00:26:57.510 --> 00:27:01.290
Can we add Laurent sequences?
00:27:01.290 --> 00:27:04.175
Yes, we simply add them in the
usual way, component-wise.
00:27:07.130 --> 00:27:10.540
Same way as you do in
polynomial addition.
00:27:10.540 --> 00:27:14.900
You just gather together
components at time k.
00:27:14.900 --> 00:27:16.775
Can you multiply Laurent
sequences?
00:27:20.780 --> 00:27:23.970
You can certainly multiply
polynomials.
00:27:23.970 --> 00:27:29.330
1 plus D times 1 plus D is
1 plus D squared over F2.
00:27:29.330 --> 00:27:31.055
Can you multiply Laurent
sequences?
00:27:33.830 --> 00:27:35.220
Well, yes.
00:27:35.220 --> 00:27:43.750
We can define multiplication by
convolution as we do with
00:27:43.750 --> 00:27:45.000
polynomials.
00:27:51.290 --> 00:28:01.220
In other words, U of D times V
of D is the sequence W of D,
00:28:01.220 --> 00:28:10.750
where Wk is the sum over k
prime in Z of Uk prime Vk
00:28:10.750 --> 00:28:12.000
minus k prime.
00:28:14.560 --> 00:28:17.690
Now what's the potential problem
that can arise when
00:28:17.690 --> 00:28:20.650
you define multiplication
by convolution?
00:28:20.650 --> 00:28:25.210
If any of these terms are
infinite, we have no notion of
00:28:25.210 --> 00:28:27.670
convergence over F2.
00:28:27.670 --> 00:28:31.065
So we really have no way of
defining an infinite sum.
00:28:31.065 --> 00:28:34.340
An infinite sum of elements
in F2 is, in general, not
00:28:34.340 --> 00:28:35.590
well-defined.
00:28:37.130 --> 00:28:41.870
But from the fact that each of
these starts at one time --
00:28:41.870 --> 00:28:43.360
this is basically multiplying
00:28:43.360 --> 00:28:45.390
something by the time reversal.
00:28:45.390 --> 00:28:48.890
We only have a finite number of
elements in any such sum.
00:28:48.890 --> 00:28:54.240
So that's the real reason
for using the set
00:28:54.240 --> 00:28:55.620
of all Laurent sequences.
00:28:55.620 --> 00:28:58.120
If we had the set of all
bi-infinite sequences, then
00:28:58.120 --> 00:29:00.650
multiplication would not
be well-defined.
00:29:00.650 --> 00:29:03.580
But by insisting that they
have a starting time, we
00:29:03.580 --> 00:29:06.390
ensure that convolution is
always well-defined.
00:29:06.390 --> 00:29:09.300
Therefore, multiplication
of Laurent sequences is
00:29:09.300 --> 00:29:10.550
well-defined.
00:29:12.830 --> 00:29:13.550
OK.
00:29:13.550 --> 00:29:16.386
So we have addition, we
have multiplication.
00:29:19.390 --> 00:29:23.660
Maybe I'll go over here.
00:29:23.660 --> 00:29:25.110
What else do we need
for a field?
00:29:27.650 --> 00:29:31.310
So far we just have a ring.
00:29:31.310 --> 00:29:32.400
Inverses, yeah.
00:29:32.400 --> 00:29:34.600
Let's directly check inverses.
00:29:37.780 --> 00:29:43.391
So suppose I have some Laurent
sequence U of D equals u sub d, D
00:29:43.391 --> 00:29:46.635
to the d, plus so forth.
00:29:46.635 --> 00:29:48.085
Does that always have
an inverse?
00:30:00.370 --> 00:30:04.810
In other words, we want to find
something 1 over U of D
00:30:04.810 --> 00:30:07.880
equals what?
00:30:07.880 --> 00:30:14.720
Well, let's just divide U of D
into 1 by long division, and
00:30:14.720 --> 00:30:16.990
we can get an inverse.
00:30:16.990 --> 00:30:18.520
Let me give you an example.
00:30:18.520 --> 00:30:22.250
Suppose U of D is equal
to 1 plus D. What's
00:30:22.250 --> 00:30:24.810
1 over 1 plus D?
00:30:24.810 --> 00:30:30.100
We take 1 plus D, we divide
it into 1: we get 1.
00:30:30.100 --> 00:30:34.610
Subtract 1 plus D, leaving D; then
D, subtract D plus D squared.
00:30:37.330 --> 00:30:38.580
And so forth.
00:30:42.100 --> 00:30:46.170
So we get a semi-infinite
sequence.
00:30:46.170 --> 00:30:47.910
Not a polynomial.
00:30:47.910 --> 00:30:52.150
But nonetheless, it's a
legitimate Laurent sequence.
00:30:52.150 --> 00:30:55.440
It's actually a very simple
periodic sequence of period 1.
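The long division just performed on the board can be sketched in code. This is my own sketch, not the lecture's; it assumes the divisor u(D) has constant term u_0 = 1, and returns the first n coefficients of the quotient 1/u(D).

```python
def inverse_f2(u, n_terms):
    # First n_terms coefficients of 1/u(D) over F2 by long division.
    # Assumes u_0 = 1; u is a {time: bit} dict.
    q = []
    r = {0: 1}            # remainder; we start by dividing into 1
    for k in range(n_terms):
        qk = r.get(k, 0)  # quotient coefficient that kills the D^k term
        q.append(qk)
        if qk:
            for j, uj in u.items():
                r[k + j] = r.get(k + j, 0) ^ uj  # subtract q_k D^k u(D)
    return q

# 1/(1 + D) = 1 + D + D^2 + ... : all ones, period 1, as on the board.
assert inverse_f2({0: 1, 1: 1}, 8) == [1] * 8
```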
00:30:58.340 --> 00:31:01.170
Are you with me on this?
00:31:01.170 --> 00:31:05.870
Could you find the inverse of
any sequence that starts at
00:31:05.870 --> 00:31:09.170
time 0, let's say?
00:31:09.170 --> 00:31:11.600
If you can do that, you can
certainly find the inverse of
00:31:11.600 --> 00:31:14.790
any sequence that starts at time
d. You just multiply by D
00:31:14.790 --> 00:31:16.040
to the minus d.
00:31:19.680 --> 00:31:42.010
So every non-zero Laurent
sequence has an inverse.
00:31:44.710 --> 00:31:48.240
1 over U of D, let's say.
00:31:48.240 --> 00:31:50.040
Which is also a Laurent
sequence.
00:31:53.080 --> 00:31:55.075
Of course, as always, we
can't divide by 0.
00:31:59.950 --> 00:32:02.320
But that's the definition
of a field.
00:32:02.320 --> 00:32:02.960
That doesn't matter.
00:32:02.960 --> 00:32:05.400
We have inverses.
00:32:05.400 --> 00:32:09.270
We checked the associative,
00:32:09.270 --> 00:32:11.510
distributive, commutative laws.
00:32:11.510 --> 00:32:12.220
They all work.
00:32:12.220 --> 00:32:16.270
So this is actually a field.
00:32:16.270 --> 00:32:16.700
Yes?
00:32:16.700 --> 00:32:17.686
AUDIENCE: [INAUDIBLE]
00:32:17.686 --> 00:32:18.936
multiplication [INAUDIBLE].
00:32:22.616 --> 00:32:24.500
PROFESSOR: Yeah.
00:32:24.500 --> 00:32:28.200
The Laurent sequences certainly
include polynomials,
00:32:28.200 --> 00:32:33.150
and this is consistent with
polynomial multiplication.
00:32:33.150 --> 00:32:36.720
So it's just extending
polynomial multiplication
00:32:36.720 --> 00:32:41.000
really as far as we can to these
semi-infinite sequences
00:32:41.000 --> 00:32:44.620
that are semi-infinite in the
future, but not in the past.
00:32:44.620 --> 00:32:48.640
They're infinite in the future,
not in the past.
00:32:48.640 --> 00:32:50.244
AUDIENCE: I think that's
[INAUDIBLE] infinite sequence
00:32:50.244 --> 00:32:53.175
of [INAUDIBLE].
00:32:53.175 --> 00:32:54.425
[UNINTELLIGIBLE].
00:32:59.140 --> 00:33:00.390
I mean suppose you --
00:33:04.600 --> 00:33:06.860
To find the inverse we just have
to keep solving the set
00:33:06.860 --> 00:33:09.860
of linear equations, right?
00:33:09.860 --> 00:33:12.916
From the first equation, we get
the first one, from the
00:33:12.916 --> 00:33:14.166
second one, by substitution
we get the next one.
00:33:16.650 --> 00:33:17.790
PROFESSOR: Yeah.
00:33:17.790 --> 00:33:20.470
It's long division, which is
the same as the Euclidean
00:33:20.470 --> 00:33:22.980
division algorithm.
00:33:22.980 --> 00:33:29.170
We just want to find a
coefficient here that reduces
00:33:29.170 --> 00:33:35.610
the remainder to start at time
1 or later, to have delay 1.
00:33:35.610 --> 00:33:38.610
We want to find the next
coefficient to make the
00:33:38.610 --> 00:33:39.890
remainder have delay 2.
00:33:39.890 --> 00:33:41.540
And we can always continue
this, ad infinitum.
00:33:44.970 --> 00:33:46.220
AUDIENCE: [INAUDIBLE]
00:33:52.020 --> 00:33:53.840
PROFESSOR: These are
semi-infinite sequences.
00:33:53.840 --> 00:33:56.470
Do you mean sequences that
start at time 0 or later,
00:33:56.470 --> 00:33:59.110
which are called formal
power series?
00:33:59.110 --> 00:33:59.990
Again, there are some --
00:33:59.990 --> 00:34:01.240
AUDIENCE: [INAUDIBLE]
00:34:03.660 --> 00:34:04.130
PROFESSOR: Oh!
00:34:04.130 --> 00:34:05.030
Sorry, yes.
00:34:05.030 --> 00:34:07.940
Of course, we need
to check that.
00:34:07.940 --> 00:34:10.870
But you can see, it
doesn't matter.
00:34:10.870 --> 00:34:16.280
We could divide by 1 plus D plus
D squared plus so forth,
00:34:16.280 --> 00:34:18.389
and the algorithm is the same.
00:34:18.389 --> 00:34:22.682
In that case, of course, if we
did that, we'd get 1; subtract 1 plus D
00:34:22.682 --> 00:34:25.250
plus D squared, plus so forth, leaving
00:34:25.250 --> 00:34:27.880
D plus D squared,
plus so forth.
00:34:27.880 --> 00:34:28.780
And what do you know?
00:34:28.780 --> 00:34:32.710
D times that is that,
and we get 0, and it
00:34:32.710 --> 00:34:35.170
terminates after 1.
00:34:35.170 --> 00:34:36.120
So yeah.
00:34:36.120 --> 00:34:36.610
Thank you.
00:34:36.610 --> 00:34:38.530
Long division also works.
00:34:38.530 --> 00:34:46.159
If we divide by any Laurent
sequence, it's the same.
00:34:46.159 --> 00:34:49.060
The same algorithm.
00:34:49.060 --> 00:34:50.310
Excellent.
00:34:52.610 --> 00:34:53.300
OK.
00:34:53.300 --> 00:35:03.270
So every non-zero Laurent sequence
has an inverse.
00:35:03.270 --> 00:35:06.010
In general, what do the
inverses of polynomial
00:35:06.010 --> 00:35:07.260
sequences look like?
00:35:15.100 --> 00:35:19.750
Inverse of a polynomial
sequence.
00:35:25.750 --> 00:35:30.060
Can anyone guess what its
special property is?
00:35:30.060 --> 00:35:33.430
What is a polynomial sequence,
first of all?
00:35:33.430 --> 00:35:36.290
I say a sequence is finite if it
only has a finite number of
00:35:36.290 --> 00:35:37.890
non-zero terms.
00:35:37.890 --> 00:35:41.000
I'm going to say it's polynomial
if it's finite and
00:35:41.000 --> 00:35:42.410
also causal.
00:35:42.410 --> 00:35:45.736
Causal means it starts
at time 0 or later.
00:35:45.736 --> 00:35:47.990
Its delay is non-negative.
00:35:50.840 --> 00:35:51.210
All right.
00:35:51.210 --> 00:35:55.120
So that's the polynomials that
we're already familiar with.
00:35:55.120 --> 00:35:58.460
What does the inverse of a
polynomial sequence look like?
00:35:58.460 --> 00:35:59.710
AUDIENCE: [INAUDIBLE]
00:36:02.230 --> 00:36:05.230
PROFESSOR: It's always going
to be infinite, right?
00:36:05.230 --> 00:36:07.510
Unless it's 1 or something.
00:36:07.510 --> 00:36:12.970
So unless it's a unit
in the polynomials, it's
00:36:12.970 --> 00:36:17.740
going to be an infinite
sequence.
00:36:17.740 --> 00:36:18.990
But it's going to
have a property.
00:36:22.730 --> 00:36:25.740
Nobody can guess what the
property's going to be?
00:36:25.740 --> 00:36:28.070
I already mentioned it once.
00:36:28.070 --> 00:36:28.430
Periodic!
00:36:28.430 --> 00:36:29.680
Right.
00:36:35.560 --> 00:36:37.920
And this is an if and
only if statement.
00:36:46.220 --> 00:36:49.520
I guess I have to elaborate
a little bit, but roughly,
00:36:49.520 --> 00:36:52.090
that's correct.
00:36:52.090 --> 00:36:54.455
This is going to be one of
your homework problems.
00:36:57.590 --> 00:37:00.780
Maybe I'm jumping ahead a
little, but let me --
00:37:00.780 --> 00:37:04.270
there are a couple of
ways of seeing this.
00:37:04.270 --> 00:37:06.640
One is to use some of
the algebra we did
00:37:06.640 --> 00:37:07.750
back in chapter seven.
00:37:07.750 --> 00:37:10.020
But since we didn't
really complete
00:37:10.020 --> 00:37:11.380
that, I won't do that.
00:37:11.380 --> 00:37:16.390
A second one is to see within
this long division operation,
00:37:16.390 --> 00:37:20.480
again, if I'm dividing by a
polynomial, there's only a
00:37:20.480 --> 00:37:25.540
finite set of possible
remainders, up to
00:37:25.540 --> 00:37:31.030
shifts by powers of D. So
if I see the same remainder
00:37:31.030 --> 00:37:35.900
again, if I see 1, D, and then
D squared, of course the
00:37:35.900 --> 00:37:38.280
series is going to have to
continue in the same way
00:37:38.280 --> 00:37:38.560
necessarily.
00:37:38.560 --> 00:37:41.010
Since there are only a finite
number of remainders, it's got
00:37:41.010 --> 00:37:43.200
to repeat at some point.
00:37:43.200 --> 00:37:45.250
Well, a way that I can do
it which is more in
00:37:45.250 --> 00:37:48.110
the spirit of this.
00:37:48.110 --> 00:38:01.290
Suppose I consider the impulse
response of a system with
00:38:01.290 --> 00:38:09.500
feedback, which is set up so the
impulse response is equal
00:38:09.500 --> 00:38:13.080
to, in this case, say, 1
over 1 plus D plus D squared.
00:38:16.850 --> 00:38:19.130
You see that the impulse
response of this is going to
00:38:19.130 --> 00:38:21.520
be 1 over 1 plus D
plus D squared.
00:38:21.520 --> 00:38:29.460
If I put in a 1 at time 0, then
next time, I'm going to get a
00:38:29.460 --> 00:38:33.170
1 in here, going to get a
1 feeding back there.
00:38:33.170 --> 00:38:35.630
Well anyway, I believe
that's right.
00:38:35.630 --> 00:38:37.820
This gives an impulse response
of 1 over --
00:38:37.820 --> 00:38:41.930
if it doesn't, then readjust
it so it does.
00:38:41.930 --> 00:38:47.250
Now, after time 0, the
input is always 0.
00:38:47.250 --> 00:38:53.470
So I just have an autonomous
system which is producing the
00:38:53.470 --> 00:38:58.180
terms at time 1 and later of 1
over 1 plus D plus D squared.
00:38:58.180 --> 00:39:02.270
And again, it's a finite
state system.
00:39:02.270 --> 00:39:08.220
Because over F2, there are
only 4 states in the system.
00:39:08.220 --> 00:39:11.100
So what could its state
transition diagram
00:39:11.100 --> 00:39:12.110
possibly look like?
00:39:12.110 --> 00:39:14.760
First of all, the 0
state always goes
00:39:14.760 --> 00:39:17.590
around to the 0 state.
00:39:17.590 --> 00:39:22.200
So the other states are
interconnected in some way.
00:39:22.200 --> 00:39:25.245
And at best, they're always
going to cycle.
00:39:28.400 --> 00:39:35.380
So this is a quick proof
that 1 over 1 plus D
00:39:35.380 --> 00:39:36.340
plus D squared --
00:39:36.340 --> 00:39:39.970
in fact, any polynomial
inverse --
00:39:39.970 --> 00:39:41.690
is going to give you a
periodic response.
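The finite-remainder argument can be sketched directly. This is my own sketch, assuming d_0 = 1: during the long division of 1 by d(D), the remainder shifted back to the current time can take only finitely many values over F2, so it must repeat, and from the repeat onward the quotient cycles.

```python
def inverse_period_f2(d_poly, max_steps=64):
    # Long-divide 1 by d(D) over F2 (d_0 = 1), normalizing the remainder
    # at each step; a repeated remainder proves the quotient is periodic
    # from that point on. Returns (quotient so far, period length).
    q, r, seen = [], {0: 1}, {}
    for k in range(max_steps):
        # normalize: shift the remainder so it starts at time 0
        state = tuple(sorted(j - k for j, b in r.items() if b))
        if state in seen:
            return q, k - seen[state]
        seen[state] = k
        qk = r.get(k, 0)
        q.append(qk)
        if qk:
            for j, bj in d_poly.items():
                r[k + j] = r.get(k + j, 0) ^ bj
    return q, None

# 1/(1 + D + D^2) = 1 + D + D^3 + D^4 + ... : pattern 1, 1, 0 repeating.
assert inverse_period_f2({0: 1, 1: 1, 2: 1}) == ([1, 1, 0], 3)
```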
00:39:45.730 --> 00:39:48.190
What I should really say is --
00:39:48.190 --> 00:39:53.310
let me define a rational
Laurent sequence.
00:39:53.310 --> 00:39:56.320
I'll leave out the Laurent.
00:39:56.320 --> 00:40:01.470
A rational sequence is something
of the form n of D
00:40:01.470 --> 00:40:09.960
over d of D where these are both
polynomial, or they're
00:40:09.960 --> 00:40:11.440
actually finite.
00:40:11.440 --> 00:40:15.386
Let's say they're
both polynomial.
00:40:22.050 --> 00:40:24.950
You can reduce it to lowest
terms, if you like.
00:40:24.950 --> 00:40:30.780
Actually, I should make this
finite up here, so that it can
00:40:30.780 --> 00:40:34.380
start at a time before time 0.
00:40:34.380 --> 00:40:42.830
And I have to have d of
D not equal to 0.
00:40:42.830 --> 00:40:47.330
And in fact, I generally require
d0, the first term
00:40:47.330 --> 00:40:48.300
down here, to be 1.
00:40:48.300 --> 00:40:54.010
So this is a polynomial with
time 0 term equal to 1.
00:40:54.010 --> 00:40:57.230
Like 1 plus D plus D squared.
00:40:57.230 --> 00:41:02.605
So a rational sequence is one
that looks like that.
00:41:05.940 --> 00:41:12.830
We can generate a rational
sequence by feeding n of D
00:41:12.830 --> 00:41:17.520
into a system that has response
1 over d of D. And
00:41:17.520 --> 00:41:19.888
the output --
00:41:19.888 --> 00:41:22.150
I forget where you take
the output off here.
00:41:22.150 --> 00:41:24.290
Probably here.
00:41:24.290 --> 00:41:25.820
This will be n of D
00:41:25.820 --> 00:41:30.980
over d of D. OK.
00:41:34.150 --> 00:41:35.470
There's a linear system.
00:41:35.470 --> 00:41:37.150
Single input, single output.
00:41:37.150 --> 00:41:43.060
If it has impulse response, 1
over d of D, then if I put in
00:41:43.060 --> 00:41:48.970
the sequence n of D, I'm going
to get out n of D over d of D.
00:41:48.970 --> 00:41:52.540
And so by the same finite memory
argument, n of D over d
00:41:52.540 --> 00:41:57.110
of D is going to be eventually
periodic.
00:41:57.110 --> 00:42:03.150
So a sequence is rational
if and only if
00:42:03.150 --> 00:42:06.250
it's eventually periodic.
00:42:09.080 --> 00:42:11.290
And again, on the homework, I'm
going to ask you to prove
00:42:11.290 --> 00:42:12.960
this more carefully.
00:42:12.960 --> 00:42:15.710
You can use this finite memory
argument if you like.
00:42:18.940 --> 00:42:21.020
So this should remind
you of something.
00:42:21.020 --> 00:42:26.950
This should remind you of real
numbers, integers, ratios of
00:42:26.950 --> 00:42:30.490
integers, which are rational
real numbers, all
00:42:30.490 --> 00:42:33.650
that sort of thing.
00:42:33.650 --> 00:42:35.380
What are the analogies here?
00:42:42.710 --> 00:42:44.244
First of all, how big is it?
00:42:47.210 --> 00:42:50.190
How many real numbers
are there?
00:42:50.190 --> 00:42:51.440
It's uncountably infinite,
right?
00:42:54.460 --> 00:42:59.610
How many Laurent sequences
are there?
00:43:02.450 --> 00:43:06.980
You can I think quickly convince
yourself that it's
00:43:06.980 --> 00:43:08.230
uncountably infinite.
00:43:10.590 --> 00:43:15.820
Now, we have a special
subset of the real
00:43:15.820 --> 00:43:18.270
numbers called the integers.
00:43:18.270 --> 00:43:19.650
Oh.
00:43:19.650 --> 00:43:22.920
Let's think of this as a decimal
expansion, by the way.
00:43:22.920 --> 00:43:32.720
So we have a sub d, a sub d minus
1, and so forth, down to a sub 0,
00:43:32.720 --> 00:43:35.440
decimal point, a sub minus 1.
00:43:35.440 --> 00:43:38.255
These are the coefficients
of decimal expansion.
00:43:40.970 --> 00:43:43.840
There's something interesting
here.
00:43:43.840 --> 00:43:47.610
Implicitly we always assume
there are only a finite number
00:43:47.610 --> 00:43:53.770
of coefficients in the decimal
expansion above the decimal
00:43:53.770 --> 00:43:54.920
point, right?
00:43:54.920 --> 00:43:58.130
There can be an infinite
number going this way.
00:43:58.130 --> 00:44:03.140
So that's completely analogous
to what we have here.
00:44:03.140 --> 00:44:04.880
What are the integers?
00:44:04.880 --> 00:44:09.820
The integers are the ones
that stop at a sub 0.
00:44:12.540 --> 00:44:17.150
That are all zero to the right
side of the decimal point.
00:44:17.150 --> 00:44:20.770
Over here we have something that
looks like u sub d, D to the
00:44:20.770 --> 00:44:29.660
d, plus u sub d plus 1, D to the
d plus 1, and so forth.
00:44:29.660 --> 00:44:34.430
And in here we have a special
subclass called the
00:44:34.430 --> 00:44:39.820
polynomials in D. And
what are these?
00:44:39.820 --> 00:44:44.400
It's not precisely analogous,
because I should really do it
00:44:44.400 --> 00:44:47.180
with D to the minus 1, to make it
analogous and expand it in the
00:44:47.180 --> 00:44:48.390
other direction.
00:44:48.390 --> 00:44:52.010
But these are the ones that have
only a finite number of
00:44:52.010 --> 00:45:00.120
coefficients to the right of
the time 0 point, again.
00:45:00.120 --> 00:45:04.790
We noticed before that there is
a very close relationship
00:45:04.790 --> 00:45:07.830
between the factorization
properties of Z and the
00:45:07.830 --> 00:45:10.270
factorization properties
of the polynomials.
00:45:10.270 --> 00:45:12.390
They're both principal
ideal domains.
00:45:12.390 --> 00:45:15.190
They're both unique
factorization domains.
00:45:15.190 --> 00:45:18.505
They're both rings of the
very nicest type.
00:45:23.680 --> 00:45:28.890
Then once we have in here
the rational numbers --
00:45:28.890 --> 00:45:34.560
that's certainly an important
subset of the reals --
00:45:34.560 --> 00:45:37.410
how many rationals are there?
00:45:37.410 --> 00:45:42.490
The rationals, this is basically
a ratio of integers.
00:45:42.490 --> 00:45:44.750
And there's only a countably
infinite number
00:45:44.750 --> 00:45:47.570
of rationals, right?
00:45:47.570 --> 00:45:50.490
And what's the distinguishing
characteristic of the
00:45:50.490 --> 00:45:55.220
rationals in terms of their
decimal expansion?
00:45:55.220 --> 00:45:58.500
It's eventually periodic.
00:45:58.500 --> 00:46:05.640
So these correspond to the
rational functions, which are
00:46:05.640 --> 00:46:11.520
denoted by this regular
parentheses bracket.
00:46:11.520 --> 00:46:14.830
So this is the polynomials.
00:46:14.830 --> 00:46:16.725
This is the rational
functions.
00:46:20.450 --> 00:46:23.750
And because these are eventually
periodic, or
00:46:23.750 --> 00:46:28.870
because all of these can be
written as n of D over d of D,
00:46:28.870 --> 00:46:32.990
and both of these are
polynomials, or these are
00:46:32.990 --> 00:46:36.440
clearly countably infinite
sets, this is a countably
00:46:36.440 --> 00:46:37.210
infinite set.
00:46:37.210 --> 00:46:40.821
The set of all rational
Laurent sequences.
00:46:45.540 --> 00:46:52.100
So again, this is
just mnemonics.
00:46:52.100 --> 00:46:53.930
Haven't proved anything.
00:46:53.930 --> 00:46:59.580
But the behavior of these things
very closely reminds
00:46:59.580 --> 00:47:02.260
you of the behavior of these
things over here.
00:47:02.260 --> 00:47:04.500
It's good to keep
this in mind.
00:47:04.500 --> 00:47:05.950
One other point.
00:47:05.950 --> 00:47:10.270
Suppose I'd allow bi-infinite
sequences.
00:47:10.270 --> 00:47:19.202
Then 1 over 1 plus D doesn't
have a definite expansion.
00:47:19.202 --> 00:47:21.552
It has two expansions.
00:47:21.552 --> 00:47:26.670
1 over 1 plus D, if we require
that the expansion be Laurent,
00:47:26.670 --> 00:47:28.175
then we only have this
possibility.
00:47:30.920 --> 00:47:33.660
But if it could be bi-infinite
in the other direction, then
00:47:33.660 --> 00:47:37.320
you can see that this is also
equal to D to the minus 1 plus D to the
00:47:37.320 --> 00:47:42.840
minus 2 plus D to the minus 3 plus
so forth, semi-infinitely.
00:47:42.840 --> 00:47:48.910
So another reason we want to
rule out non-Laurent sequences
00:47:48.910 --> 00:47:51.020
is to have a unique inverse.
00:47:51.020 --> 00:47:53.980
This is just another way of
saying that the Laurent
00:47:53.980 --> 00:47:56.230
sequences form a field.
00:47:56.230 --> 00:48:02.030
That the bi-infinite sequences
don't have the
00:48:02.030 --> 00:48:03.220
multiplicative property.
00:48:03.220 --> 00:48:05.890
They're simply a group.
00:48:05.890 --> 00:48:09.560
And this is all written
up in the notes.
00:48:09.560 --> 00:48:10.110
All right.
00:48:10.110 --> 00:48:14.940
So where I want to get
to eventually here --
00:48:14.940 --> 00:48:16.190
come back over here.
00:48:21.900 --> 00:48:24.490
Let's now analyze this
convolutional encoder.
00:48:32.420 --> 00:48:38.730
It's a linear time invariant
circuit.
00:48:38.730 --> 00:48:43.080
That means it's characterized
by its impulse response.
00:48:53.140 --> 00:48:57.955
Or responses, if you like,
because it has two outputs.
00:49:00.860 --> 00:49:07.290
What does y1 of D, if I write
the D transform of y1, what is
00:49:07.290 --> 00:49:09.152
that going to equal to?
00:49:09.152 --> 00:49:14.060
If I just put in a single
impulse, single 1 at time 0
00:49:14.060 --> 00:49:18.690
for u k, what am I going
to get out up there?
00:49:18.690 --> 00:49:23.530
I'm going to get out 1 0
1 and then all zeros.
00:49:26.430 --> 00:49:36.820
So y1 of D, the impulse
response, which is g1 of D,
00:49:36.820 --> 00:49:40.650
equals 1 plus D squared.
00:49:40.650 --> 00:49:45.780
That's the impulse response
of the first output.
00:49:45.780 --> 00:49:54.020
And y1 of D is simply u of
D times 1 plus D squared,
00:49:54.020 --> 00:49:55.270
by linearity and time
invariance.
00:49:58.580 --> 00:50:02.570
So if I put in 1 plus
D squared, I'll get
00:50:02.570 --> 00:50:04.240
out 1 plus D fourth.
00:50:07.320 --> 00:50:09.050
Is this too quick?
00:50:09.050 --> 00:50:10.500
This is just linear
system theory.
00:50:10.500 --> 00:50:13.100
You've all seen this
in Digital Signal
00:50:13.100 --> 00:50:16.080
Processing or something.
00:50:16.080 --> 00:50:20.390
This is undergraduate stuff,
except for over F2.
00:50:20.390 --> 00:50:20.920
OK?
00:50:20.920 --> 00:50:23.570
You've just got to check
that it's linear, it's
00:50:23.570 --> 00:50:24.700
time-invariant.
00:50:24.700 --> 00:50:27.787
Therefore it's characterized
by its impulse response.
00:50:27.787 --> 00:50:32.560
And so once you know the impulse
response, you want to
00:50:32.560 --> 00:50:35.260
know the response to any
sequence, you convolve that
00:50:35.260 --> 00:50:37.490
sequence with the impulse
response.
00:50:37.490 --> 00:50:39.636
This is that statement in
D transform notation.
00:50:42.280 --> 00:50:42.800
OK?
00:50:42.800 --> 00:50:44.050
People happy with that?
00:50:46.750 --> 00:50:49.180
I will spend time on it if it's
mysterious, but I don't
00:50:49.180 --> 00:50:52.420
think it should be mysterious.
00:50:52.420 --> 00:50:52.920
All right.
00:50:52.920 --> 00:50:56.040
And let's complete
the picture.
00:50:56.040 --> 00:50:59.150
What's the impulse response
for the second
00:50:59.150 --> 00:51:02.740
output in that picture?
00:51:02.740 --> 00:51:07.250
1 plus D plus D squared,
thank you.
00:51:07.250 --> 00:51:14.100
So we have y2 of D is equal
to u of D times 1
00:51:14.100 --> 00:51:15.560
plus D plus D squared.
00:51:21.390 --> 00:51:26.870
We can aggregate this by simply
saying that a little 2
00:51:26.870 --> 00:51:34.690
vector, y of D, is equal to a
1 vector, u of D, times a
00:51:34.690 --> 00:51:45.970
little 2 vector, g of D, where
this means y1 of D, y2 of D,
00:51:45.970 --> 00:51:52.690
and this means g1 of
D, g2 of D. OK?
00:51:52.690 --> 00:51:55.845
So this is a very simple
little matrix equation.
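The matrix equation y(D) = u(D) g(D) for this rate-1/2 encoder can be sketched with the lecture's generators, g(D) = (1 + D squared, 1 + D + D squared); the bit-list representation (lowest degree first) is my own choice.

```python
def convolve_f2(u, g):
    # y(D) = u(D) g(D) over F2: ordinary convolution with XOR accumulation.
    y = [0] * (len(u) + len(g) - 1)
    for i, ui in enumerate(u):
        for j, gj in enumerate(g):
            y[i + j] ^= ui & gj
    return y

def encode(u):
    # Rate-1/2 encoder: y1 = u * (1 + D^2), y2 = u * (1 + D + D^2).
    return convolve_f2(u, [1, 0, 1]), convolve_f2(u, [1, 1, 1])

# An impulse in gives the two impulse responses g1, g2 out.
assert encode([1]) == ([1, 0, 1], [1, 1, 1])
# u(D) = 1 + D^2 gives y1(D) = (1 + D^2)^2 = 1 + D^4, as stated earlier.
assert encode([1, 0, 1])[0] == [1, 0, 0, 0, 1]
```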
00:52:05.690 --> 00:52:09.720
And now the convolutional
code.
00:52:09.720 --> 00:52:14.190
This is the output sequence in
response to a particular input
00:52:14.190 --> 00:52:17.980
sequence u of D. So we're going
to define the code as
00:52:17.980 --> 00:52:23.590
the set of all possible output
sequences when the input
00:52:23.590 --> 00:52:25.020
sequences run through what?
00:52:29.810 --> 00:52:32.160
Now, with all this elaborate
set up --
00:52:32.160 --> 00:52:36.950
so it's the set of all output
sequences y of D equals u of D
00:52:36.950 --> 00:52:41.070
g of D as u of D runs
through the set
00:52:41.070 --> 00:52:46.100
of all Laurent sequences.
00:52:46.100 --> 00:52:49.560
Sequences that start at
some definite time.
00:52:49.560 --> 00:52:52.310
And now I've said much more
precisely what I said back at
00:52:52.310 --> 00:52:53.330
the beginning.
00:52:53.330 --> 00:52:56.140
Because all sequences have a
definite starting time, we
00:52:56.140 --> 00:53:01.430
know what the starting state is
for any of these sequences.
00:53:01.430 --> 00:53:08.600
Always when the sequence starts
at time d, the system
00:53:08.600 --> 00:53:11.830
is quiet, it's in the all-zero
state, and it's driven from
00:53:11.830 --> 00:53:13.460
time d onwards.
00:53:13.460 --> 00:53:15.890
So this convolution
is well-defined.
00:53:15.890 --> 00:53:18.680
If we had an undefined starting
state, then this
00:53:18.680 --> 00:53:19.930
would not be well-defined.
00:53:23.350 --> 00:53:23.770
OK.
00:53:23.770 --> 00:53:29.190
So a long, longer
than I intended,
00:53:29.190 --> 00:53:31.440
lecture on Laurent sequences.
00:53:31.440 --> 00:53:34.770
But it's turned out, in the
theory of convolutional codes,
00:53:34.770 --> 00:53:36.970
it's very important
to get this right.
00:53:36.970 --> 00:53:40.182
People have used various
definitions.
00:53:40.182 --> 00:53:43.820
They've let this run through
finite sequences, polynomial
00:53:43.820 --> 00:53:46.910
sequences, formal
power series.
00:53:46.910 --> 00:53:50.910
And trust me, this is the
right way to do it.
00:53:50.910 --> 00:53:53.820
And I don't think anyone
is better qualified
00:53:53.820 --> 00:53:56.300
to make that statement.
00:53:56.300 --> 00:54:00.310
So that is the way we define
a convolutional code.
00:54:02.930 --> 00:54:10.990
Now which direction shall
I go from here?
00:54:10.990 --> 00:54:14.270
I guess I ought to ask,
what are we going to
00:54:14.270 --> 00:54:15.740
allow g of D to be?
00:54:18.980 --> 00:54:24.280
In the picture up there, we
have g1 of D and g2 of D
00:54:24.280 --> 00:54:34.690
polynomial, which means both
causal and finite.
00:54:34.690 --> 00:54:37.580
Causal means it starts
at time 0 or later.
00:54:37.580 --> 00:54:39.390
Finite means that it
has only a finite
00:54:39.390 --> 00:54:41.160
number of non-zero terms.
00:54:43.880 --> 00:54:44.500
OK.
00:54:44.500 --> 00:54:50.080
I want to allow perhaps more
general types of encoders.
00:54:50.080 --> 00:54:53.630
I do want the encoder
to be finite state.
00:54:53.630 --> 00:54:57.680
So let me impose that
I want the encoder
00:54:57.680 --> 00:55:00.930
to be finite state.
00:55:00.930 --> 00:55:02.395
But I'm going to
allow feedback.
00:55:08.280 --> 00:55:11.460
Which I don't have there.
00:55:11.460 --> 00:55:16.430
If I allow feedback, then I can
get infinite responses,
00:55:16.430 --> 00:55:19.140
infinite impulse responses.
00:55:19.140 --> 00:55:23.920
But by the same argument that
I've already made, it's going
00:55:23.920 --> 00:55:25.990
to have to eventually
be periodic, right?
00:55:25.990 --> 00:55:29.260
If the encoder has
finite state.
00:55:29.260 --> 00:55:37.660
So we allow g1 of D
and g2 of D to be general.
00:55:37.660 --> 00:55:40.040
We're going to allow up
to n of these for
00:55:40.040 --> 00:55:42.560
rate 1 over n encoders.
00:55:42.560 --> 00:55:45.460
If we want this to be the
impulse response of a finite
00:55:45.460 --> 00:55:48.820
state encoder --
00:55:48.820 --> 00:55:52.150
they also have to be
causal, don't they?
00:55:52.150 --> 00:55:56.020
In order to be realizable, the
impulse response has
00:55:56.020 --> 00:55:59.320
to start at time 0 or later.
00:55:59.320 --> 00:56:01.620
Then I'm going to have to
require that these all be
00:56:01.620 --> 00:56:05.725
rational and causal.
00:56:11.050 --> 00:56:14.770
And that's the only limitations
I'm going to put
00:56:14.770 --> 00:56:18.170
on the g of D.
00:56:18.170 --> 00:56:22.710
So in order to get a finite
state system, I need to have
00:56:22.710 --> 00:56:24.900
rational impulse responses.
00:56:24.900 --> 00:56:26.500
In order to have a realizable
system,
00:56:26.500 --> 00:56:27.750
I need causal responses.
00:56:30.430 --> 00:56:34.250
So these are the requirements
on the impulse responses.
00:56:34.250 --> 00:56:39.770
Otherwise I'm not going to force
them to be anything in
00:56:39.770 --> 00:56:41.020
particular.
00:56:42.790 --> 00:56:44.050
Now let's see.
00:56:44.050 --> 00:56:50.550
I go into realization theory
a little bit in the notes.
00:56:50.550 --> 00:56:53.890
Since I've spent so much time on
Laurent sequences, I don't
00:56:53.890 --> 00:56:56.880
think I'm going to
do that here.
00:56:56.880 --> 00:56:57.736
Yeah?
00:56:57.736 --> 00:57:00.170
AUDIENCE: Does this mean that
the denominator is also
00:57:00.170 --> 00:57:01.420
polynomial?
00:57:03.480 --> 00:57:06.030
PROFESSOR: Correct.
00:57:06.030 --> 00:57:11.860
So in fact, let me always
write these.
00:57:11.860 --> 00:57:14.160
Let me take the least common
multiple of all the
00:57:14.160 --> 00:57:15.690
denominators.
00:57:15.690 --> 00:57:17.800
I'm going to write these then.
00:57:17.800 --> 00:57:26.280
g of D is going to be some
vector of polynomials up here.
00:57:26.280 --> 00:57:29.580
I can always write the least
common multiple of all the
00:57:29.580 --> 00:57:32.050
denominators down here and have
a single denominator.
00:57:34.700 --> 00:57:37.190
OK?
00:57:37.190 --> 00:57:44.720
So this is going to be my
general form for a causal,
00:57:44.720 --> 00:57:49.390
rational, single input and
output impulse response.
00:57:49.390 --> 00:57:52.670
It's going to consist of
something like this, where
00:57:52.670 --> 00:57:55.285
these are all polynomial.
00:57:58.280 --> 00:58:06.460
This is a single polynomial
with d0 equals 1.
00:58:06.460 --> 00:58:08.650
Turns out that's necessary
for realizability.
00:58:11.490 --> 00:58:14.770
And I'll just tell you the fact,
which you can read about
00:58:14.770 --> 00:58:17.050
in the notes.
00:58:17.050 --> 00:58:21.340
In fact, I will leave this
mostly as an exercise for the
00:58:21.340 --> 00:58:22.950
student, anyway.
00:58:22.950 --> 00:58:37.900
Realizable with nu equals max of
the degree of the n_i of D,
00:58:37.900 --> 00:58:39.900
or the degree of the
denominator.
00:58:39.900 --> 00:58:42.720
In other words, I take all
these polynomials --
00:58:42.720 --> 00:58:46.950
the maximum degree
of any of them --
00:58:46.950 --> 00:58:48.820
and I call that nu.
00:58:48.820 --> 00:58:50.600
And I claim there's
a realization
00:58:50.600 --> 00:58:51.875
with nu memory elements.
00:58:56.616 --> 00:58:59.430
And of course, I'm going to
assume here that I've reduced
00:58:59.430 --> 00:59:04.270
this to lowest terms in order
to keep that down.
00:59:04.270 --> 00:59:06.270
If I didn't have the
denominator term,
00:59:06.270 --> 00:59:07.880
this would be clear.
00:59:07.880 --> 00:59:12.860
If I just had g of D equal
to a set of a vector of n
00:59:12.860 --> 00:59:18.260
polynomials of maximum degree
nu, then it's clear that just
00:59:18.260 --> 00:59:23.280
by this kind of a shift register
realization -- here
00:59:23.280 --> 00:59:28.870
we call the constraint length
the number of memory
00:59:28.870 --> 00:59:35.100
elements -- I can realize any n
impulse responses which all
00:59:35.100 --> 00:59:38.240
have degree nu or less, right?
00:59:38.240 --> 00:59:42.910
So the only trick is showing how
to put in a feedback loop
00:59:42.910 --> 00:59:47.190
which basically implements this
denominator polynomial d
00:59:47.190 --> 00:59:52.000
of D. If I start off by
realizing 1 over d of D, then
00:59:52.000 --> 00:59:55.100
I can basically just realize
this in the same way.
00:59:57.710 --> 01:00:00.405
And nu is called the
constraint length.
01:00:05.960 --> 01:00:10.703
And I have 2 to the nu states.
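This count is easy to check mechanically. A small sketch in Python (representing F2[D] polynomials as integer bitmasks, bit i holding the coefficient of D^i, is an editorial choice, not something from the lecture):

```python
# Sketch: constraint length and state count for a rate-1/n rational encoder.
# F2[D] polynomials are Python ints: bit i is the coefficient of D^i.

def degree(p: int) -> int:
    """Degree of a nonzero polynomial stored as a bitmask."""
    return p.bit_length() - 1

def constraint_length(numerators, denominator):
    """nu = max degree over all n_i(D) and the denominator d(D)."""
    return max(degree(p) for p in list(numerators) + [denominator])

# Example encoder from the lecture: g(D) = (1 + D^2, 1 + D + D^2), d(D) = 1.
n1, n2, d = 0b101, 0b111, 0b1
nu = constraint_length([n1, n2], d)
print(nu)       # constraint length: 2
print(2 ** nu)  # number of states: 4
```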
01:00:15.820 --> 01:00:22.400
So by constraining this to be
of this form, I've now gone
01:00:22.400 --> 01:00:23.830
full circle.
01:00:23.830 --> 01:00:27.230
The number of states is in
fact 2 to the nu, and in
01:00:27.230 --> 01:00:28.480
particular, it's finite.
01:00:31.960 --> 01:00:43.600
When I constrain the impulse
responses to be like that,
01:00:43.600 --> 01:00:45.630
then I guarantee that I'm going
to have a finite state
01:00:45.630 --> 01:00:46.880
realization.
01:00:49.990 --> 01:00:52.760
So from now on, that's what
my impulse response is
01:00:52.760 --> 01:00:54.010
going to look like.
01:00:57.020 --> 01:00:57.650
All right.
01:00:57.650 --> 01:01:05.820
Now let's talk about code
equivalence, or encoder
01:01:05.820 --> 01:01:07.070
equivalence.
01:01:11.550 --> 01:01:17.800
I've defined the code
generated --
01:01:17.800 --> 01:01:23.380
now I'm going to characterize
an encoder by its impulse
01:01:23.380 --> 01:01:32.320
responses, g of D. The code
generated by g of D is u of D
01:01:32.320 --> 01:01:38.260
g of D as u of D goes
through the set
01:01:38.260 --> 01:01:39.555
of all Laurent sequences.
01:01:43.460 --> 01:01:56.585
Two encoders are equivalent if
they generate the same code.
01:02:05.280 --> 01:02:07.280
Seems reasonable.
01:02:07.280 --> 01:02:09.320
What we're really ultimately
interested in in
01:02:09.320 --> 01:02:12.330
communications is the behavior
of the code.
01:02:12.330 --> 01:02:16.150
In particular, the minimum
distance of the code, how far
01:02:16.150 --> 01:02:18.300
the sequences are separated.
01:02:18.300 --> 01:02:21.720
We're not particularly
interested in the encoder.
01:02:21.720 --> 01:02:24.880
At the decoder, we're just
simply going to try to tell
01:02:24.880 --> 01:02:27.270
which code sequence was sent.
01:02:27.270 --> 01:02:31.750
So for most purposes,
probability of decoding error,
01:02:31.750 --> 01:02:33.790
performance of the decoder
and so forth, we're only
01:02:33.790 --> 01:02:35.250
interested in the code itself.
01:02:35.250 --> 01:02:38.210
We're not interested in this
little one-to-one map between
01:02:38.210 --> 01:02:42.110
information bits and
the code sequences.
01:02:42.110 --> 01:02:45.130
So this is a reasonable
definition of encoder
01:02:45.130 --> 01:02:46.380
equivalence.
01:02:48.590 --> 01:02:54.260
Now, for the case of rate 1/n
codes, which is all I'm
01:02:54.260 --> 01:02:55.330
talking about here --
01:02:55.330 --> 01:02:59.675
1 input, n output, so the code
generator looks like this --
01:03:03.970 --> 01:03:06.230
it's very simple.
01:03:06.230 --> 01:03:19.670
g of D and g prime of D are
equivalent if and only if g of
01:03:19.670 --> 01:03:29.740
D and g prime of D differ by
some multiple a of D, where I
01:03:29.740 --> 01:03:32.615
could let a of D be any
Laurent sequence.
01:03:32.615 --> 01:03:37.351
But then to keep everything in
the same ballpark, I think I'd
01:03:37.351 --> 01:03:41.820
have to keep a of D rational
and causal.
01:03:41.820 --> 01:03:45.160
But even forgetting this -- in
other words, if one is a
01:03:45.160 --> 01:03:46.510
multiple of the other --
01:03:46.510 --> 01:03:49.610
then the two codes
are equivalent.
01:03:49.610 --> 01:04:01.580
That's clear because I can
invert a of D. So this is just
01:04:01.580 --> 01:04:03.680
a single Laurent sequence,
which has an inverse.
01:04:03.680 --> 01:04:08.510
This is the same thing as
saying g of D times
01:04:08.510 --> 01:04:10.180
1 over a of D --
01:04:10.180 --> 01:04:15.450
which is another rational,
causal sequence, is g prime of
01:04:15.450 --> 01:04:17.650
D.
01:04:17.650 --> 01:04:27.050
If this is true, then the
sequence generated by u of D
01:04:27.050 --> 01:04:30.780
when the encoder is g of D is
the same as the sequence
01:04:30.780 --> 01:04:35.135
generated by u of D a of D when
the encoder is g prime of
01:04:35.135 --> 01:04:38.400
D. And this is another
rational sequence.
01:04:38.400 --> 01:04:40.820
And vice versa.
01:04:40.820 --> 01:04:44.450
The code sequence generated by u
of D when the encoder is g prime
01:04:44.450 --> 01:04:47.740
of D, is the sequence generated
by u of D over a of
01:04:47.740 --> 01:04:52.530
D if the encoder
is g of D. OK?
01:04:52.530 --> 01:04:54.120
So that's really the proof.
01:04:57.070 --> 01:05:00.520
So this is a very, very
simple theorem.
01:05:00.520 --> 01:05:08.810
In other words, I can multiply
any encoder n-tuple g of D by
01:05:08.810 --> 01:05:14.550
any rational causal sequence out
front, in particular by a
01:05:14.550 --> 01:05:17.400
polynomial or 1 over a
polynomial or something, and
01:05:17.400 --> 01:05:19.950
I'm going to generate
the same code.
01:05:19.950 --> 01:05:21.420
This is not going to matter.
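For a polynomial multiplier a(D), the key identity behind this theorem is just associativity: the codeword u(D) times a(D)g(D) is the codeword (u(D)a(D)) times g(D). A quick numerical check (the bitmask representation and the particular a(D) are editorial choices for illustration; the theorem itself allows any rational causal a(D)):

```python
# Sketch: multiplying g(D) by a polynomial a(D) produces codewords that the
# original encoder also generates, from the input u(D)a(D).

def polymul(a: int, b: int) -> int:
    """Carryless (F2[D]) polynomial multiplication on int bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

g = (0b101, 0b111)   # g(D) = (1 + D^2, 1 + D + D^2)
a = 0b11             # a(D) = 1 + D, an illustrative multiplier
g_prime = tuple(polymul(a, gi) for gi in g)

for u in range(1, 16):  # a few polynomial inputs u(D)
    cw_gprime = tuple(polymul(u, gi) for gi in g_prime)
    cw_g = tuple(polymul(polymul(u, a), gi) for gi in g)
    assert cw_gprime == cw_g
print("every g'(D)-codeword checked is a g(D)-codeword, generated by u(D)a(D)")
```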
01:05:24.290 --> 01:05:28.400
Now, to see why we might want
to make use of this --
01:05:32.452 --> 01:05:35.590
let's see.
01:05:35.590 --> 01:05:39.770
Well, given this, we might want
to pick some particularly
01:05:39.770 --> 01:05:45.920
nice encoder from this
equivalence class of encoders,
01:05:45.920 --> 01:05:49.074
all of which generate
the same code.
01:05:49.074 --> 01:05:55.090
And basically, I'm going to
suggest that the nicest
01:05:55.090 --> 01:06:02.040
encoder is the one we get if we
multiply by d of D. And if
01:06:02.040 --> 01:06:04.710
there's any common factor
of these n of D's,
01:06:04.710 --> 01:06:06.630
we divide it out.
01:06:06.630 --> 01:06:11.230
That will give us the least
degree polynomial encoder,
01:06:11.230 --> 01:06:13.710
which is equivalent to
the given encoder.
01:06:13.710 --> 01:06:17.520
If I start off with one like
this, and I want to multiply
01:06:17.520 --> 01:06:20.810
through by the denominator to
make it a polynomial, and if
01:06:20.810 --> 01:06:23.200
there's any common factor to
the numerator, I want to
01:06:23.200 --> 01:06:24.540
divide it out.
01:06:24.540 --> 01:06:27.330
And that's what I'm going to
take as my canonical encoder.
01:06:30.310 --> 01:06:33.924
Now let me give you a little
motivation for that.
01:06:33.924 --> 01:06:36.786
AUDIENCE: [INAUDIBLE]
01:06:36.786 --> 01:06:38.550
PROFESSOR: That's
one motivation.
01:06:38.550 --> 01:06:41.180
It will have the simplest
realization.
01:06:41.180 --> 01:06:45.260
It will have a feedback-free
realization by getting rid of
01:06:45.260 --> 01:06:47.000
the denominator term.
01:06:47.000 --> 01:06:49.780
And it will have the least
constraint length of all
01:06:49.780 --> 01:06:52.920
equivalent encoders, and
that's a theorem too.
01:06:52.920 --> 01:06:56.810
But that's not the primary
motivation.
01:06:56.810 --> 01:07:00.240
Let me talk a little bit about
distance at this point.
01:07:05.630 --> 01:07:13.050
Let me take my example
encoder again.
01:07:13.050 --> 01:07:16.690
And let me ask, what's
the minimum distance
01:07:16.690 --> 01:07:18.030
between code sequences?
01:07:18.030 --> 01:07:22.320
This is clearly going to be an
important performance metric
01:07:22.320 --> 01:07:25.550
of this code, right?
01:07:25.550 --> 01:07:28.080
How would you go about answering
that question, or at
01:07:28.080 --> 01:07:30.060
least two reasonable
ways to go?
01:07:37.560 --> 01:07:39.810
Do you think the minimum distance
is greater than 1?
01:07:42.980 --> 01:07:45.090
You think it's 1?
01:07:45.090 --> 01:07:46.730
OK.
01:07:46.730 --> 01:07:49.350
So how would you get
two sequences that
01:07:49.350 --> 01:07:52.013
differ only in one place?
01:07:52.013 --> 01:07:55.320
AUDIENCE: I mean, if you take
the impulse response.
01:07:55.320 --> 01:07:56.580
Or you take the --
01:07:56.580 --> 01:07:59.980
PROFESSOR: Well, the impulse
response, if I put in 1, u of
01:07:59.980 --> 01:08:06.530
D equals 1, then I get out g of
D, which in this case is 1
01:08:06.530 --> 01:08:11.360
plus D squared, 1 plus D plus
D squared, I actually get a
01:08:11.360 --> 01:08:12.800
weight 5 sequence out.
01:08:16.359 --> 01:08:19.225
You see that?
01:08:19.225 --> 01:08:25.170
AUDIENCE: [INAUDIBLE] you get
1 and 1 0 1, and [INAUDIBLE]
01:08:25.170 --> 01:08:27.010
1 and 1 and 1 --
01:08:27.010 --> 01:08:31.593
PROFESSOR: So I get this out,
in D transform notation.
01:08:31.593 --> 01:08:37.850
Or I get 1 0 1 1 1 1 in
terms of that times
01:08:37.850 --> 01:08:41.116
0 1 1 and so forth.
01:08:44.569 --> 01:08:47.916
y1 equals this, y2 equals that.
01:08:47.916 --> 01:08:49.810
Right?
01:08:49.810 --> 01:08:50.200
OK.
01:08:50.200 --> 01:08:54.220
So that's a weight 5 sequence.
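Since the input u(D) = 1 just reproduces g(D) at the output, the weight of the impulse response is the total number of nonzero coefficients in the two generator polynomials. As a one-liner (bitmask representation is an editorial choice):

```python
# Sketch: Hamming weight of the impulse response of g(D) = (1+D^2, 1+D+D^2).
g = (0b101, 0b111)  # (1 + D^2, 1 + D + D^2) as int bitmasks
weight = sum(bin(gi).count("1") for gi in g)
print(weight)  # 5
```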
01:08:54.220 --> 01:08:57.040
So if I put in the all-zero
sequence, I get out the
01:08:57.040 --> 01:08:57.740
all-zero sequence.
01:08:57.740 --> 01:09:00.990
So as always, the all-zero
sequence is an
01:09:00.990 --> 01:09:02.420
element of the code.
01:09:02.420 --> 01:09:06.439
This differs from the all-zero
sequence in five places.
01:09:06.439 --> 01:09:10.470
So all you've shown me so far is
the minimum distance is not
01:09:10.470 --> 01:09:11.720
greater than five.
01:09:14.550 --> 01:09:17.920
I've found two sequences that
differ in five places.
01:09:17.920 --> 01:09:20.806
Are there any that differ in
fewer places than that?
01:09:27.149 --> 01:09:29.590
The answer is no.
01:09:29.590 --> 01:09:33.090
So in fact, it's called
the free distance.
01:09:33.090 --> 01:09:38.340
For this code, the so-called
free distance is five.
01:09:42.569 --> 01:09:44.750
Let's think of the two different
kinds of structure
01:09:44.750 --> 01:09:46.240
for this code.
01:09:46.240 --> 01:09:51.670
And they lead to different but
complementary arguments that
01:09:51.670 --> 01:09:53.115
both get us to this
conclusion.
01:09:55.820 --> 01:10:00.060
First of all, the code
is linear, right?
01:10:00.060 --> 01:10:01.390
So C is linear.
01:10:05.750 --> 01:10:08.300
Is it not?
01:10:08.300 --> 01:10:17.680
If I have y of D equals u of D g
of D, and y prime of D equal
01:10:17.680 --> 01:10:20.270
to, say, some other sequence,
01:10:20.270 --> 01:10:24.460
u prime of D times g of D. Then
let me check the group
01:10:24.460 --> 01:10:29.020
property, which is all I have
to check for a binary code.
01:10:29.020 --> 01:10:33.540
And I find that y of D plus
y prime of D, the sum of these two
01:10:33.540 --> 01:10:38.020
sequences, is the sequence
that's generated by the sum of
01:10:38.020 --> 01:10:40.640
the two input sequences that
led to these two sequences.
01:10:40.640 --> 01:10:45.860
So the binary sum of any
two sequences is
01:10:45.860 --> 01:10:47.426
another code word.
01:10:47.426 --> 01:10:47.910
All right?
01:10:47.910 --> 01:10:49.255
Ergo it has the group
property.
01:10:51.820 --> 01:10:55.650
Ergo C is a group.
01:10:55.650 --> 01:10:56.990
And that's all we
need to check.
01:10:56.990 --> 01:11:01.170
It's actually linear over F2,
as a vector space over F2.
01:11:03.880 --> 01:11:13.410
0 is in the code, and 1 times
any code word is in the code.
01:11:13.410 --> 01:11:18.960
So we've checked closure under
vector addition and scalar
01:11:18.960 --> 01:11:20.900
multiplication.
01:11:20.900 --> 01:11:23.040
So it's linear.
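The group property can also be verified numerically for polynomial inputs, because carryless multiplication distributes over XOR (the F2 addition). A sketch, again using the editorial bitmask representation:

```python
# Sketch: the sum (XOR) of any two codewords is the codeword generated by the
# sum of the two inputs -- the group property of the code.

def polymul(a: int, b: int) -> int:
    """Carryless (F2[D]) polynomial multiplication on int bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

g = (0b101, 0b111)  # g(D) = (1 + D^2, 1 + D + D^2)

def encode(u: int):
    return tuple(polymul(u, gi) for gi in g)

for u in range(8):
    for v in range(8):
        sum_cw = tuple(x ^ y for x, y in zip(encode(u), encode(v)))
        assert sum_cw == encode(u ^ v)
print("group property holds for all tested input pairs")
```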
01:11:23.040 --> 01:11:26.080
What is the main conclusion
we get from this?
01:11:29.700 --> 01:11:35.270
Therefore, the minimum
distance between code
01:11:35.270 --> 01:11:45.780
sequences is equal to --
01:11:45.780 --> 01:11:47.030
anybody?
01:11:48.900 --> 01:12:00.280
The minimum non-zero weight
of any y of D in the code.
01:12:04.870 --> 01:12:05.940
Exactly the same argument.
01:12:05.940 --> 01:12:10.090
We're talking infinite sequences
here, but there's
01:12:10.090 --> 01:12:13.150
nothing that changes
in the argument.
01:12:13.150 --> 01:12:15.470
And so that really simplifies
things.
01:12:15.470 --> 01:12:17.250
We simply have to ask --
01:12:17.250 --> 01:12:24.430
here's a non-zero sequence
of Hamming weight 5.
01:12:24.430 --> 01:12:28.950
You can add this to any code
sequence and get another
01:12:28.950 --> 01:12:30.410
legitimate code sequence.
01:12:30.410 --> 01:12:32.260
This is the code sequence.
01:12:32.260 --> 01:12:33.580
Add it to any other,
and you get a
01:12:33.580 --> 01:12:34.790
legitimate code sequence.
01:12:34.790 --> 01:12:37.690
So from any other code sequence,
there's going to be
01:12:37.690 --> 01:12:39.960
one of distance 5.
01:12:39.960 --> 01:12:42.370
Are there any lower weight
sequences in this code?
01:12:49.340 --> 01:12:50.250
Well, what would you --
01:12:50.250 --> 01:12:51.500
AUDIENCE: [INAUDIBLE]
01:12:54.050 --> 01:12:55.140
PROFESSOR: Good.
01:12:55.140 --> 01:12:57.020
So here's where we need
to get into much
01:12:57.020 --> 01:12:58.620
more concrete arguments.
01:12:58.620 --> 01:13:02.340
You can see that first of all,
we're only interested in
01:13:02.340 --> 01:13:06.070
looking at finite sequences
in the code.
01:13:06.070 --> 01:13:08.450
And we might as well have them
start at time 0, because if
01:13:08.450 --> 01:13:11.200
they start later, we could just
shift them over by time
01:13:11.200 --> 01:13:12.480
invariance.
01:13:12.480 --> 01:13:15.420
So we're only interested in
finite polynomial sequences
01:13:15.420 --> 01:13:17.620
that start at time 0.
01:13:17.620 --> 01:13:18.320
OK.
01:13:18.320 --> 01:13:21.180
So just as you said, in the
first place, they're always
01:13:21.180 --> 01:13:22.660
going to have bing bing!
01:13:22.660 --> 01:13:24.240
Two 1's.
01:13:24.240 --> 01:13:25.620
When the first 1 comes
in, we're going
01:13:25.620 --> 01:13:27.800
to get two 1's out.
01:13:27.800 --> 01:13:30.270
For sure.
01:13:30.270 --> 01:13:33.040
Time 2.
01:13:33.040 --> 01:13:37.200
This is what we get out if the
second bit in, in the input
01:13:37.200 --> 01:13:39.990
sequence, is a 0.
01:13:39.990 --> 01:13:45.590
What happens if the second
bit in is a 1?
01:13:45.590 --> 01:13:52.450
Then we add this to this,
shift it over 1.
01:13:52.450 --> 01:13:54.520
We have 1 1 in particular
at this place.
01:13:54.520 --> 01:13:56.610
We get a 1 0 out.
01:13:56.610 --> 01:13:57.430
OK?
01:13:57.430 --> 01:14:01.320
So we conclude at the second
time, we're always going to
01:14:01.320 --> 01:14:05.670
get at least another
unit of distance.
01:14:05.670 --> 01:14:07.160
Now jump ahead.
01:14:07.160 --> 01:14:08.810
However long this is, this
is going to have
01:14:08.810 --> 01:14:10.060
to end at some time.
01:14:12.800 --> 01:14:16.650
At the time when the last bit,
the last non-zero bit, is
01:14:16.650 --> 01:14:20.080
shifting out of the shift
register, we're also going to
01:14:20.080 --> 01:14:21.330
get 1 1 out.
01:14:24.390 --> 01:14:30.565
So any finite code word has
to end with a 1 1 at
01:14:30.565 --> 01:14:31.815
the last time out.
01:14:34.270 --> 01:14:36.890
People don't seem to be totally
comfortable with this,
01:14:36.890 --> 01:14:38.280
so I'll do it in a
more elaborate
01:14:38.280 --> 01:14:39.880
way in just a second.
01:14:39.880 --> 01:14:41.410
But this is the basis
of the argument.
01:14:41.410 --> 01:14:42.956
AUDIENCE: [INAUDIBLE]
01:14:42.956 --> 01:14:44.960
this?
01:14:44.960 --> 01:14:45.920
PROFESSOR: At the end?
01:14:45.920 --> 01:14:47.380
AUDIENCE: No, I mean
by [UNINTELLIGIBLE]
01:14:47.380 --> 01:14:50.040
at the same time.
01:14:50.040 --> 01:14:51.870
PROFESSOR: At the end of
a finite code word?
01:14:51.870 --> 01:14:52.750
AUDIENCE: Yes.
01:14:52.750 --> 01:14:53.040
PROFESSOR: All right.
01:14:53.040 --> 01:14:54.290
What does a finite code
word consist of?
01:14:56.880 --> 01:15:00.060
I'm going to claim that we only
get a finite code word
01:15:00.060 --> 01:15:04.680
out when we put a finite
input sequence in.
01:15:04.680 --> 01:15:05.240
OK?
01:15:05.240 --> 01:15:07.670
There's no feedback here.
01:15:07.670 --> 01:15:11.170
So there's going to be some last
1 in the input sequence
01:15:11.170 --> 01:15:12.980
as that shifts through here.
01:15:12.980 --> 01:15:15.130
At the very last time,
the last state is
01:15:15.130 --> 01:15:17.540
going to be 0 1.
01:15:17.540 --> 01:15:17.920
All right?
01:15:17.920 --> 01:15:20.400
Just before that comes
out, there's a 0
01:15:20.400 --> 01:15:22.695
coming in, forever after.
01:15:22.695 --> 01:15:26.660
So the last time we get a 1 at
u k minus 2, and that forces
01:15:26.660 --> 01:15:30.780
these two bits out to be 1.
01:15:30.780 --> 01:15:32.680
Or you can do it
by polynomials.
01:15:32.680 --> 01:15:36.730
You can always show that if you
multiply this by any finite
01:15:36.730 --> 01:15:40.010
polynomial, you're going to get
highest degree terms, both
01:15:40.010 --> 01:15:41.260
equal to 1.
01:15:43.260 --> 01:15:44.260
OK.
01:15:44.260 --> 01:15:48.010
I'll do this by explicitly
drawing out the state diagram.
01:15:48.010 --> 01:15:56.460
So we can conclude here that we
always have 2 here, 1 here,
01:15:56.460 --> 01:15:59.530
dot dot dot dot, and finally
at the end, we have to
01:15:59.530 --> 01:16:00.850
have at least 2.
01:16:00.850 --> 01:16:04.020
Therefore, every nonzero finite
sequence has to have
01:16:04.020 --> 01:16:05.870
weight at least 5.
01:16:05.870 --> 01:16:08.750
Since we've seen one that
has weight 5, that is the
01:16:08.750 --> 01:16:11.420
minimum distance.
01:16:11.420 --> 01:16:13.100
OK?
01:16:13.100 --> 01:16:16.260
Now let's do this by drawing
the state diagram.
01:16:20.730 --> 01:16:23.331
Which is where we're going.
01:16:23.331 --> 01:16:24.624
AUDIENCE: So I know
that [INAUDIBLE]
01:16:31.040 --> 01:16:31.350
PROFESSOR: No.
01:16:31.350 --> 01:16:35.410
We had to construct a little
argument here, and in effect,
01:16:35.410 --> 01:16:36.300
make a little search.
01:16:36.300 --> 01:16:38.570
And this simple argument
wouldn't work for more
01:16:38.570 --> 01:16:39.890
complicated cases.
01:16:39.890 --> 01:16:43.460
So let me show you how to attack
more complicated cases.
01:16:43.460 --> 01:16:48.610
We want to draw a finite
state machine.
01:16:48.610 --> 01:16:50.485
How do you analyze finite
state machines?
01:16:53.920 --> 01:16:56.380
Well, a good way to start
is to draw the
01:16:56.380 --> 01:16:57.730
state transition diagram.
01:17:04.460 --> 01:17:08.690
And I don't know what the
curriculum consists of
01:17:08.690 --> 01:17:11.810
nowadays, but I assume
you've all done this.
01:17:11.810 --> 01:17:14.120
We draw the four possible
states.
01:17:14.120 --> 01:17:15.370
0 0.
01:17:19.130 --> 01:17:22.450
1 0, we can get to from 0 0.
01:17:22.450 --> 01:17:28.850
0 0, if I get an input of 0,
then I'm going to put out 0 0,
01:17:28.850 --> 01:17:31.060
then I'm going to stay
in the 0 state.
01:17:31.060 --> 01:17:32.170
All right?
01:17:32.170 --> 01:17:36.266
Because this is linear, if the
0 comes in, that's 0 0, stay
01:17:36.266 --> 01:17:38.110
in the 0 state.
01:17:38.110 --> 01:17:42.780
Or a 1 could come in.
01:17:42.780 --> 01:17:46.310
And in that case, as we've
noted, we'll put out two 1's
01:17:46.310 --> 01:17:50.320
as our output, and we'll
transition to the state 1 0.
01:17:54.040 --> 01:17:57.190
Now from 1 0, where can we go?
01:17:57.190 --> 01:18:09.630
For 1 0, if a 0 comes in, then
we'll put out 0 1 and we'll
01:18:09.630 --> 01:18:11.745
transition to 0 1.
01:18:14.910 --> 01:18:19.650
Or we'll get another 1 in.
01:18:19.650 --> 01:18:23.182
We would put out the complement
of that, 1 0.
01:18:23.182 --> 01:18:27.830
So we get another 1 1 in
this linear thing.
01:18:27.830 --> 01:18:28.405
Work it out.
01:18:28.405 --> 01:18:31.320
I'll just assert that.
01:18:31.320 --> 01:18:33.120
And we go to state 1 1.
01:18:36.350 --> 01:18:43.760
From 0 1, if we get a 0 in, we
return to the 0 0 state and we
01:18:43.760 --> 01:18:45.260
put out 1 1.
01:18:45.260 --> 01:18:49.385
So here's our basic impulse
response, right down here.
01:18:49.385 --> 01:18:52.080
We put in a 1, we go through
this little cycle, we come
01:18:52.080 --> 01:18:54.960
back to 0 0.
01:18:54.960 --> 01:19:01.910
Or if we get a 1 in, we go
back up here, and at that
01:19:01.910 --> 01:19:05.460
point, we actually only
put out two 0's.
01:19:05.460 --> 01:19:06.426
Is that right?
01:19:06.426 --> 01:19:08.590
Yeah, because it has to be
the complement of this.
01:19:12.380 --> 01:19:15.980
And if we're in the 1 1 state
and another 1 comes in, we
01:19:15.980 --> 01:19:19.620
stay in the 1 1 state.
01:19:19.620 --> 01:19:21.340
And we put out what?
01:19:25.890 --> 01:19:29.730
Let me do the 0 1 first.
01:19:29.730 --> 01:19:34.260
0, and we're in 1 1 state,
then what do we get out?
01:19:34.260 --> 01:19:37.725
We get out a 0 down here and
we get out a 1 up there.
01:19:37.725 --> 01:19:38.975
We got 1 0.
01:19:41.750 --> 01:19:46.970
Or if we get in a 1,
we put out 0 1.
01:19:46.970 --> 01:19:47.330
All right.
01:19:47.330 --> 01:19:49.025
So that's the state transition
diagram.
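The whole diagram can be generated mechanically from the shift register equations: with state (s1, s2) holding the two previous input bits, the taps of 1 + D^2 give y1 = u + s2 and the taps of 1 + D + D^2 give y2 = u + s1 + s2 (this Python rendering is a sketch; the equations follow from the lecture's encoder):

```python
# Sketch: state transition diagram of g(D) = (1+D^2, 1+D+D^2).
# State (s1, s2) = (previous input, input before that).

def step(state, u):
    s1, s2 = state
    y1 = u ^ s2        # taps of 1 + D^2
    y2 = u ^ s1 ^ s2   # taps of 1 + D + D^2
    return (u, s1), (y1, y2)  # next state, output pair

for s1 in (0, 1):
    for s2 in (0, 1):
        for u in (0, 1):
            nxt, out = step((s1, s2), u)
            print(f"{s1}{s2} --{u}/{out[0]}{out[1]}--> {nxt[0]}{nxt[1]}")
```

Running this reproduces the eight transitions drawn on the board, e.g. 00 with input 1 outputs 1 1 and goes to 10, and 01 with input 0 outputs 1 1 and returns to 00.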
01:19:53.140 --> 01:19:58.990
Now again, let me make the
argument that I made before,
01:19:58.990 --> 01:20:01.652
now a little bit more concisely,
because we're going
01:20:01.652 --> 01:20:02.902
to have to stop.
01:20:06.760 --> 01:20:10.710
So every path through this
state transition diagram
01:20:10.710 --> 01:20:13.960
corresponds to a code
sequence, right?
01:20:13.960 --> 01:20:18.460
Every finite or semi-infinite
path always
01:20:18.460 --> 01:20:21.650
starts in the 0 state.
01:20:21.650 --> 01:20:26.120
For a finite code word, I have
to come back to the 0 state.
01:20:26.120 --> 01:20:27.670
Correct?
01:20:27.670 --> 01:20:32.590
So I'm always going to get
two 1's when I start out.
01:20:32.590 --> 01:20:35.440
Then the next time, I'm either
going to go on this transition
01:20:35.440 --> 01:20:38.680
or this transition, but either
way, I'm going to get at least
01:20:38.680 --> 01:20:41.100
another unit of weight.
01:20:41.100 --> 01:20:43.400
And then I can do whatever I
want through here for a while,
01:20:43.400 --> 01:20:46.860
but eventually I'm going to have
to come back out here.
01:20:46.860 --> 01:20:48.800
And when I get back to the
0 state, I'm going to
01:20:48.800 --> 01:20:50.050
get two more 1's.
01:20:52.170 --> 01:20:55.883
So that's maybe a lot easier way
to say the minimum weight
01:20:55.883 --> 01:20:57.133
has to be 5.
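For more complicated cases, this argument becomes a shortest-path search on the state diagram: the minimum weight of a nonzero path that leaves the 0 state and returns to it. A small Dijkstra sketch confirming the answer for this code (the implementation is an editorial illustration, not from the lecture):

```python
# Sketch: free distance as a min-weight detour from state 00 back to state 00,
# for the encoder g(D) = (1+D^2, 1+D+D^2). Edge weight = output Hamming weight.
import heapq

def step(state, u):
    s1, s2 = state
    # next state, and the Hamming weight of the output pair (y1, y2)
    return (u, s1), (u ^ s2) + (u ^ s1 ^ s2)

def free_distance():
    # Force the first transition to be the input-1 branch out of state 00.
    start, w0 = step((0, 0), 1)
    dist = {start: w0}
    heap = [(w0, start)]
    while heap:
        d, s = heapq.heappop(heap)
        if s == (0, 0):      # first return to the 0 state: minimal detour weight
            return d
        if d > dist.get(s, float("inf")):
            continue          # stale heap entry
        for u in (0, 1):
            nxt, w = step(s, u)
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w
                heapq.heappush(heap, (d + w, nxt))

print(free_distance())  # 5
```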
01:20:59.290 --> 01:21:02.840
Again, if it were more
complicated, I'd have to make
01:21:02.840 --> 01:21:04.880
more of an argument.
01:21:04.880 --> 01:21:05.200
All right.
01:21:05.200 --> 01:21:13.530
Next time, when we come back,
we'll talk about turning this
01:21:13.530 --> 01:21:16.850
into a trellis diagram.
01:21:16.850 --> 01:21:19.370
We'll talk about
catastrophicity, which is
01:21:19.370 --> 01:21:22.370
where I was going with this
canonical generator.
01:21:22.370 --> 01:21:25.100
It's just a minor algebraic
point, but one you need to
01:21:25.100 --> 01:21:26.160
know about.
01:21:26.160 --> 01:21:27.710
We'll talk about the
Viterbi algorithm.
01:21:27.710 --> 01:21:29.520
I think we can probably
get through
01:21:29.520 --> 01:21:32.570
this chapter on Wednesday.
01:21:32.570 --> 01:21:34.510
OK?
01:21:34.510 --> 01:21:37.650
Ashish has your midterms and
the midterm solutions.