

**Topics covered:** Generalization of Fourier transform to z-transform, relationship of Fourier transform to z-transform, region of convergence of z transforms, characteristics of region of convergence in relation to poles, zeros, stability, and causality.

**Instructor:** Prof. Alan V. Oppenheim

Lecture 5: The z-Transform

## Related Resources

The z-Transform (PDF)

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

[MUSIC PLAYING]

ALAN OPPENHEIM: Hi. As you recall, last time we developed the discrete-time Fourier transform. And one of the issues that I alluded to, although I didn't say it explicitly, is that the Fourier transform doesn't exist for all sequences. In fact, as we'll see today, one of the difficulties is the issue of convergence. That is, the Fourier transform doesn't converge for all sequences. And to get around this, we'll generalize the notion of the Fourier transform as we introduced it last time to what's called the Z-transform, the Z-transform being the discrete time counterpart of the Laplace transform for continuous time systems.

Well, let me begin by examining just a little this issue of convergence. We have here the Fourier transform relationship as we developed it last time, x of e to the j omega given by the sum of x of n times e to the minus j omega n. And as we discussed the Fourier transform last time, we assumed that the sum was convergent, or at least we didn't particularly raise the issue as to whether it is or isn't.

Well, we can look at the magnitude of x, or equivalently the magnitude of this sum, and ask under what conditions this sum will converge. That is, under what conditions will this sum be finite.

Well, here we have the magnitude of a sum of things. From what's called the triangle inequality, or as you can just kind of imagine from common sense, the magnitude of a sum has to be less than or equal to the sum of the magnitudes. So the magnitude of this sum has to be less than or equal to the sum of the magnitudes of x of n e to the minus j omega n.

The magnitude of a product is the product of the magnitudes, of course. So this says that the magnitude of the Fourier transform has to be less than or equal to the sum of the magnitude of x of n times the magnitude of the complex exponential, e to the minus j omega n.

Well, this of course we can't say a lot about. This we can recognize as unity. It's a complex exponential. The magnitude of a complex exponential is one. So consequently, the magnitude of the Fourier transform is less than or equal to the sum of the magnitudes of x of n summed over minus infinity-- n equals minus infinity to plus infinity.

Well, that tells us then that the Fourier transform will converge, converge meaning that the sum ends up finite, if this converges. That is, certainly if this sum is finite, then the magnitude of x of e to the j omega is finite. So we can say that x of e to the j omega, the Fourier transform, converges if the sum of the magnitude of x of n is less than infinity.

Well, this is a condition that's commonly referred to as absolutely summable. The condition is that the sequence x of n be absolutely summable. The way we've gone through this, the statement is that, if x of n is absolutely summable, then the Fourier transform converges under some sorts of conditions. And depending on how you interpret convergence, this can be modified somewhat. But basically-- and that, in fact, is discussed in some detail in the text. But basically, the condition for convergence of the Fourier transform is that the sequence whose Fourier transform we're talking about be absolutely summable.

Now, we've seen a sum like this someplace else. In fact, a couple lectures ago, we talked about stability of discrete time systems. And the statement about stability was that the unit sample response of the system be absolutely summable. That is, if the unit sample response is absolutely summable, then the system is stable. So one conclusion, which you should tuck away for now and we'll be bringing into play later on in this lecture, is that a stable system implies that the frequency response, or the Fourier transform of the unit sample response, converges.

So this is convergence, then. And we can look at a couple of examples just to see how this looks in some specific cases. Let's first of all take the example of the sequence x of n is 1/2 to the n u of n as an exponential sequence which is zero for n less than zero because of the unit step. It's exponentially decaying for n greater than zero.

If we sum the magnitude of x of n from minus infinity to plus infinity, we get that the sum of the magnitudes of x of n is equal to two. Well, this is a type of sum that hopefully you're familiar with carrying out. In any case, what we're summing here is the series 1, 1/2, 1/4, 1/8, et cetera. That sums to two. So we can say that, for this sequence, certainly, the Fourier transform converges. It's a perfectly well-behaved sequence. The Fourier transform converges, we can compute the Fourier transform, et cetera.

Well, another sequence we can try is the sequence x of n equals 2 to the n times a unit step. Well, if we just think about the values that we have, at n equals zero we have 1, at n equals 1, we have 2, at n equals 2, we have 4, at n equals 3, we have 8. And obviously, then, the sum of the absolute values of x of n for that particular sequence diverges. That is, the sum is infinite.
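To make the two examples concrete, here is a quick numerical sketch (in Python with numpy; an illustration, not from the lecture) comparing partial sums of the magnitudes for the decaying and the growing exponential:

```python
import numpy as np

def abs_partial_sum(base, n_terms):
    """Partial sum of |base|**n for n = 0 .. n_terms-1 (sequence is zero for n < 0)."""
    n = np.arange(n_terms)
    return np.sum(np.abs(base) ** n)

# x[n] = (1/2)^n u[n]: absolutely summable, partial sums approach 2.
s_decay = abs_partial_sum(0.5, 60)
# x[n] = 2^n u[n]: partial sums grow without bound, so no Fourier transform.
s_grow = abs_partial_sum(2.0, 60)
print(s_decay, s_grow)
```

The first sum settles at two, matching the series 1 + 1/2 + 1/4 + ...; the second keeps doubling.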

So here's an example of a sequence for which the Fourier transform doesn't converge. In other words, the Fourier transform for this sequence doesn't exist. And you can imagine the generalization here that, if we talk about a sequence which decays exponentially-- by the way, in either direction, negative n or positive n-- then the Fourier transform will exist, whereas if it's an exponential sequence that grows exponentially, then the Fourier transform is not going to exist, or is not going to converge.

Well, what do we do about that? There is a way around it, essentially using the artifice of multiplying a sequence that doesn't converge, say one that grows exponentially, by a decaying exponential, and making the decay on the exponential fast enough so that the product has a Fourier transform. That is, we can talk about the Fourier transform in general, not of x of n, but of x of n multiplied by a sequence r to the minus n, for which we pick r so that the product of these two is absolutely summable.

And since we've introduced another parameter r, then the Fourier transform of x of n times r to the minus n, we don't want to call x of e to the j omega. Let's for now think of it as x sub r of e to the j omega, the r denoting the exponential that we're multiplying by. And then, of course, for some values of r, this sum is going to converge. For other values of r, this sum is not going to converge.

For example, if we look at our example two here, if I multiply this sequence by a decaying exponential that decays fast enough so that the product decays-- and what do I have to multiply it by? I have to multiply it by something that decays faster than 1/2 to the n. So if I stick in here an r that is greater than 2-- 2 because there's a minus sign here, r to the minus n-- then I'll end up, for this sequence, with a sum here that converges.

So for this example, for any choice of r greater than two, this sum is going to converge. And I can talk about x sub r of e to the j omega. So that's the idea. The idea is to multiply by an exponential that decays fast enough, or doesn't grow so fast, that the sum over x of n times r to the minus n converges.
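This weighting idea can be sketched numerically. The helper below is a hypothetical illustration (assuming numpy, not from the lecture): it weights x of n equals 2 to the n u of n by r to the minus n and shows the sum settling for r greater than 2 but not for r equal to 1:

```python
import numpy as np

def weighted_abs_sum(base, r, n_terms):
    """Partial sum of |x[n] * r**(-n)| for x[n] = base**n u[n]."""
    n = np.arange(n_terms)
    return np.sum((np.abs(base) / r) ** n)

# For x[n] = 2^n u[n], any r greater than 2 makes the weighted sequence
# absolutely summable; r = 1 (the plain Fourier transform case) does not.
converges = weighted_abs_sum(2.0, 3.0, 200)   # approaches 1/(1 - 2/3) = 3
diverges = weighted_abs_sum(2.0, 1.0, 100)    # grows without bound
print(converges, diverges)
```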

Well, we can rewrite this a little differently. We can combine the r to the minus n and the e to the minus j omega n. And what results is the sum, minus infinity to plus infinity, of x of n times re to the j omega to the minus n, just simply combining these two terms together. And it's convenient to think of this as a new complex variable, magnitude r and angle omega, which we'll denote by z, z as in the Z-transform.

The Z-transform. So, this essentially leads us to the notion of the Z-transform, where the Z-transform of a sequence x of n is defined as x of z is equal to the sum of x of n times z to the minus n, where z is equal to r times e to the j omega. In other words, x of z is just simply the x sub r of e to the j omega that we finished talking about.

Well, how is the Z-transform related to the Fourier transform if the Fourier transform exists? Obviously, from what we just finished saying, the Z-transform is the Fourier transform for this weighting factor r equal to unity. That is, x of e to the j omega is equal to x of z where? Well, for z equal to e to the j omega-- or another way of saying that is for the magnitude of z equal to unity. With z equal to e to the j omega, the magnitude of z obviously is equal to unity.

So we have now the Z-transform. It's a function of a complex variable z, which has a magnitude r and an angle omega. And it's just simply a generalization of the Fourier transform. It, in fact, is equal to the Fourier transform for the magnitude of z equal to 1. And it, in effect, corresponds to the Fourier transform of the sequence multiplied by an exponential sequence r to the minus n.

When does the Z-transform converge? Well, the Z-transform converges when this x sub r of e to the j omega that we were talking about converges. And that converges when the sequence x of n r to the minus n is absolutely summable. So the Z-transform converges if the sum of x of n times r to the minus n is finite. And for some values of r, given a certain sequence, for some values of r the Z-transform will converge, for some values of r the Z-transform won't converge.

Well let's look at an example. And it's an example, I guess, that you're seeing fairly frequently. In fact, it'll keep cropping up in a variety of ways. It's kind of a simple one, and nice to use. That is the sequence-- our old friend again-- x of n is 1/2 to the n times u of n. Well, the Z-transform of x of n, x of z is the sum of x of n z to the minus n. Since this is multiplied by a unit step, the limits on the sum are from zero to infinity. So this is the sum from zero to infinity of 1/2 to the n z to the minus n. And that's equal to the sum from n equals 0 to infinity of 1/2 times z to the minus 1-- just combining these together-- to the n.

Now, let me point out, incidentally-- and I think that by now you've seen this-- this geometric series is a kind of sum that comes up over and over and over again in dealing with sequences. And it's the kind of summation that you should carry around in your hip pocket. That is, you should keep in mind, because it keeps coming up like standard integrals do in the continuous time case, that the sum from 0 to infinity of a to the n-- where the magnitude of a has to be less than 1 for convergence-- is equal to 1 over 1 minus a. Just store that away.
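As a sanity check on that hip-pocket formula, a short numerical sketch (assuming numpy; the complex value of a is an arbitrary test choice with magnitude less than one):

```python
import numpy as np

# Numerical check of the identity sum_{n=0}^{infinity} a^n = 1/(1 - a) for |a| < 1.
a = 0.3 + 0.4j                    # |a| = 0.5 < 1, so the series converges
n = np.arange(200)
partial = np.sum(a ** n)          # truncated version of the infinite sum
closed = 1.0 / (1.0 - a)
print(abs(partial - closed))      # essentially zero
```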

Anyway, summing this, then, we end up with 1 over 1 minus this thing. So the Z-transform is 1 over 1 minus 1/2 times z to the minus 1, which we can leave in this form. And sometimes, we will. Or, we can multiply numerator and denominator by z, in which case we have z divided by z minus 1/2.

Well, when does this converge? It converges when the sum of the magnitudes of 1/2 times z to the minus 1, raised to the n, is finite. And that requires that this thing that we're taking the absolute magnitude of-- the thing raised to the n-- be less than unity. That is, the magnitude of 1/2 times z to the minus 1 must be less than one, or the magnitude of z to the minus 1 must be less than two, or equivalently the magnitude of z must be greater than 1/2. And under those conditions, the Z-transform will converge.
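To illustrate, the sketch below (hypothetical, assuming numpy) evaluates the truncated z-transform sum for x of n equals 1/2 to the n u of n at a test point inside the region of convergence and compares it with the closed form z over z minus 1/2:

```python
import numpy as np

def xz_partial(z, n_terms=400):
    """Truncated z-transform sum for x[n] = (1/2)**n u[n]."""
    n = np.arange(n_terms)
    return np.sum(0.5 ** n * z ** (-n))

z0 = 0.8 * np.exp(1j * 1.0)       # test point with |z| = 0.8 > 1/2, inside the ROC
print(xz_partial(z0))             # truncated sum
print(z0 / (z0 - 0.5))            # closed form z / (z - 1/2): should agree
```

A test point with magnitude less than 1/2 would make the summand grow and the truncated sum blow up, consistent with the region of convergence.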

Now, for this example, does the Fourier transform exist? Does the Fourier transform converge? How do we answer that? Well, the Fourier transform converges if the convergence of the Z-transform includes the magnitude of z equal to 1. Does it? Sure it does. The magnitude of z must be greater than 1/2 for convergence of the Z-transform. That includes the magnitude of z equal to 1. Therefore, for this sequence, the Fourier transform converges.

We had already decided that before, of course, so obviously it had better turn out that way this time also. But an important notion, then, is interpreting the existence of the Fourier transform in terms of the region of convergence of the Z-transform. This then defines the region of convergence, the set of values of z for which the Z-transform converges.

Another interesting observation about this example is that we ended up with a Z-transform that is a ratio of polynomials-- in this case, in z to the minus 1 or, if we rewrite it, a ratio of polynomials in z. And that's an important observation. In fact, it is a consequence of the fact that we're talking about the Z-transform of an exponential sequence. Any exponential sequence will have a Z-transform that's a ratio of polynomials in z.

Furthermore, any sequence that's a sum of exponentials will have a Z-transform that's a ratio of polynomials in z, since the Z-transform of a sum is the sum of the Z-transforms, and the sum of a ratio of polynomials is a ratio of polynomials and all that. But basically, the point to remember is that sums of exponentials as sequences result in Z-transforms that are ratios of polynomials in z, or in z to the minus 1.

Well, for that class of sequences, which, in fact, are most of the sequences we're going to be talking about in these lectures, it's convenient to look at the Z-transform graphically, in terms of a plot in the z-plane. So I've indicated here the z-plane. This axis is the real part of z. This axis is the imaginary part of z. And I've drawn a circle, which I've called the unit circle, which is convenient to focus on. It's the circle in the z-plane for which the magnitude of z is equal to 1.

What's important about the magnitude of z equal to 1? Well, it's on that circle, when we look at the Z-transform on that circle, that we see the Fourier transform. So it's often convenient to make reference to the unit circle in the z-plane corresponding to the place in the z-plane where the Z-transform is equal to the Fourier transform.

When we have a Z-transform that is a ratio of polynomials in z, it's convenient to represent it in terms of the roots of the numerator polynomial and the roots of the denominator polynomial. The roots of the numerator polynomial we'll refer to as the zeros of the Z-transform. They're those values of z for which the Z-transform is going to be zero. And the roots of the denominator polynomial we'll refer to as the poles of the Z-transform. Those are the values of z at which the Z-transform blows up, just like poles and zeros of the Laplace transform in the continuous time case.

In the z-plane, we'll denote the zeros by a circle, and we'll denote the poles by a cross. So we have, for that particular example, a zero at the origin, since the numerator polynomial was simply z, and we have a pole at z equal to 1/2. Well, what else do we know about this Z-transform? We know that the Z-transform doesn't exist for all values of z.
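For this example, the zeros and poles can be read off numerically as polynomial roots. A minimal sketch (assuming numpy; the coefficient lists are just the numerator z and the denominator z minus 1/2, written highest power first):

```python
import numpy as np

# Zeros and poles of X(z) = z / (z - 1/2) as roots of the numerator and denominator.
num = [1.0, 0.0]                  # numerator polynomial z
den = [1.0, -0.5]                 # denominator polynomial z - 1/2
zeros = np.roots(num)             # zero at z = 0
poles = np.roots(den)             # pole at z = 1/2
print(zeros, poles)
```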

In particular, for convergence we require that the magnitude of z is greater than 1/2. So I've indicated that region in the z-plane with these slanted lines. The region of convergence for this particular example is those values of z that lie outside the circle that's bounded by the pole at z equal to 1/2. And we'll see that that is, in fact, the way regions of convergence tend to go. They'll tend to be the regions outside or inside circles through poles.

So the important aspects of this picture, there are the zeroes and the poles, there's the unit circle where the Z-transform is equal to the Fourier transform, and there's the region of convergence, which tells us where in the z-plane the Z-transform makes sense. All right. Let's try another example.

Let's take the example of x of n equal to minus 1/2 to the n u of minus n minus 1. Now, you saw that sequence a couple of lectures ago. It's a sequence that is like 1/2 to the n, but not for positive values of n, but for negative values of n. In other words, it's zero for positive values of n, and it does something exponentially for negative values of n.

If you evaluate the Z-transform sum-- and I'll leave it to you to do this-- then we end up with x of z, the Z-transform, is 1 over 1 minus 1/2 z to the minus 1, just like we saw before. In fact, that's exactly the same expression as a ratio of polynomials that we saw before, which, of course, we can rewrite, as we did before, as z divided by z minus 1/2.

All right. Do you think that the Fourier transform of this sequence exists? Well, it's 1/2 to the n, which dies out for positive n. But this sequence is nonzero only for negative n, and for negative n, 1/2 to the n grows exponentially. So in fact, if you examine convergence of this, this sequence is not absolutely summable. The Fourier transform does not exist.

What's the region of convergence of the Z-transform? Well, to decide that, we need to look at the sum of 1/2 times z to the minus 1, raised to the n, and ask for what values of z this sum is less than infinity. In other words, for what values of z that sum converges. And to do that, if this was a sum over positive values of n, then we would want the magnitude of this to be less than 1. Since we're looking at this for negative values of n, we want the magnitude of this to be greater than 1.

And the conclusion is-- and if you don't see this right away, you can just fiddle with it on the back of an envelope-- that this converges provided that the magnitude of z is less than 1/2. So we can look at this example in the z-plane, just as we looked at the previous example in the z-plane. We first of all have poles and zeroes. We have-- well, where do we have a zero?

We have a zero at z equals 0. So that's here. We have a pole at z equal to 1/2. And that's here. Just like in the previous example, the zero and the pole fall in exactly the same place. What's the region of convergence? Well, the region of convergence is for the magnitude of z less than 1/2. And now, in this case, that's inside this circle, rather than in the previous case, where it was outside that circle. So the region of convergence now is the region that is inside the circle, which is bounded by that pole.

All right. This raises an important point. Here we had an example where we got a Z-transform ratio of polynomials. The previous example, we got exactly the same ratio of polynomials. How did these two examples differ in terms of the Z-transform? They differed in terms of the region of convergence that we associate with the Z-transform. The other case, the region of convergence was outside this circle. In this case, the region of convergence is inside this circle.

The ratio of polynomials for the Z-transforms is the same in both cases. It's the region of convergence that's different. So an important point to keep in mind is that the Z-transform is not specified just by the function of z, it's also specified-- it has to have attached to it a little tag that tells you for what values of z it's legitimate to look at that expression.

So it's specified by the ratio of polynomials, and also by the region of convergence of the Z-transform. So here, for example, that's the ratio of polynomials, and that's the region of convergence, and the two together are the Z-transform. And likewise here, that's the ratio of polynomials, and that's the region of convergence, and the two together are the Z-transform.

Now, it often happens that the region of convergence of the Z-transform is specified in a somewhat indirect way, by saying something about the sequence that basically implies what the region of convergence has to be. And we can lead into that by stating what some of the properties are that the region of convergence has. And I'll always be talking now about sequences that are sums of exponentials, or Z-transforms that are ratios of polynomials, and consequently that can be described by poles and zeroes in the z-plane.

OK, well, I have here the first thing you can say. Actually, there's a zeroth thing you can say that follows fairly straightforwardly, and that is, obviously, there can't be any poles in the region of convergence, because a pole of the Z-transform is where the Z-transform blows up. And we know that the region of convergence is where the Z-transform converges. So obviously, there are no poles in the region of convergence.

However, we can make another statement, which is justified in some detail in the text, that the region of convergence will always be bounded by poles, or else extend inward to zero or outward to infinity. For example, in this example that we worked out here, the region of convergence goes to zero-- I mean, it includes zero-- and it's bounded on the outside, in this particular case, by a pole.

In the example that we had previously, if we look at this example, the region of convergence is bounded on the inside by a pole, and it's bounded on the outside by infinity. And that's a statement that we can generally make. I won't try to justify that right now, but the region of convergence is bounded by poles, and in fact is bounded by a circle that goes through a pole.

All right. That's one statement we can make. Now some other statements that we can make, which are all developed in the text. That is, the proof of them is worked out. First of all, let's consider finite length sequences. A finite length sequence, one that's zero except for a finite number of values of n, has a region of convergence that is the entire z-plane, perhaps with the exception of the point z equals 0, or perhaps with the exception of the point z equals infinity. And that follows in a pretty straightforward way, basically because, in examining absolute convergence, the sum just includes finite limits. So as long as each of the sequence values is finite, the sum has to be finite.

A little more involved, we have right-sided sequences. What I mean by a right-sided sequence is a sequence which is zero for n less than some value, and then goes on up to n equals plus infinity. And what you can say about right-sided sequences, as it turns out, is that the region of convergence is outside some value, which I've denoted by r sub x minus, and goes to infinity. It may include infinity or it may not include infinity, but it goes at least up to infinity.

So it begins at some finite value in the z-plane, bounded by some circle with radius r sub x minus, and goes up to infinity. Well, look, if we had a right-sided sequence, how are you going to find what r sub x minus is? Just by looking at the pole-zero pattern, because we know that the region of convergence is bounded by poles. We know also that the region of convergence can't have any poles, so r sub x minus has to be the circle that goes through the outermost pole in the z-plane.

So if we have a right-sided sequence, that implies that the region of convergence is outside the outermost pole in the z-plane. We can talk also about left-sided sequences. Left-sided sequences are ones for which the sequence is zero for n greater than some value n1. And here, the region of convergence begins at zero-- it may or may not include zero, depending on some issues about the left-sided sequence, but at least it extends in toward zero-- and goes out to some value r sub x plus.

How do we find r sub x plus? We do it the same way. That is, we know that the region of convergence is bounded by poles and doesn't contain any poles, so this must be a circle that goes through the innermost pole, other than the one at z equals 0, in the z-plane. So if I have a left-sided sequence, if I know it's a left-sided sequence, then by looking at the pole-zero pattern I can immediately infer what the region of convergence is.

Finally, we have two-sided sequences, sequences that go on to minus infinity and on to plus infinity. In that case, the region of convergence lies between two circles, which go through two poles. And obviously, again, they have to be two poles that are adjacent in the z-plane-- in other words, again, to avoid having poles that are inside the region of convergence.

And out of all this, incidentally, if I tell you that I have a sequence whose Fourier transform exists, and I don't tell you anything else about the sequence, can you infer what the region of convergence is? Well, sure you can, because you know that the region of convergence includes the unit circle. And then where it's got to go from there is to the first pole that you run into heading toward the origin, and to the first pole that you run into heading out toward infinity. So in fact, if I tell you that the sequence is absolutely summable, or that the Fourier transform exists, that also, in effect, lets you construct the region of convergence in the z-plane.
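That reasoning can be packaged as a small helper. The function below is a hypothetical illustration (not from the lecture, assuming numpy): given the pole locations, it returns the annulus implied by requiring that the region of convergence contain the unit circle:

```python
import numpy as np

def roc_from_poles_stable(poles):
    """For a sequence whose Fourier transform exists, the ROC is the pole-free
    annulus containing the unit circle: from the largest pole magnitude below 1
    out to the smallest pole magnitude above 1. Hypothetical helper."""
    radii = np.abs(np.asarray(poles, dtype=complex))
    inner = max((r for r in radii if r < 1.0), default=0.0)
    outer = min((r for r in radii if r > 1.0), default=np.inf)
    return inner, outer

# Poles at z = 1/2 and z = 2: the stable choice of ROC is 1/2 < |z| < 2.
print(roc_from_poles_stable([0.5, 2.0]))
```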

Well, let's just cement this with an example. Let's say that I have a Z-transform which has a pole-zero pattern that has a pole at z equal a and a pole at z equal b. So this is a, and this is b. And let's consider what turn out to be the only three possible choices for the region of convergence, given this pole-zero pattern. No zeroes, to make the example easy.

Well, first of all, let's suppose that we picked the region of convergence to be the magnitude of z less than a. So the region of convergence for this first case is inside a circle bounded by that pole. Well, let's see. Let's answer the easy question first. Does the Fourier transform exist? To answer that, we ask, is the unit circle inside the region of convergence? Well, it's not if the region of convergence is inside that pole. So for this case, the Fourier transform doesn't exist. That is, it doesn't converge.

How about which sided? Left-sided, right-sided, two-sided? Well, we can answer that, again, because we know, from what we said previously, that a left-sided sequence has a region of convergence that starts at zero and goes out to some value r sub x plus. Well, that's the region of convergence that we're talking about here. So this particular region of convergence corresponds to a sequence that's left-sided.

How about the region of convergence between a and b? Think of two circles with radii a and b. And now the region of convergence is between them. Does that include the unit circle? Yeah, it includes the unit circle. So for that case, the Fourier transform exists. Is it left-sided, right-sided, or two-sided? Well, it's two-sided, because the region of convergence doesn't make it to zero, and it doesn't make it to infinity. So this is a two-sided sequence.

The final example, or the final choice for the region of convergence, the magnitude of z greater than b. Does the Fourier transform exist? No, because the region of convergence is outside this pole. It doesn't include the unit circle, so the Fourier transform doesn't exist. Well, we can figure out which sided it is, either by using what we know, or using quizmanship. Here, we had left, here we had two-sided, so obviously it's right-sided. In fact, it is right-sided, because the region of convergence starts at some value r sub x minus, and goes out to infinity. So this case, then, is a right-sided sequence. This stresses, again, the point that, if I just specify the pole-zero pattern, that doesn't specify the Z-transform or the sequence uniquely.

Now, one of the properties of the Z-transform, which follows essentially directly from the fact that the Z-transform is interpretable in terms of the Fourier transform-- or this could be developed more formally, and we won't do that here-- is the convolution property. Namely, if y of n is the convolution of x of n with h of n, then the Z-transform of y of n is equal to the product of the Z-transform of x of n and the Z-transform of h of n. That is, z transformation, or the Z-transform, maps convolution into multiplication, just like the Fourier transform did. And that's not surprising.
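The convolution property is easy to spot-check numerically for finite-length sequences, where the z-transform sum has finite limits. A sketch (assuming numpy; the sequences and the evaluation point are arbitrary test choices):

```python
import numpy as np

def zt(x, z):
    """Evaluate the z-transform of a finite-length sequence x[0], ..., x[N-1] at z."""
    n = np.arange(len(x))
    return np.sum(np.asarray(x, dtype=complex) * z ** (-n))

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, -1.0, 0.5])
y = np.convolve(x, h)             # y[n] = convolution of x[n] with h[n]

zpt = 1.3 * np.exp(1j * 0.4)      # arbitrary evaluation point
print(zt(y, zpt))                 # should equal the product below
print(zt(x, zpt) * zt(h, zpt))    # Y(z) = X(z) H(z)
```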

We typically refer to h of z, the Z-transform of the unit sample response of the system, as the system function. Well, on the basis of the discussion that we just finished, there are some things that we can say about the system function and the unit sample response and regions of convergence, et cetera. First of all, suppose that we're talking about a stable system. Stable system, the unit sample response is absolutely summable.

What does that imply about the Z-transform? Well, it implies that the Z-transform, the region of convergence of the Z-transform includes the unit circle. That is, it implies the Fourier transform exists, or, equivalently, that the Z-transform region of convergence includes the unit circle. So this is a statement that we can make.

Suppose that we were talking about a stable system. Sorry, suppose that we were talking about a causal system. If the system is causal, well, what does causal mean? It means that the unit sample response is zero for n less than zero. In particular, the unit sample response is right-sided. If it's right-sided, then the region of convergence must be outside the outermost pole, because that's what we said about what happens with a right-sided sequence. The region of convergence has to be outside the outermost pole. So you can get the notion, then, that when we talk about the Z-transform of a system, in essence we can imply the region of convergence by making some statements about stability or causality.

All right. Let's try this out on an example. Let's take an example which, in fact, we've talked about in some lectures-- in a lecture previously. Let's take the example of a system that is represented by a linear constant coefficient difference equation, y of n minus 1/2, y of n minus 1 equals x of n. To get the system function for this, we'll take a shortcut. And in particular, we'll use a property that we haven't derived, at least not yet, but in fact you'll be deriving in the study guide. Notice how I put everything off to the study guide. When we get the study guide, I'll tell you we did it in lecture.

Well, the property is that, if the Z-transform of y of n is y of z, then the Z-transform of y of n plus n0 is z to the n0 times y of z. So if we take the Z-transform of this difference equation, we have, then, y of z, minus 1/2 z to the minus 1 times y of z, since we have y of n minus 1, equal to the Z-transform of the right side of the equation, which is x of z.

Well, the system function is equal to y of z divided by x of z. And so we can simply solve this equation for y of z over x of z. And what we get is 1 over 1 minus 1/2 times z to the minus 1. So that's the system function for that system. Well, what's the region of convergence?

We don't know what the region of convergence is. In fact, we're going to have to-- in fact, there might be several choices for the region of convergence. But that's not too surprising, because remember when we talked about this equation a couple of lectures ago and we showed that, in fact, there is more than one choice for the unit sample response of this system. And that is related to the fact that all we got out of here so far was the ratio of polynomials that h of z is, and not what the region of convergence is.

All right, so this ratio of polynomials is 1 over 1 minus 1/2 times z to the minus 1. We've been staring at that ratio of polynomials through a couple of previous examples. It has a zero at z equals 0, and a pole at z equals 1/2. So let's take a look at that in the z-plane.

We have then the z-plane here, the unit circle. We have, for the example we're talking about, a zero there, and a pole there. And what's the region of convergence? Well, it seems like we can take the region of convergence to be whatever we want, consistent with the properties that regions of convergence have to satisfy. So let's say, for example, that the region of convergence was outside that pole. So let's take a region of convergence first that looks like that.

Well, for that case, if that's the region of convergence, there are some things that we could say about the system. For example, is the system stable? The system is stable if the unit sample response has a Fourier transform, and it has that if the region of convergence includes the unit circle, which it does here. So this particular system is stable.

And is it causal? Well, what we want to examine for that region of convergence is does it imply a right-sided sequence. Well, it does, because the region of convergence is outside this pole. So in fact, for this example, the system is also causal. And in fact, look, we know what the unit sample response is because we've been working with that example previously. The unit sample response here, in fact, is h of n is 1/2 to the n times a unit step.
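Both claims about this causal choice can be checked with a short sketch (the numerical values are my own): running the recursion forward on a unit sample does give (1/2) to the n, and the Z-transform sum of that sequence does converge to the ratio of polynomials whenever |z| > 1/2.

```python
# Run the recursion y[n] = x[n] + 1/2 y[n-1] forward on a unit sample:
h = []
y_prev = 0.0
for n in range(20):
    x_n = 1.0 if n == 0 else 0.0
    y_n = x_n + 0.5 * y_prev
    h.append(y_n)
    y_prev = y_n
assert all(abs(h[n] - 0.5 ** n) < 1e-12 for n in range(20))

# For |z| > 1/2, the Z-transform sum of h[n] = (1/2)^n u[n] converges
# to 1 / (1 - 1/2 z^{-1}); here z = 0.9 is outside the pole at 1/2.
z = 0.9
partial = sum(0.5 ** n * z ** (-n) for n in range(400))
assert abs(partial - 1.0 / (1.0 - 0.5 / z)) < 1e-9

# Stability: the unit circle lies in this ROC, and indeed the sum of
# |h[n]| = (1/2)^n is finite (it equals 2).
assert abs(sum(0.5 ** n for n in range(200)) - 2.0) < 1e-9
```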

But that's not the only choice for the region of convergence. We could, alternatively, pick a region of convergence which is inside that pole. That is-- let's look at this again. Zero at the origin, a pole at z equals 1/2. But now let's take the region of convergence to be inside that pole. If it's inside that pole, then, first of all, is the system stable? No, it's not stable, because the region of convergence doesn't include the unit circle.

So it's not stable. And furthermore, it's not causal. In fact, it's just the reverse. It's anti-causal. It's not causal. And in fact, we know, again, what the unit sample response is for that case because we worked it out previously, namely h of n is equal to minus 1/2 to the n, but for n negative, not for n positive. That is, multiplied by u of minus n minus 1.
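The same kind of numerical check (again, a sketch of my own) confirms the anti-causal case: with the ROC taken inside the pole, the sum over negative n converges to the very same ratio of polynomials, while the sequence itself grows without bound going backward in time.

```python
# Anti-causal choice: h[n] = -(1/2)^n for n <= -1, zero otherwise.
# Its Z-transform sum runs over negative n and converges when |z| < 1/2,
# again to 1 / (1 - 1/2 z^{-1}).
z = 0.3                                   # inside the pole at z = 1/2
partial = sum(-(0.5 ** n) * z ** (-n) for n in range(-1, -400, -1))
closed_form = 1.0 / (1.0 - 0.5 / z)       # same H(z) as the causal case
assert abs(partial - closed_form) < 1e-9

# This choice is unstable: |h[-m]| = 2^m grows without bound, so the sum
# of |h[n]| diverges -- consistent with the ROC excluding the unit circle.
assert abs(-(0.5 ** -10)) == 2.0 ** 10
```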

And this, by the way, if you refer back several lectures ago, is consistent with the two solutions that we got for a linear constant coefficient difference equation. And now you can see that, for example, if I specified the difference equation, and I said that, in addition, what we're talking about is, say, a stable system, then you could construct the region of convergence by saying, all right, the unit circle has to be in the region of convergence, and now I'll take the region of convergence to extend inward until it reaches a pole and outward until it reaches a pole. Or if I said the system was causal, then you could construct the region of convergence.
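That construction can be sketched as a small hypothetical helper (the function `choose_roc` and its interface are my own, not from the lecture): given the pole radii, a side condition like stability or causality pins down which annulus is the ROC.

```python
# Hypothetical sketch: pick the ROC of a rational H(z) from its pole radii
# plus a side condition (stability or causality).
def choose_roc(pole_radii, stable=False, causal=False):
    radii = sorted(set(pole_radii))
    if causal:
        # Causal implies right-sided: ROC is outside the outermost pole.
        return (max(radii), float("inf"))
    if stable:
        # Stable: ROC is the annulus containing |z| = 1, bounded by the
        # nearest poles inside and outside the unit circle.
        inner = max([r for r in radii if r < 1.0], default=0.0)
        outer = min([r for r in radii if r > 1.0], default=float("inf"))
        return (inner, outer)
    raise ValueError("need a side condition to pin down the ROC")

# For the single pole at z = 1/2, the stable and causal choices coincide:
assert choose_roc([0.5], stable=True) == (0.5, float("inf"))
assert choose_roc([0.5], causal=True) == (0.5, float("inf"))
```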

So sometimes, in stating a linear constant coefficient difference equation, we'll basically specify the region of convergence by making an additional statement about the system. For example, by stating that the system is causal, or the system is stable, or both if it can be both.

[AUDIO OUT] So this is the Z-transform. Just to summarize the key points first, the Z-transform was introduced to get around the convergence problem with the Fourier transform. The Z-transforms of sequences that are exponentials are ratios of polynomials. That is, they can be described in terms of poles and zeros. But the Z-transform is not specified just by the ratio of polynomials. You also need to know what the region of convergence of the Z-transform is, because, in fact, several different sequences can end up with the same ratio of polynomials but, of course, with different regions of convergence.

Finally, if the region of convergence includes the unit circle, that tells us that the Fourier transform converges. And also, we're able to make statements about the region of convergence as it relates to right-sided, left-sided, and two-sided sequences. Next time, we'll continue the discussion of the Z-transform, and in particular develop the inverse Z-transform. Thank you.

[MUSIC PLAYING]
