Lecture 22: SCET for DIS


Description: In this lecture, the professor finished discussing DIS factorization, then covered 1-loop renormalization of PDFs and convolutions in other processes.

Instructor: Prof. Iain Stewart

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high-quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

IAIN STEWART: OK, so let me remind you of what we were talking about last time. So we were discussing the example of DIS in the Breit frame. And the way we led into this example is we talked about renormalization group evolution with a heavy light current. And we saw that it had an anomalous dimension. But it was a multiplicative renormalization group evolution, and I said that that happened because we only had one collinear gauge-invariant object in our operator.

And then I just wrote down an operator that looked like this, and I said, there's one that has two. If we run that object, we will get a renormalization group equation that involves convolutions. And I said that that's going to give you the renormalization group evolution of a parton distribution function.

And we wanted to explore that. And so in order to explore that, we should think of some process that has the parton distribution function in it so we can really make sure we know precisely what the operator is. And that process is DIS. That's the simplest process.

So we started thinking about deep inelastic scattering in the Breit frame, which is the frame where q, the momentum of the photon, has that form, just a component in the z-direction. And in that frame, the incoming partons in the proton, quarks and gluons, are collinear. The intermediate state, the out-state, the X that's going out, is hard.

So you can think of a-- if you were to draw perturbative diagrams, you'd draw them like this. And this propagator would be hard. It would have a hard momentum. And then you would have loop corrections that could also be hard.

In the effective theory, you don't really have to think about which diagrams. You just write down the lowest possible dimension operator, and everything that's a loop that's hard goes into the Wilson coefficient, if you like. And likewise, we also get not just external quarks, but external gluons from diagrams. In the full theory, it would involve a quark loop like that.

OK, so this is going to lead to the quark PDF, and this is going to lead to the gluon PDF. And we decided we would do the quark one in some detail. So this is kind of writing out now the operator and the Wilson coefficient in kind of a combined notation, where w plus and w minus are w1 plus w2 and w1 minus w2.

And then we had one more formula, which is where we ended. So we have a collinear proton, and then we have this operator. And then we have the collinear proton again.

And this matrix element can be written as follows. So this is the last formula we had last time. So some things here are just conventions, but other things are important. Well, everything's important, but some things are more important than other things.

So this quark here has a flavor. It could be an up quark. It could be a down quark. Let me denote that by an index i.

This proton here is collinear. And really, all that matters for this example is that we have some-- we can think of it as a massless proton, even. And as far as its momentum is concerned, we can think of it as massless.

So really, the only momentum that matters in here is the minus momentum-- so minus, which is n bar dot P, and that's what this is-- n bar dot P, n bar dot P. It's capital P. So capital P was the proton momentum. And we can think of this state as just carrying large P minus. All the other components don't matter for this matrix element.

But it's a forward matrix element, so both states carry the same large momentum. And that's what led to this delta function here that says that w1 and w2, with the sign conventions we have, this guy has the opposite sign convention. And so if it's forward, these two guys have to have equal momentum so that the sum is 0. And if you take into account the sign, then that means w minus is 0. So that's what that delta function is doing.

And then the sum is something that's not constrained by the matrix element. And so the sum could either be positive or negative. If it's positive, we can say that it's some fraction of the proton momentum because this is a quark inside the proton and it carries some momentum, but it can't be more than the proton. Otherwise, we would get 0.

So it's some fraction, and that fraction is defined to be xi in this formula. And the reason there's a 2 is because I'm adding the w1 and the w2, which are equal. So this is the momentum fraction.

And then we can have an arbitrary function of that momentum fraction. Nothing stops us from writing that down. And that's kind of where we got to.
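For reference, the board formula being described can be reconstructed as follows. This is the standard SCET form of the quark PDF matrix element; the exact theta-function and normalization conventions here are a reconstruction rather than a copy of the board:

```latex
\langle p_n |\, \bar{\chi}_{n,\omega_1}\, \tfrac{\slashed{\bar n}}{2}\, \chi_{n,\omega_2} \,| p_n \rangle
 \;=\; 2\, \bar n\!\cdot\! P \;\delta(\omega_-)\,
   \Big[\, \theta(\omega_+)\, f_{q_i/p}(\xi)
        \;-\; \theta(-\omega_+)\, f_{\bar q_i/p}(\xi) \,\Big],
 \qquad \xi \equiv \frac{|\omega_+|}{2\,\bar n\!\cdot\! P}\, ,
```

with omega plus and minus equal to omega 1 plus or minus omega 2. The delta function of omega minus is the forward-matrix-element constraint, and the factor of 2 in xi is the one mentioned above, from adding the two equal omegas.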

Now, so on general grounds, you can argue that that's the most general thing that you can write down for this matrix element. And I tried to argue about why that's true.

From charge conjugation, you can actually do something more. So you can let charge conjugation act on these operators. And since charge conjugation's a good symmetry of QCD, you can prove that that relates, actually, the quark and antiquark operators in the following way.

So the quark and antiquark operators are related by switching the sign of w plus-- so if I switch the sign of w plus, that's going from quark to antiquark. And basically, what happens in the operator when you do charge conjugation, remember, is that chi goes over here to chi transpose and the w switches sign.

So basically, what charge conjugation is doing is taking w1 to minus w2 and w2 to minus w1. And that's why the w plus switches sign, while the w minus doesn't. And then there's an overall sign just from the fields-- so from the usual charge conjugation transformation of the fields for a vector current.
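In equations, the relation being described is (reconstructed here with the index conventions from above, so the precise placement of indices is an assumption):

```latex
C^{(j)}_{\bar q_i}(\omega_+,\, \omega_-,\, \mu) \;=\; -\, C^{(j)}_{q_i}(-\omega_+,\, \omega_-,\, \mu)\,,
\qquad \omega_\pm \equiv \omega_1 \pm \omega_2\,,
```

since charge conjugation takes omega 1 to minus omega 2 and omega 2 to minus omega 1, which flips omega plus but leaves omega minus unchanged, with the overall minus coming from the fields.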

OK, so this is an all-orders relation between the Wilson coefficients. So really, when you do the matching, you really only need to do the matching for the quarks. So if you want to do the matching calculation, you'd do a matching calculation for the Wilson coefficient with positive w plus. And you could do it for the antiquarks, as well, but you would just be basically wasting time.

Now, last time, we went through the kinematics of the Breit frame a little bit. And this n bar dot proton momentum is actually Q over x. So we could also write this formula like that. And so you see that w plus is actually something that goes like xi over x, where Bjorken x is an external leptonic variable.

Now, there was another index over here, j, which we talked about last time. And that had to do with the fact that we're taking the forward scattering graphs with a tensor and we decompose that into two scalars, multiplying things that had indices. So there were two possible things that we could write down. And the index j is just this 1 or 2.

And there was a similar decomposition in the effective theory: we could think of decomposing in terms of the scalar operators, which are these guys, with some coefficients that have some indices, then multiplied by some tensor. So these guys don't have indices, but I could just multiply them by, effectively, the effective theory versions of these tensors.

And so that that's why there's a j here. Is that clear to everybody? OK, so there's various indices-- i flavor, j for tensor decomposition, and then a bunch of momentum indices.

So when you go through the analysis of trying to find a formula for, say, T1, it's going to be related to C1. And T2 will be related to C2, OK? Because this guy's a scalar operator. It doesn't have any indices.

So the way that that works, if you just look at the two bases and write down the formula, you'd have an integral over these w's. There are some prefactors which just come about from being careful, and then the thing that has an imaginary part of these Wilson coefficients, and then you have matrix elements of operators, which have a flavor index but don't have a subscript j. So in general, this has a flavor index. Keep track of things.

And then there's another one for T2-- so kinematic prefactors that are easy to work out that just come about from the fact that we wrote the tensors and the effective theory and the full theory slightly differently. But these two guys have the same matrix elements of the same operator. And all the sort of tensor stuff is just saying that there's two different Wilson coefficients that you have to compute, and that's because you have these vector currents from the photons.

OK, so this is what we're after. These show up in the cross-section. And what we're doing is we're writing, at lowest order, the things that show up in the cross-section in terms of effective theory objects, the Wilson coefficients and then the matrix elements of our operators here, which is this thing in square brackets.

And we're almost at what you would call a factorization theorem. Factorization theorem is a result for the cross-section, in our language, in terms of effective theory quantities, and that's going to factor the hard stuff, which is the pink stuff, which is in these Wilson coefficients, from the low-energy stuff, which is in these operators.

AUDIENCE: So those pink elements are [INAUDIBLE] equation.

IAIN STEWART: Yeah.

AUDIENCE: And what is square bracket [INAUDIBLE]?

IAIN STEWART: It's both.

AUDIENCE: It's both, OK.

IAIN STEWART: Yeah. I can write it like this, if you like. Any other questions? OK, so literally what I do is I take this formula and I plug it into that formula. And when I do that, I can do one of the integrals trivially because it's a delta function. This one's just trivial. And then I do this one with the other delta function. So both integrals are actually trivial.

And I can write the result in terms of something that I'll call the hard function, which is just the imaginary part of the Wilson coefficient. And I'm going to denote it in the following way. So the Wilson coefficient can depend on various different things. It can depend on w plus. It can depend on w minus. It can depend on the hard scale, which is Q squared. Or it could depend on mu squared.

So w minus, when you do the delta function, gets set to 0. w plus gets set to something. And it's convenient because of the way this delta function is with the-- it kind of has a ratio. Because this function is a function of xi which is the ratio of two things, it's convenient to define a dimensionless z and only talk about a function of that dimensionless thing.

And if you do that, then the final result for these kind of T's, which are imaginary parts of T's-- you can just put the formula together. I'll write one of them-- is that. And then there's a similar formula for T2 that involves H2 and H1.
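Written out schematically (flavor sums and kinematic prefactors suppressed, and with the measure and argument conventions guessed from the discussion rather than copied from the board), the result has the classic convolution form:

```latex
T^{(1)}(x, Q^2) \;\sim\; \sum_i \int_x^1 \frac{d\xi}{\xi}\;
  H_1^{\,i}\!\Big(z,\, \tfrac{Q^2}{\mu^2}\Big)\,
  f_{i/p}(\xi, \mu)\,,
\qquad z = \frac{\xi}{x}\,,
```

where H is the hard function defined from the imaginary part of the Wilson coefficient and f is the parton distribution.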

OK, so this is the factorization theorem. And it came about, in some sense, just trivially. Once we knew how to write down the operators in the effective theory, we were basically done, and then the rest was just sort of algebraic manipulations, being careful about what momenta go where, and knowing what the sign of this formula for the matrix element is. This is a kind of important point.

But in some sense, the effective theory, from the get-go, was already designed to factorize because we were integrating out the hard degrees of freedom right at the start. And so knowing what operators and knowing their matrix element is really all we needed to do to get to the DIS factorization theorem.

So if you ever look up the original way that this was derived, it was not that easy. This is actually something that's very complicated in a sort of traditional approach. But in the effective theory approach, it becomes almost trivial.

And this is an all-orders result because we never expanded in alpha s. We just used symmetries, and we used the fact that we knew what form the operators would take when we integrated out the hard degrees of freedom. So any alpha s corrections that one might want to add will fit into this formula. And this gives a perturbative result for this H, which you would compute in perturbation theory, which people do [INAUDIBLE] these days.

Now, if you ask about things like-- I didn't write all the possible-- I suppressed some things, right, like Q squared and mu squared. If you ask about the Q squared and the mu squared, then your Wilson coefficients do depend on Q squared and mu squared. And the Wilson coefficients, H here, are actually dimensionless-- the original Wilson coefficients, the C's, were dimensionless. So the H is dimensionless. I just pulled out the dimensionful factor so that that would be true.

And so this guy can depend on Q squared over mu squared. The fact that Q squared only shows up there, that's Bjorken scaling. And if you look at the perturbative result for T2, then it vanishes at lowest order. And so that's the Callan-Gross relation. So there's various things that are sort of encoded in this that come out, from the effective theory point of view, in a very simple way.

OK, so let me write. So there's logarithmic corrections that involve Q in the Wilson coefficients that will show up like that.

So there's a mu also that you could add to this formula. So the way that I described it, we didn't think too hard about bare versus normalized, right? We just take these operators. So far, they could have been bare.

But remember that when you have C bare, O bare in an effective Hamiltonian, for example, that's C of mu, O of mu. So switching from bare operators and coefficients to renormalized operators and coefficients is simply a matter of sticking in a mu here, and then you imagine that the renormalization has taken place. So we could equally well insert in these formulas a mu for that.

And then what I'm saying, that there are logs of mu over Q, will make a little more sense. So there's also a Q. Let me squeeze everything in here. OK, now we're being completely honest about what it depends on. All right.

So what happens in the traditional literature is that people talk about factorization scales and renormalization group scales. The factorization scale refers to the fact that this parton distribution function is mu-dependent-- it's an operator that you have to renormalize, and we're going to do that in a minute.

And so there has to be a cancellation. Since this thing here is a physical observable and is independent of mu, there has to be a cancellation of the mu-dependence here and the mu-dependence here, all right? And that's this mu. So the same anomalous dimension would show up in both the H and the f.

And then sometimes people also talk about mu-dependence that's just cancelling within H itself. And they call that the renormalization group mu. Sometimes people vary these independently.

In the effective theory, it's really simple. You really just have the classic setup of you have some hard degrees of freedom. In this case, you can even think of it one-dimensional.

You have some hard degrees of freedom that you want to integrate out. You have some scale, which we could call mu 0, that's of order Q, where we do that integrating out. And then you can run down, or you could run in a more complicated way. So you could run the PDFs, which are sitting here at the collinear scale, which is lambda QCD. You could think of evolving them up to some scale and evolving the Wilson coefficients down and meeting somewhere, OK?

And so, yeah, it's just really a sort of classic running and matching picture. Here, I've just used the fact that I could run either one of them or I could run both of them to a common scale. So I usually would pick mu to be either something small or something large rather than running both things.

But in general, you could think about running both things. And we've talked about having anomalous dimensions for either one of these. And usually, we just run one of them, OK? But it's no more complicated than the standard picture of integrating out modes and doing renormalization group evolution.

So if we want to do tree-level matching or one-loop matching or any kind of matching-- let me just show you tree-level matching. So tree-level matching, you would compute this forward scattering graph, and that will give you the other diagram that we drew. And so you'd want to match this guy onto that guy.

And what you find is you find one tensor structure at lowest order. So C1 is not equal to 0. C2 is equal to 0. And that's the Callan-Gross relation which tells you about the spin of the object that you're scattering off, and this is how we know that quarks are spin 1/2, or one way we know.

And then you can calculate C. And so that way that I set things up, C was complex, and then I had to take the imaginary part. So C is just this propagator, basically.

And it's only a nontrivial function of w plus. There are some charges that sit out front. And so the only way that this guy depends on whether it's an up quark or a down quark is you have 2/3 squared or 1/3 squared.

And then there's something that comes about from the propagator that looks like this. And then I take the imaginary part and then I get H1, which is a function of z, which is the xi over x. So if I write it as xi over x like it shows up in the factorization theorem, then I'm getting a delta function of xi over x, which is this coming from this.

So the lowest-order H1 is just a delta function. And that's where the parton model picture comes from because the parton model picture is that you think of xi and x as being the same thing. And that's the tree-level way of thinking, and that's just satisfying this delta function and the hard function. And then you would get that the T is just given by the parton distribution at x, which is the external measurable thing.
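This statement can be checked numerically. The sketch below (all names and the toy PDF shape are invented for illustration, and the z = xi over x convention and measure are taken from the discussion above) replaces the tree-level delta function by a narrow Gaussian and verifies that the convolution just evaluates the PDF at a nearby point:

```python
import numpy as np

def convolve(H, f, x, n=20000):
    # Schematic factorization integral T(x) = ∫_x^1 dξ/ξ H(ξ/x) f(ξ),
    # discretized with the trapezoid rule.
    xi = np.linspace(x, 1.0, n)
    y = H(xi / x) * f(xi) / xi
    dxi = xi[1] - xi[0]
    return dxi * (y.sum() - 0.5 * (y[0] + y[-1]))

# Narrow normalized Gaussian standing in for H1(z) = δ(1 - z);
# centered slightly above 1 so the peak lies inside the ξ range.
z0, sig = 1.05, 0.005
H = lambda z: np.exp(-(z - z0) ** 2 / (2 * sig ** 2)) / (sig * np.sqrt(2 * np.pi))

f = lambda xi: np.sqrt(xi) * (1 - xi) ** 3   # invented toy quark PDF

x = 0.3
T = convolve(H, f, x)
expected = f(x * z0) / z0   # what an exact δ(z - z0) would give
print(T, expected)
```

As the width of the Gaussian shrinks toward a true delta at z = 1, T approaches f(x), which is the parton-model statement that the measured x and the momentum fraction xi coincide at tree level.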

OK? So this is how all these classic things come about in the effective theory language. Any questions about that?

All right, so let's renormalize this operator and see how the classic result for the renormalization group evolution of a PDF comes about. And again, the way that you should think about this is you have an operator, and you should just renormalize it.

And once you've got the effective theory, you shouldn't have to think too deeply about what you're doing. You should just be able to follow your nose and do the renormalization. You may have to be careful because these operators are kind of complicated. They have this dependence on these w's that you have to be careful about. But really, it's just follow your nose, compute the one-loop graphs.

If you look up how Peskin would do one-loop renormalization, there'd be an infinite number of operators, and you'd have to derive a renormalization group result for all of them. Here, we only have one operator, and we're just going to renormalize it. Our operator is nonlocal in the sense that it depends on these omegas, and that's what's encoding the infinite number of operators that Peskin has.

OK, so solving, if you like, for f from the formula that we had before, I can do that by integrating over the w minus. That sets these guys to be equal. And then if I-- so I can think of it as that there's one free momentum, xi. And that free momentum xi is one of these labels, which is this guy here. So xi is w over Pn minus, and this is the proton which is carrying some momentum Pn minus.

This is the proton state, which carries momentum Pn minus. And there's one delta function. I could put it either place, but I only need one because the other one's kind of trivial.

So the first thing you can think about doing here is looking at mass dimensions. And I already told you that this guy was dimensionless, but let's check that that's true-- so a mass dimension.

So relativistically normalized states have mass dimension minus 1. Quark fields that don't have a delta function have mass dimension 3/2. The delta function gives a minus 1, and each of the two states gives a minus 1, so you get 0: 3/2 plus 3/2, minus 1, minus 1, minus 1 is 0. So that means this f is really a dimensionless function, and that's why it makes sense that we defined it to depend on this dimensionless ratio.

You can also look at the lambda dimension, and here's how that works. That's also 0.

So the only thing that's-- so we already had power counting for our chi fields. Remember, the quark field inside the chi field scales like lambda. So this is just coming about because this guy's order lambda. The delta function just involves large momentum, so it has no power counting. And the only thing that's nontrivial is that the states have power counting minus 1.

So here's how we can derive that. So if you think about relativistically normalized states, what you're doing is you're defining sort of the inverse of this d3 p over 2E, which you can actually write, in a way that's more convenient for power counting, in terms of things that we can power count more simply. So this is an exact relation for a massless particle between p minus and pz.

So then I can write, because of that, the standard relativistic normalization formula for a state with two different momenta as kind of the inverse, which would be this. So the usual formula would have 2e and then delta 3, right? Because it's the inverse of this.

But I can write it also this way. This guy is lambda 0. This guy is lambda minus 2. Therefore, each of these guys must be lambda minus 1. That's where the minus 1 came from.
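Putting the argument in formulas (this is the standard relativistic normalization, rewritten in light-cone variables; the exact factors are a reconstruction of the board):

```latex
\langle p'_n | p_n \rangle
 \;=\; 2E\,(2\pi)^3\, \delta^3(\vec p - \vec p\,')
 \;=\; 2p^-\,(2\pi)^3\, \delta(p^- - p'^-)\,\delta^2(p_\perp - p'_\perp)\,.
```

The second equality is exact for a massless particle, using dp minus dpz equals p minus over E at fixed p perp. On the right-hand side, p minus and its delta function scale like lambda to the 0, while delta squared of p perp scales like lambda to the minus 2, since p perp is order lambda. So the product of the two states scales like lambda to the minus 2, and each state scales like lambda to the minus 1.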

All right, so we want to renormalize that thing, that matrix element. And what loops can do is that they can change omega. So you might-- or xi, which are equivalent. And so the way that you should think of that is in the following sense, and it's actually something you're familiar with, although you're familiar with it for discrete quantum numbers. And here, in some sense, we have a continuous one.

So you have some function, fq, that depends on some variable xi. And it can mix, under the renormalization group, with an operator at a different value of xi. So you can really think of the fact that loops can change this omega as just a mixing.

You're used to mixing for discrete quantum numbers. You write down all the operators that have the same quantum numbers, and they can mix under renormalization. Here, there's kind of an additional thing that the object can depend on, which is the xi parameter.

And in general, when you do the renormalization, that can change too. Because the operators or matrix elements here are labeled by this xi, in general, there's no reason that it should stay the same under the loop corrections. And it was really a special case that we dealt with last time where that did happen. But in general, it doesn't.

OK, so this is actually what we expect to happen in general unless we can argue that it doesn't happen because you should think of each value of xi as giving a different operator or different matrix element. So I could write formulas here just for the operator. It's actually the operator that gets renormalized, not the matrix element.

So I'm going to keep writing f's just to avoid too much notation, but we could always actually replace the f's by just the operator. And we could do everything in terms of actually just the w instead of the xi variable. But I'll just keep using f.

So what this means in terms of the operator is the following. We can think of, if we have some bare operator and we want to split that into a piece that has divergences and a piece that is just the finite pieces, the general formula for doing that involves an integral. So this guy here-- so there's also these indices, i and j, and that's the flavor, if you like, or quarks and gluons. So i is quark or gluon.

And in general, you can also have a mixing in the quark and gluon operators. We started with these two different operators, and they can mix under renormalization as well. So there's two operators in the effective theory, same order in lambda, and they can mix when you do the renormalization. And I'll draw a diagram in a minute.

So this thing here is mu-independent. This thing here in MS bar has all the 1 over epsilon UV's. And it also depends on alpha of mu, and it depends on these xi and xi prime. And this guy here is UV-finite.

So this guy here is really the thing that's the low-energy matrix element. But remember what low energy meant here. Low energy was physics at lambda QCD, physics of the initial-state proton.

So actually, in this guy, there are IR divergences. This is just some matrix element in the effective theory, and in general, it could be IR-divergent if you calculate it. And this guy actually is.

And it really encodes-- that's not going to bother us at all because this is really some universal thing that encodes lambda QCD effects, and that's what parton distribution functions are. Then from the point of view of what we're doing, it doesn't really matter that it has this extra IR divergence so that we will have to regulate diagrams in order to separate UV and IR divergences because of that. Really, in terms of the renormalization, what we're after is getting the UV divergences.

OK, so the usual kind of formula that you'd have where you just write o is z times o is slightly more complicated here. There's this extra integral.

And now, remember how you derive a renormalization group equation. What you do is you say mu d by d mu of this guy is 0. And so if I take mu d by d mu, on the right-hand side, I get mu d by d mu of z and mu d by d mu of f. And I can rearrange that in the usual way, except for keeping track of these integrals, as follows.

So I imagine that there's a z and a z inverse. And the relation between z and z inverse is as follows. Let's just call this double prime. It's matrix multiplication except in the function space, right? So this is like a delta function.

So if you like, you can just think that there's more indices. In some sense, what we have in terms of the quark and gluon operators mixing is a matrix equation, right? This is a vector. This is a matrix. This is a vector for the indices i and j.

And you can think of this integral here as just another-- it really looks, the way I've drawn it, like this is contracted with that and this is summing over the indices. And really, that's what it is. So really, this idea that it's just mixing of quantum numbers is kind of a good way of thinking about things. And when you think about formulas, you know you're just summing over these indices, and the Kronecker delta becomes a regular delta. So in that sense, it's not that hard to do this.

And so we get an anomalous dimension equation which, again, has that kind of form of just an integral for the renormalized guy, and it has mixing. And this gamma ij, if we go through the steps and use this formula, looks like this. So I'm kind of skipping steps, but I hope that you can kind of picture where this result will come from.

And it's actually not-- it's pretty easy to go from that line with this formula to this line. This is one line, but I just split it into two things and defined this quantity, gamma ij, which is the anomalous dimension.

AUDIENCE: So this mu in QCD [INAUDIBLE] factorization scale?

IAIN STEWART: Yeah, that's right. OK, so at one loop, things are simpler. Because at one loop, this thing, we can just replace it by delta ii prime, Kronecker delta at one loop. Because at one loop, we just need the order alpha piece from this guy, and then we can set the tree level for that guy. So at one loop, which is all we're going to do, we get the simpler formula.

OK, so that's our setup, and now we want to calculate this one-loop anomalous dimension by calculating the 1 over epsilon alpha s term in the Zij. Before I do that, are there any questions?
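For orientation, the answer this calculation is heading toward is the familiar one-loop DGLAP result for the nonsinglet quark distribution, with splitting function P_qq(z) = CF[(1+z^2)/(1-z)_+ + (3/2) delta(1-z)]. Here is a minimal numerical sketch of the resulting evolution equation, with the plus distribution handled by the usual subtraction trick; the coupling value, grid, and toy input PDF are all invented for illustration:

```python
import numpy as np

CF = 4.0 / 3.0
alpha_s = 0.2          # illustrative value, not a fit

def f0(x):
    # invented toy nonsinglet quark PDF
    return np.sqrt(x) * (1 - x) ** 3

def dglap_rhs(f, x, n=20000):
    """Right-hand side of d f(x) / d ln(mu^2) at one loop, using
    P_qq(z) = CF [ (1+z^2)/(1-z)_+ + (3/2) delta(1-z) ]."""
    dz = (1.0 - x) / n
    z = x + (np.arange(n) + 0.5) * dz        # midpoint grid avoids z = 1
    h = f(x / z) / z                         # convolution integrand f(x/z)/z
    # plus prescription: subtract h(1) = f(x) under the integral...
    real = np.sum((1 + z**2) / (1 - z) * (h - f(x))) * dz
    # ...and add back the delta-function term plus the analytic
    # integral of the kernel over [0, x]
    virt = f(x) * (1.5 + 2 * np.log(1 - x) + x + x**2 / 2)
    return alpha_s / (2 * np.pi) * CF * (real + virt)

# evolution feeds the PDF at small x and depletes it at large x
print(dglap_rhs(f0, 0.1), dglap_rhs(f0, 0.8))
```

The sign pattern of the output, growth at small x and depletion at large x, is the physical content of the mixing between different xi values that the one-loop diagrams below produce.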

All right, so tree level-- so think about there being an external p for whatever state I'm considering, and then the operator is labeled by w. And we're summing over spin-- sometimes I've dropped that. I said it last time.

And so we get some spinors, and we get a delta function. So what the delta function in the operator is, it's a delta function of w minus this label momentum, P bar. And in something like this, where it's completely trivial and there's just one state, we just get the momentum of that state, which is p.

This sum over spin here is a p minus. And so the result is a delta function of 1 minus omega over p minus for this tree-level matrix element.
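In formulas, the tree-level matrix element being described is (up to normalization conventions, which are reconstructed here):

```latex
\langle q_n(p)|\, \bar{\chi}_n\, \tfrac{\slashed{\bar n}}{2}\,
   \delta(\omega - \bar{\mathcal{P}})\, \chi_n \,|q_n(p)\rangle^{\rm tree}
 = \Big[\textstyle\sum_{\rm spins} \bar u_n \tfrac{\slashed{\bar n}}{2} u_n\Big]\,
   \delta(\omega - p^-)
 = p^-\, \delta(\omega - p^-)
 = \delta\!\Big(1 - \frac{\omega}{p^-}\Big)\,,
```

where the last step uses delta of omega minus p minus equals 1 over p minus times delta of omega over p minus, minus 1.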

One loop-- now we have to think about how we're going to regulate the IR. And I'll do it with an off-shellness. So I'll introduce a nonzero p plus, and that will be enough to regulate IR divergences. And we're really after the UV ones, so we just want to separate these guys out.

So there's some different diagrams. We insert our operator, and we just attach gluons. So one thing we can do is just string a gluon across kind of like a standard vertex renormalization diagram. So there's some loop momenta. Let me label it on the quark line. And then the gluon here, which is a collinear gluon, has momentum p minus l. And it's forward so it's kind of set up.

There's some numerator to deal with. And I'm not going to go through that, but it simplifies to something kind of simple. After some Dirac algebra, it simplifies down to just an l perp squared.

For this diagram, there's two l squared propagators, and there's one l minus p squared propagator. And then there's a delta function from the insertion of the operator, but now the delta function doesn't involve the external momentum as it did there. It involves the loop momentum, and that was kind of the whole point of this example.

So we have a delta function of l minus minus w. And then there's some dimreg factors, which we can be careful about if we want. So in MS bar, we'd have some factor like that. So this is some loop integral that we just have to do, and we can do it with kind of standard techniques.

So in my notes, I wrote it as a function of epsilon, and then epsilon is just regulating the ultraviolet, and we expand in epsilon. So let me just write down the result after expanding. So this is an ultraviolet divergence, and A here has the infrared regulator-- p plus, p minus. And it also has a z and a 1 minus z, which you can group all together. And z is just this ratio, omega over p minus, like it was at tree level.

Now, when I'm doing this calculation, this is a small p, not a big P, because I'm using quark states, not a proton state. So really, if I wanted to think about this as an f, I should say it's an f for the quark state. But I think that you can remember that.

But the renormalization of the operator doesn't depend on the state, remember. We always take the simplest states possible when we're doing the renormalization or doing matching. And so we're free to use quark states, so that's what we're doing.

OK, that's one diagram. Now there's another diagram. I think that should be B. Sometimes in my notes, I'll call it 1, which doesn't make any sense. And we can contract the gluon with the Wilson line. So there's that graph, and there's a symmetric friend.

And each of these actually has two contractions because there was two Wilson lines in the way we wrote our operators. So our operator, as we wrote, is like this.

And you can think-- so let's just think of a contraction with the quark. You can think that there's a contraction like that and there's a contraction like that of a gluon-- OK, I'm contracting gluons with quarks. But really, what I mean is that I'm contracting to the Lagrangian, right, that this quark is evolving under. So hopefully that's clear.

All right, so there's two different ways in which-- when I work out the Feynman rule for this thing where I attach the gluon, you can either get the gluon from here or the gluon from there. That's all I'm saying. But these actually have different physical interpretations because this delta function here, if you think about what it's doing, it's really-- in the original diagram, it's like the cut.

So in the original diagrams that we were drawing, we would cut them because we'd take the imaginary part. And this delta function is in the middle. We have kind of a parton on this side of the cut and a parton on that side. This delta function is the cut.

So this contraction here actually corresponds to a virtual graph, and this guy here corresponds to real emission because you're doing a contraction across the cut, right? So one of these guys would be a graph like this, and the other one would be a graph like that. I can label them 1 and 2-- 1, 2.

But we'll just keep them and treat them all together. These two graphs give an overall factor of 2. So that's simple. There's some spinor stuff, which is even simpler in this case, so I write it out. There's some stuff from the Wilson lines. And then there's two propagators. Let me not write all the i0's. And then there's two different delta functions.

So either we have the real graph where the omega is inside, or we have the virtual graph where the omega-- sorry. Either we have the real graph where the loop momentum goes through the delta function, or we have the virtual graph where the delta function is the overall one. So in the overall one, it's just a p minus minus omega like it was at tree level. And in the real emission, it's an l minus minus omega.

And one's a W. One's a W dagger. So there's a relative sign. The sign is just easier to understand as W versus W dagger, which gives a relative sign. OK, so if we just follow our nose with what the Feynman rule for this thing is, that's what we would get.

And this is, again, some loop integral that we can do. One way of writing the result is as follows. And there's one thing we have to be careful about here which is why I'm writing this all out. So there's actually a cancellation between the virtual and the real diagrams of an infrared divergence, so I want to be careful about that. So that's why I'm writing this guy out in epsilon dimensions fully without expanding first.

OK, so this is the real contribution, and this is the virtual. So in order to sort of deal with this, we have to make use of something that's called the distribution identity. If you know what the result is for the anomalous dimension, you'll be aware of the fact that it involves something called a plus function because splitting functions for a parton distribution involves something called a plus function.

So the way that we can deal with that is as follows. The way we can deal with the fact that the result's actually going to be a distribution-- we have to be careful because, you see, the z goes to 1 limit is being regulated by epsilon. And so if we integrate over z, for example, it's epsilon that's going to allow us to integrate all the way to 1. And we'd like to encode that in some way where we can expand in epsilon because that's what we need to do in order to extract the anomalous dimension. And this formula is what allows us to do that.

So I'll tell you how to derive it after I tell you what the L n is. So L n of anything is defined to be a plus function with a log to that power, n. And the plus function is defined so that if you integrate from 0 to 1, you get 0.

And if you integrate with a test function, which is the more general result that you need to define it-- so you can define it by this result with a test function. And it just gives you the normal function, but the test function with a subtraction that makes the test function more convergent so that you can integrate through 0. OK, so that's the definition of a plus function. You could also define it with a limit. This will be sufficient.
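In equations, the definitions just stated can be written as follows (this is the standard plus-distribution convention; the L sub n notation for the log-plus functions is one common choice):

```latex
% Plus distribution acting on a smooth test function g(z),
% subtracting the test function at the singular point z = 1:
\int_0^1 dz\, \big[f(z)\big]_+\, g(z)
  \;=\; \int_0^1 dz\, f(z)\,\big[g(z)-g(1)\big]\,,
% which in particular implies the normalization
\int_0^1 dz\, \big[f(z)\big]_+ \;=\; 0\,.
% Log-plus functions:
\mathcal{L}_n(1-z) \;\equiv\; \left[\frac{\ln^n(1-z)}{1-z}\right]_+ \,.
```

The subtraction of g(1) is what makes the integrand convergent at the singular point, so these objects are well defined under the integral sign even though f(z) alone would diverge there.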

OK, so these things are like delta functions. The way that you would derive this formula is you would say, well, if z is away from 1, then I can expand because then there's no problem. And if z is away from 1, it turns out that this plus function is just the regular function. It's only at 1 that something special is happening.

And so the standard expansion is what you'd get if you took z away from 1. And to see what's happening at z equals 1, you'd just integrate both sides from 0 to 1, and that's how you can derive the coefficient of the delta function.
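The distribution identity being referred to can be written out as the standard expansion in epsilon (the exponent and sign conventions here are one common choice):

```latex
\frac{1}{(1-z)^{1+\epsilon}}
  \;=\; -\frac{1}{\epsilon}\,\delta(1-z)
        \;+\; \left[\frac{1}{1-z}\right]_+
        \;-\; \epsilon \left[\frac{\ln(1-z)}{1-z}\right]_+
        \;+\; \mathcal{O}(\epsilon^2)\,,
```

valid when integrated against smooth test functions over 0 to 1. Away from z equals 1 both sides agree as ordinary functions, and integrating both sides from 0 to 1 fixes the coefficient of the delta function, exactly as described above.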

All right, so if I plug this formula in here for this thing, then I actually get another 1 over epsilon in this guy. There's a gamma of epsilon out front, and that guy is good. This is our UV divergence. This is our 1 over epsilon UV.

But there's also a gamma of minus epsilon here, which is an IR divergence. So even though I tried to regulate all the IR by off-shellness, it didn't quite work and there was one that was regulated by dimreg. And that one actually cancels between these two pieces once I use this identity and take into account that that's an IR divergence.

So there's a 1 over epsilon IR times 1 over epsilon UV. And that cancels between the real and virtual graphs. So this is like a standard 1 over epsilon IR canceling between real and virtual graphs. And since it's only the 1 over epsilon UV that we're interested in, we're really only worried about that part of it canceling. There's a piece actually that-- anyway. And then the 1 over epsilon that's left is the guy that we're after in order to get the anomalous dimension.

All right, so let me not-- so in my notes, I write one more line where I expand this guy out. And I think just because of the time, I'm going to skip that. And I'll just write the final result.

When we do the final result, we also have to include wave function renormalization. So you can think of this graph as a wave function renormalization term. And it just involves the delta function again like the tree-level graph. [INAUDIBLE] like that.

So in general, if I wanted to do this calculation at one loop, there's one more type of diagram I should consider, OK? And that's a graph where I could have mixing. This guy should be dashed since we're in the effective theory.

So how does the mixing graph work? Well, there's a graph where I have external gluons, but I still am renormalizing the same operator. I've still inserted the quark operator here, but now we have antiquarks in this theory. We can draw a triangle like that. And this graph here would give a mixing that involves-- that would give a mixing term in the anomalous dimension where you're mixing gluons and quarks.

So this mixes in what we sort of called O glue. Let me just say it this way: it mixes O glue with O quark. And we could compute this graph too, but I'm going to neglect it just for simplicity. I just won't write it down.

One way of doing that rigorously would be to consider operators where the flavors of these guys are different, OK? That's what would happen, for example, if you were having a w exchange or something.

So we could look at non-flavor-diagonal operators with, like, a u quark and a d quark. And then you would not have this mixing with O glue. It's only if the flavors of the quarks are the same that you can write down this diagram. But just think about it as I'm focusing on the quark piece, and in general, there's also a gluon piece.

So we have all our one-loop graphs. We know how to expand them in epsilon. And so we just proceed, expand them in epsilon, and add them up.

So you could think that what we derive by doing that is a distribution for a quark inside a quark. So here, I'm being-- this is the state and this is what type of operator. And it's a function of some z. And if I go up to one loop, then the tree level was just a delta function of that fraction z. And then at one loop, we had all these other terms.

So if I collect all the pieces, I had some delta functions. The graph with the Wilson lines actually gives me one of these L0 functions. And then the graph-- so there's wave function renormalization plus some other terms that involve delta functions.

And then there's some other pieces. And then this is all times 1 over epsilon. And then there's other pieces. But if we're interested in ultraviolet renormalization, we only care about the 1 over epsilon.

And all those terms can be written in a kind of more compact form, which is the more standard form for the anomalous dimension. You can actually group them all together into a single plus function like this. So just in terms of distributions, this distribution is equal to the sum of these pieces.

You can see, as z goes to 1, that there'd be a 2 here and a 2 here. And this would be 1 over 1 minus z. And as z goes to 1 here, that would be 1 and this would be a 1 over 1 minus z. So you see some pieces of it matching up.

Basically, the way that you would derive this is you'd write 1 plus z squared as a plus b times 1 minus z plus c times 1 minus z squared. You'd work out what a, b, and c are by just relating the two polynomials. And then for this guy here, the 1 minus z in the numerator cancels the one in the denominator, and it's not a plus function anymore. It's just an ordinary function. And that's how you would connect the two formulas.
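The regrouping just described can be summarized by a distributional identity, which you can check by integrating both sides from 0 to 1 (each side then gives zero):

```latex
\left[\frac{1+z^2}{1-z}\right]_+
  \;=\; (1+z^2)\left[\frac{1}{1-z}\right]_+ \;+\; \frac{3}{2}\,\delta(1-z)\,,
\qquad \text{using} \qquad
1+z^2 \;=\; 2 \;-\; 2(1-z) \;+\; (1-z)^2\,,
```

so a = 2, b = -2, c = 1. The b and c terms have their factor of 1 minus z cancel against the denominator and become ordinary functions, and the coefficient of the delta function follows from requiring both sides to integrate to zero.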

All right, so we were after determining the Z. The Z has to cancel this 1 over epsilon. So let's go back to our formula which connected those, which was this. Our general formula was that the bare guy could be split into a UV Z factor and finite pieces in the following way, with this integral.

Now, this looks like it could be an arbitrary function of xi and xi prime, but our result here was only a function of z, which is actually a ratio. And that's actually something that we can argue in general, that this thing here is actually only a function of one variable, not two.

So that follows from two different things. It follows from RPI III invariance. So remember that RPI III invariance said that you should have the same number of n's and n bars. And remember-- OK, so that's one thing that you have to use.

That tells you that you need to get ratios. Well, the z's are already ratios. So you might say, well, that should be fine. The z's are already ratios between the momentum and the operator and the momentum and the state, the minus momentum of the operator over the minus momentum of the state, right?

And this is a minus momentum. That's a minus momentum. So the z's are RPI III invariant. So that doesn't seem like it would imply this.

But there's one other thing you know, and that is that it can't depend on the state momentum. I could have taken a proton. I could have taken a quark. And the result for the renormalization shouldn't depend on what state I'm taking. And this combination where I have d xi prime xi prime with a xi over xi prime, the p minuses cancel out.

So if I were to do the whole thing with a proton state rather than a quark state, then I should still get the same anomalous dimension. And in order for that to be true, it has to depend on the ratio. And that ratio is then just a ratio of the bare operator and the renormalized operator.

It's like saying, if you had O of omega, there is a convolution of z with an omega over omega prime or something, with O omega prime renormalized. And if I had done it in an operator level and not even written states, then it would really just be RPI III invariance, OK? Because I wrote it in terms of states, there was this other momentum available, but I'm not allowed to have that really be playing a part of the discussion.

So given that formula, then I can expand to one loop. So this guy I think of as having a tree-level result. This guy is a matrix element that has a tree-level and one-loop result as well. So if they're both tree level, I get delta, 1 minus z. And in some kind of obvious notation, up to one-loop order, I can write it out formally like that.

And then I know what these tree-level things are. This guy's a delta function, and this guy's also a delta function. So I can just do the integral. And it really is pretty simple. All the 1 over epsilon terms are just the Z. And what's left would be associated to this guy in perturbation theory.

But if we want to do the renormalization, we just need the Z and not worry about that. So we read off from over here what Z is because Z is just this right there. So Z-- OK, Z is just this thing. So when I put the tree-level piece together with the one-loop piece, then this thing is just Z.

And then I compute the anomalous dimension by taking mu d by d mu of it-- and that hits the alpha, so that kills the epsilon and gives me a factor of 2 and a minus sign. But the anomalous dimension, gamma qq-- so there was a 1 over xi prime. And then it was minus mu d by d mu.

And if I plug it in the formula that we have, Z qq of xi over xi prime. So there's a minus here. There's a minus there. And the 2 epsilon cancels this 2 and that epsilon. And this 1 over xi prime is the thing that we needed to make the measure RPI invariant.

So in this notation, our original notation, putting all the pieces together and being careful about theta functions, which I was mostly suppressing, that's the result. OK, so this is a function of xi over xi prime, which I've just written as z. And then there's some theta functions that are setting the boundaries for the integral. And that comes also out of the calculation. And that's the quark one-loop splitting function.
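Written out, this is the standard one-loop (DGLAP) evolution for the quark PDF, keeping only the quark-to-quark piece as discussed. The overall normalization of the anomalous dimension depends on whether one defines it with mu d/dmu or mu squared d/dmu squared; here mu d/dmu is assumed:

```latex
\mu \frac{d}{d\mu}\, f_{q/p}(\xi,\mu)
  \;=\; \int_{\xi}^{1} \frac{d\xi'}{\xi'}\;
        \gamma_{qq}\!\left(\frac{\xi}{\xi'},\mu\right) f_{q/p}(\xi',\mu)\,,
\qquad
\gamma_{qq}(z,\mu) \;=\; \frac{\alpha_s(\mu)\,C_F}{\pi}
  \left[\frac{1+z^2}{1-z}\right]_+ \,,
```

where the theta functions coming out of the calculation have been used to set the limits of the xi prime integral to run from xi to 1.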

And if we had done the gluon calculation from that other diagram, then we would have gotten the mixing term. OK, so this is the one-loop anomalous dimension for the PDF, and it's really just doing operator renormalization, calculating one-loop diagrams in the effective theory. Questions? OK.

So one question that you can ask, which is an interesting question, is when we did this result for the DIS, we got this convolution between the hard function and the parton distribution function. And you can ask, why did that happen and, in general, is there a way of characterizing when it could possibly happen?

Because if you think about the answer that we got, it was just Wilson coefficient times operator. And the really only nontrivial thing about it was that there was this one momentum that could kind of trade back and forth between them. There was an integral in the answer.

And actually, power counting even constrains how those integrals can, in general, show up. So if you ask most generally what could possibly happen-- and just thinking about the power counting for the degrees of freedom actually tells us what type of integrals can show up in factorization theorems. This is constrained by power counting.

I keep forgetting to say that there's a makeup lecture tomorrow. I sent around an email, but I should have-- tomorrow, this room at 10:00 AM. And the lecture next week is canceled on Tuesday. That's why we have a makeup lecture tomorrow.

OK, so in what way is it constrained by power counting? So if you think about the degrees of freedom that we had, say, for SCET I, then we had hard, collinear, and soft-- so let's just take a simple case with only one type of collinear-- hard, collinear, and ultrasoft. And the p mu of these guys in terms of plus, minus, and perp components-- I should be more fancy about this. [INAUDIBLE].

So if you think about just power counting for the momentum, it was as follows. Factorization was separating these different things into different objects. We had a Wilson coefficient for the hard. In the case we just did, we only had a proton matrix element. For the collinear, we didn't have any ultrasofts.

If we put the ultrasofts in, they would have all cancelled away. We wouldn't have seen any ultrasofts showing up. And that's because the operator we were dealing with, the Wilson lines would just have cancelled completely out of it.

But actually, it turns out, for deep inelastic scattering, that you shouldn't even include ultrasofts. They're not a good degree of freedom to include there. So really, for the process that we were talking about, you really should only take those two.

But anyway, more generally in some other process, you would have these three different things. And the way that convolutions can show up is simply who can trade momentum with whom. So this is plus momenta, minus momenta, and perp momenta.

And in order for momentum to be exchanged, they have to be of the same size. So these guys here are the same size and they can be exchanged, and that's exactly what showed up in our DIS factorization theorem. The hard Wilson coefficients exchanged minus momentum with the collinear [INAUDIBLE] because they are the same order in the power counting. In another case, in a more general or in some other example, we might find that there was nontrivial ultrasoft stuff, and then we could get a convolution in the plus momentum and collinear and ultrasoft because they're the same size.

And so that's a pretty simple way of thinking about why those integrals can possibly show up. It's just because the two sectors can talk to each other because they have momenta that are the same size. And then the rest is about momentum conservation because momentum conservation places nontrivial constraints. And we saw in the DIS example that there were two omegas to start, but one of them was projected to 0 because it was a forward matrix element and we only had one integral. And that also has analogs elsewhere.

If we do SCET II, which we haven't talked about yet-- we did talk about what the degrees of freedom were. And again, if I try to make it completely generic for some examples that we'll treat later, then I can write down something that's a slightly extended version of what we talked about so far.

So I can have Q, Q, Q again for the hard. And then I can have my collinear, which is Q lambda squared, Q, Q lambda, and then soft, which is Q lambda, Q lambda, Q lambda. And it turns out that sometimes, there's also another mode which we haven't talked about, but I'll include it for completeness, which is kind of a collinear mode that's in between the low-energy collinear mode and the high-energy collinear mode.

AUDIENCE: Do you mean the square root of lambda?

IAIN STEWART: Yeah, square root of lambda, sorry. Yeah, otherwise my dimensions are wrong. Yeah.
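With that correction, the SCET II mode scalings being listed, again in (plus, minus, perp) components, are as follows. Calling the in-between mode "hard-collinear" is my labeling for the extra collinear mode the lecture mentions, not a name used above:

```latex
p^\mu_{\rm hard} \;\sim\; Q\,(1,\,1,\,1)\,, \qquad
p^\mu_{n\text{-collinear}} \;\sim\; Q\,(\lambda^2,\,1,\,\lambda)\,, \qquad
p^\mu_{\rm soft} \;\sim\; Q\,(\lambda,\,\lambda,\,\lambda)\,, \qquad
p^\mu_{\rm hard\text{-}collinear} \;\sim\; Q\,(\lambda,\,1,\,\sqrt{\lambda}\,)\,,
```

with invariant masses p squared of order Q squared, Q squared lambda squared, Q squared lambda squared, and Q squared lambda respectively, so the extra mode indeed sits between the collinear and hard scales.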

So again here, you can just-- I mean, the reason I was extending this is just to, again, argue that it's kind of simple to see what can happen. So in general, when you think about convolutions in the hard momentum, it could be in this case between these three modes. There could be some integrals. And then look where else there can be something. So this and this are the same size. And this and this are the same size.

So in general, we can have convolutions in general between all these things, between the guys that are the same size. But that's the most complicated thing that can happen. It can't be more complicated than that. And you see that in some examples, you either have the purple or the orange, but not both. That's kind of typical that you don't get the most complicated thing.

So when you have results for observables that tell you how these things couple together, those are called factorization theorems. And in the effective theory, because you sort of defined the modes and separated them at the start, you're kind of very quickly getting to these factorization theorems.

Let's see. So we're going to deal with a bunch of different examples. And I decided that I'm going to do it in the following order. We're going to do a bunch of examples in order to see the range of possibilities that can happen.

And so I'm going to stick with SCET I for next lecture. So the next example we'll do, which I'll call example one, is dijet production. So this is a SCET I situation.

And the difference between the examples-- so far, what we did is DIS. DIS actually was so simple because it only had two degrees of freedom that you could kind of either think of it as SCET I or SCET II. I mean, technically, it's more like SCET II, but it behaves like SCET I.

But there's no ultrasofts. And remember, it's ultrasofts and softs that are making the distinction between the two. So if you just have this mode and this mode, it's not really any difference between calling it SCET I or SCET II. So it's just SCET.

So e plus, e minus to dijets will be an SCET I example which has ultrasofts. And actually, what we'll find in this case is that it will be the purple. We'll get a purple convolution. We'll see how that happens. And momentum conservation will rule out the possibility of the orange one. So we'll see the opposite situation where it could be that we have ultrasoft modes as well, but then we only get a convolution with those ultrasofts in the factorization theorem and not with the [INAUDIBLE].

And then we'll turn to SCET II. And I haven't totally decided what processes I'll do, but I think I'll do the following ones. So one thing that you can do, which is pretty simple, is to look at something called the photon-pion form factor. So a real photon-to-pion transition through a virtual photon. You can think of this happening through a diagram like this with two quarks, one of them off-shell and one of them on-shell. This is pi 0.

So this is a SCET II example. But again, it's pretty simple because it's just going to involve one hadronic object. And actually, it will just, in this case, have collinear modes. We can set things up so the pion is n-collinear, and then we have hard modes.

So that's one example we'll do. Another example we'll do is B to D pi. And here, the B and the D are soft. Remember, we talked about this one, and the pion is collinear. And then we have hard modes.

So this is kind of like, in some sense, a DIS, but it's an exclusive process, not an inclusive one. And so actually, all the tools that we use, which were kind of-- in DIS, it's the most inclusive process you can think of. It's deep inelastic scattering. The I is for "inclusive."

Here, we're doing something completely exclusive, but we'll see that all the things that we've been thinking about, which is just separation of collinear modes and hard modes, will just go through for that process too. So the effective theory, the difference is that in this case, you will not be taking the amplitude squared. You won't be looking at forward scattering.

The forward scattering was what was making it inclusive. We were summing over all the final states. Here, there's only one final state. So the difference between this example and the one we just did is that in this case, we'll factor the amplitude, not the squared amplitude. But other than that, it'll look very similar to the example that we did for DIS.

B to D pi, then, is an SCET II example where we make things a little more complicated because now we have soft, collinear, and hard modes. And we'll see what happens there. And then I'll do some more examples after that. But let me say we'll do some LHC examples. And I think we'll also do broadening, which is another e plus, e minus observable.

All right, so that's where we're going, and we'll start going there next time by talking about dijets and SCET I.