Description: In this lecture, the professor discussed renormalization group equations for Hw, running & scale variations, LL, NLL, etc., matching and scheme dependence.
Instructor: Prof. Iain Stewart

Lecture 5: Classic Operator...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
IAIN STEWART: --play with each other. We did the standard model as an effective field theory, higher dimension operators in the standard model. And then we started talking about taking the standard model as a theory one and removing things from it, in particular constructing what's called the weak Hamiltonian by removing the top, W, Z, and Higgs from the standard model.
And last time, we were focusing on the anomalous dimensions and things about renormalization. So we had this equation for the weak Hamiltonian for a particular case of b goes to c u bar d. That was the case that we decided to study rather than the full Hamiltonian.
So there was some pre-factor. We had two operators with Wilson coefficients and operators. These are four-fermion operators.
And there was different bases. We could write them in the bare form, or we could write them in renormalized coefficient and renormalized operator. And then we could do it either in the 1, 2 basis or the plus minus basis. So the plus minus basis is just linear combinations of the 1, 2.
If you're in the 1, 2 basis, then you have this mixing matrix. So it's a 2 by 2 matrix. So if you're in the plus minus basis, then at least, at the lowest order, it's a simple product equation. So plus doesn't mix with minus.
OK. So that's where we got to, and we'll just continue today. So we have these anomalous dimension equations for the operators. We can also write down anomalous dimension equations for the Wilson coefficients.
How do we do that? Well, the way that we do that is we make use of the fact that, if we look at the first line here, there's no mu. So the Hamiltonian is mu independent.
That means that the mu dependence of the coefficient cancels the mu dependence of the operator. And you can use that to take an anomalous dimension equation for the operator and turn it into one for the coefficient.
So last time, we talked about the fact that the renormalization of the operators was equivalent to-- you can think about it two different ways. There's renormalization of operators, renormalization of coefficients. Likewise, you can think of anomalous dimensions as either running the coefficients or running the operators. So those are equivalent things.
And the thing that makes them equivalent is just imposing that the derivative with respect to mu of the Hamiltonian is 0. So if I use my equation up here for anomalous dimension of the operator, I get, with my sign convention, a minus sign here. Or I had a minus sign here.
I had no minus sign here. These things are conventions. I picked some convention, and I'll stick with it.
So from this equation, we basically have an equation that has to be true for the coefficients if we think of just stripping off the operator.
So we could write it this way, just reading it off right from here, just putting this guy on the other side and then reading it off, dropping the operator. Or we could write it this way if we wanted to write it in a way that's more similar to this equation up here, where you have the anomalous dimension matrix times the coefficient. It's either from the right or from the left, and it's just a matter of transposing.
So the anomalous dimension for the coefficient is determined from the one, this one here. If you know it, then you immediately know the one for the coefficient, which shouldn't be surprising given that we could think of the Z factors as being related to renormalization factors. Anomalous dimensions, therefore, should also be related.
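As a rough reconstruction of the step being described (the placement of the minus signs is a convention, and the lecture's choice may differ), the statement is that

\[
\mu \frac{d}{d\mu}\Big[ C_i(\mu)\, O_i(\mu) \Big] = 0 ,
\qquad
\mu \frac{d}{d\mu} O_i(\mu) = -\gamma_{ij}\, O_j(\mu)
\quad\Longrightarrow\quad
\mu \frac{d}{d\mu} C_i(\mu) = \gamma_{ji}\, C_j(\mu) = \big[\gamma^{T} C\big]_i(\mu) ,
\]

so the coefficient runs with the transposed anomalous dimension matrix.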
OK, so we'll solve this equation. It's a little bit simpler to think about. Although we could have equivalently solved the operator equation. So how do we solve the equation?
Well, we go over to our plus minus basis. So take the coefficient of either C+ or C-. And the equation that we need to solve can be written as follows.
It's a simple equation. And I can even take the C, which is on the right-hand side, move it over to the left, if I write log C here. Because the derivative of the log is giving me a 1 over C. If I put it back over here, it would just multiply.
OK, so that's the analog. This equation here is the analog of this equation here for the operators, but now for the coefficients. Obviously, if these are numbers, there's no transpose to do.
So you have to solve this equation simultaneously with another differential equation. This is a coupled equation because alpha also has a differential equation.
So at lowest order, this is the beta function equation. And so if we want to solve, take into account this equation and integrate this equation, the simple trick for doing that is to make a change of variable. So we use this equation here to make a change of variable.
This is a very useful trick because it actually works to whatever order in the expansion that you might want to work. So let me explain what the trick is, and then I'll explain why that is. So we're going to change variables in this equation from mu to alpha.
You see we could think about solving this equation here by integrating, just move this thing to the other side and integrate. But we'd be integrating, in mu, a function that's a function of alpha of mu. So if we can switch from mu to alpha, then we'll be just integrating a function of alpha. And that's what we're going to do using this equation.
So if I write it in general, that's this equality here. For any function of alpha, I say that d mu over mu-- just rearranging this equation-- is d alpha over beta of alpha. So I can switch the integration d mu over mu to d alpha over beta of alpha. And that's exactly what I want to do if I want to move this operator to the other side, this operation.
If I work at lowest order in the beta function, then I just plug in this result. So we can switch variables from mu to alpha. And if I had higher order terms in this equation and higher order terms in this equation, I could use the same trick. I just have other integrals to do. And the integrals are pretty straightforward, so this is a useful way of proceeding.
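Schematically, the change of variables being described is (written here in one common normalization, with beta = -2 alpha_s [beta_0 alpha_s / 4 pi] and gamma = gamma^(0) alpha_s / 4 pi at lowest order):

\[
\frac{d\mu}{\mu} = \frac{d\alpha_s}{\beta(\alpha_s)} ,
\qquad
\int_{\mu_W}^{\mu} \frac{d\mu'}{\mu'}\, \gamma\big(\alpha_s(\mu')\big)
= \int_{\alpha_s(\mu_W)}^{\alpha_s(\mu)} d\alpha\, \frac{\gamma(\alpha)}{\beta(\alpha)}
\;\longrightarrow\;
-\frac{\gamma^{(0)}}{2\beta_0}\, \ln\frac{\alpha_s(\mu)}{\alpha_s(\mu_W)} ,
\]

and at higher orders the same substitution just leaves a different, but still straightforward, integral in alpha.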
OK. So what does that do? So now, let's do that. Let's move this over and integrate.
Well, we'll do a definite integral from mu w up to mu. So here, we just have d log this. Integrate that-- just gives log between the limits.
And if I didn't change variable, I would have that. But if I make the change variable, then it becomes a very simple integral.
And remember that this guy here is also just a number times alpha that we worked out last time. So this is just d alpha over alpha. And that's a simple logarithmic integral. Yeah.
AUDIENCE: I don't know if it matters, but mu w should be greater than mu, right?
IAIN STEWART: Mu w should be greater than mu. That's right.
AUDIENCE: OK, so you're just writing integrals like that to avoid signs? OK. I have another question.
IAIN STEWART: Yeah.
AUDIENCE: How do you know that the anomalous dimensions, including the beta function, are only functions of alpha S rather than [INAUDIBLE].
IAIN STEWART: Ah, yeah. So I'm sneaking that in here. So it follows from the renormalization structure of this effective field theory that there's only single logarithmic divergences.
So in the standard model, if you're at one loop, you only have 1 over epsilon poles for the renormalization. And you're renormalizing the coupling. The same is true of this effective theory.
At one loop, you only have 1 over epsilon divergences. And that implies that your anomalous dimensions won't depend on anything more complicated. We will discuss more complicated cases in the future, as you know. But the structure of this effective theory and its UV structure, which I didn't go into on a lot of detail about, implies that fact. Yeah.
AUDIENCE: [INAUDIBLE]
IAIN STEWART: Yeah. OK. So do the integral. There's some pre-factor, which I'll call a+-. And then I get a log, as I mentioned. And just for the record, this a+-, if I put all the factors together and put in what this number is, these would be some factors like this. And I've put in that Nc is 3.
So this alpha at mu w and this C+ of mu w, you should think of mu w as the boundary condition scale. So this is a differential equation. We needed a boundary condition to solve it. And the boundary condition is the value of the coefficients at the scale mu w, which is supposed to be of order Mw.
Typically, what that means is you could take a common choice, which would be just to take mu w equal to Mw. Or you could pick twice or half of Mw. And these are the most common choices that people pick.
So the way that you should think of that, this guy in the denominator, is you should think that he's really a fixed order series in alpha of mu w, something that you would calculate order by order in perturbation theory. And you'd be determining the boundary condition. We'll talk about how you would do that a little later today.
But for now, just think of it as a series in alpha of mu w. And it doesn't have any large logarithms. And as Elia said, you want to think of this mu as some small scale, some scale that's less than mu w because you're thinking of evolving the operators to a scale less than the scale where you integrated out the particles.
I'll draw that picture in a second.
So we can take the exponential of that equation, and then we can write C of mu is equal to something that we've determined. Take the exponential. Move this guy to the other side.
Remember the a+- are just numbers. We can write the solution in this way, where we determine this guy by thinking about doing a matching calculation at the high scale. We determine it to be 1/2 already at the lowest order. So think about sticking in 1/2 here.
And then this factor here is what you get from the renormalization group. And you can see from this form right here that you've summed up an infinite number of logarithms. It's exponential of a number times log. And if I were to expand that out in alpha of some fixed scale, there would be an infinite series in logarithms.
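Written out, the leading log solution being quoted has the standard form (the values and signs of the a plus minus exponents depend on the conventions above; with Nc = 3 and five flavors they come out as simple fractions, commonly quoted as 6/23 and -12/23):

\[
C_\pm(\mu) = \left[\frac{\alpha_s(\mu_W)}{\alpha_s(\mu)}\right]^{a_\pm} C_\pm(\mu_W) ,
\qquad
a_\pm = \frac{\gamma^{(0)}_\pm}{2\beta_0} ,
\]

and expanding the ratio of couplings in alpha_s times the logarithm of mu_W over mu regenerates the infinite tower of logarithms term by term.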
OK. So what do we want to pick this mu to be? Well, we're thinking about the process b goes to c u bar d. If you think about this process in nature, the scale in the initial state here is set by the b mass. So you'd like to take this scale here not at Mw, but down at the b quark mass.
So we want mu to be of order Mb. And then you have a large hierarchy because this is much less than Mw-- 5 GeV-ish versus 80 GeV.
So our result here sums what are called the leading logarithms, which is denoted by LL. And schematically, the lowest order term was 1/2. And if we were to expand higher order terms and think about what logarithms we're talking about, we're talking about logarithms of Mw over Mb.
And the series, if we were to expand it, would look like this schematically without worrying about the coefficients, an infinite series where each term has one alpha and one log. So the counting that you're doing in this type of setup, where we would think of using this equation to go down to the scale Mb, is that you're counting this parameter as order 1.
And you're saying, any time I see a log of Mw over Mb times an alpha, I'm not going to count that as order alpha. I'm going to count that as order 1. That's why I have to sum up this infinite series.
So the physical picture of what we've said here is the following. So the basic physical picture would be that there's two scales, Mw and Mb, which are physical scales. You want to get rid of the scale Mw.
You do that by going over to this electroweak Hamiltonian. But then you have to renormalization group evolve the Hamiltonian down to the scale where you want to do physics, which is the scale Mb. And when you do that, there's some choice in the matter.
And we've been careful to parameterize that choice. We said that you pick a scale that's of order Mw, which we called mu w. And then we said, you pick another scale that's of order Mb, which we called mu. And you actually do the renormalization group between these two.
You could pick mu w exactly equal to Mw and mu exactly equal to Mb. That would be another simpler story. But it is actually important, once you go beyond the lowest order, to keep track of the fact that you have this freedom. And that's why I've kept track over here.
So what that means is that, in terms of counting, you've counted this, and logs of mu w over Mw are counted as order 1. And then down here, logs of mu over Mb are counted as order 1 numbers. It could be 0, but 0 is order 1, not enhanced such that they would compensate for a factor of alpha.
And this is the renormalization group evolution or the running that sums up these logarithms here, which are the large logs. And that's pretty simple. It just gave this factor.
And that's pretty common in QCD to get factors like that, alpha at one scale over alpha at another scale raised to a power. That's a very common thing to get from renormalization group evolution. OK, any questions so far?
How many people have done the calculation of the anomalous dimension for four-fermion operators in some other course? It's a common problem. Nobody? All right. That means you'll see it on a homework.
So let's come back here and think about what the general structure of what we've done is. And I'll put back indices. So you can think about taking the solution that we have at the top of the board here and generalizing it to a form that would be valid at higher orders.
And basically, it says that the coefficient at one scale is connected to the coefficient at another scale times some evolution factor. In this case, the evolution factor is just the ratio of these alphas to a power. It could be some more complicated function at higher orders.
And it could even be a matrix. That's why I've given it indices.
So we can put our results back together using this higher order form, so that they're generally true, into our Hamiltonian and see what we've achieved. And let me call this scale that I was calling mu a minute ago mu b just to remind you that it's a scale of order Mb.
So previously, we had the coefficient and the operator at the same scale. But now, using this equation, I can move the coefficient to a different scale. And so let me think of sticking this equation in. And then I have mu w.
I've called mu equals mu b. So now, this is mu w mu b. So this here is the coefficient Ci at mu b, but I find it useful to write it out.
So the thing in square brackets is Ci mu b, but I write it out using the renormalization group equation that way. And this tells you how you're doing the calculation. This is a fixed order calculation.
This comes from anomalous dimensions and gives you the evolution. And then you have operators involving the B quark that you would calculate matrix elements of at mu b of order Mb. And there's no dependence at all in those operators on the scale Mw.
All the Mw's are in the pre-factors here. You've taken that into account. You've calculated it.
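Putting the pieces together, and suppressing the overall prefactor, the structure being described is roughly

\[
C_i(\mu_b) = \sum_j U_{ij}(\mu_b, \mu_W)\, C_j(\mu_W) ,
\qquad
H_W \;\propto\; \sum_{i,j} \underbrace{C_j(\mu_W)}_{\text{fixed order matching}}\;
\underbrace{U_{ij}(\mu_b, \mu_W)}_{\text{RG evolution}}\;
\underbrace{O_i(\mu_b)}_{\text{matrix elements at } \mu_b \sim M_b} ,
\]

with the ordering of the arguments of U a convention.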
OK, so that's how this organizes the physics of the different scales. So you could ask, if I had this story, how would I go to higher orders? And we will have some discussion of what goes on at higher orders because there are some things that happen at higher orders that you don't see at leading order.
And they're actually important physical things, so important things to know about and keep track of if you ever want to use things like this. So let's talk a little bit about what it takes to go to higher orders. So let's just first think about what it would look like if we went to higher orders.
Well, leading order was a series that I could schematically say is alpha times large logs. And I summed them all up, and I called that leading log. When I go to higher orders, I am going to continue to get series, but I got extra factors of alpha.
So something that you call Next to Leading Log, or NLL, is the same type of thing, a different series than that 1 times an extra factor of alpha. So it's down compared to this by alpha S. And then you keep going.
This is the general structure of the renormalization group improved perturbation theory. Just keep adding Ns and keep adding alphas, always summing up some series which changes from order to order. And that summation of that series is determined by determining higher order anomalous dimensions. So this kind of thing is called renormalization group improved perturbation theory. Every time you take alpha S and you take it at some scale, you're already doing renormalization group improved perturbation theory. It's just that, once you have theories that have other things that run and have anomalous dimensions, then it can be more complicated than just simply picking alpha S at the appropriate scale.
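Schematically, with L = ln(Mw / Mb) and the counting alpha_s L of order 1, the tower being described is

\[
C \;\sim\; \underbrace{\sum_{n\ge 0} c_n\, (\alpha_s L)^n}_{\rm LL}
\;+\; \alpha_s \underbrace{\sum_{n\ge 0} d_n\, (\alpha_s L)^n}_{\rm NLL}
\;+\; \alpha_s^2 \underbrace{\sum_{n\ge 0} e_n\, (\alpha_s L)^n}_{\rm NNLL}
\;+\;\cdots ,
\]

where each order of renormalization group improved perturbation theory sums one full series.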
Here, in this theory, we have these coefficients. We have to run them. Then we have to pick them at the appropriate scale. And that's what we're doing by solving these renormalization group equations.
OK. So what do we need to do? We determined this. I showed you what you needed to do to get that. What would we need to do to get the next term in the series? How much would we have to compute?
Well, we just have to go to one higher order in the perturbation theory. So let's make a little table of what it takes to get leading log, next leading log. Maybe we'll even add one more term.
So there's two parts to the calculation. There's the boundary condition, and then there's the differential equation, which is the anomalous dimension. At leading log, we had tree level matching.
We determined the C plus and minus were 1/2. C1 and C2 were 1 and 0. And we just needed the one loop anomalous dimension.
And then we just keep going in this pattern. So next leading log, we need to match at one loop. And we would need the two-loop anomalous dimension, et cetera.
So the order in which you need the running is one higher order than what you need for the matching. That's the rule. And given those ingredients, we would be able to determine exactly these series here, OK?
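For reference, the table being written out is, schematically (the last row just continues the same pattern):

    order    matching (boundary condition)    anomalous dimension (running)
    LL       tree level                       one loop
    NLL      one loop                         two loop
    NNLL     two loop                         three loop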
So there's some things that happen at this order that aren't really apparent yet at leading log, and so I want to talk a little bit about that. Before we get there, let me add one other little note.
This operator O2, we didn't see it when we thought originally about matching. It had Wilson coefficient that was 0 at tree level. So at leading order, you could say that this Wilson coefficient is 0. But at leading log, it's not 0.
So I have these two different types of perturbation theory. Just order by order in alpha, or doing renormalization group improvement, you get different results. And that's because you've included some higher order terms by using the renormalization group improved version.
But you can argue, if alpha times the large log is order 1, then this is the right type of perturbation theory to do. So if you think about it as a picture where this is mu, this is Mw, this is Mb, then you have two coefficients, C1 and C2. We call them C+ and C-, but they're just related.
And the results that we derived at leading order were that, for C1, it started at 1 at the high scale, basically, if we set mu w equal to Mw. And it would evolve, actually, this direction if we put in all the signs that came out of our calculations. And for C2, it starts at 0 here, and then it evolves this way to a negative value. [INAUDIBLE].
So roughly putting in some numbers, the kind of thing that we would get is this. So a coefficient which was 0 all of a sudden becomes minus 0.3 and becomes something that you have to keep track of. That's at leading log. Obviously, when you go to higher orders, those numbers will be perturbatively improved.
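As a rough numerical sketch of that statement, here is the leading log running evaluated with assumed inputs (one-loop alpha_s with nf = 5, alpha_s(Mw) of about 0.12, exponents a_+ = 6/23 and a_- = -12/23, and C1 = C_+ + C_-, C2 = C_+ - C_-; none of these conventions are taken from the board, so treat the numbers as illustrative only):

```python
import math

def alpha_s_1loop(mu, alpha_ref=0.12, mu_ref=80.4, nf=5):
    """One-loop running: 1/alpha(mu) = 1/alpha(mu_ref) + beta0/(2 pi) * ln(mu/mu_ref)."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 1.0 / (1.0 / alpha_ref + beta0 / (2.0 * math.pi) * math.log(mu / mu_ref))

def wilson_LL(mu, mu_W=80.4):
    """LL Wilson coefficients C1, C2 at scale mu (assumed conventions, see above)."""
    a_plus, a_minus = 6.0 / 23.0, -12.0 / 23.0      # assumed exponents for Nc=3, nf=5
    eta = alpha_s_1loop(mu_W) / alpha_s_1loop(mu)   # alpha_s(mu_W) / alpha_s(mu) < 1
    C_plus = 0.5 * eta**a_plus                      # boundary condition C_+-(mu_W) = 1/2
    C_minus = 0.5 * eta**a_minus
    return C_plus + C_minus, C_plus - C_minus       # back to the 1, 2 basis

C1, C2 = wilson_LL(mu=4.8)   # mu of order the b quark mass
print(f"C1(m_b) ~ {C1:.2f}, C2(m_b) ~ {C2:.2f}")
# with these inputs this gives roughly 1.1 and -0.2, in the same ballpark as the
# rough -0.3 quoted in lecture; the exact numbers depend on the inputs and scheme
```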
OK. So is the physical picture here clear of what's happening with these operators? So what is the application? Since we spent all this time deriving these results, we should have some applications in mind.
So for b to c u bar d, if you ask about what process that gives, well, one process that it gives is just a B to D pi transition. The B bar is built of a u bar and a b. And this guy, the D, is a u bar and a c. And the pi is the u bar d.
So we can think of the reason we're studying this is maybe we want to calculate B to D pi. So if we wanted to calculate B to D pi, we take matrix elements involving our Hamiltonian with a B [INAUDIBLE] in the in state and a D pi in the out state.
And if we just use the original Hamiltonian that we wrote down with the renormalization group improvement, then we would have that. That's with that renormalization group improvement. So this is at mu equals Mw.
And the problem with this formula is that this matrix element has large logs. It depends on Mw. It also depends on Mb.
And if it's something we can't calculate, then that's kind of bad news. In particular, having large logs like that would also make it hard to calculate something like this on the lattice. So it's not just a-- It's really a problem. If you have multiple scales tied together, it just makes the calculations harder.
It's also a problem for dimensional analysis. Because if you have large logs, that means you've got large numbers. And something that you thought was of a certain size might be bigger or smaller.
So what we do is, instead, we work in the renormalization group improved version where we take this down at the scale Mb. So this guy includes the renormalization group evolution. We use our results over there. And then we've got the operators at the scale Mb, and there's no large logs.
OK, so you'd want to calculate something like this on the lattice or some other way. There's other ways of doing it. And we'll talk about other ways of doing it later on. But so far, we've separated out the scale Mw into this coefficient that's evaluated at the scale of mu equals Mb.
OK. So the one way of thinking about this is, if you want to do physics at the scale Mb, the right couplings to use in your theory are these ones. Forget about what's going on at the high scale. You have to determine the low energy couplings that are appropriate to the theory you're dealing with. And those are the C's at Mb, OK?
All right. OK. So now, I want to come back to this question of thinking about the next leading log. And I'm going to do that by going back to our comparison of full theory to effective theory. We'll do a comparison of results in the full theory and results in the effective theory. And I'll show you how, by making that comparison, we can determine the ingredients that we need for this one loop matching here. So we'll focus on this.
So we already renormalized the effective theory. So we can compare the renormalized effective theory and full theory. And that's the right way to proceed.
So in our parlance of theory one and theory two, the effective theory would be theory two. We have to think about the full theory, which in our parlance would be theory one, the full theory being the standard model here. We have to think about renormalizing that theory.
But in the standard model, our calculation involves conserved currents. These are just the weak currents. And so there's actually no extra UV divergences associated to those currents. We just have coupling renormalization.
And one way of saying this is that, in the vertex and the wave function graphs, the UV divergences cancel for the conserved current. So the result for the full theory will be some result that is independent of having-- it doesn't have any ultraviolet divergences.
And like the effective theory, where you had to carry out a renormalization of operators in that theory, for the full theory, coupling renormalization is all there is. So let's draw the full theory graphs. Gluons should be green. Maybe my w should be pink.
Six permutations-- and then there's also wave function renormalization. OK. So if you want to do the full theory calculation, these are the graphs you compute. It's triangles as well as box integrals. It's actually a much harder calculation than in the effective theory.
And I'm not going to do the calculation, but I'll tell you what the results look like. So let's start by thinking about the logs. And then we'll talk about the constants that are under the logs as well.
So if we look at this calculation, it has the following form.
So there's S1, which was some spinors. We defined it in an earlier lecture. There's something involving a log, and it has a p squared. p squared was the off-shellness associated to these guys. And we regulated the infrared divergences with p squared.
So p squared not equal to 0 regulates IR divergences. And there are IR divergences in these diagrams. Even though I said they are UV finite, they're not finite in the infrared. And that's what leads to these logs of p squared.
OK. Now, I didn't write everything. I only wrote the pieces proportional to S1. There's pieces proportional to the other spinor, S2. And there's non-log terms. And they're all hiding in the dots.
So let's compare this result to a similar expression in the effective theory where we just set the coefficients to their values at the high scale. So then we have 1 for the coefficient times the one-loop matrix element of O1, which we wrote down earlier. And it looks kind of similar to this, but not exactly the same.
It's very similar, but not precisely the same. A similar statement applies to these guys over here, OK? So the difference is really that, instead of Mw squared in this log, we have a mu squared. That's really the only difference for these terms.
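Schematically, keeping only the S1 pieces and the logarithms (everything else, including the S2 pieces and the constants, is in the dots, and the coefficient of the log is suppressed as #), the comparison being made is

\[
\mathcal{A}_{\rm full} \;\sim\; S_1 \Big[ 1 + \#\, \frac{\alpha_s}{4\pi}\, \ln\frac{-p^2}{M_W^2} \Big] + \cdots ,
\qquad
\big\langle O_1(\mu) \big\rangle_{\rm 1\,loop} \;\sim\; S_1 \Big[ 1 + \#\, \frac{\alpha_s}{4\pi}\, \ln\frac{-p^2}{\mu^2} \Big] + \cdots ,
\]

with the same coefficient of the log on both sides.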
With constant terms, the non-logarithmic terms here and here won't agree. And we'll talk about those things in a minute. So what do we learn by thinking about the physics of these two equations?
Well, one comment is the comment I already said, that the effective theory computation for this line here is much, much easier than this one. So one reason to use effective field theory is just that it makes computations easier. And the reason it makes computations easier is because you're basically dealing with one scale at a time.
And whenever you have integrals involving only one scale, that's always much easier than having multiple scales. But if you want to encode all the physics of the full theory, you'll still have to do that calculation at some point as well. Although you may be able to do it in a simpler configuration to get at the information that you need.
Furthermore, if you really only cared about the logs, then all you really need is the 1 over epsilon term. And that's even easier than the full triangle diagrams. So to compute the anomalous dimensions, you just have to keep the divergences.
And you can throw away all the finite pieces. And that's even easier. So you can organize things by thinking about doing the easier calculations first.
That's what people do. They calculate anomalous dimensions before they calculate matching because it's just easier. And it's also the thing you need for the leading log result.
You don't need the matching at one loop for the leading log result. So there's a conservation of ease and what you need. These two things play nicely with each other.
Second point-- in the effective theory, you're supposed to think that Mw goes to infinity. And that's why this thing doesn't know about Mw. So how could it possibly get an Mw?
And so what happens is that Mw gets replaced by the cut off, which is mu here.
Another point that we can make about this-- if you look at the logs of minus p squared, they're all the same between the two equations. That's actually important because what that means is that the infrared structure of the two theories agree. The logs of p squared are the infrared divergences. They agree between the higher theory and the lower theory. And they have to agree.
What this tells you about the effective theory is that you're doing something right. In particular, it tells you that your effective theory has the right degrees of freedom. That's almost trivial in this example.
What other degrees of freedom could we possibly think that would be missing? But in more complicated examples-- and we will deal with at least one such example later in the course-- it's not so trivial to see that these things match up. And people have discovered new degrees of freedom by doing matching computations like this.
They said, oh, this is a relevant degree of freedom. And it's needed. Because if I do a one-loop calculation, I need it to get the infrared divergences right. So this can really teach you about the effective theory, doing a matching computation, teach you about the physics of the effective theory. So if you made a mistake, this would be a place where you'd catch yourself.
Now that you've analyzed fully what the differences are, you can subtract them. You take the difference of the renormalized calculations. And that is what gives you one-loop matching. Just like we compared tree level calculations to get tree level matching, we compare one-loop calculations to get one-loop matching.
So [INAUDIBLE] we do that. At tree level, if we're just looking at the S1 pieces, we have these two terms. And then at one-loop-- let me use this notation. This is the full A. So it has a tree level piece and one-loop piece.
Then we take the piece of the C1 that's at one loop. And we take the matrix element of the operator, evaluate it order alpha. And then there's C2 terms. Let me just put dots.
OK, so there's two ways that I can get an alpha. There's an order alpha coefficient. That's what I want to know, what you want to determine. And then there's an order alpha matrix element.
So by subtracting, putting this guy on the left-hand side, I get the value of C1 of 1. So we use this equation to determine this. And the story would be similar if I kept all the C2 terms.
So the matching, i.e. the difference of the full and effective theory and calculations, determines that coefficient for you. So the notation here is that we'd write the full coefficient as 1 plus C1 plus higher order terms where this is order alpha. That's the notation I'm using.
So let's do that for the logs. It's pretty simple. These things here just cancel. And these we can just subtract. The p squareds will cancel, and we'll get a log of mu squared over Mw squared.
So just focusing on those terms that we have in S1-- and rearranging the equation in the way I said and then plugging in the values for the things that don't cancel. So the terms that were slightly different-- and dropping the S1.
CF is 4/3. It's the Casimir of the fundamental. We see that, since we've only kept the log terms, what we find is the one-loop correction in this guy that's got a logarithm. But there would also be a term here that's a number times alpha, and we haven't kept those terms in what I've written on the board.
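So, keeping only the S1 log pieces and again suppressing the coefficient of the log as #, the matching result being described is of the form

\[
C_1(\mu) \;=\; 1 + \#\, \frac{\alpha_s(\mu)\, C_F}{4\pi}\, \ln\frac{\mu^2}{M_W^2} + (\text{non-log terms}) + \cdots ,
\qquad
C_F = \frac{N_c^2 - 1}{2 N_c} = \frac{4}{3} .
\]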
So the way that you should think about this is that we've got these Wilson coefficients that depend on the scale Mw. And what the matching is doing, it's taking the full theory. And it's dividing it into large momentum pieces times small momentum pieces.
So the large momentum pieces are in C. Small momentum pieces are in the matrix element of the operator. And this statement, we see an explicit realization of it.
The full theory knows about high scale Mw's and the p squared. And we can write this as a split like this. The effective theory knows about p squared, doesn't know about Mw squared. The Wilson coefficient knows about Mw squared, doesn't know about p squared.
The additional thing that they both know about is the scale mu. And that's providing a cutoff for where you split between large momentum and small momentum.
p squared was the small scale. Now, if you look at this equation, you may wonder for a minute: why is it additive? Up here, I just said times. And then immediately below times, I wrote something that was a sum, which seemed a little weird.
That's just because, if you take something that includes the 1, then the product becomes a sum. So if I write it this way, I can write it in product form if I include the 1 at tree level. So it really is a product.
It's just that, if you look at the order alpha pieces, it breaks into the sum where we can nicely see how things are combining together. But really it has this product structure, and there's non-trivial relations between these two series that make it all work out even when you go to higher orders.
So if I expand that to order alpha, look at the order alpha coefficient. I get this equation back again. And this is how you would think about it in product form.
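In symbols, with superscripts counting loops (a notation introduced here just for this point), the product and the sum are related by

\[
C\,\langle O \rangle
= \Big(1 + \frac{\alpha_s}{4\pi} C^{(1)}\Big)\Big(1 + \frac{\alpha_s}{4\pi} \langle O \rangle^{(1)}\Big)\langle O \rangle^{\rm tree}
= \Big(1 + \frac{\alpha_s}{4\pi}\big[ C^{(1)} + \langle O \rangle^{(1)} \big] + \mathcal{O}(\alpha_s^2)\Big)\langle O \rangle^{\rm tree} .
\]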
OK. So the other thing you see here is that order by order in our expansion, as we kind of already stated, the mu dependence between these coefficients and these operators is exactly cancelling because the full theory here didn't involve that mu. That's another little piece of information that we get or that we knew, but we see explicitly from looking at this.
So I think, if I'm counting right, this is comment number five. Was there a question? So not surprisingly, the cut off dependence cancels in the product of C of mu O of mu because the cut off is what we introduced to split up the physics in these two things.
Now, if you look at that in a little more detail, it's only mu independent of the order in perturbation theory that you're working. If you've worked at a fixed order in some expansion, then you shouldn't be surprised that everything you've derived is only true at that order. So if you stopped at one loop, then it's mu independent at order alpha S.
What that technically means is that terms that are alpha S of mu times log mu cancel. The log mu here cancels, but there's mu dependence also here. And that mu dependence in the alpha is something that would be related to terms that are alpha squared log mu. And that cancels at higher order.
So some of the mu dependence cancels. Some of the mu dependence doesn't cancel. And people actually use the fact that some of the mu dependence doesn't cancel to get a handle on the higher order terms. It gives a kind of theory uncertainty.
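Continuing the numerical sketch from above (and reusing the hypothetical wilson_LL helper defined there, with the same assumed inputs), the scale variation exercise being mentioned looks like this:

```python
# Vary the low scale over, say, [m_b/2, 2 m_b] and take the spread of the LL
# coefficients as a rough estimate of the uncomputed higher order terms.
m_b = 4.8
for mu in (0.5 * m_b, m_b, 2.0 * m_b):
    C1, C2 = wilson_LL(mu)
    print(f"mu = {mu:4.1f} GeV:  C1 = {C1:.2f},  C2 = {C2:.2f}")
# The residual mu dependence of the LL result is formally an NLL effect
# (alpha_s^2 times a log), which is why the spread serves as a theory uncertainty.
```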
If we just think about the logarithms, then the one-loop result in the full theory actually has less information. And the reason is that, if you wanted to get higher order terms in this leading log series that we talked about, if you wanted to derive those from the full theory point of view, you'd have to do a two-loop computation.
So if you wanted to get alpha squared log squared of Mw squared over minus p squared, then you'd have to look at diagrams with two gluons. From the full theory point of view, that's what you'd have to do to find those terms. From the effective theory point of view, all you have to do to find those terms is renormalize the effective theory properly. And then you get those terms.
So we just needed the one-loop anomalous dimension. So in that sense, the effective theory, because of the renormalization properties of the effective theory, knows something that the full theory doesn't know so easily. And that kind of shows you the advantage of taking something that's a constant, Mw squared, and turning it into a scale.
Because by turning it into a scale, you have the whole power of the renormalization group at your disposal to predict higher order things, like the higher order coefficients. And that's one way of phrasing what the example is of splitting scales and going to the effective theory.
So the final thing that I want to talk about here has to do with the fact that-- well, actually, there's two more things I want to talk about, but let me make the final comment here. So the final comment I want to make in my list, which is number seven, has to do with scheme dependence.
So scheme dependence means that we pick the renormalization scheme MS bar. And we could have done the calculation in a different renormalization scheme. And we should ask what depends on that choice.
You may know, if you've taken a course on the beta function or if you've taken QFT3, that the beta function of QCD is scheme independent for the first two orders. The analog of that statement here is that the one-loop anomalous dimension for our operators is scheme independent. It doesn't depend on which mass independent scheme you pick.
So in the class of mass independent schemes, the result is what we derived. We'll come back and study that in a little more detail. OK, so let's go back now and establish some notation where we actually just put the constants back in.
And again, I'm not going to write numbers. I'll just give them names. And we'll track what happens to them. So let's think about the full one-loop matching and how we get the next leading log result.
And really what I want to focus on, or at least one thing I want to focus on, is the scheme dependence. Because the coefficients, once you get to next leading log, are totally scheme dependent. So you can ask, what physical sense do they make if they're scheme dependent?
Well, it turns out that the matrix elements are also scheme dependent. And the anomalous dimensions are scheme dependent. So basically, everything is scheme dependent.
And when we put it all together, we get a scheme independent result. So you might think, well, if we can get scheme dependent results, we should just stop because maybe we can't understand what's going on. But C of mu times O of mu is independent of the scheme.
It's a physical observable. And physical observables don't depend on our definitions of things. Nature gets to decide, not us.
So one way of thinking about this is that we already saw some kind of scheme independence in a statement that C of mu times O of mu is independent of mu. But there's even a deeper scheme independence to it that it's independent of whether we chose MS bar or some other scheme.
So for the context of this discussion, I'm going to start dropping all the matrix indices. And we're not going to write i and j just because I want to keep things a little bit simple. So we'll write that the effective theory is simply one coefficient times the matrix element of one operator.
So let's think about, in that context, trying to understand where all this scheme dependence is floating around and how the matching works. So we just do the same thing we did before. I'm leaving off the pre-factors.
I don't have to write the spinors anymore since there's only one structure. And let me introduce some notation for the results. So we had this Mw squared over p squared type term.
And let me just focus on these terms and not the terms that just cancelled away. So let me focus on the terms that are different. But now, I'm also going to include the constants.
So the constants that we get in the full theory and the effective theory are different. So I'll call one of them A and the other one B. So I call the A the full theory result and the B the effective theory result.
So you should think of this as a number, like 3, just some number, same thing here. But just to avoid talking about numbers and to also track where the scheme dependence is-- like this constant could be 3 in one scheme and 5 in another scheme. In order to keep track of that, let me call it a variable. Let me call it B.
So then the Wilson coefficient is just we construct the difference. And then we'll have an A minus B in it.
So if you like what the Wilson coefficient is doing, it's compensating for the fact that the effective theory has the wrong value for this constant. It should be this. That's what the full theory told you it was.
So the effective theory Wilson coefficient has minus the effective theory matrix element result plus the correct result. So this is the thing that's correcting the effective theory. So it has the right constant.
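So, in this stripped-down one-operator notation, and with the coefficient of the log again suppressed as #, the Wilson coefficient being constructed looks like

\[
C(\mu_W) \;=\; 1 + \frac{\alpha_s(\mu_W)}{4\pi} \Big[ \#\, \ln\frac{\mu_W^2}{M_W^2} + A - B \Big] + \cdots .
\]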
And if we just take C at Mw, then it would simply be equal to that. And the log would go away. So in order to do the renormalization group improved perturbation theory at next leading log, we also need to do a two-loop computation.
We're not going to do the two-loop computation, but I'll tell you the structure of the series that you get if you did that computation. So this equation is true. Therefore, we can write the anomalous dimension equation again as log C.
And the right-hand side will be a series. And the structure of the series that was there is the 0-th order term. And then there's some higher order terms.
And we need this guy, the two-loop coefficient, [INAUDIBLE] gamma 1. Again, this is a coupled differential equation. And we would solve it by using the kind of thing that we did before.
So d mu over mu is d alpha over beta of alpha. And we would write down beta to one higher order, which I do in my notes. But it's the same idea: I just expand it in alpha. And I keep not just the coefficient beta 0, but I also keep the coefficient beta 1.
I want to kind of not focus so much on the calculations, but more the results and the implications of the calculations. So do some renormalization group evolution. You can write the all-order solution as an integral, like we did before.
And if I just keep it in terms of these all-order objects, then it's just this ratio, and I expand that ratio in alpha. And if I want to do it to second order, I don't just keep the first term. I keep the second term.
So the first term was a 1 over alpha. So we're going to keep the order alpha to the 0 term. And if we use our notation that we established before, where we call the exponential of this guy U, then C of mu is U of mu, mu w times C of mu w.
Then we can write the solution of that guy as an exponential of an integral of d alpha gamma over beta. So some of the steps that we were doing at one-loop, just like the exponentiation, the separation, they just all go through. And the only thing we do have to do is evaluate this integral at one higher order.
Let me take mu w equal to Mw and then do the integral. And what you get is a result that we can organize in the following way. Try to get my arguments in the right order.
In this particular case, the next leading log solution looks as follows. Our leading log solution is obviously buried inside it. So we have this ratio of alphas, something which is a number.
And then there's these extra factors that depend on this thing J. And I can write the result this way where J involves all the things that are the higher order ingredients. So it involves the lowest order anomalous dimension, but now times beta 1.
That's like taking the leading order anomalous dimension, but now running the coupling with the second order term as well. And then there's a term that involves the second order anomalous dimension. So it encodes that information.
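For orientation, in the notation of the standard reviews (single operator, and normalization conventions that may differ from the board by signs and factors of 2), the next leading log evolution factor being described takes the form

\[
U(\mu, \mu_W) = \Big[ 1 + \frac{\alpha_s(\mu)}{4\pi}\, J \Big]
\left[ \frac{\alpha_s(\mu_W)}{\alpha_s(\mu)} \right]^{\gamma^{(0)}/(2\beta_0)}
\Big[ 1 - \frac{\alpha_s(\mu_W)}{4\pi}\, J \Big] ,
\qquad
J = \frac{\gamma^{(0)} \beta_1}{2\beta_0^{2}} - \frac{\gamma^{(1)}}{2\beta_0} .
\]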
So this is the U. We can combine that together with our equation for the C over here, or this one. So let me keep that one.
So I take this equation, multiply by that equation. That gives us C of mu. So I have to write this one more time.
And basically, I can group that together with these other terms that depend on an alpha of Mw. OK. So j is the anomalous dimension piece. A and B are the matching. A minus B is the matching piece. And I can write the result this way.
So this is next leading order matching, which is the A minus B, combined with next leading log running, to get the full next leading log result. So this is the kind of structure that you could get. That's what renormalization group improved perturbation theory looks like when you go to higher orders.
You basically have logs. But then the higher order terms, when you expand out this integral, are just giving you polynomials in alpha. So when you integrate polynomials, you get back polynomials. So if you integrate 1, you get alpha. If you integrate alpha squared, you get alpha cubed, et cetera.
So you just get back polynomials. And that's why you can write it this way. What are the terms in this result that are scheme dependent?
I claim that beta 1 gamma 1-- oh, why did I-- not beta 1, B1. B1, gamma 1, J, C, O-- these are all scheme dependent. They depend on what renormalization scheme I use to define my effective theory.
And then there's a list of things that are scheme independent. So beta 0 and beta 1 are scheme independent. I told you that gamma 0 is scheme independent.
You could think of that like, when you do the one-loop calculation, the ultraviolet divergences are always going to be the same. You get 1 over epsilon. And it's only the constant that depends on your scheme.
A1 is scheme independent. That's because A1 was the full theory calculation. So how could it possibly know how we define the effective theory?
So that's scheme independent. And a non-trivial one is that B1 plus J is scheme independent. So there's scheme dependence in B1 and scheme dependence in J, but it cancels in exactly the combination that's showing up in this result.
And as I mentioned, C times O is scheme independent because that's related to observables. So I have a little proof of that in my notes, which, because of time, I'm going to skip. But I encourage you, when I post my notes through the website, that you take a look at where that comes from.
So the only non-trivial one really is this B1 plus J being scheme independent, OK? I have a little proof of that in my notes here. OK, so let's go back to the equation at the top in the middle there and see what conclusions we can draw once we believe this.
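For the curious, a compressed version of the argument in the notes might go as follows (a sketch in the single-operator notation used here, with the normalization conventions assumed above): a change of mass independent scheme amounts to a finite renormalization of the operator,

\[
O' = \Big[ 1 + \frac{\alpha_s}{4\pi}\, r \Big] O
\quad\Longrightarrow\quad
B_1 \to B_1 + r , \qquad
\gamma^{(1)} \to \gamma^{(1)} + 2\beta_0\, r
\quad\Longrightarrow\quad
J \to J - r ,
\]

so the shift cancels in the combination B1 plus J.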
So if B1 plus J is scheme independent, then this thing that's showing up in that term, A1 minus B1 minus J, is scheme independent, as A1 was. It was just a full theory thing. And there's a cancellation of scheme dependence here between the two-loop anomalous dimension, which we called gamma 1, and the B1 from the one-loop matching. That's where the scheme dependence cancels. So the scheme you pick, you have to be consistent.
You have to keep using it. If you do a matching calculation or if someone else did a matching calculation, you want to use it. You better figure out what scheme they're working in. Because if you start working in a different scheme, you're just making mistakes.
So this is the statement that the matching is scheme dependent, the anomalous dimension is scheme dependent, but there's a cancellation between those two. If we look at the gamma 0 over beta 0 term, that's scheme independent. So that's good. If we look over here, J was not scheme independent. J is scheme dependent. So we still have to worry about that.
So the leading log result was scheme independent, but we still have scheme dependence in this factor 1 plus alpha of mu times J over 4 pi in our C of mu. And the thing that cancels that scheme dependence is the fact that the Wilson coefficient alone is not a physical observable.
It's really the Wilson coefficient times the operator. And so there is scheme dependence in the matrix element of the operator.
So a matrix element of the operator at the scale mu is scheme dependent.
And this is at the lower end of our integration. So this is the final matrix element, like the matrix element at the B scale. That's a scheme dependent thing.
So if you think of these things as numbers that you want to determine from data, one way of thinking about it, those numbers are going to depend on what scheme you're using. If you extract some numbers in one scheme and your friend does it in another scheme, you could get totally different numbers. So you have to know what scheme you're working in. And you have to combine it together with the Wilson coefficient in the same scheme.
If you take some numbers from the literature and you don't know what scheme they're in and you're working at next leading log, you have a problem. You got to know what the scheme is because you have to work in the same scheme consistently. And that's the lesson here.
If you really want to do this whole program that I've talked about, which is done in this 250 page review article-- and I'm not asking you to read that. If you really want to do this whole program, there are some subtleties. And I should at least mention them to you since maybe you'll encounter the word someday.
So we've sketched the physics and the basic stuff that would be involved in the analysis, but we haven't written down the full operator basis with the full set of mixing and dozens and dozens of diagrams, which people have done. Mostly you should be thinking of this as a user. So I'm teaching you the things that you need to be able to use results like that. In an effective theory, if you're using a higher order result, you have to worry about scheme dependence.
So what are the subtleties? Well, one of them is that there's gamma 5s. This theory is chiral. And gamma 5 is inherently four-dimensional.
And you have to worry about that. And you have to treat that carefully in dim reg. And when people originally did these calculations, that caused some confusion if they weren't careful enough. Obviously, dim reg is a powerful way of doing the calculation, but you do have to be careful about gamma 5.
And there's another thing you have to be careful about in dim reg. And those are what are called evanescent operators. You see, part of our arguments, originally, when we were writing down the basis of operators for our calculation, was actually inherently four-dimensional.
When we wrote down the operators, we effectively used completeness over these Dirac structures, of which there are 16. And the problem is that, in d dimensions, that's not a complete set.
And any additional operators that you need in d dimensions, outside that set, are called evanescent operators. So they involve Dirac structures that vanish as epsilon goes to 0, but are technically needed to get some calculations right. OK, so those are two subtle things to be aware of in the full calculation.
And I think we'll stop there for today. And we'll do something different next time. So homework is due next Tuesday. And as I said in my original handout, you should talk to each other about the homework. That's how you learn.