Lecture 12: More Renormalons

Description: In this lecture, the professor discussed the solution of the R-RGE, a sum rule for renormalons, renormalons in OPEs, and connecting Wilsonian and continuum EFT.

Instructor: Prof. Iain Stewart

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

IAIN STEWART: So where were we? So last time, we were talking about MS-bar scheme and renormalons. And so we talked about the MS-bar scheme.

We talked about renormalons. And we said that we could introduce a mass scheme that has an arbitrary power law cutoff, which we called R to distinguish it from mu, or lambda or some other cutoff. And so the idea of this scheme right here was perturbing away from MS-bar to get rid of this renormalon problem that MS-bar has, but retain all the nice features of MS-bar.

In particular, we don't really want to calculate anything new if we can avoid it. And that's what this scheme does because really what it does is it uses the fact that we know that the MS-bar scheme has this renormalon. So this series here has a renormalon.

And it just takes that series over from MS-bar, puts it in with a cutoff, which is this R, and defines a new mass scheme using that formula. So this is like a mass scheme that has an adjustable cutoff, which we can take to be at whatever scale we want. And in particular, at HQET, you'd want to take that R to be something like a GeV type scale because that wouldn't spoil your power counting.

So recall, in HQET, that this delta m should be, by power counting, of order lambda QCD. So technically, you would take R somewhat greater than lambda QCD, but of order lambda QCD. OK. So now, we have this extra cutoff.

So when we have a cutoff, we have a renormalization group. And if you think about the renormalization group for MS-bar mass, it would be something like mu d by d mu of M of mu is equal to anomalous dimension times M of mu back again. This one's a little different.

It's R d by dR of M of R. And then there's no M on the right-hand side. It's an additive renormalization to get rid of the renormalon. And there's a power, also, of R here.
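In equations, the contrast being drawn is roughly the following (a schematic sketch only; the sign conventions and the 4 pi normalization of the anomalous dimension coefficients are assumptions here, not read off the board):

```latex
\mu\,\frac{d}{d\mu}\,\overline{m}(\mu) \;=\; \gamma_m[\alpha_s(\mu)]\;\overline{m}(\mu)\,,
\qquad
R\,\frac{d}{dR}\,m(R) \;=\; -\,R\,\gamma^{R}[\alpha_s(R)]\,,
\qquad
\gamma^{R}[\alpha_s] \;=\; \sum_{n\ge 0}\gamma^{R}_{n}\left(\frac{\alpha_s(R)}{4\pi}\right)^{n+1}.
```

The multiplicative structure of the first equation just resums logarithms of mu, while the additive structure and the explicit factor of R in the second are what carry the power-law behavior.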

So it's not just summing logs. It is summing some logs related to the running of alpha, but it's also got this power. And when you solve this anomalous dimension equation, which we talked about last time, you get an integral like this in the t variable. And at leading log order, it looks like this: a difference of two incomplete gamma functions.

Because the left-hand side has mass dimension, it has to be made up by mass dimensions on the right-hand side. The coupling is dimensionless. The only thing that has dimensions is lambda QCD. And that's exactly what pops out of solving this equation. The R gets converted to a lambda QCD by this formula here. This g of t was just t at lowest order.

So this is an all-order form here. And this is the leading log solution there. So if we look at that leading log and what is the t0 or the t1, that's just the same formula here, but with alpha at R1 and R0, the boundary conditions.

We want to run from R0 to R1. And this formula tells us how to do that. And we get a result here that has the coupling at R0 and the coupling at R1.
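As a sketch of where the incomplete gamma functions come from at leading log (one-loop running and the sign conventions for t are assumed here; none of this is read off the board): with alpha_s(R) = 2 pi / [beta_0 ln(R/Lambda_QCD)], the variable t = -2 pi / [beta_0 alpha_s(R)] = -ln(R/Lambda_QCD), so R = Lambda_QCD e^(-t) and

```latex
m(R_1)-m(R_0)\;\propto\;\gamma^{R}_{0}\int_{R_0}^{R_1}\! dR\,\frac{\alpha_s(R)}{4\pi}
\;=\;\frac{\gamma^{R}_{0}\,\Lambda_{\rm QCD}}{2\beta_0}\int_{t_0}^{t_1}\! dt\,\frac{e^{-t}}{t}
\;=\;\frac{\gamma^{R}_{0}\,\Lambda_{\rm QCD}}{2\beta_0}\Big[\Gamma(0,t_0)-\Gamma(0,t_1)\Big],
```

with Gamma(0,t) the incomplete gamma function, the integral of e^(-s)/s from s = t up to infinity. Both t_0 and t_1 are negative as long as R_0 and R_1 sit above Lambda_QCD, so the integration never touches the singular point t = 0.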

OK, so if we look at that result, it's interesting. Because if we just had one of these gamma functions, these incomplete gamma functions, then we would have a renormalon. And that's where I stopped last time.

So just looking at this lambda QCD times gamma and expanding it in a series in the coupling about 0, which is the expansion of the incomplete gamma function at large t, you get this series, which has a 2 to the n n factorial. It's an asymptotic series. This is a classic example of a function that has an asymptotic series, this incomplete gamma function.

There's some exponential that combines together with the lambda QCD to give an R. So I pulled that out front. And then we have this series, which is exactly the u equals 1/2 renormalon. And the u equals 1/2 has to do with this power 2 to the n here, the fact that it's 2 to the n and not some other number.
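To see numerically what a 2^n n! series does, here is a minimal sketch (the values of beta_0 and alpha_s are placeholders of my own choosing, not numbers from the lecture):

```python
import math

# A toy illustration of an asymptotic series with 2^n n! growth (the numbers for
# beta0 and alpha_s below are hypothetical choices, just to show the pattern).
beta0 = 9.0      # hypothetical one-loop beta function coefficient
alpha_s = 0.22   # hypothetical value of the coupling at a low scale
x = 2.0 * beta0 * alpha_s / (4.0 * math.pi)   # effective expansion parameter

for n in range(12):
    term = math.factorial(n) * x**n
    print(f"n = {n:2d}   term = {term:12.5f}")

# The terms shrink until n is around 1/x and then grow factorially, no matter how
# small alpha_s is.  The size of that smallest term (roughly e^(-1/x), which is of
# order Lambda_QCD/R at one loop) times the overall factor of R out front is of
# order Lambda_QCD -- the renormalon ambiguity being discussed.
```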

But if we take the difference here-- and this is what I said in words last time, but now I'll write it down in equations. If we take the difference there, that actually doesn't have this renormalon. So this is the difference of two series, each of which is asymptotic.

If we want to compare those series, we should expand them in the same coupling constant. So let me do that.

So if I take the difference of those two series using this formula, but now I re-expand all these alphas in terms of alphas at the R1 scale-- so when they're at the R0 scale, I re-expand them in terms of alpha at the R1 scale. Do some rearranging. I can write the result in this form as this. So the key thing is that difference there.

And if you think about what this sum here is, this is kind of like an exponential, something to a power over k factorial. Except it's limited to n. So if I really went all the way up to n equals infinity, this would just give the exponential of R1 over R0. It would cancel the R0 over R1, which would then cancel the 1.

You can rearrange the series a little bit to make it more obvious that it's convergent.

Because there is this n factorial here. And you might worry, well, this thing here has to fall fast enough to get rid of that n factorial. So write this thing out, this 1, as an infinite series of terms from 0 to infinity of the same form as this.

And then what you're left with is the following thing. All the terms from 0 to n cancel. And then you're just left with the terms from n plus 1 to infinity.

And then you can see that the n factorial is being tamed by another factorial, which is always larger since it starts at n plus 1. OK, so this is a number less than 1. And this thing here, beta 0 alpha over 2 pi times the log, is also something that's less than 1. It's exactly the thing that you sum up when you're running the coupling. So this is a convergent series.
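Here is a small numerical sketch of that rearrangement (x and y below are placeholder values standing in for beta_0 alpha over 2 pi and for beta_0 alpha over 2 pi times the log; the exact prefactors of the real series are not reproduced, only the convergence pattern):

```python
import math

# x stands in for beta0*alpha_s/(2*pi); y stands in for beta0*alpha_s/(2*pi)*ln(R1/R0).
# Both are taken to be less than 1, as in the discussion above.  Values are illustrative only.
x, y = 0.3, 0.4

def tail(n, y, kmax=80):
    """sum_{k=n+1}^{infinity} y^k / k!, truncated at kmax (ample for y < 1)."""
    return sum(y**k / math.factorial(k) for k in range(n + 1, kmax))

for n in range(15):
    raw = math.factorial(n) * x**n                       # term with unchecked n! growth
    cancelled = math.factorial(n) * tail(n, y) * x**n    # same term after the 0..n cancellation
    print(f"n = {n:2d}   raw = {raw:14.4f}   after cancellation = {cancelled:12.4e}")

# The raw terms eventually blow up, but after the cancellation each term carries a
# factor n! * sum_{k>n} y^k/k!, which behaves like y^(n+1)/(n+1): the factorial is
# tamed and the rearranged series converges.
```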

And basically the physics of it is we had this mass that didn't have a renormalon, but it had an arbitrary scale. And what the renormalization group allows us to do is move to another scale. When we move to another scale, it better be that we don't reintroduce the renormalon. And the fact that I can write this as a convergent series shows very explicitly that I'm not having that problem in the difference between these two.

So it's renormalon free. And it remains renormalon free when we change the scale. And we can sum up these logarithms that have to do with changing the scale.

Why would you care about that? Well, imagine that you extract this mass from some B physics. So you take an R that's of order a GeV. And then you say, well, that's a very precise number. But how do I use it for, say, LHC phenomenology?

Well, if I want to connect it to LHC phenomenology, I probably want to convert to the MS-bar scheme because LHC is high energy. Maybe I'm doing Higgs to bb-bar. Bb-bar-- very energetic, so we want to use the MS-bar scheme.

But I've got this mass at a very low scale. What you would want to do is you'd want to use the renormalization group, run it up to the mass scale Mb, switch schemes from this scheme where you extracted the mass to MS-bar. You'd get a correspondingly precise value of the MS-bar mass because the series between this mass and the MS-bar mass is, again, a renormalon free series.

And then you could take that mass at the Mb scale, run it up to, say, the Higgs scale, and use it for phenomenology. So you always want to be able to go back and forth between schemes. And if different schemes have different scales associated to them, then you need the renormalization group in order to put them at the same scale where you want to do the conversion, OK?

So that's why being able to sum up these logarithms without reintroducing any renormalon problems is important. And in general, if you were to try to do this with MS-bar, it wouldn't work. You don't really have a way of treating the renormalon in the MS-bar scheme when you're talking about physics below the mass of the particle.

OK, so we can generalize this little discussion here, which was a fixed order leading log discussion. And I can write down for you just what would happen if I were to formally integrate this integral here without making any expansion in alpha S, OK?

So express gamma R of t as an infinite series. And it's an infinite series in alpha S. That means it's an infinite series in 1 over t. We had an expression for this capital G of t last time, which again was an infinite series.

Just plug in those infinite series. Do all the integrals. They all turned out to be incomplete gamma functions, OK?

So we can write down a formula which is an all-order generalization of that result just to see what happens at higher orders. So basically, we would have something like-- if I look back at what g of t was, we'd have something like this.

And then all the rest of the terms we can expand out in an inverse Taylor series in 1 over t. So just there's two terms in g of t I want to keep in the exponential. And then the rest of them I can expand out and combine with these terms. And that just gives me some series. And I can do these integrals.

So writing the integrand out, we can just establish some notation for that infinite series in 1 over t. And we'll just call the coefficients Sj.

There is 1 over t. Yeah. So this gamma bar starts as a 1 over t. So really it should be a [INAUDIBLE] 1 over t here. That's where this 1 over t came from.

So then integrate that, and that gives us a generalization of this formula here, where we say that for M of R1 minus M of R0 we work at some order in the resummation, which you would call N to the k LL. So you have leading log, next-to-leading log, next-to-next-to-leading log. And when you have k of them, you say N k LL.

And the solution of that equation would be the k-th order lambda QCD and then this series. And instead of just the 0-th incomplete gamma function, we get a slightly different one. But it's got the same kind of structure where we get the difference of gamma functions.

And these S's are just whatever you get from the anomalous dimension, so it's kind of simple notation. [INAUDIBLE]. So they're just some combination of numbers. And the twiddles here are just my shorthand for accounting for the fact that there's some beta 0s floating around.

So this is just algebra. And these b1 hats and b2 hats were things that showed up in the beta function of QCD, which we also talked about last time. So we defined those last time. The b1 hat and b2 hat were just combinations of the beta 1 and beta 2 and beta 0.

So these are some numbers that you can calculate given the anomalous dimensions. Given those numbers, you can plug it into this formula. And then you have a generalization of the previous result.

So whatever order you know the anomalous dimension to you can use that solution. And we'll use it a little later when I talk about some numbers. All right, so any questions so far?

There's a similar thing for MS-bar mass. If you were summing logs of the MS-bar mass, there's an all-orders formula you could write down and use once you know the anomalous dimensions. You could find it in the particle data book I think.

All right, so just coming back to this comment about the fact that the anomalous dimension is renormalon free-- so the fact that the solution here was renormalon free, that the difference here was renormalon free, was because there was no renormalon on the right-hand side here and that the anomalous dimension was renormalon free.

And it was constructed from something that had a renormalon, which is this delta M, but it was a derivative of that delta M. And so the derivative kills the renormalon. So this thing is free of the delta M of order lambda QCD renormalon because that renormalon is a constant. And when you take the derivative, you kill the constant. That's how you can think of it.

If you start writing out the gamma functions, then you see that it looks like you're finding the renormalon. But then there's some differences that show up and cancel it off. So the way that that works, it's kind of like this where you sort of think that you have the renormalon, but then you realize there's another term and it cancels it off.

So if we look back at our formula for delta M, this had an infinite series of a's. This was like a sum over some a n's, and some alpha to the n, some other factors. And these gammas are just related to the a's. But they're not just related to the a's. There are also factors of the beta function that come in.

So just to give you an idea-- so once you know the conversion to the scheme and you know the anomalous dimension, and if you look at this series and you think about whether the series for the anomalous dimension is convergent or asymptotic, you can sort of identify the pattern of how these coefficients look.

And basically, if you, for example, look at the bubble sum, this a n plus 1 would be something that has an n factorial times 2 beta 0 to the n plus 1-- sorry, to the n-- growth. And then this thing here is n minus 1 factorial times 2 beta 0 to the n minus 1. And then there's this extra 2n beta 0 that multiplies it.
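In symbols, the two growths just quoted combine like this (a sketch, with overall constants and signs suppressed):

```latex
a_{n+1} \;\sim\; n!\,(2\beta_0)^{n}\,,
\qquad\quad
2n\beta_0 \,\times\, (n-1)!\,(2\beta_0)^{n-1} \;=\; n!\,(2\beta_0)^{n}\,,
```

so the two pieces entering the anomalous dimension coefficient have exactly the same factorial growth, and it drops out of the difference.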

And you can see that there is an exact cancellation. One way of probing the renormalon is just to look at the bubble sum. And the bubble sum gave us a particular value of these a's. And you can see that the bubble sum renormalon is cancelling out between the terms here.

So that's just an expression of what I said over here. That's the kind of more definite expression of the fact that this thing is actually free of the renormalon. So I'm going to put this aside for a minute. And we'll come back in to use this formula a little bit later in today's lecture.

So there's sort of two ways that you can use this technology. You can use it as a means of doing what I said, of connecting masses at different scales and doing phenomenology. And we'll come back to that a little bit later.

But you could also use it as a probe of the renormalon. So one thing that you might complain about with the bubble sum is that it's just some arbitrarily chosen way of probing the renormalon. And maybe there's some problem where you don't have light fermions around. Maybe you only have gluons. And there could still be a renormalon in those problems.

So how would you deal with that if you didn't have the light quarks available to make up this bubble sum? Or maybe there's some renormalon that just so happens you don't see it with the quarks. There's no guarantee of that.

So you'd like to have some other mechanism for looking for renormalons. And you can actually use the renormalization group to do that, which is kind of interesting. So I'll show you how that works.

A nice thing about this is that it also makes it clear how the renormalon relates to the Landau pole. So we'll see that as well. So take our solution for the RGE, which we could write in terms of this integral.

And let's consider what this integral actually is doing in the complex t plane. So the t's are negative, and so they look like this. The Landau pole where the coupling blows up is where the t goes to 0. So the Landau pole is at the origin. So this is what the complex t plane looks like.

And what we're doing when we do the integral is we're just doing an integral over that little range here, right? So we're far away from the Landau pole. And that's why everything was nicely convergent and you could connect one mass to another.

But you could ask, what happens if I take this t0 and I move it somewhere else in this picture? And that's what we're going to do. So let's consider the limit where R0 goes to 0.

If R0 goes to 0 and you look back at the formula that we had that converts between the MS-bar mass and this M of R, you'll see that M of R0 goes to M pole. Because we had a power of R times a series of alpha of R's, which are just logs of R. The power always wins, and so the mass goes to the pole mass in that limit where R0 goes to 0. And if you look at what t0 is, t0 is related to the coupling, which I could write in terms of a log of lambda QCD.

And t0 is basically going to plus infinity in this limit. So what happens is, if I want to get back to the pole mass from one of these masses, I basically have to take this guy and move it out to infinity, which means I have to make a choice about going around that Landau pole.
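With one-loop running (and the same convention for t assumed above), the statement is:

```latex
t_0 \;=\; -\frac{2\pi}{\beta_0\,\alpha_s(R_0)} \;=\; -\ln\frac{R_0}{\Lambda_{\rm QCD}}
\;\longrightarrow\; +\infty
\qquad\text{as}\qquad R_0 \to 0\,,
```

with the Landau pole, where the coupling blows up, sitting at t_0 = 0, that is at R_0 = Lambda_QCD, in between the physical region t < 0 and that limit.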

So basically, you're taking your RGE and you're pushing it into a region where you're no longer completely perturbative, and you're forced to continue past the Landau pole. Now, you can actually avoid the Landau pole. It's true that the series becomes non-perturbative in this region, but you could go way up here and come back over there.

So you can actually formally stay in a region in the complex plane where your perturbation theory is still applicable. But you still have to decide which way you're going to go around it. And so that introduces an ambiguity.

So doing that, say we go above or something. And if you plug in this result, the lowest order leading log result, you get this integral.

And this integral, you know, you have to go around the t equals 0 pole because t1 is negative. So it turns out that you can actually take this integral-- and I'm going to leave this for your problem set-- and write it in the form of a Borel integral with an exponential that's exactly the Borel transform variable.

So I just change the variable from t to u. So I get this integral. And if you do that, what you find is that f of u is proportional to 1 over u minus 1/2.

AUDIENCE: What's the lower limit of integration?

IAIN STEWART: 0. Yeah. OK, so you can actually see that the RGE and what was the Landau pole here is effectively, in this integral here, becoming this Borel pole at u equals 1/2.
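Schematically, the result of that change of variables is a Borel-type integral of the following form (the overall normalization is not being fixed here; the point is just where the pole sits):

```latex
m_{\rm pole} - m(R_1) \;\sim\; \int_0^{\infty}\! du\;
e^{-4\pi u/(\beta_0\,\alpha_s(R_1))}\; F(u)\,,
\qquad
F(u) \;\propto\; \frac{1}{u - \tfrac12}\,,
```

so the choice of how the t contour went around the Landau pole at t = 0 becomes the prescription for how to go around the Borel pole at u = 1/2.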

And that makes a little more clear the fact that these poles are related to the non-perturbative physics that's in the coupling and the infrared physics. So formally, this actually allows you to set up a method of trying to find the renormalon. You can ask: given some series, what if I sort of construct this thing? We had the attitude that we want to pick that series to cancel the renormalon. Now take the attitude that we didn't know whether that series had a renormalon.

I could still throw it in, make a change of variable to some mass. If it's a convergent series, then I'm just defining some new mass. And I could go through this technology. And it should turn out that, if the series was OK, so to speak, then you wouldn't find this renormalon. You wouldn't find a pole there.

I won't go through this in detail because it is a bit tricky to derive. But you can derive what's called a sum rule for the renormalon, which is another way of probing it by basically taking our all-order solution. So I took our leading log solution here, right?

If I did the same thing with the all-order solution, I could also take the Borel transform of that. And that, if I do that, gives me this thing called the sum rule.

So it's just manipulating an infinite series. And Mathematica can sum some of them up for you. So it's just tedious.

And what you find is a formula for the residue of the Borel pole. So let me call that residue P 1/2. And the answer is actually pretty simple.

So these S's were these coefficients that were showing up in our solution for the all-orders RGE. P 1/2 is a series which will tell me the residue of the pole. So that means that, if P 1/2 is not equal to 0, then you have a u equals 1/2 pole.

And if P 1/2 is equal to 0, then you have no u equals 1/2 pole. Now, in practice, you don't have an all-orders result. You only know a finite number of these S's.

So what you actually do is you look at this series and try to figure out where it's converging to, OK? But actually, even with the knowledge that we have of the series, for example, for the b quark mass, you can very quickly see that it has a u equals 1/2 pole. And you can even do some error analysis by varying some things. And you get some kind of idea of the theoretical uncertainty from higher order terms in the series.

If you just formally, say, take a series that's convergent, if I hand you a series that's convergent, you can calculate all these SK's. And then you can very easily see that actually there's no u equals 1/2 pole. What happens is that all the terms are cancelling in this series, and you just get 0.

So it's kind of like the cancellations I was talking about where I was cancelling a pole in the anomalous dimension, but here it's happening in this thing if I have a convergent series. So this is a way of probing for renormalons which doesn't rely on the bubble sum. And it basically uses the perturbative information that you have available, whether it's non-abelian or abelian information.

AUDIENCE: Iain?

IAIN STEWART: Yeah?

AUDIENCE: So when you did, well, this calculation here, that capital F, is it the same result as we found a week and a half ago or something that has the renormalon at u equals 1 and all that stuff as well?

IAIN STEWART: No, it's just the u equals 1/2 pole.

AUDIENCE: So [INAUDIBLE].

IAIN STEWART: Yeah. Yeah.

AUDIENCE: So what happened to the [INAUDIBLE]?

IAIN STEWART: Right. So what we've done here-- and actually, I'm just going to come to this, but I'll foreshadow. So what we've done here is we set up a construction to remove the u equals 1/2 pole. And we did not remove any higher poles.

We said the u equals 1/2 is the most important. Let's get rid of that one from the MS-bar mass. There would be, in principle, higher ones. And higher ones, basically, we had something that was proportional to R. That's for u equals 1/2.

If you wanted to remove higher ones, you'd need terms proportional to R squared, for example, for u equals 1, et cetera. So we could have added more terms to do more, but we didn't. We just sort of removed the problem at u equals 1/2. And that's why, when we go through this, you only see a u equals 1/2 pole.

AUDIENCE: So if you wanted to do [INAUDIBLE] analysis or something, you'd have to change the result of this--

IAIN STEWART: Yeah. So the way you should think about it is that basically what you're doing is you're setting up a scheme change that perturbatively takes you out of MS-bar towards something else. You remove the u equals 1/2, and you get rid of that problem. If you thought you had enough perturbative accuracy that you were seeing problems related to u equals 1, then you could make further scheme change with another series that's proportional to R squared to get rid of that one.

AUDIENCE: Wait, so the Landau pole would tell you about all the renormalons?

IAIN STEWART: It would tell you about these ones, too. Yeah, absolutely. So you can actually generalize this for R to the p. And you can derive a sum rule for R to the p. And it's a slightly more complicated formula here. And then you'd have the p over 2 pole.

All right, OK, so the sum rule is a probe, alternate probe, for the renormalon. The nf bubbles are the classic one that people know about. And this provides an alternate.

The nice thing about this is it also provides a series whose convergence you can look at, whereas the nf bubbles just give you one result. See, nf bubbles are good for finding out whether or not this thing has a renormalon. If you get some non-zero result, then you expect that it does. This way you could actually calculate the residue if you wanted to know that value for some reason.

OK. Now, that actually might be useful, for example, if you wanted to think about higher renormalon poles because you might want to say, well, let me get rid of the first one with whatever residue I can best approximate for and see if there's another one underneath. There's not really enough perturbative information that we have about QCD to be able to do that type of thing. There's a few cases where u equals 1/2 is absent, and then you can do that kind of thing.

I mean, there's actually many cases where u equals 1/2 is absent, and you can look for u equals 1. But as for looking for a sub-leading renormalon, nobody's found one. People sometimes speculate about them because maybe the residue of the first one happens to be small. This would allow you to test for that. In tau decays, people talk about that.

OK, so let's come back and actually show you how you can use this technology for some phenomenological stuff. And I want to do that by generalizing our discussion slightly away from masses towards just a general operator product expansion. So everything that we've said here for masses can be applied more generally to an operator product expansion.

So let me show you what I mean. So let's first consider what I mean by an operator product expansion at MS-bar. So you can think of this like maybe I'm integrating out a heavy particle.

And I'm writing a formula down, and it has some Wilson coefficients. It has some operators. And it's a Taylor series in this parameter q, which is the hard scale, high energy scale.

And I'm basically just expanding, if you like, in lambda QCD over Q. And the operators, they're set by lambda QCD. So I'm going to take this guy to be dimensionless. This is some dimensionless observable.

You can always make it dimensionless by suitably choosing it, multiplying by appropriate mass scales. So then this here will be a dimensionless Wilson coefficient. And it's a Wilson coefficient in MS-bar. That's what the bar means.

And so this is an MS-bar operator, MS-bar matrix element of some operator. And it's also dimensionless. So this guy will be dimensionless, too. And finally, this guy will have dimension equals 1.

Now, at MS-bar, what's beautiful about MS-bar is that the series for these coefficients are very simple. That's why people like to compute in MS-bar. So if you look at what the series for C bar looks like, the form of it is as follows.

It's got dependence on mu over Q and alpha s of mu. And these guys here are not an arbitrary function of mu over Q, but they're really just some logs. So this looks like a series of logs.
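In symbols, the form being described is something like this (the b-bar-n notation is the one used again later in the lecture; the sketch itself is not copied from the board):

```latex
\overline{C}_0\!\left(\frac{\mu}{Q},\alpha_s(\mu)\right)
\;=\; \sum_{n\ge 0} \bar b_n\!\left(\frac{\mu}{Q}\right)\,\alpha_s^{\,n}(\mu)\,,
\qquad
\bar b_n\!\left(\frac{\mu}{Q}\right) \;=\; \text{a polynomial in } \ln\frac{\mu}{Q}\,.
```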

So in MS-bar, you're only seeing these logarithms. So that's good from a computational point of view, but we also know that generically, in this MS-bar approach, you might get sensitivity to renormalons.

Also, another thing we like about MS-bar is that it satisfies all the symmetries we want to keep. It doesn't mess them up. So it's Lorentz and gauge invariant, which is always a good thing.

And not only is it simple for multi-loop computations, but multi-loop computations are not even done in other schemes because they're just too difficult. So it really makes multi-loop computations possible. And the but, there's only really this one but, that it has this renormalon issue.

So we talked about renormalon for a mass. And I just showed you what it was, and then we explored it. Here, let me tell you what the renormalon would be.

So generically, what the renormalon would be is the following. You would have an ambiguity in the coefficient C0 bar that's of the size lambda QCD over whatever the scale Q is. And you would have the same ambiguity in this theta 1 bar.

So this theta 1 bar, which is an MS-bar matrix element has an ambiguity, which is of order lambda QCD. This guy has an ambiguity. And those ambiguities cancel.

So you're dividing up the physics again in MS-bar into short distance physics and long distance physics. And the interesting thing is that there's a piece of long distance physics trapped in the C0 that, if you really wanted, should be physically in the theta 1 bar and vice versa. So really what's the problem with MS-bar is that there's a piece of something that you want in theta 1 bar, but ends up just being in the C0 bar.

And that's what I mean by this that there is something here, which is infrared sensitive. Correspondingly, there is kind of an ultraviolet sensitivity in this theta 1 bar. And you'd like to get that rearranged in an appropriate way.

And that's like this kind of rearrangement we were making by changing schemes for masses. Except here it's going to be changing schemes for Wilson coefficients. So this thing here is what we would call a u equals 1 renormalon.

So what happens with u equals 1 is that you make a connection between the leading order term and the power suppressed term. That's what happens with the delta C0 bar. So in order to flesh out a little bit better what I just said in words, let's pretend that we understand what the theory is.

And let's just pretend it's some integral. And I'll just show you what is going on here.

Let's imagine that our theory is an integral. Instead of a quantum field theory, it's a one-dimensional integral. And it has two scales, one which I'll call lambda QCD, which is some low scale, and one which I'll call Q, which is some high scale.

And I'm using a notation where the integral could diverge and I regulate it with a dimensional regularization type parameter. So I'm doing a one-dimensional integral but continuing it into epsilon dimensions to regulate any divergences.

So what MS-bar does with this integral is it just says, well, I'm not going to change the limits of the integral. I'm just going to either expand out this f or expand out this denominator. I'll do two different Taylor series, and then I'll put them together. And that's basically how I'm identifying the terms in my OPE.

So MS-bar-- so I sort of associate there being a high energy piece where k is of order Q. And then there's some low energy piece. I don't change the limit. So in the low energy piece, I keep the full function f. And then I expand out the other thing.
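One possible toy integral with this structure is the following (the specific integrand is an illustrative guess on my part, not the one on the board); the MS-bar-style treatment keeps both pieces integrated from 0 to infinity:

```latex
I(Q) \;=\; \int_0^{\infty}\! dk\; k^{-\epsilon}\,\frac{f(k,\Lambda_{\rm QCD})}{k+Q}
\;\simeq\;
\underbrace{\int_0^{\infty}\! dk\; k^{-\epsilon}\,\frac{f_{\rm expanded}(k)}{k+Q}}_{\text{expand } f,\ \ k\sim Q}
\;+\;
\underbrace{\frac{1}{Q}\int_0^{\infty}\! dk\; k^{-\epsilon}\, f(k,\Lambda_{\rm QCD})}_{\text{expand } 1/(k+Q),\ \ k\sim\Lambda_{\rm QCD}}
\;+\;\ldots
```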

And if I drop the dots in this formula, I can identify these integrals as toy models for the things that are appearing in my OPE. So let me write that OPE again.

So here these things are 1, this example. This integral here, which is over high energy scales k [INAUDIBLE] by Q is this guy. And this integral here, where k is of order lambda QCD, effectively, there's a 1 over Q, which is this 1 over Q. That's giving me the operator, theta 1 bar.

So Wilson coefficient is the high energy piece of the original thing. Matrix element here is the low energy piece which is suppressed by 1 over Q, but they both came from the same physics in the beginning. And I just was expanding it because that's how I wanted to organize it.

Now, the thing about MS-bar is that you integrate all the way down to 0 there. And you integrate all the way up to infinity here. And that's where the renormalon problems come from.

So this little procedure here separates the long and short distance physics for the logs. It does that correctly. But for powers, we effectively rely on the fact that a scaleless integral goes to 0.

That's what's forced upon us by the definition of MS-bar and dimensional regularization. And what goes wrong is basically that physically we're still integrating over regions of those integrals which are not associated to the physics that should be in those parameters.

So, now, this is a toy example, but the same thing is happening in the quantum field theory when you think about what you're doing when you write down Feynman diagram and you construct a Wilson coefficient. It's effectively the same thing in dimensional regularization. You're integrating over all values of momentum. And this little kind of toy analogy is actually apt for what's going on in a true operator product expansion.

What would happen if you did a Wilsonian picture in our toy model? Well, then you would cut off the integrals explicitly. You'd say, let's introduce some scale lambda f. I could still think of expanding it in the same way.

I don't need the dimensional regularization anymore. So I'll set that, get rid of that. I think of breaking the integral into two pieces, the low energy and high energy pieces. And in each of those pieces, I expand the integral in different ways-- so same type of expansion, I'm just regulating it differently.
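In the same toy language (again a sketch, not the board's version), the Wilsonian treatment instead splits the range at the cutoff lambda f:

```latex
I(Q) \;\simeq\;
\underbrace{\int_{\lambda_f}^{\infty}\! dk\;\frac{f_{\rm expanded}(k)}{k+Q}}_{\text{high energy: } k > \lambda_f}
\;+\;
\underbrace{\frac{1}{Q}\int_0^{\lambda_f}\! dk\; f(k,\Lambda_{\rm QCD})}_{\text{low energy: } k < \lambda_f}
\;+\;\ldots
```

so neither piece ever integrates over the region that does not belong to it, and the epsilon regulator is no longer needed.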

Now, I'm guaranteed that this is high energy and this is low energy. And you can write down sort of what this is a proxy for as a Wilsonian OPE. Mu gets replaced by the scale lambda f. Everything gets a W for Wilsonian.

Again, this is the 1 in our toy model. This guy is this guy now. This guy is this guy now.

So it's a very similar kind of association, but just the way that we're treating the integral is different. In this way of doing things, you don't have any problem with the renormalon. You just have an explicit cutoff, which says there's no low energy part of this, and there's no high energy part of that.

I'm not integrating all the way down to 0 in the first integral, so I don't have any problems with the renormalon. The renormalon came from infrared physics that was happening near k equals 0. And in this one, there's no sort of problem which would come from infinity.

So in some sense, this is what we'd like to do. And the but here is that it causes difficulties with symmetries. In particular, it generically breaks gauge invariance and Lorentz invariance. These get broken.

And the calculations that you would like to do are just too difficult. I don't know of anybody that's done-- maybe somebody's done two loops in the Wilsonian picture, but nobody's ever done three.

So in some sense, you have some nice things here. You have some nice things here. But they're not the same nice things. And you'd like to have something that does the best of both. And that's effectively what we're doing when we construct these R schemes.

What we're doing is we're saying, well, let's take MS-bar as a starting point and try to get rid of the problem that it has by perturbing towards a Wilsonian picture. So what would an R scheme OPE look like? Start with MS-bar and make a scheme change.

And make a scheme change here both to the operators and make the same scheme change to the coefficient. So I'm just really moving things around.

So let me just formally write down what such a scheme change would look like. I set up some coefficients dn. And I do a subtraction on the operators and an addition on the coefficients in the way I've written it.

So that's just a rearrangement of the physics. And we've set it up so that the result of that looks like this. I took, in this example, C1 equal to 1 for simplicity. It's a little more complicated if I want to keep it, but we could keep it if we needed to.

So basically, I've made a rearrangement to some equivalent OPE by just changing the thing, the definitions here, but I have now the freedom to pick these d's. And what I can do is I can pick the d's to have the same renormalon as the MS-bar scheme. And, therefore, I can iteratively, again, remove the first problematic term of this to get a better defined theta 1 in C0 just like we were doing for the masses.

So you can remove the u equals 1, for example. You can remove the u equals 1 renormalon by choosing the d. And that's exactly what the Wilsonian was trying to do for you.

And that's what it was doing for you with putting this cutoff. MS-bar was having problems at 0. And we're sort of perturbatively removing those problems, perturbatively going towards the Wilsonian picture. And the nice thing about this is that we can maintain the symmetries, still gauge invariant, still Lorentz invariant.

And we don't get the full sort of power of the Wilsonian, but we get closer to it. And we can, in some sense, get as close as we want, again, by sort of perturbing this order by order. So what we introduce is power law dependence in R. And that power law dependence is the analog of the power law dependence that would be in the Wilsonian scheme.

So when you have integrals like this, you would have powers. There's nothing that stops you from having a complicated function of Q over lambda. And what we're saying is that the dominant kind of power law sensitive terms here and here are captured by making this scheme change.

So we decide that we like MS-bar because we have calculations that other people have done, hopefully. If they're three-loop order, you certainly hope that someone else spent two years of their life on it rather than you. And you can take and perturb your results from MS-bar towards a gauge invariant, Lorentz invariant version of the Wilsonian picture.

So let me give you one example of that to show you in practice, putting in some numbers. Oh, I'm not quite there yet. OK, so let me give you a well-defined scheme which is analogous to the scheme that we just talked about, which was MSR scheme for the mass. There's also an MSR scheme for the OPE.

And again, the attitude is let's not calculate anything new that we don't have to. So we'll reuse the coefficients of MS-bar. Those coefficients had the renormalon. So if those coefficients were, we called them a few minutes ago, bn mu over Q, we just reuse these at a different scale.

So what we do is we say dn of mu over R is defined to be the bn of mu over R. So these guys were the MS-bar coefficients. These guys are whatever we decide to put into the series over there. Just take them to be equal. That's the MSR scheme.

And then if you look at what the coefficient is, if we just write out explicitly what the coefficient is by plugging in the series, you can again see how it's providing a cutoff.

So this was our original MS-bar series, bn of mu over Q. We introduce a subtraction, which is suppressed by R over Q times the same series bn. And this R is providing kind of like a power law cutoff on a problem that this guy had, exactly the power law cutoff.
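Writing that out with the same b-bar-n notation as before (a sketch; whether the tree-level term participates in the subtraction is a convention not being fixed here):

```latex
C_0(R,Q,\mu) \;=\; 1 \;+\; \sum_{n\ge 1}
\Big[\, \bar b_n\!\big(\tfrac{\mu}{Q}\big) \;-\; \frac{R}{Q}\,\bar b_n\!\big(\tfrac{\mu}{R}\big) \Big]\,
\alpha_s^{\,n}(\mu)\,,
```

which is the MS-bar series with an R over Q suppressed subtraction built from the very same coefficients, evaluated at R instead of Q.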

So again, if you look at the renormalon in this thing, this combined thing, the renormalon is independent of R and Q. So you would just plug in the bubble series result. You'd find, actually, that the R over Q here would cancel out. And so that these would just exactly cancel each other.

The u equals 1 [INAUDIBLE] 1. And again, if I wanted to probe other renormalons, I'd have to consider other powers besides R over Q. But those actually, in this OPE, would be connected to other operators.

So there is other sub-leading renormalons here, which could be connected to 1 over Q squared, et cetera, et cetera. And in MS-bar, that generically will happen. But usually, it suffices just to worry about the first one since many people in the community that do these multi-loop calculations don't even worry about the first one.

Anyway, OK, so you can again, after you've got this definition, you could actually formally write this as C0 bar [INAUDIBLE] Q, mu if you want to think about what it is, minus R over Q, same Wilson coefficient, replacing all the Q's by R. So that's another way of saying what the scheme change is from MS-bar results to this result.

And this R is acting like an IR cutoff to get rid of the bad problem. And again, we have a renormalization group. So R d by dR of this Wilson coefficient, it's convenient to think about that renormalization group with the log cutoff mu set equal to the power law cutoff R.

And it's very much like our masses. There's some function with an R in front. And we could formally write down solutions to it. And we can explicitly plug in results given how many terms in the series we know in the MS-bar series.

So all the anomalous dimensions of this Wilson coefficient would be determined by the MS-bar series, the original one, because the scheme that redefining is set up so that it uses those [INAUDIBLE] coefficients. So what would the RGE look like?

It's the same kind of structure as what we have for masses, some gamma functions.

And you can think of this as some C0. If you want kind of to have a more standard notation, you would say that this is some U function. And this is a bit of abuse of notation, actually, here because it's additive, not multiplicative. But if you allow me to put this guy into here, then I could always write it this way. And I'll just do that for convenience.

So I want to show you one example of how this plays out in practice. And it's related to our discussion of HQET. So one thing that we said about HQET is that you could calculate the following quantity.

So HQET makes some predictions about the B star and the B. They were connected. They were in a symmetry multiplet.

And when you took the difference of the B star and B and you took this combination, it actually is purely perturbative. Because there's a non-perturbative parameter. But if you treat the charm quark as heavy, it's the same for charm and bottom. And it cancels out.

So you can actually write down an MS-bar operator product expansion, which is technically defined in HQET. And what it looks like is the first thing is just a ratio of Wilson coefficients. So we talked about this.

And we plugged in this sort of leading log result. And we saw that it was working actually pretty well. And if you were to kind of look at higher order terms in that OPE, there would be some power corrections to that.

They've got some traditional names. The names are not too important. But this is like the analog of the 1 over Q term.

This is the 1 over Q term. This is the leading order term, which was purely perturbative. There's no matrix element here. It's just 1.

This is a higher dimension operator. This guy scales like lambda QCD cubed over lambda QCD squared. This is just lambda QCD.
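Collecting the pieces just described into one line (a sketch: the observable is presumably the ratio of B and D hyperfine splittings discussed earlier in the course, and the way the power correction is written, as a dimension-three over a dimension-two matrix element divided by the charm mass, is my paraphrase of what's being pointed at on the board):

```latex
\frac{m_{B^*}^2 - m_B^2}{m_{D^*}^2 - m_D^2}
\;=\;
\frac{\overline{C}(m_b,\mu)}{\overline{C}(m_c,\mu)}
\;+\;
\frac{1}{m_c}\Big(\text{matrix elements}\ \sim \tfrac{\Lambda_{\rm QCD}^3}{\Lambda_{\rm QCD}^2} \sim \Lambda_{\rm QCD}\Big)
\;+\;\ldots
```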

So that's like the lambda QCD over Q term. And this is the formula you would write down if you wanted to make a prediction for this thing at higher precision in MS-bar. So what happens if I think about this perturbative result at higher orders?

So people have actually calculated that coefficient to three-loop order. And here's what it looks like. So at order alpha, you get minus 0.113. At order alpha squared, you get minus 0.078.

And then you get another minus 0.0755 at alpha cubed. It doesn't look so pretty, all right? So each order that you calculate is giving you a correction about the same size as the previous order.

Well, you might say, of course, there's logs. So I should resum the logs. So let's resum the logs, so leading log. And then since I have higher order anomalous dimensions, let me put in next leading log.

And actually, I have one more order. So let me put that in, too, [? 908 ?], next-to-next-to-leading log. Uh-oh, all right? So this is the one that we actually talked about earlier.

And that one is actually close to the data. The leading log result was sort of close to what the data was. But when you think about whether that's actually right and you calculate the higher order corrections, you find that you're moving away from the data, right?

And again, it's not convergent. Summing the logs has not helped you. And that's because what we're talking about here has nothing to do with logs. It has to do with powers. It's the renormalon that's causing this problem, OK?

So we could throw up our hands and say, oh, it doesn't work. Or we could try to do something better. So let me show you what happens if we do some of this stuff that I've been telling you about.

So this guy has a u equals 1 renormalon. You can check that very easily by doing a bubble sum. Switch to the MSR scheme.

So we reuse our perturbative information in MS-bar, but just organize in a more intelligent fashion. So we define a coefficient as above, saying that this problem must have to do with that renormalon.

And then if we write down what the OPE looks like, we have, again, an OPE. And we've changed matrix element definitions as well as Wilson coefficient definitions. So I'm putting everything in this scale R0.

So that's just rewriting the OPE in this new scheme. When we look at this scheme and we think about what's going on, we want to choose the R0 to be of order lambda QCD or a bit bigger. And the reason we want to do that is to preserve power counting.

So power counting is important when we're making expansions. In some sense, it's key. And if we thought that the original MS-bar matrix element was dominated by the scale lambda QCD, we better not screw that up when we make a change of variable.

And so the kind of change of variable you're making is you're taking the original matrix element. You're subtracting a series with a proportionality constant R0. This guy was lambda QCD squared. And you want this to be not too far different than lambda QCD, so that this whole thing here is lambda QCD cubed just like this, so that this thing is still of order lambda QCD cubed.

So you don't want to do something like choose the R0 to be Mb or something because then you'd be making a huge change in the value of this matrix element. You really want to take the R0 cutoff to be of order, say, a GeV or something.

So you can use the RGE to sum up logs from that scale R0 up to the scale of MQ. So again, we can sum up logs and have a logarithmically improved result. But actually the most important thing here is getting rid of the renormalon.

I just wanted to kind of show you what the result would look like with both the RGE improvement, just sort of using the full set of tools that we have. So here's the scales. You have three physical scales, b quark mass, charm mass, lambda QCD.

You have two cutoffs. You have our R0, which I just told you you should pick to be close to lambda QCD. And then you can think that there's these other scales, Mb and M charm. If this is some low scale and that's some high scale, I'd like to sum logs.

And I can do that by running up from R0 to R1, which in this case I'll just pick between Mb and M charm. So if you did that, you'd get some result that looks like this, just putting in some notation for the resummation here.

And then this guy is at the low scale. So now, everybody's happy. The Wilson coefficients here can be expanded in perturbation theory. We don't think there's any large logs.

This U sums up any large logs between the low scales and the high scales. And this guy's living at a scale that he likes to live at. And there's no renormalon, no u equals 1 renormalon.

So what does it look like if I write down this guy? So first of all, if I look at this thing order by order, this is how the convergence looks. You get a downward shift by about the same amount as before, but the next shifts are small.

So the series actually looks like it's converging to something. So that's the first three orders if you use this approach. And if you use the complete power of this at the highest order, you can write down that you get this 0.860 from the perturbative part, which is not too far from the 0.88 or the 0.8517, which I said was close to the data.

So this actually is pretty good. You can make some error estimate for how big this thing is. And you can make some error estimate for the perturbative uncertainty from higher orders by varying scales. And you get a prediction that looks like that.

So when you want to estimate uncertainties, what you typically do with your cutoffs is you vary them. You say, well, it has to be up here, but I don't have to take it exactly there. I could take it up here or take it a little lower, move this guy around.

That gives you some idea of higher order uncertainties because the dependence on these scales cancels out order by order in perturbation theory. And so varying it is a way of getting an idea of higher order uncertainties from perturbation theory. And that's what I've done, for example, for this perturbative term here.

This is varying R1 up and down by a factor of 2. And just see kind of what residual uncertainty you have. This one's more interesting.

So that one's like mu variation. This one's more interesting. And that's because we have this other cutoff, R0. But what R0 does is it connects a leading order term to a sub-leading power term. The R0 dependence cancels between something that's order 1 in the power counting, something that's higher order in the power counting.

But when I vary R0, it still cancels between these two pieces. Because remember, if you look at the top of the board, I've included higher power terms in my Wilson coefficient. And so varying the R0, you can actually get an idea of how big this thing is, kind of in a naive dimensional analysis type way. And that gives this number here.

So R0 variation actually allows you to test for the size of the power corrections if you don't know them. And in this case, it turns out that the power correction is kind of small in this scheme. But it didn't have to be that small. It could have been 0.165 or 0.265. It happens to be 0.065, OK?

So everything looks much nicer after we just get rid of the renormalon. Someone else did the three-loop calculation. You make the prediction, and it just works in this case.

OK, so questions? So the moral of the story is that, if you really want to use calculations that are out there, you have to think about the physics. You can't just calculate, sorry.

So these renormalons, when you have kind of order alpha squared or alpha cubed information, then you have to worry about things like this. And if you didn't, what looked like it was working beautifully at leading log order might get spoiled. Your leading log might be the best prediction that you can make unless you think about this. Because at higher orders, you're just getting sensitive to this renormalon problem.

And what that means is generically this is changing and this is changing. The power correction is changing. And so at each order that this changes, I could compensate by changing this.

But it would have to be a pretty big change, right? I changed this guy by this amount. And then I have to compensate by that. Whoops, now, it even changes by a larger amount. Well, compensate by changing this by order 1 amount.

That's exactly this fact, that both things have this problem. Order by order, I can cancel out the problem, but then I can never really assign a number to this.

It's an order dependent number. Each order in perturbation theory that I use here, this number changes. And that's not really what you think of for a physical concept.

This matrix element should have a meaning that it shouldn't be changing by order 1 depending on the perturbative order that you're working. In this way of thinking, this thing is stable, much more stable anyway.

And you can assign a number to it. That's kind of the moral. All right.

AUDIENCE: Hey, Iain?

IAIN STEWART: Yeah.

AUDIENCE: That [INAUDIBLE] looks [INAUDIBLE].

IAIN STEWART: So the thing is that the convergence, it's not about the precision. It's about the convergence, right? So you would be fine in some sense. It would not be technically a problem if this was 0, right? If this guy was not much of a shift-- and this was a 15 shift. This is a 7 shift.

If this guy was small again, you'd be fine, right? Because then you'd say, OK, well, I extract this thing in the same order that I include that. And maybe this thing gives a plus 0.07, and this guy is a minus 0.07. And then I'm close to the data. And you're happy.

The issue is that you go to a higher order, and you get even bigger shift. That's the real problem, that you don't, in the perturbative part alone, have any sign of convergence. That's the issue.

So the fact that this number is sort of close to that number is, in some sense, OK because you think that this thing should generically be of that size. That's what dimensional analysis would say. In dimensional analysis, you might even say it's a little bigger. So the real issue is that you want your predictions to be stable. Yeah.

AUDIENCE: But I thought that precision should be higher than the [INAUDIBLE].

IAIN STEWART: No. This, actually, here is connected to that term. I'm saying, even if I leave this term out, I actually get something that agrees with the data. And that's because it turns out that the power correction here is of this size.

And I can get an idea of how big it is by varying the R0 in this prediction. And that changes this number by that amount. And since this thing is also changing in exactly a way that compensates that R0 dependence, I'm getting an estimate for this thing.

That's what I did there. So if I were to put this thing in, I'd have a new parameter. And I could exactly fit the data, right? But what I wrote was a little different. Yeah.

I kind of went over that quickly. All right, so that's actually it for HQET. So before we get to SCET, which is coming up after spring break, we're going to talk about one more topic, which I'll just basically introduce. And then we'll stop and continue on Thursday.

So there's going to be one more example of an effective field theory. And this is going to be an example of an effective field theory with what looks like a problem. It's got a fine tuning.

So usually, the whole notion of effective field theory is against the idea that there should be a fine tuning, because you're making dimensional analysis estimates of things. If there's a fine tuning, that means your dimensional analysis failed. In this example, we'll see that you can make one fine tuning.

And you can understand what's going on with that fine tuning and actually propagate it to change your power counting, such that it takes into account the existence of that fine tuning. It builds the whole effective theory around the idea that there was this fine tuning from the original, perhaps naive, dimension counting point of view. But we can adopt a different power counting that actually organizes the physics in exactly the right way that we want to.

And in fact, it's going to be such an easy example that we can just basically calculate all the Feynman diagrams very simply, one line. So that's what we'll do. It'll be a very simple example of an effective field theory.

And we'll prove some things about quantum mechanics that would be very difficult if we weren't using effective field theory along the way. So we'll investigate an effective theory that has a naively irrelevant operator that must be promoted to being relevant.

So by dimensional analysis, it would be irrelevant. And one way of thinking about it is that it just has such a large anomalous dimension that it actually ends up being totally relevant. And that is actually not a bad way of thinking about it.

So the example we'll talk about is something called two-nucleon non-relativistic effective field theory. So you have two nucleons, a neutron and a proton, for example. It's a bottom-up effective theory.

We're not going to be thinking about calculating nucleons in QCD. We're going to work at momenta that are so small that we actually integrate out the pion. So all exchange particles are integrated out.

So there's nothing to exchange, and you really just have contact type interactions. So something that you might think of as pion exchange between two nucleons gets represented by some local operators between the nucleons. All right, so let me stop there. And we'll come back and talk more about this theory next time.

And we'll see, first, that these operators of this type actually can organize some facts about quantum mechanics in a very nice way. And then we'll see how to think about fine tuning from this contact interaction theory and also how to think about how we want to organize the power counting, what MS-bar says, et cetera. It's kind of a fun theory.