WEBVTT
00:00:00.500 --> 00:00:02.993
The following content is
provided under a Creative
00:00:02.993 --> 00:00:03.719
Commons license.
00:00:03.719 --> 00:00:06.284
Your support will help
MIT OpenCourseWare
00:00:06.284 --> 00:00:10.680
continue to offer high quality
educational resources for free.
00:00:10.680 --> 00:00:13.632
To make a donation or
view additional materials
00:00:13.632 --> 00:00:19.068
from hundreds of MIT courses,
visit MIT OpenCourseWare
00:00:19.068 --> 00:00:21.460
at ocw.mit.edu.
00:00:21.460 --> 00:00:23.182
PROFESSOR: OK.
00:00:23.182 --> 00:00:24.904
Let's start.
00:00:24.904 --> 00:00:31.792
So we are back to calculating
partition functions
00:00:31.792 --> 00:00:34.243
for Ising models.
00:00:34.243 --> 00:00:39.304
So again some in general
hypercubic lattice
00:00:39.304 --> 00:00:44.370
where you have
variables sigma i being
00:00:44.370 --> 00:00:49.818
minus or plus 1 at each site, with
a tendency to be parallel.
00:00:49.818 --> 00:00:54.782
And our task is to calculate
the partition function which
00:00:54.782 --> 00:00:59.246
for n sites amounts
to summing over 2
00:00:59.246 --> 00:01:04.146
to the n binary configurations
of a weight that
00:01:04.146 --> 00:01:10.434
favors near neighbors to be in
the same state with a strength
00:01:10.434 --> 00:01:10.970
K.
00:01:10.970 --> 00:01:11.532
OK.
00:01:11.532 --> 00:01:16.590
So the procedure that
we are going to follow
00:01:16.590 --> 00:01:19.408
is this high
temperature expansion,
00:01:19.408 --> 00:01:27.078
that is, writing this as
hyperbolic cosine of K times 1
00:01:27.078 --> 00:01:34.671
plus t, standing for
tanh K, sigma i sigma j.
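NOTE
The bond factor used here can be checked numerically. This is a sketch I am adding (not from the lecture): since sigma i sigma j is plus or minus 1, exp(K s) = cosh(K)(1 + tanh(K) s) holds term by term.

```python
import math

# Check of the high-temperature identity used in the lecture:
# for s = sigma_i * sigma_j = +/-1,
#     exp(K * s) = cosh(K) * (1 + tanh(K) * s).
# It holds because s^2 = 1, so exp(K s) = cosh(K) + s sinh(K).
def bond_factor(K, s):
    return math.cosh(K) * (1 + math.tanh(K) * s)

K = 0.7
for s in (+1, -1):
    assert abs(math.exp(K * s) - bond_factor(K, s)) < 1e-12
```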
00:01:34.671 --> 00:01:40.479
And then as we discussed,
actually this I
00:01:40.479 --> 00:01:47.819
would have to write as a
product over all bonds.
00:01:47.819 --> 00:01:54.704
And then essentially to
each bond on this lattice
00:01:54.704 --> 00:02:03.219
I would have to either assign
1 or t sigma i, sigma j.
00:02:03.219 --> 00:02:06.590
Binary choice now
moves to bonds.
00:02:06.590 --> 00:02:12.000
And then we saw that
in order to make sure
00:02:12.000 --> 00:02:15.246
that following the
summation over sigma
00:02:15.246 --> 00:02:19.800
these factors survive, we have
to make sure that we construct
00:02:19.800 --> 00:02:24.757
graphs where out of each site
goes out an even number of bonds.
00:02:24.757 --> 00:02:27.669
And we rewrote the
whole thing as 2
00:02:27.669 --> 00:02:32.402
to the n, cosh K to the
number of bonds, which
00:02:32.402 --> 00:02:35.660
for a hypercubic lattice is dn.
00:02:35.660 --> 00:02:40.004
And then we had
a sum over graphs
00:02:40.004 --> 00:02:48.335
where we had an even
number of bonds per site.
00:02:48.335 --> 00:02:53.525
And the contribution
of the graph
00:02:53.525 --> 00:03:04.019
was basically t to the power
of the number of bonds.
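NOTE
This graphical rewriting can be sanity-checked by brute force. A sketch I am adding, using the simplest case, a one-dimensional periodic chain, where the only even graphs are the empty graph and the full ring:

```python
import itertools, math

# Brute-force check of Z = 2^n cosh(K)^{#bonds} * S on a small lattice.
# For a 1D periodic chain (ring) of n sites there are n bonds, and the
# only graphs with an even number of bonds at every site are the empty
# graph (weight 1) and the full ring (weight t^n), so S = 1 + t^n.
def ising_ring_Z(n, K):
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        E = sum(spins[i] * spins[(i + 1) % n] for i in range(n))
        Z += math.exp(K * E)
    return Z

n, K = 6, 0.4
t = math.tanh(K)
loop_sum = 1 + t ** n                       # S for the ring
Z_graphical = 2 ** n * math.cosh(K) ** n * loop_sum
assert abs(ising_ring_Z(n, K) - Z_graphical) < 1e-9
```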
00:03:04.019 --> 00:03:10.649
So I'm going to call
this sum over here s.
00:03:10.649 --> 00:03:15.959
After all, the interesting
part of the problem
00:03:15.959 --> 00:03:19.755
is captured in this factor
s; these 2 to the n cosh K
00:03:19.755 --> 00:03:22.091
to the power of dn
are perfectly well
00:03:22.091 --> 00:03:23.452
behaved analytic functions.
00:03:23.452 --> 00:03:27.700
We are looking for
something that is singular.
00:03:27.700 --> 00:03:33.554
So this s I can either pick
one from all of the factors--
00:03:33.554 --> 00:03:38.510
so to the lowest order in t
I would just get one.
00:03:38.510 --> 00:03:41.591
And then we saw that the
next order correction
00:03:41.591 --> 00:03:43.625
would be something
like a square,
00:03:43.625 --> 00:03:46.935
would be this t to the 4th.
00:03:46.935 --> 00:03:51.140
And then objects that are
more complicated versions of that.
00:03:51.140 --> 00:03:58.462
And I can write all of those
as a sum over kind of graphs
00:03:58.462 --> 00:04:04.424
that I can draw on the lattice
by going around and making
00:04:04.424 --> 00:04:05.342
a loop.
00:04:05.342 --> 00:04:11.496
But then I will have graphs that
will be composed of two loops,
00:04:11.496 --> 00:04:14.236
for example, that
are disconnected.
00:04:14.236 --> 00:04:18.971
Since this can be translated
all over the place
00:04:18.971 --> 00:04:22.832
this would have a factor
of n, for a lattice that
00:04:22.832 --> 00:04:25.951
is large, forgetting side
effects and edge effects.
00:04:25.951 --> 00:04:29.321
This would have a contribution
once I slide these two
00:04:29.321 --> 00:04:31.006
with respect to each other.
00:04:31.006 --> 00:04:33.614
That is, of the
order of n squared.
00:04:33.614 --> 00:04:35.896
And then I would
have things that
00:04:35.896 --> 00:04:41.449
would have three loops, and so forth.
00:04:41.449 --> 00:04:47.443
Now based on what we
have seen before it
00:04:47.443 --> 00:04:52.879
is very tempting to somehow
exponentiate this sum
00:04:52.879 --> 00:04:58.951
and write it as
exponential of the sum
00:04:58.951 --> 00:05:04.264
over objects that
have a single loop.
00:05:04.264 --> 00:05:12.488
And I will call this new sum
actually s prime for reasons
00:05:12.488 --> 00:05:16.689
to become apparent
because if I start
00:05:16.689 --> 00:05:19.930
to expand this
exponential, what do I get?
00:05:19.930 --> 00:05:23.066
I will certainly
start to get 1, then
00:05:23.066 --> 00:05:26.594
I will have the sum
over single loop graphs
00:05:26.594 --> 00:05:33.560
plus one half whatever is in
the exponent over here squared,
00:05:33.560 --> 00:05:40.728
1 over 3 factorial whatever
is in the exponent, which is
00:05:40.728 --> 00:05:44.620
this sum cubed, and so forth.
00:05:44.620 --> 00:05:49.282
And if I were to
expand this thing that
00:05:49.282 --> 00:05:52.390
is something squared,
I will certainly
00:05:52.390 --> 00:05:55.033
get terms in it that
would correspond
00:05:55.033 --> 00:05:57.865
to two of the terms in the sum.
00:05:57.865 --> 00:06:01.524
Here I would get things that
would correspond to three
00:06:01.524 --> 00:06:03.786
of the terms in the sum.
00:06:03.786 --> 00:06:06.050
And the combinatorial
factors would certainly
00:06:06.050 --> 00:06:10.175
work out to get rid
of the 2 and the 6,
00:06:10.175 --> 00:06:14.101
depending on-- let's say
I have loop a, b, or c,
00:06:14.101 --> 00:06:16.757
I could pick a
from the first sum
00:06:16.757 --> 00:06:19.759
here, b from the
second, c from the third
00:06:19.759 --> 00:06:22.927
or any permutation thereof,
which would amount to 6.
00:06:22.927 --> 00:06:25.759
I would have a factor of 6 abc.
00:06:25.759 --> 00:06:32.791
But s definitely is not equal to
s prime because immediately we
00:06:32.791 --> 00:06:39.315
see that once I square this term
a plus b squared, in addition
00:06:39.315 --> 00:06:44.880
to the cross term ab, I will also
get a squared and b squared,
00:06:44.880 --> 00:06:52.230
which corresponds to
objects such as this.
00:06:52.230 --> 00:07:00.571
One half of the same
graph repeated twice.
00:07:00.571 --> 00:07:01.071
Right?
00:07:01.071 --> 00:07:06.243
It is the a squared term that
is appearing in this sum.
00:07:06.243 --> 00:07:11.490
And this clearly has no
analog here in my sum s.
00:07:11.490 --> 00:07:16.790
I emphasize that in
the sum s each bond can
00:07:16.790 --> 00:07:20.580
occur either zero or one time.
00:07:20.580 --> 00:07:26.564
Whereas if I exponentiate
this you can see that s prime
00:07:26.564 --> 00:07:30.550
certainly contains things that
potentially are repeated twice.
00:07:30.550 --> 00:07:34.600
As I go further I could
have them repeated multiple times.
00:07:34.600 --> 00:07:39.820
Also when I square this
term I could potentially
00:07:39.820 --> 00:07:46.660
get one loop from the first
bracket, and another loop,
00:07:46.660 --> 00:07:52.967
a loop from the second term, the
second in this product of two
00:07:52.967 --> 00:07:56.771
brackets, which once
I multiply them happen
00:07:56.771 --> 00:08:02.276
to overlap and share something.
00:08:02.276 --> 00:08:03.377
OK?
00:08:03.377 --> 00:08:05.580
All right.
00:08:05.580 --> 00:08:10.572
So very roughly
and hand-wavingly,
00:08:10.572 --> 00:08:19.740
the thing is that s has
loops that avoid each other,
00:08:19.740 --> 00:08:21.030
don't intersect.
00:08:21.030 --> 00:08:25.545
Whereas s prime has
loops that intersect.
00:08:25.545 --> 00:08:33.312
So very naively s is
sum over, if you like,
00:08:33.312 --> 00:08:39.798
a gas of non intersecting loops.
00:08:39.798 --> 00:08:50.620
Whereas s prime is a
sum over gas of what
00:08:50.620 --> 00:08:53.385
I could call phantom loops.
00:08:53.385 --> 00:08:56.150
They're kind of like ghosts.
00:08:56.150 --> 00:08:59.470
They can go through each other.
00:08:59.470 --> 00:09:00.973
OK?
00:09:00.973 --> 00:09:03.980
All right.
00:09:03.980 --> 00:09:08.572
So that's one of a
number of problems.
00:09:08.572 --> 00:09:13.738
Now in this lecture
we are going to ignore
00:09:13.738 --> 00:09:18.160
this difference between s and
s prime and calculate s prime,
00:09:18.160 --> 00:09:21.798
which means we will
not be doing the right job.
00:09:21.798 --> 00:09:25.906
And this is one example of where
I'm not doing the right job.
00:09:25.906 --> 00:09:28.231
And as I go through
the calculation
00:09:28.231 --> 00:09:31.611
you may want to figure out
also other places where
00:09:31.611 --> 00:09:35.138
I do not correctly
reproduce the sum s in order
00:09:35.138 --> 00:09:38.000
to ultimately be able
to make a calculation
00:09:38.000 --> 00:09:39.880
and see what it comes to.
00:09:39.880 --> 00:09:44.128
Now next lecture we will try
to correct all of those errors,
00:09:44.128 --> 00:09:50.715
so you better keep track
of all of those errors,
00:09:50.715 --> 00:09:58.403
make sure that I am
doing everything right.
00:09:58.403 --> 00:10:02.260
Seems good-- all right.
00:10:02.260 --> 00:10:10.340
So let's go and take a
look at this s prime.
00:10:10.340 --> 00:10:22.183
So s prime, actually, log of
s prime is a sum over graphs
00:10:22.183 --> 00:10:28.570
that I can draw on the lattice.
00:10:28.570 --> 00:10:31.474
And since I could
exponentiate that,
00:10:31.474 --> 00:10:35.891
there was no problem in the
exponentiation involved over
00:10:35.891 --> 00:10:36.432
here.
00:10:36.432 --> 00:10:40.760
Essentially I have to,
for calculating the log,
00:10:40.760 --> 00:10:43.766
just calculate these
singly connected loops.
00:10:43.766 --> 00:10:46.726
And you can see that
that will work out fine
00:10:46.726 --> 00:10:48.510
as far as extensivity
is concerned
00:10:48.510 --> 00:10:51.174
because I could translate
each one of these figures
00:10:51.174 --> 00:10:55.900
of the loops over the lattice
and get the factors of n.
00:10:55.900 --> 00:11:00.150
So this is clearly
something that is OK.
00:11:00.150 --> 00:11:04.884
Now since I'm already making
this mistake of forgetting
00:11:04.884 --> 00:11:07.802
about the intersections
between loops
00:11:07.802 --> 00:11:13.486
I'm going to make
another assumption that
00:11:13.486 --> 00:11:19.170
is in this sum of single loops,
00:11:19.170 --> 00:11:29.786
I already include things
where the single loop
00:11:29.786 --> 00:11:37.266
is allowed to intersect itself.
00:11:37.266 --> 00:11:48.034
For example, I'm going to
allow as a single loop entity
00:11:48.034 --> 00:11:54.681
something that is like this
where this particular bond will
00:11:54.681 --> 00:11:57.687
give a factor of t squared.
00:11:57.687 --> 00:12:02.700
Clearly I should not include
this in the correct sum.
00:12:02.700 --> 00:12:05.010
But since I'm
ignoring intersections
00:12:05.010 --> 00:12:09.180
among the different loops
and I'm making it phantom,
00:12:09.180 --> 00:12:12.672
let's make it also phantom
with respect to itself,
00:12:12.672 --> 00:12:15.010
allow it to intersect
itself [INAUDIBLE].
00:12:15.010 --> 00:12:19.530
It's in the same
spirit of the mistake
00:12:19.530 --> 00:12:22.930
that we are going to make.
00:12:22.930 --> 00:12:27.070
So now what do we have?
00:12:27.070 --> 00:12:33.280
We can write that
log of s prime is
00:12:33.280 --> 00:12:40.170
this sum over loops of
length l and then multiply it
00:12:40.170 --> 00:12:42.920
by t to the l.
00:12:42.920 --> 00:12:49.969
So basically I say that I can
draw loops, let's say of size
00:12:49.969 --> 00:12:52.164
four, loops of size six.
00:12:52.164 --> 00:12:57.871
Each one of them I have
to multiply by t to the l.
00:12:57.871 --> 00:13:09.500
Of course, I have to multiply
by the number of loops
00:13:09.500 --> 00:13:12.857
of length l.
00:13:12.857 --> 00:13:13.980
OK?
00:13:13.980 --> 00:13:17.932
And this I'm going to
write slightly differently.
00:13:17.932 --> 00:13:22.872
So I'm going to say
that log of s prime
00:13:22.872 --> 00:13:26.628
is sum over this
length of the loop.
00:13:26.628 --> 00:13:30.340
All the loops of
length l are going
00:13:30.340 --> 00:13:35.477
to contribute a
factor of t to the l.
00:13:35.477 --> 00:13:40.886
And I'm going to count
the loops of length
00:13:40.886 --> 00:13:45.712
l that start and
end at the origin.
00:13:45.712 --> 00:13:49.960
And I'll give that
a symbol, W of l, 0 0.
00:13:49.960 --> 00:13:54.376
Actually, very soon
I will introduce
00:13:54.376 --> 00:13:57.320
a generalization of this.
00:13:57.320 --> 00:14:01.736
So let me write the definition.
00:14:01.736 --> 00:14:06.128
I define a matrix
that is indexed
00:14:06.128 --> 00:14:10.992
by two sites of the
lattice and counts
00:14:10.992 --> 00:14:18.569
the number of walks that I can
have from one to the other,
00:14:18.569 --> 00:14:22.566
from i to j in l steps.
00:14:22.566 --> 00:14:36.727
So this is number of walks
of length l from i to j.
00:14:36.727 --> 00:14:38.260
OK?
00:14:38.260 --> 00:14:43.408
So what am I doing here?
00:14:43.408 --> 00:14:50.272
Since I am looking at
single loop objects,
00:14:50.272 --> 00:14:56.150
I want to sum over,
let's say, all terms
00:14:56.150 --> 00:14:59.705
that contribute, in
this case t to the 4.
00:14:59.705 --> 00:15:02.704
It's obvious because it's
really just one shape.
00:15:02.704 --> 00:15:06.920
It's a square, but this square
I could have started anywhere
00:15:06.920 --> 00:15:09.131
on the lattice.
00:15:09.131 --> 00:15:15.027
And this factor of n, which
captures the extensivity,
00:15:15.027 --> 00:15:19.592
I'll take outside because
I expect this log of s
00:15:19.592 --> 00:15:20.600
to be extensive.
00:15:20.600 --> 00:15:22.616
It should be proportional to n.
00:15:22.616 --> 00:15:25.210
So one part of it
is essentially where
00:15:25.210 --> 00:15:27.094
I start to draw this loop.
00:15:27.094 --> 00:15:31.224
So I say that I always
start with the loops
00:15:31.224 --> 00:15:36.040
that I have at point zero.
00:15:36.040 --> 00:15:39.344
Then I want to come
back to myself.
00:15:39.344 --> 00:15:43.887
So I indicate that the end
point should also be zero.
00:15:43.887 --> 00:15:46.238
And if I want to
get a term here,
00:15:46.238 --> 00:15:48.648
this is a term that
is t to the fourth.
00:15:48.648 --> 00:15:52.220
I need to know how many
of such blocks I have.
00:15:52.220 --> 00:15:52.720
Yes?
00:15:52.720 --> 00:15:55.960
AUDIENCE: Are you allowing
the loop to intersect itself
00:15:55.960 --> 00:15:57.760
in this case or not?
00:15:57.760 --> 00:15:59.395
PROFESSOR: In this case, yes.
00:15:59.395 --> 00:16:01.684
I will also, when I'm
calculating anything
00:16:01.684 --> 00:16:04.393
to do with s prime I
allow intersection.
00:16:04.393 --> 00:16:08.144
So if you are asking whether I'm
allowing something like this,
00:16:08.144 --> 00:16:09.200
the answer is yes.
00:16:09.200 --> 00:16:10.200
AUDIENCE: OK.
00:16:10.200 --> 00:16:11.200
PROFESSOR: Yeah.
00:16:11.200 --> 00:16:15.320
AUDIENCE: And are we assuming
an infinitely large system so
00:16:15.320 --> 00:16:15.820
that--
00:16:15.820 --> 00:16:16.392
PROFESSOR: Yes.
00:16:16.392 --> 00:16:16.964
That's right.
00:16:16.964 --> 00:16:20.110
So that the edge effects, you
don't have to worry about.
00:16:20.110 --> 00:16:24.671
Or alternatively you can imagine
that you have periodic boundary
00:16:24.671 --> 00:16:25.170
conditions.
00:16:25.170 --> 00:16:27.335
And with periodic
boundary conditions
00:16:27.335 --> 00:16:30.985
we can still slide it
all over the place.
00:16:30.985 --> 00:16:31.485
OK?
00:16:31.485 --> 00:16:34.588
But clearly then the
maximal size of these loops,
00:16:34.588 --> 00:16:36.532
et cetera, will be
determined potentially
00:16:36.532 --> 00:16:40.490
by the size of the lattice.
00:16:40.490 --> 00:16:44.994
Now this is not entirely
correct because there
00:16:44.994 --> 00:16:47.246
is an over counting.
00:16:47.246 --> 00:16:51.715
This one square that
I have drawn over here,
00:16:51.715 --> 00:16:54.802
I could have started
from this point,
00:16:54.802 --> 00:16:57.010
or this point, or this point.
00:16:57.010 --> 00:17:01.386
And essentially for
something that has length l,
00:17:01.386 --> 00:17:05.769
I would have had l
possible starting points.
00:17:05.769 --> 00:17:11.453
So in order to avoid the over
counting I have to divide by l.
00:17:11.453 --> 00:17:16.595
And in fact I could have started
walking along this direction,
00:17:16.595 --> 00:17:20.039
or alternatively
I could have gone
00:17:20.039 --> 00:17:22.339
in the clockwise direction.
00:17:22.339 --> 00:17:24.796
So there's two
orientations to the walk
00:17:24.796 --> 00:17:27.604
that will take me
from the origin back
00:17:27.604 --> 00:17:30.080
to the origin in l steps.
00:17:30.080 --> 00:17:33.940
And not to do the over counting,
I have to divide by 2l.
00:17:33.940 --> 00:17:34.440
Yes?
00:17:34.440 --> 00:17:38.528
AUDIENCE: If we allow
walking over ourselves
00:17:38.528 --> 00:17:42.620
is it always a degeneracy of 2l?
00:17:42.620 --> 00:17:43.418
PROFESSOR: Yes.
00:17:43.418 --> 00:17:47.009
You can go and do the
calculation to convince
00:17:47.009 --> 00:17:52.213
yourself that even for
something as convoluted as that,
00:17:52.213 --> 00:17:55.420
the factor is 2l, too.
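NOTE
The 2l over-counting can be checked by brute force. This is illustrative code I am adding (not from the lecture), enumerating closed 4-step walks on the square lattice:

```python
from itertools import product

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def closed_walks(l):
    """All l-step walks on Z^2 starting and ending at the origin."""
    walks = []
    for seq in product(STEPS, repeat=l):
        pos, path = (0, 0), [(0, 0)]
        for dx, dy in seq:
            pos = (pos[0] + dx, pos[1] + dy)
            path.append(pos)
        if pos == (0, 0):
            walks.append(path)
    return walks

walks4 = closed_walks(4)
# W_4(0,0): all closed 4-step walks, including back-and-forth ones.
assert len(walks4) == 36
# Keep only true loops: no site visited twice before the final return.
loops = [w for w in walks4 if len(set(w[:-1])) == 4]
# From a fixed origin, the 4 unit squares touching it are each traced
# in 2 directions, giving 8 walks for 4 shapes.  Summing over origins,
# each square is also counted once per corner, so each loop of length
# l = 4 is over-counted by 2l = 8 in total.
assert len(loops) == 8
assert len({frozenset(w) for w in loops}) == 4
```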
00:17:55.420 --> 00:17:56.628
PROFESSOR: OK.
00:17:56.628 --> 00:18:01.460
So this is what we
want to calculate.
00:18:01.460 --> 00:18:06.046
Well, it turns out that
this entity actually
00:18:06.046 --> 00:18:07.776
shows up somewhere else also.
00:18:07.776 --> 00:18:12.897
So let me tell you why I wanted
to write a more general thing.
00:18:12.897 --> 00:18:16.769
Another quantity that
I can try to calculate
00:18:16.769 --> 00:18:19.190
is the spin spin correlation.
00:18:19.190 --> 00:18:24.998
I can pick spin zero
here and say spin r here,
00:18:24.998 --> 00:18:26.582
some other location.
00:18:26.582 --> 00:18:32.086
And I want to calculate
what is the correlation
00:18:32.086 --> 00:18:34.942
between these two spins.
00:18:34.942 --> 00:18:35.660
OK?
00:18:35.660 --> 00:18:39.800
So how do I have to do
that for the Ising model?
00:18:39.800 --> 00:18:42.775
I have to essentially sum
over all configurations
00:18:42.775 --> 00:18:47.215
with an additional factor
of sigma zero sigma
00:18:47.215 --> 00:18:58.508
r of this weight, e to the k,
sum over ij, sigma i, sigma j,
00:18:58.508 --> 00:19:04.990
appropriately normalized,
of course, by the partition
00:19:04.990 --> 00:19:05.920
function.
00:19:05.920 --> 00:19:08.552
And I can make the
same transformation
00:19:08.552 --> 00:19:12.901
that I have on the first line
of these exponential factors
00:19:12.901 --> 00:19:18.130
to write this as a
sum over sigma i,
00:19:18.130 --> 00:19:23.360
I have sigma zero, sigma
r, and then product
00:19:23.360 --> 00:19:28.950
over all bonds of these factors
of one plus t sigma i, sigma j.
00:19:28.950 --> 00:19:32.886
The factors of 2 to the
n cosh K will cancel out
00:19:32.886 --> 00:19:34.860
between the numerator
and the denominator.
00:19:34.860 --> 00:19:41.324
And basically I will
get the same thing.
00:19:41.324 --> 00:19:48.114
Now of course the denominator
is the partition function.
00:19:48.114 --> 00:19:51.039
It is the sum s
that we are after,
00:19:51.039 --> 00:19:55.580
but we can also, and we've seen
this already how to do this,
00:19:55.580 --> 00:19:58.380
express the sum in the
numerator graphically.
00:19:58.380 --> 00:20:02.176
And the difference between the
numerator and the denominator
00:20:02.176 --> 00:20:06.217
is that I have an additional
sigma sitting here,
00:20:06.217 --> 00:20:08.962
an additional sigma
sitting there,
00:20:08.962 --> 00:20:15.110
that if left by themselves
will average out to 0.
00:20:15.110 --> 00:20:18.894
So I need to connect
them by paths
00:20:18.894 --> 00:20:23.151
that are composed of
factors of t sigma sigma,
00:20:23.151 --> 00:20:28.816
originating on one and
ending on the other.
00:20:28.816 --> 00:20:29.557
Right?
00:20:29.557 --> 00:20:38.202
So in the same sense that what
is appearing in the denominator
00:20:38.202 --> 00:20:42.514
is a sum that
involves these loops,
00:20:42.514 --> 00:20:47.418
the first term that
appears in the numerator
00:20:47.418 --> 00:20:52.042
is a path that
connects zero to r
00:20:52.042 --> 00:20:56.272
through some combination
of these factors of t,
00:20:56.272 --> 00:21:01.230
and then I have to sum over all
possible ways of doing that.
00:21:01.230 --> 00:21:04.898
But then I could
certainly have graphs
00:21:04.898 --> 00:21:07.518
that involve the same thing.
00:21:07.518 --> 00:21:11.540
And the loop, there
is nothing that
00:21:11.540 --> 00:21:17.910
is against that, or the
same thing, and two loops,
00:21:17.910 --> 00:21:19.821
and so forth.
00:21:19.821 --> 00:21:25.280
And you can see that as
long as, and only as long
00:21:25.280 --> 00:21:28.206
as I treat these
as phantom objects
00:21:28.206 --> 00:21:33.149
that can pass through each
other I can factor out this term
00:21:33.149 --> 00:21:37.692
and the rest, 1 plus
one loop plus two loops,
00:21:37.692 --> 00:21:41.309
is exactly what I have
in the denominator.
00:21:41.309 --> 00:21:49.265
So we see that under the
assumption of phantomness,
00:21:49.265 --> 00:21:55.453
if phantom then this
becomes really just
00:21:55.453 --> 00:22:02.810
the sum over all paths
that go from 0 to r.
00:22:02.810 --> 00:22:07.978
And of course the
contribution of each path
00:22:07.978 --> 00:22:12.874
is how many factors of t I have.
00:22:12.874 --> 00:22:13.479
Right?
00:22:13.479 --> 00:22:21.642
So I have to have a sum over
the length of this path l.
00:22:21.642 --> 00:22:26.152
Paths of length l will
contribute a factor of t
00:22:26.152 --> 00:22:27.505
to the l.
00:22:27.505 --> 00:22:30.346
But there are
potentially multiple ways
00:22:30.346 --> 00:22:34.970
to go from 0 to r in l steps.
00:22:34.970 --> 00:22:37.709
How many ways?
00:22:37.709 --> 00:22:45.926
That's precisely what
I called this W of l, 0 r.
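NOTE
The phantom estimate of the correlation, the sum over l of t to the l times the number of l-step walks from 0 to r, can be sketched with a small dynamic program counting Markovian walks. This is my own illustrative code, truncating the sum at some maximum length:

```python
from collections import defaultdict

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def walk_counts(lmax):
    """counts[l][site] = number of l-step walks on Z^2 from the origin."""
    counts = [defaultdict(int) for _ in range(lmax + 1)]
    counts[0][(0, 0)] = 1
    for l in range(1, lmax + 1):
        for (x, y), m in counts[l - 1].items():
            for dx, dy in STEPS:
                counts[l][(x + dx, y + dy)] += m
    return counts

def correlation(r, t, lmax=20):
    """Phantom-walk estimate of <sigma_0 sigma_r>: sum_l t^l W_l(0, r)."""
    counts = walk_counts(lmax)
    return sum(t ** l * counts[l][r] for l in range(lmax + 1))

counts = walk_counts(3)
assert counts[1][(1, 0)] == 1      # one 1-step walk to a neighbor
assert counts[3][(1, 0)] == 9      # nine 3-step walks to the same site
```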
00:22:45.926 --> 00:22:46.840
Yes?
00:22:46.840 --> 00:22:50.606
AUDIENCE: Why does
a graph that goes
00:22:50.606 --> 00:22:56.000
from 0 to r in three different
ways have [INAUDIBLE]?
00:22:56.000 --> 00:22:56.896
PROFESSOR: OK.
00:22:56.896 --> 00:23:01.824
So you want to have
to go from 0 to r,
00:23:01.824 --> 00:23:05.852
you want to have a
single path, and then
00:23:05.852 --> 00:23:09.740
you want that path to
do something like this?
00:23:09.740 --> 00:23:10.880
AUDIENCE: Yeah.
00:23:10.880 --> 00:23:12.590
That doesn't [INAUDIBLE].
00:23:12.590 --> 00:23:14.351
PROFESSOR: That's fine.
00:23:14.351 --> 00:23:17.873
If I invoke the
phantomness condition
00:23:17.873 --> 00:23:21.990
this is the same
as this multiplied
00:23:21.990 --> 00:23:26.687
by this, which is a term that
appears in the denominator
00:23:26.687 --> 00:23:27.980
and cancels out.
00:23:27.980 --> 00:23:31.226
AUDIENCE: But you're
assuming that you
00:23:31.226 --> 00:23:33.390
have the phantom condition.
00:23:33.390 --> 00:23:38.120
So this is completely normal.
00:23:38.120 --> 00:23:45.800
It doesn't matter.
00:23:45.800 --> 00:23:48.296
PROFESSOR: I'm not sure I
understand your question.
00:23:48.296 --> 00:23:50.955
You say that even without
the phantom condition
00:23:50.955 --> 00:23:52.380
this graph exists.
00:23:52.380 --> 00:23:54.320
AUDIENCE: With them
phantom condition--
00:23:54.320 --> 00:23:55.360
PROFESSOR: Yes.
00:23:55.360 --> 00:23:58.220
AUDIENCE: --this graph
is perfectly normal.
00:23:58.220 --> 00:24:06.008
PROFESSOR: Even without
the phantom condition
00:24:06.008 --> 00:24:12.498
this is an acceptable graph.
00:24:12.498 --> 00:24:13.796
Yeah.
00:24:13.796 --> 00:24:15.094
OK.
00:24:15.094 --> 00:24:16.400
Yeah?
00:24:16.400 --> 00:24:20.582
AUDIENCE: So what
does phantomness mean?
00:24:20.582 --> 00:24:26.160
Why then we can simplify
only a [INAUDIBLE]?
00:24:26.160 --> 00:24:27.682
PROFESSOR: OK.
00:24:27.682 --> 00:24:36.053
Because let's say I were
to take this as a check
00:24:36.053 --> 00:24:40.435
and multiply it by
the denominator.
00:24:40.435 --> 00:24:45.035
The question is, would
I generate a series
00:24:45.035 --> 00:24:47.910
that is in the numerator?
00:24:47.910 --> 00:24:48.485
OK?
00:24:48.485 --> 00:24:53.678
So if I take this object that
I have said is the answer,
00:24:53.678 --> 00:24:57.037
I have to multiply
it by this object.
00:24:57.037 --> 00:25:03.049
And make sure that it reproduces
correctly the numerator.
00:25:03.049 --> 00:25:06.755
The question is, when does it?
00:25:06.755 --> 00:25:10.049
I mean, certainly when
I multiply this by this,
00:25:10.049 --> 00:25:14.204
I will get the possibility of
having a graph such as this.
00:25:14.204 --> 00:25:17.691
And from here, I can
have a loop such as this.
00:25:17.691 --> 00:25:22.101
And the two of them would
share a bond such as that.
00:25:22.101 --> 00:25:26.131
So in the real Ising
model, that is not allowed.
00:25:26.131 --> 00:25:31.308
So that's the
phantomness condition
00:25:31.308 --> 00:25:39.652
that allows me to
factor these things.
00:25:39.652 --> 00:25:40.844
OK?
00:25:40.844 --> 00:25:43.240
All right.
00:25:43.240 --> 00:25:50.611
So we see that if I have this
quantity that I have written
00:25:50.611 --> 00:25:55.660
in red, then I can calculate
both correlation functions,
00:25:55.660 --> 00:26:01.130
as well as the free energy,
log of the partition
00:26:01.130 --> 00:26:04.175
function within this
phantomness assumption.
00:26:04.175 --> 00:26:09.167
So the question is,
can I calculate that?
00:26:09.167 --> 00:26:12.309
And the answer is that
calculating number
00:26:12.309 --> 00:26:16.089
of random walks is one
of the most basic things
00:26:16.089 --> 00:26:20.430
that one does in
statistical physics,
00:26:20.430 --> 00:26:24.380
and easily accomplished
as follows.
00:26:24.380 --> 00:26:32.432
Basically, I say that, OK,
let's say that I start from 0,
00:26:32.432 --> 00:26:38.665
actually let's do it
0 and r, and let's
00:26:38.665 --> 00:26:46.750
say that I have looked at
all possible paths that have
00:26:46.750 --> 00:26:53.210
l steps and end up over here.
00:26:53.210 --> 00:26:57.190
So this is step one, step
two, step number three.
00:26:57.190 --> 00:27:00.779
And the last one, step
l, I have purposely
00:27:00.779 --> 00:27:03.049
drawn as a dotted line.
00:27:03.049 --> 00:27:06.681
Maybe I will pull this
point further down
00:27:06.681 --> 00:27:10.394
to emphasize that
this is the last one.
00:27:10.394 --> 00:27:14.615
This, l minus 1,
is the previous one.
00:27:14.615 --> 00:27:21.124
So I can certainly
state that the number of
00:27:21.124 --> 00:27:29.460
walks from 0 to r in l steps.
00:27:29.460 --> 00:27:34.320
Well, any walk that
got to r in l steps,
00:27:34.320 --> 00:27:39.190
at the l minus one step
had to be somewhere.
00:27:39.190 --> 00:27:39.811
OK?
00:27:39.811 --> 00:27:49.747
So what I do is I take the number
of walks from 0 to r prime,
00:27:49.747 --> 00:27:58.150
I'll call this point r
prime, in l minus one steps.
00:27:58.150 --> 00:28:07.830
And then times the number
of ways, or number of walks
00:28:07.830 --> 00:28:12.833
from r prime to r in one step.
00:28:12.833 --> 00:28:18.134
So before I reach my
destination the previous step
00:28:18.134 --> 00:28:22.189
I had to have been
somewhere, I sum
00:28:22.189 --> 00:28:27.170
over all possible various places
where that somewhere has to be,
00:28:27.170 --> 00:28:30.869
and then I have to
make sure that I
00:28:30.869 --> 00:28:35.542
can reach from that somewhere
in one step to my destination.
00:28:35.542 --> 00:28:38.919
That's all that sum is, OK?
00:28:38.919 --> 00:28:43.098
Now I can convert
that to mathematics.
00:28:43.098 --> 00:28:50.098
This quantity is start from
0, take l steps, arrive at r.
00:28:50.098 --> 00:28:52.643
By definition that's the number.
00:28:52.643 --> 00:29:00.238
And what it says is that this
should be a sum over r prime,
00:29:00.238 --> 00:29:08.194
start from 0, take l minus
1 steps, arrive at r prime.
00:29:08.194 --> 00:29:13.471
Start from r prime,
take one step,
00:29:13.471 --> 00:29:22.300
arrive at your destination
r, sum over r prime.
00:29:22.300 --> 00:29:23.289
OK?
00:29:23.289 --> 00:29:30.105
Now these are n by n matrices
that are labeled by l.
00:29:30.105 --> 00:29:30.673
Right?
00:29:30.673 --> 00:29:37.549
So these, being n by n matrices,
this summation over r prime
00:29:37.549 --> 00:29:40.439
is clearly a matrix
multiplication.
00:29:40.439 --> 00:29:45.765
So what that says is
that summing over r prime
00:29:45.765 --> 00:29:51.797
tells you that that is
W of l minus 1 times W of 1.
00:29:51.797 --> 00:29:56.359
And that is true for any
pair of elements, starting
00:29:56.359 --> 00:29:57.703
and final points.
00:29:57.703 --> 00:29:59.943
So basically, quite
generically, we
00:29:59.943 --> 00:30:03.800
see that w, the matrix
that corresponds
00:30:03.800 --> 00:30:10.919
to the count for l steps, is
obtained from the matrix that
00:30:10.919 --> 00:30:15.087
corresponds to the count
for one step multiplying
00:30:15.087 --> 00:30:18.213
that of l minus 1 steps.
00:30:18.213 --> 00:30:23.064
And clearly I can keep
going. W of l minus 1,
00:30:23.064 --> 00:30:30.820
I can write as w 1, w of
l minus 2, and so forth.
00:30:30.820 --> 00:30:33.760
And ultimately the
answer is none other
00:30:33.760 --> 00:30:37.549
than the entity that
corresponds to one step raised
00:30:37.549 --> 00:30:39.353
to the l power.
00:30:39.353 --> 00:30:43.412
And just to make things
easier on my writing,
00:30:43.412 --> 00:30:52.560
I will indicate this as
t to the l where t stands
00:30:52.560 --> 00:30:59.748
for this matrix for one step.
00:30:59.748 --> 00:31:00.950
OK?
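NOTE
The statement that the l-step count is the one-step matrix raised to the l power can be verified numerically. A sketch I am adding, on a small periodic square lattice:

```python
import numpy as np

L = 6                                # 6x6 periodic square lattice
N = L * L
idx = lambda x, y: (x % L) * L + (y % L)

# One-step matrix: T[i, j] = 1 if sites i and j are nearest neighbors.
T = np.zeros((N, N), dtype=int)
for x in range(L):
    for y in range(L):
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            T[idx(x, y), idx(x + dx, y + dy)] = 1

assert (T.sum(axis=1) == 4).all()    # 4 neighbors per site in d = 2

# W_l = T^l: number of l-step walks between any two sites.
W4 = np.linalg.matrix_power(T, 4)
# Closed 4-step walks from a site (no wrap-around at this size): 36,
# the back-and-forth walks plus the 8 walks around unit squares.
assert W4[idx(0, 0), idx(0, 0)] == 36
```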
00:31:00.950 --> 00:31:03.925
This condition over
here that I said
00:31:03.925 --> 00:31:10.074
in words that allows me to write
this in this nice matrix form
00:31:10.074 --> 00:31:13.734
is called the
Markovian condition.
00:31:13.734 --> 00:31:20.330
The kind of walks that
I have been telling
00:31:20.330 --> 00:31:24.320
you about are Markovian in the
sense that they only
00:31:24.320 --> 00:31:28.598
depend on where you came
from at the last step.
00:31:28.598 --> 00:31:33.989
They don't have memory of
where you had been before.
00:31:33.989 --> 00:31:37.133
And that's what
enables us to do this.
00:31:37.133 --> 00:31:42.399
And that's why I had to impose the
phantom condition because if I
00:31:42.399 --> 00:31:45.943
really wanted to say
that something like this
00:31:45.943 --> 00:31:49.478
has to be excluded,
then the walk must
00:31:49.478 --> 00:31:53.515
keep memory of every place
that it had been before.
00:31:53.515 --> 00:31:54.015
Right?
00:31:54.015 --> 00:31:56.500
And then it would
be non Markovian.
00:31:56.500 --> 00:32:01.230
Then I wouldn't have been able
to do this nice calculation.
00:32:01.230 --> 00:32:06.559
That's why I had to make this
phantomness assumption so
00:32:06.559 --> 00:32:15.062
that I forgot the memory of
where my walk was previously.
00:32:15.062 --> 00:32:15.840
OK?
00:32:15.840 --> 00:32:17.260
Now...
00:32:17.260 --> 00:32:18.680
Yeah?
00:32:18.680 --> 00:32:20.100
Question?
00:32:20.100 --> 00:32:21.520
No?
00:32:21.520 --> 00:32:28.010
So this matrix, where
you can go in one step,
00:32:28.010 --> 00:32:34.678
is really the matrix of
who is connected to whom.
00:32:34.678 --> 00:32:35.496
Right?
00:32:35.496 --> 00:32:40.404
So this tells you
the connectivity.
00:32:40.404 --> 00:32:47.766
So for example, if
I'm dealing with a 2D
00:32:47.766 --> 00:32:57.146
square lattice, the sites on my
lattice are labeled by x and y.
00:32:57.146 --> 00:33:04.429
And I can ask, where
can I go in one step
00:33:04.429 --> 00:33:08.069
if I start from x and y?
00:33:08.069 --> 00:33:15.461
And the answer is that either
x stays the same and y
00:33:15.461 --> 00:33:26.329
shifts by one, or y stays
the same and x shifts by one.
00:33:26.329 --> 00:33:31.801
These are the four
non zero elements
00:33:31.801 --> 00:33:37.547
of the matrix that allows you to
go on the square lattice either
00:33:37.547 --> 00:33:39.385
up, down, right, or left.
00:33:39.385 --> 00:33:41.275
And the corresponding
things that you
00:33:41.275 --> 00:33:45.430
would have for the cube
or whatever lattice.
00:33:45.430 --> 00:33:47.380
OK?
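The one-step connectivity matrix described above can be sketched numerically. The periodic boundary conditions and the helper name `connectivity_2d` are illustrative assumptions, not from the lecture:

```python
import numpy as np

def connectivity_2d(L):
    """Connectivity matrix <x,y|T|x',y'> of an L x L periodic square lattice.

    Sites are flattened as i = x * L + y; the only nonzero elements connect
    (x, y) to (x+-1, y) and (x, y+-1), i.e. up, down, right, left.
    (Illustrative sketch; periodic boundaries are an assumption.)
    """
    N = L * L
    T = np.zeros((N, N))
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
                j = ((x + dx) % L) * L + (y + dy) % L
                T[i, j] = 1.0
    return T

T = connectivity_2d(4)
# Every site has exactly 2d = 4 neighbors, and T is symmetric.
assert np.all(T.sum(axis=1) == 4)
assert np.allclose(T, T.T)
```

The same construction extends to the cube or any hypercubic lattice by adding more offset vectors.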
00:33:47.380 --> 00:33:51.016
Now you look at
this and you can see
00:33:51.016 --> 00:33:54.248
that what I have
imposed clearly is such
00:33:54.248 --> 00:33:58.892
that for a lattice
where every site looks
00:33:58.892 --> 00:34:03.351
like every equivalent
site, this is really
00:34:03.351 --> 00:34:08.734
a function only of the
separation between the two
00:34:08.734 --> 00:34:09.319
points.
00:34:09.319 --> 00:34:12.829
It's an n by n matrix.
00:34:12.829 --> 00:34:17.208
It has n squared elements,
but the elements really
00:34:17.208 --> 00:34:19.380
are essentially one
column that gets
00:34:19.380 --> 00:34:23.284
shifted as you go
further and further down
00:34:23.284 --> 00:34:26.760
in a very specific way.
00:34:26.760 --> 00:34:32.600
And whenever you have a matrix
such as this translational
00:34:32.600 --> 00:34:37.408
symmetry implies that
you can diagonalize it
00:34:37.408 --> 00:34:40.240
by Fourier transformation.
00:34:40.240 --> 00:34:46.848
And what do I mean by that?
00:34:46.848 --> 00:34:53.655
This is, I can define
a vector q such
00:34:53.655 --> 00:35:00.310
that its various components
are things like e to the i
00:35:00.310 --> 00:35:03.740
q dot r in whatever dimension.
00:35:03.740 --> 00:35:06.860
Let's normalize it
by square root of n.
00:35:06.860 --> 00:35:12.560
And I should be sure that
that is an eigenvector.
00:35:12.560 --> 00:35:19.220
So basically my statement is
that if I take the matrix t,
00:35:19.220 --> 00:35:27.050
act on q, then I should get some
eigenvalue in the vector back.
00:35:27.050 --> 00:35:34.270
And let's check that for
the case of the 2D system.
00:35:34.270 --> 00:35:47.530
So for the 2D system, if I say
that x y t qx qy, what is it?
00:35:47.530 --> 00:35:54.778
Well, that is x y
t x prime, y prime,
00:35:54.778 --> 00:35:58.271
the entity that I
have calculated here,
00:35:58.271 --> 00:36:01.265
x prime, y prime qx, qy.
00:36:01.265 --> 00:36:06.410
And of course, I have a sum
over x prime and y prime.
00:36:06.410 --> 00:36:07.870
That's the matrix product.
00:36:07.870 --> 00:36:13.382
And so again, remember
this entity is simply
00:36:13.382 --> 00:36:21.650
e to the i qx x prime
plus qy y prime, divided
00:36:21.650 --> 00:36:24.130
by square root of n.
00:36:24.130 --> 00:36:29.090
And because this is a set
of delta functions, what
00:36:29.090 --> 00:36:30.578
does it do?
00:36:30.578 --> 00:36:34.746
It basically sets x prime either
to x plus 1, or x minus 1,
00:36:34.746 --> 00:36:38.698
with y prime staying at y, or y prime
to y plus 1 or y minus 1 with x prime staying at x.
00:36:38.698 --> 00:36:44.606
You can see that you always
get back your e to the i qx
00:36:44.606 --> 00:36:49.330
x plus qy y with a
factor of root n.
00:36:49.330 --> 00:36:52.193
So essentially the sum
over the delta functions
00:36:52.193 --> 00:36:56.290
just changes the x
primes to x at the cost
00:36:56.290 --> 00:36:58.124
of the different
shifts that you have
00:36:58.124 --> 00:37:02.094
to do over there, which means
that you will get a factor of e
00:37:02.094 --> 00:37:06.462
to the i qx, e to the minus
iqx, with qy not changing,
00:37:06.462 --> 00:37:11.052
or e to the i qy,
and e to the minus i
00:37:11.052 --> 00:37:14.041
qy with the x
component not changing.
00:37:14.041 --> 00:37:18.682
So this is the standard
thing that you have seen,
00:37:18.682 --> 00:37:24.898
is none other than 2 cosine
of qx plus 2 cosine of qy.
00:37:24.898 --> 00:37:33.310
And so we can see that quite
generally in the d dimensional
00:37:33.310 --> 00:37:40.780
hypercube, my t of
q is going to be
00:37:40.780 --> 00:37:52.403
2 times the sum over all d
components of these factors of cosine
00:37:52.403 --> 00:37:55.724
of q alpha.
00:37:55.724 --> 00:38:01.270
And that's about it, OK?
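The eigenvalue statement above can be checked directly: the plane wave e^{i q.r}/sqrt(N) is an eigenvector of the square-lattice connectivity matrix with eigenvalue 2(cos qx + cos qy). A minimal numeric sketch, assuming periodic boundaries so that q = 2 pi n / L is allowed:

```python
import numpy as np

# Build the 2D periodic square-lattice connectivity matrix T.
L = 6
N = L * L
T = np.zeros((N, N))
for x in range(L):
    for y in range(L):
        i = x * L + y
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            T[i, ((x + dx) % L) * L + (y + dy) % L] = 1.0

# Plane wave |q> with components e^{i q.r} / sqrt(N).
qx, qy = 2 * np.pi * 1 / L, 2 * np.pi * 2 / L
r = np.array([(x, y) for x in range(L) for y in range(L)])
v = np.exp(1j * (qx * r[:, 0] + qy * r[:, 1])) / np.sqrt(N)

# T |q> should equal T(q) |q> with T(q) = 2(cos qx + cos qy).
eigenvalue = 2 * (np.cos(qx) + np.cos(qy))
assert np.allclose(T @ v, eigenvalue * v)
```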
00:38:01.270 --> 00:38:05.630
So why did I bother to
this diagonalization?
00:38:05.630 --> 00:38:11.090
The answer is that that
now allows me to calculate
00:38:11.090 --> 00:38:13.754
everything that I want.
00:38:13.754 --> 00:38:20.414
So, for example, I know
that this quantity that I'm
00:38:20.414 --> 00:38:25.440
interested in,
sigma 0 sigma r, is
00:38:25.440 --> 00:38:34.865
going to be a sum over l of
t to the l, times the 0,
00:38:34.865 --> 00:38:44.146
r element of W of l, where W of l is T to the l.
00:38:44.146 --> 00:38:46.030
Right?
00:38:46.030 --> 00:38:50.593
Now this small t, I
can take inside here
00:38:50.593 --> 00:38:53.128
and do it like this.
00:38:53.128 --> 00:39:01.972
And if I want, I can write this
as the 0 r component of a sum
00:39:01.972 --> 00:39:07.264
over l of t T raised
to the power of l.
00:39:07.264 --> 00:39:11.588
So it's a new matrix,
which is essentially
00:39:11.588 --> 00:39:17.936
sum over l, small t times this
connectivity matrix to the l th
00:39:17.936 --> 00:39:18.435
power.
00:39:18.435 --> 00:39:20.660
This is a geometric series.
00:39:20.660 --> 00:39:23.775
We can immediately do
the geometric series.
00:39:23.775 --> 00:39:35.876
The answer is the 0, r element of
1 over 1 minus t T.
00:39:35.876 --> 00:39:36.738
OK?
00:39:36.738 --> 00:39:43.270
And the reason I did
this diagonalization is
00:39:43.270 --> 00:39:48.550
so that I can calculate this
matrix element, because I don't
00:39:48.550 --> 00:39:51.832
really want to invert
a whole matrix,
00:39:51.832 --> 00:39:54.968
but I can certainly
invert the matrix when
00:39:54.968 --> 00:40:00.120
it is in the diagonal basis
because all I have to do
00:40:00.120 --> 00:40:02.620
is to invert pure numbers.
00:40:02.620 --> 00:40:11.013
So what is done is to
go to the Fourier basis
00:40:11.013 --> 00:40:17.120
and rotate to the Fourier
basis calculate this.
00:40:17.120 --> 00:40:20.630
It is diagonal in this basis.
00:40:20.630 --> 00:40:22.970
I have 0 q, 1 over 1 minus t T of q, q r.
00:40:22.970 --> 00:40:25.895
And so what is that?
00:40:25.895 --> 00:40:30.348
These are these exponentials
here evaluated at the origin,
00:40:30.348 --> 00:40:33.484
so it's just 1 over root n.
00:40:33.484 --> 00:40:36.295
This is 1 over root n.
00:40:36.295 --> 00:40:42.379
This is e to the i
q dot r over root n.
00:40:42.379 --> 00:40:48.753
This is just the eigenvalue that
I have calculated over here.
00:40:48.753 --> 00:40:58.005
So this entity is none
other than a sum over q, e
00:40:58.005 --> 00:41:04.315
to the i q dot r
divided by n, two
00:41:04.315 --> 00:41:07.261
factors of square root of n.
00:41:07.261 --> 00:41:10.698
1 over 1 minus
t-- well, actually
00:41:10.698 --> 00:41:23.620
let's write it 2t-- sum over
alpha cosine of q alpha.
00:41:23.620 --> 00:41:27.157
And then of course I'm
interested in big systems.
00:41:27.157 --> 00:41:30.710
I replace the sum over
q with an integral
00:41:30.710 --> 00:41:34.870
over q, 2 pi to the
d density of states.
00:41:34.870 --> 00:41:39.256
In going from there to there,
there's a factor of volume,
00:41:39.256 --> 00:41:43.325
the way that I have set the
unit of length in my system.
00:41:43.325 --> 00:41:46.310
The volume is actually
the number of particles
00:41:46.310 --> 00:41:47.600
that I have.
00:41:47.600 --> 00:41:50.180
So that factor of n disappears.
00:41:50.180 --> 00:41:54.655
And all I need to do
is evaluate this factor
00:41:54.655 --> 00:42:00.211
of 1 minus 2t sum over alpha
cosine of q alpha integrated
00:42:00.211 --> 00:42:05.700
over q Fourier transformed.
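The two routes above — summing the geometric series by matrix inversion, and the Fourier sum with the diagonal eigenvalues — can be compared on a small periodic lattice. A sketch, assuming t small enough that 2dt < 1 so the series converges:

```python
import numpy as np

# 2D periodic square-lattice connectivity matrix.
L, t = 8, 0.1          # t = 0.1 satisfies 2dt = 0.4 < 1
N = L * L
T = np.zeros((N, N))
for x in range(L):
    for y in range(L):
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            T[x * L + y, ((x + dx) % L) * L + (y + dy) % L] = 1.0

# Route 1: the geometric series sum_l (tT)^l summed exactly as (1 - tT)^{-1}.
G = np.linalg.inv(np.eye(N) - t * T)

# Route 2: the Fourier sum (1/N) sum_q e^{i q.r} / (1 - 2t(cos qx + cos qy)).
rx, ry = 3, 0                          # separation along the x direction
qs = 2 * np.pi * np.arange(L) / L
fourier = sum(np.exp(1j * (qx * rx + qy * ry))
              / (1 - 2 * t * (np.cos(qx) + np.cos(qy)))
              for qx in qs for qy in qs).real / N

assert np.isclose(G[0, rx * L + ry], fourier)
```

On the finite periodic lattice the Fourier decomposition is exact, so the two numbers agree to machine precision.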
00:42:05.700 --> 00:42:07.370
OK?
00:42:07.370 --> 00:42:09.040
Yes?
00:42:09.040 --> 00:42:12.131
AUDIENCE: So I notice that
you have a sum over q,
00:42:12.131 --> 00:42:15.692
but then you also have
a sum alpha q alpha.
00:42:15.692 --> 00:42:16.400
PROFESSOR: Right.
00:42:16.400 --> 00:42:17.790
AUDIENCE: Is there
a relationship
00:42:17.790 --> 00:42:20.300
between the q and
the q alpha or not?
00:42:20.300 --> 00:42:20.990
PROFESSOR: OK.
00:42:20.990 --> 00:42:22.715
So that goes back here.
00:42:22.715 --> 00:42:26.456
So when I had two
dimensions, I had qx and qy.
00:42:26.456 --> 00:42:26.956
Right?
00:42:26.956 --> 00:42:32.624
And so I labeled, rather
than x and y, with q1 and q2.
00:42:32.624 --> 00:42:39.100
So the index alpha is just a
number of spatial dimensions.
00:42:39.100 --> 00:42:46.040
If you like, this is
also dq1, dq2, dqd.
00:42:46.040 --> 00:42:51.146
AUDIENCE: OK.
00:42:51.146 --> 00:42:53.700
[INAUDIBLE]
00:42:53.700 --> 00:42:55.824
PROFESSOR: OK.
00:42:55.824 --> 00:42:57.948
All right.
00:42:57.948 --> 00:43:03.258
So we are down here.
00:43:03.258 --> 00:43:05.382
Let's proceed.
00:43:05.382 --> 00:43:11.760
So what is going to happen?
00:43:11.760 --> 00:43:15.810
So suppose I'm picking
two sites, 0 and r,
00:43:15.810 --> 00:43:18.960
let's say both along
the x direction,
00:43:18.960 --> 00:43:21.804
some particular distance apart.
00:43:21.804 --> 00:43:25.794
Let's say, seven or eight lattice spacings apart.
00:43:25.794 --> 00:43:32.879
So in order to evaluate this I
would have an integral if this
00:43:32.879 --> 00:43:37.807
is the x direction of something
like e to the i qx times r.
00:43:37.807 --> 00:43:44.709
Now when I integrate over qx the
integral of e to the iqx times
00:43:44.709 --> 00:43:48.820
r would go to 0.
00:43:48.820 --> 00:43:51.682
The only way that
it won't go to 0
00:43:51.682 --> 00:43:54.880
is from the expansion of
what is in the denominator.
00:43:54.880 --> 00:43:57.624
I should bring on
enough factors of e
00:43:57.624 --> 00:44:00.025
to the minus iqx,
which certainly exist
00:44:00.025 --> 00:44:04.535
in these cosine factors,
to get rid of that.
00:44:04.535 --> 00:44:09.440
So essentially, the mathematical
procedure that is over here
00:44:09.440 --> 00:44:15.644
is to bring in sufficient
factors of e to the minus i
00:44:15.644 --> 00:44:18.746
q dot r to eliminate that.
00:44:18.746 --> 00:44:22.878
And the number of ways that
you are going to do that
00:44:22.878 --> 00:44:26.776
is precisely another way of
capturing this entity, which
00:44:26.776 --> 00:44:33.196
means that clearly if I'm
looking at something like this,
00:44:33.196 --> 00:44:39.178
and I mean the limit that t
goes to be very, very small so
00:44:39.178 --> 00:44:41.760
that the lowest order
in t contributes,
00:44:41.760 --> 00:44:46.390
the lowest order t would
be the shortest path that
00:44:46.390 --> 00:44:48.250
joins these two points.
00:44:48.250 --> 00:44:53.578
So it is like connecting these
two points with a string that
00:44:53.578 --> 00:44:54.910
is very tight.
00:44:54.910 --> 00:44:59.540
So that what I am saying
is that the limit as t
00:44:59.540 --> 00:45:02.532
goes to 0 of something
like sigma 0,
00:45:02.532 --> 00:45:12.110
sigma r is going to be identical
to t to the minimum distance
00:45:12.110 --> 00:45:15.490
between 0 and r.
00:45:15.490 --> 00:45:19.146
Actually, I should
say proportional,
00:45:19.146 --> 00:45:21.911
is in fact more correct.
00:45:21.911 --> 00:45:25.790
Because there could be
multiple shortest paths
00:45:25.790 --> 00:45:30.580
that go between two points.
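This small-t limit can be verified numerically: the coefficient of t^l in the series is (T^l)_{0r}, which counts l-step walks, so the leading behavior is t raised to the shortest-path length, times the number of shortest paths. A sketch for the site (2, 1), whose shortest paths have three steps in C(3,1) = 3 orders:

```python
import numpy as np

# 2D periodic square-lattice connectivity matrix, large enough that
# wrapping walks are much longer than the direct ones.
L = 9
N = L * L
T = np.zeros((N, N))
for x in range(L):
    for y in range(L):
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            T[x * L + y, ((x + dx) % L) * L + (y + dy) % L] = 1.0

r = 2 * L + 1                          # site (2, 1), shortest-path length 3
T2 = T @ T
T3 = T2 @ T
assert T[0, r] == 0 and T2[0, r] == 0  # no walk shorter than 3 steps arrives
assert T3[0, r] == 3                   # three distinct shortest walks

# The full correlation approaches 3 t^3 as t goes to 0.
for t in (1e-2, 1e-3):
    G = np.linalg.inv(np.eye(N) - t * T)
    assert abs(G[0, r] / (3 * t**3) - 1) < 0.1
```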
00:45:30.580 --> 00:45:31.540
OK?
00:45:31.540 --> 00:45:33.764
Now let's make sense of this.
00:45:33.764 --> 00:45:36.544
There's kind of an exponential decay here.
00:45:36.544 --> 00:45:40.690
Essentially I start with
the high temperature limit
00:45:40.690 --> 00:45:43.190
where the two things don't
know anything about each other.
00:45:43.190 --> 00:45:46.201
So sigma 0, sigma
r is going to be 0.
00:45:46.201 --> 00:45:50.788
So anything that is beyond
0 has to come from somewhere
00:45:50.788 --> 00:45:54.780
in which the information
about the state of the site
00:45:54.780 --> 00:45:57.475
was conveyed all
the way over here.
00:45:57.475 --> 00:46:01.600
And it is done through
passing one bond at a time.
00:46:01.600 --> 00:46:04.885
And in some sense, the
fidelity of each one
00:46:04.885 --> 00:46:09.940
of those transformations
is proportional to t.
00:46:09.940 --> 00:46:13.988
Now as t becomes
larger you are going
00:46:13.988 --> 00:46:18.542
to be willing to pay
the costs of paths
00:46:18.542 --> 00:46:24.610
that go from 0 to r in a
slightly more disordered way.
00:46:24.610 --> 00:46:28.150
So your string that
was tight becomes
00:46:28.150 --> 00:46:30.600
kind of loose and floppy.
00:46:30.600 --> 00:46:34.030
And why does that
become the case?
00:46:34.030 --> 00:46:37.845
Because now although these
paths are longer and carry
00:46:37.845 --> 00:46:40.813
more factors of this
t that is small,
00:46:40.813 --> 00:46:45.455
there are just so many
of them that the entropy,
00:46:45.455 --> 00:46:49.911
the number of these
paths starts to dominate.
00:46:49.911 --> 00:46:50.470
OK?
00:46:50.470 --> 00:46:56.640
So very roughly you can state
the competition between them
00:46:56.640 --> 00:47:01.233
as follows, that the
contribution of the path
00:47:01.233 --> 00:47:08.247
of length l, decays like t to
the l, but the number of paths
00:47:08.247 --> 00:47:11.934
roughly grows like
2d to the power of l.
00:47:11.934 --> 00:47:14.608
And since we have this
phantomness character,
00:47:14.608 --> 00:47:19.364
if I am sitting at a
particular site, I can go up,
00:47:19.364 --> 00:47:20.573
down, right, left.
00:47:20.573 --> 00:47:24.210
So at each step I
have in two dimensions
00:47:24.210 --> 00:47:29.127
a choice of four, in three
dimensions a choice of six,
00:47:29.127 --> 00:47:32.270
in d dimensions a choice of 2d.
00:47:32.270 --> 00:47:35.366
So you can see that this
is exponentially small,
00:47:35.366 --> 00:47:36.742
this is exponentially large.
00:47:36.742 --> 00:47:39.455
So they kind of
balance each other.
00:47:39.455 --> 00:47:42.290
And the balance is
something like e
00:47:42.290 --> 00:47:46.870
to the minus l over some
typical l bar that will contribute.
00:47:46.870 --> 00:47:49.866
And clearly the
typical l is going
00:47:49.866 --> 00:47:55.202
to be finite as long
as 2dt is less than 1.
00:47:55.202 --> 00:47:59.458
So you can see that
something strange has
00:47:59.458 --> 00:48:04.902
to happen at the
value where tc is such
00:48:04.902 --> 00:48:09.060
that 2dtc is equal to 1.
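The path counting behind this balance is easy to check: since the walks are phantom, the number of l-step walks from a fixed site is exactly (2d)^l, which is the row sum of T^l. A minimal sketch in d = 2:

```python
import numpy as np

# 2D periodic square-lattice connectivity matrix.
L = 7
N = L * L
T = np.zeros((N, N))
for x in range(L):
    for y in range(L):
        for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            T[x * L + y, ((x + dx) % L) * L + (y + dy) % L] = 1.0

for l in range(1, 6):
    # Row sum of T^l = number of l-step phantom walks starting at site 0.
    assert np.linalg.matrix_power(T, l)[0].sum() == 4**l   # (2d)^l with d = 2
```

The weight of paths of length l therefore goes like (2dt)^l, which stays summable only while 2dt < 1, i.e. up to tc = 1/(2d).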
00:48:09.060 --> 00:48:14.540
At that point the cost of
making your paths longer
00:48:14.540 --> 00:48:19.868
is more than made up by
increasing the number of paths
00:48:19.868 --> 00:48:23.216
that you can have, the
entropy starts to dominate.
00:48:23.216 --> 00:48:28.860
And you can see that precisely
that condition tells me
00:48:28.860 --> 00:48:34.152
whether or not this
integral exists, right?
00:48:34.152 --> 00:48:38.556
Because one point of this
integral where the integrand is
00:48:38.556 --> 00:48:41.384
largest is when q goes to 0.
00:48:41.384 --> 00:48:45.652
And you can see
that as q goes to 0,
00:48:45.652 --> 00:48:49.576
the value in the
denominator is 1 minus 2td.
00:48:49.576 --> 00:48:53.454
So there is
precisely a pole when
00:48:53.454 --> 00:48:56.290
this condition takes place.
00:48:56.290 --> 00:48:59.839
And if I'm interested
in seeing what
00:48:59.839 --> 00:49:04.910
is happening when I'm in the
vicinity of that transition,
00:49:04.910 --> 00:49:09.838
right before these paths become
very large, what I can do
00:49:09.838 --> 00:49:13.542
is I can start exploring
what is happening
00:49:13.542 --> 00:49:16.548
in the vicinity of that pole.
00:49:16.548 --> 00:49:22.336
So 1 minus 2t sum over
alpha cosine of q alpha,
00:49:22.336 --> 00:49:25.400
how does it behave?
00:49:25.400 --> 00:49:27.850
Each cosine I can
start expanding around
00:49:27.850 --> 00:49:32.050
its q going to 0 as 1
minus q squared over 2,
00:49:32.050 --> 00:49:38.940
so you can see that this
is going to be 1 minus 2td.
00:49:38.940 --> 00:49:43.466
And then I would have
plus t q squared,
00:49:43.466 --> 00:49:46.612
because I will have q1 squared
plus q2 squared, up to qd squared.
00:49:46.612 --> 00:49:50.600
So this q squared is
the sum of all the q's.
00:49:50.600 --> 00:49:55.048
And then I do have
higher order terms.
00:49:55.048 --> 00:49:59.496
Order of q to
fourth, and so forth.
00:49:59.496 --> 00:50:00.052
OK?
00:50:00.052 --> 00:50:07.031
So if I'm looking in the
vicinity of t going to tc,
00:50:07.031 --> 00:50:10.565
this is roughly tc q squared.
00:50:10.565 --> 00:50:18.115
This is something that goes to
0 and once I factor out the tc,
00:50:18.115 --> 00:50:22.840
I can define a length
squared in this fashion,
00:50:22.840 --> 00:50:23.881
inverse length squared.
00:50:23.881 --> 00:50:24.381
OK?
00:50:24.381 --> 00:50:31.070
PROFESSOR: You can see that
this inverse length squared is
00:50:31.070 --> 00:50:42.133
going to be 1 minus 2dt,
divided by tc.
00:50:42.133 --> 00:50:51.220
So, since 2dtc is 1, this is
none other than proportional to tc minus t.
00:50:51.220 --> 00:50:54.576
And if I'm looking at the
vicinity of that point,
00:50:54.576 --> 00:50:58.491
I find that the correlation
between sigma 0 sigma r
00:50:58.491 --> 00:51:01.576
is approximately
the integral ddq
00:51:01.576 --> 00:51:09.680
2 pi to the d, Fourier transform
of the denominator that I said
00:51:09.680 --> 00:51:16.080
is approximately 1 over
tc times q squared plus xi
00:51:16.080 --> 00:51:18.650
to the minus 2.
00:51:18.650 --> 00:51:20.938
We've seen this before.
00:51:20.938 --> 00:51:24.370
You evaluated this
Fourier transform when
00:51:24.370 --> 00:51:27.240
you were doing Landau-Ginzburg.
00:51:27.240 --> 00:51:32.140
So this is something
that grows when
00:51:32.140 --> 00:51:36.340
you are looking
at distances that
00:51:36.340 --> 00:51:39.924
are much less than
this correlation
00:51:39.924 --> 00:51:43.200
length as the Coulomb power law.
00:51:43.200 --> 00:51:46.480
When you are
looking at distances
00:51:46.480 --> 00:51:50.920
that are much larger than
the correlation event,
00:51:50.920 --> 00:51:56.006
you get the exponential
decay with this r
00:51:56.006 --> 00:52:02.990
to the d minus 1 over 2 factor.
00:52:02.990 --> 00:52:07.638
So what we find is
that the correlation
00:52:07.638 --> 00:52:11.124
of these phantom
loops is precisely
00:52:11.124 --> 00:52:16.676
the correlation that we had
seen for the Gaussian model,
00:52:16.676 --> 00:52:17.770
in fact.
00:52:17.770 --> 00:52:21.610
It has a correlation
length that diverges
00:52:21.610 --> 00:52:24.501
in precisely the
same way that we
00:52:24.501 --> 00:52:28.640
had seen for the Gaussian
model with the square root
00:52:28.640 --> 00:52:29.255
singularity.
00:52:29.255 --> 00:52:36.020
So this is our usual nu
equals 1/2 type of behavior
00:52:36.020 --> 00:52:37.870
that we've seen.
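The square-root singularity can be seen directly from the pole: with xi^{-2} = (1 - 2dt)/t, the correlation length grows like (tc - t)^{-1/2}. A quick numeric sketch of that scaling (the function name `xi` is just a label for the formula above):

```python
import numpy as np

d = 3
tc = 1 / (2 * d)

def xi(t):
    # Correlation length from the pole of the phantom-walk correlation,
    # xi^{-2} = (1 - 2dt)/t, valid for t just below tc = 1/(2d).
    return np.sqrt(t / (1 - 2 * d * t))

eps = np.array([1e-3, 1e-4, 1e-5])
ratios = xi(tc - eps / 10) / xi(tc - eps)
# Moving 10x closer to tc should grow xi by 10^{1/2}: the exponent nu = 1/2.
assert np.allclose(ratios, np.sqrt(10), rtol=1e-2)
```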
00:52:37.870 --> 00:52:43.456
And somehow, by all
of these routes,
00:52:43.456 --> 00:52:50.640
we have reproduced some
property of the Gaussian model.
00:52:50.640 --> 00:52:54.600
In fact, it's a little
bit more than that,
00:52:54.600 --> 00:52:59.000
because we can go back
and look at what we
00:52:59.000 --> 00:53:01.946
had here for the free energy.
00:53:01.946 --> 00:53:06.860
So let's erase the things that
pertain to the correlation
00:53:06.860 --> 00:53:15.188
length and correlations,
and focus on the calculation
00:53:15.188 --> 00:53:26.040
that we kind of left in
the middle over here.
00:53:26.040 --> 00:53:28.885
So what do we have?
00:53:28.885 --> 00:53:34.575
We have that log of S
prime, the intensive part,
00:53:34.575 --> 00:53:39.662
is a sum over the
lengths of these loops
00:53:39.662 --> 00:53:43.568
that start and
end at the origin.
00:53:43.568 --> 00:53:48.225
And the contribution
of a loop of length l
00:53:48.225 --> 00:53:50.840
is small t to the l.
00:53:50.840 --> 00:53:54.332
And since W of l is
the connectivity matrix
00:53:54.332 --> 00:53:59.268
to the l power, it's really like
looking at the matrix element
00:53:59.268 --> 00:54:00.696
of this entity.
00:54:00.696 --> 00:54:05.537
And of course, there is this
degeneracy factor of 2l.
00:54:05.537 --> 00:54:11.010
And I can write this
as 1/2 sum over l.
00:54:11.010 --> 00:54:18.552
Well, let's do it
this way-- the 0 th,
00:54:18.552 --> 00:54:29.460
0 th element of sum over
l of t T to the l, over l.
00:54:29.460 --> 00:54:33.600
And what is this?
00:54:33.600 --> 00:54:40.845
This is, in fact,
the series expansion
00:54:40.845 --> 00:54:48.090
of minus log of 1 minus t T.
00:54:48.090 --> 00:54:56.298
So I can, again, go
to the Fourier basis,
00:54:56.298 --> 00:55:07.370
write this as minus 1/2
sum over q of 0 q, log of 1
00:55:07.370 --> 00:55:12.520
minus t times the eigenvalue
T of q, and then q 0.
00:55:12.520 --> 00:55:17.808
Each one of these is
just the factor of 1
00:55:17.808 --> 00:55:21.048
over square root of n.
00:55:21.048 --> 00:55:28.176
The sum over q goes over
to n integral over q.
00:55:28.176 --> 00:55:36.424
So this simply becomes minus
1/2 integral over q 2 pi
00:55:36.424 --> 00:55:48.580
to the d, log of 1 minus 2t
sum over alpha cosine of q
00:55:48.580 --> 00:55:55.790
alpha
that we had over here.
00:55:55.790 --> 00:56:00.020
And again, if I go
to this limit where
00:56:00.020 --> 00:56:05.200
I am close to tc, the
critical value of this t,
00:56:05.200 --> 00:56:08.200
and focus on the
behavior as q goes to 0,
00:56:08.200 --> 00:56:10.300
this is going to
be something that
00:56:10.300 --> 00:56:15.013
has this q squared plus xi to
the minus 2 type of singularity.
00:56:15.013 --> 00:56:17.997
And again, this is
a kind of integral
00:56:17.997 --> 00:56:24.020
that we saw in connection
with the Gaussian model.
00:56:24.020 --> 00:56:30.380
And we know the kind of
singularities it gives.
00:56:30.380 --> 00:56:37.350
But why did we end up
with the Gaussian model?
00:56:37.350 --> 00:56:39.441
Let's work backward.
00:56:39.441 --> 00:56:43.526
That is, typically,
when we are doing
00:56:43.526 --> 00:56:47.846
some kind of a partition
function of a Gaussian model--
00:56:47.846 --> 00:56:54.610
let's say we have some integral
over some variables phi i.
00:56:54.610 --> 00:56:58.086
Let's say we put them on
the sides of a lattice.
00:56:58.086 --> 00:57:02.887
And we have e to the minus
phi i some matrix m ij phi
00:57:02.887 --> 00:57:07.716
j over 2 sum over i and
j implicit over there.
00:57:07.716 --> 00:57:09.472
What was the answer?
00:57:09.472 --> 00:57:13.606
Then the answer was
typically proportional to 1
00:57:13.606 --> 00:57:18.480
over the determinant of
this matrix to the 1/2,
00:57:18.480 --> 00:57:23.250
which, if I
exponentiated, would be
00:57:23.250 --> 00:57:29.620
exponential of minus 1/2
logarithm of the determinant
00:57:29.620 --> 00:57:36.760
of this matrix.
00:57:36.760 --> 00:57:39.462
So that's the general
result. And we
00:57:39.462 --> 00:57:42.936
see the result for
our log of S prime
00:57:42.936 --> 00:57:47.656
is, indeed, of the form
of minus 1/2
00:57:47.656 --> 00:57:50.936
of the log of something.
00:57:50.936 --> 00:57:55.528
And indeed, this sum
over q corresponds
00:57:55.528 --> 00:57:58.175
to summing over the
different eigenvalues.
00:57:58.175 --> 00:58:02.145
And if I were to
express det m in terms
00:58:02.145 --> 00:58:05.268
of the product of
its eigenvalues,
00:58:05.268 --> 00:58:08.180
it would be precisely that.
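The Gaussian identity invoked here — integral of exp(-phi.M.phi/2) over all phi equals (2 pi)^{N/2} det(M)^{-1/2} — can be checked by brute-force quadrature for a small matrix. A sketch with an arbitrary 2x2 positive definite M:

```python
import numpy as np

# Z = integral d^2 phi exp(-phi.M.phi / 2) = (2 pi)^{2/2} / sqrt(det M).
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])

# Brute-force quadrature on a fine grid; the integrand is negligible at |phi| = 10.
x = np.linspace(-10, 10, 801)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
quadratic = (X * (M[0, 0] * X + M[0, 1] * Y)
             + Y * (M[1, 0] * X + M[1, 1] * Y))
Z_numeric = np.sum(np.exp(-0.5 * quadratic)) * dx * dx

Z_exact = (2 * np.pi) / np.sqrt(np.linalg.det(M))   # (2 pi)^{N/2}, N = 2
assert np.isclose(Z_numeric, Z_exact, rtol=1e-4)
```

Exponentiating gives exactly the minus one-half log-determinant form that the loop sum produced.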
00:58:08.180 --> 00:58:12.348
So you can see that
actually, what we
00:58:12.348 --> 00:58:15.995
have calculated by
comparison of these two
00:58:15.995 --> 00:58:23.117
things corresponds to
a matrix m ij, which
00:58:23.117 --> 00:58:33.159
is delta ij minus t times this
single step connectivity matrix
00:58:33.159 --> 00:58:35.635
that I had before.
00:58:35.635 --> 00:58:38.730
So indeed, the
partition function
00:58:38.730 --> 00:58:47.448
that I calculated, that I called
Z prime or S prime, corresponds
00:58:47.448 --> 00:58:55.790
to doing the following--
doing an integral over phi i's
00:58:55.790 --> 00:58:57.330
from the delta ij.
00:58:57.330 --> 00:59:01.180
Essentially for each phi
i, I would have a factor
00:59:01.180 --> 00:59:04.420
of minus phi i squared over 2.
00:59:04.420 --> 00:59:09.005
So essentially, I
have to do this.
00:59:09.005 --> 00:59:13.600
And then from here,
once it's exponentiated,
00:59:13.600 --> 00:59:30.980
I will get a factor of e to the
sum over ij this t phi i phi j.
00:59:30.980 --> 00:59:39.320
So you can see that I started
calculating Ising variables
00:59:39.320 --> 00:59:41.822
on this lattice.
00:59:41.822 --> 00:59:46.108
The result that I calculated
for these phantom walks
00:59:46.108 --> 00:59:50.312
is actually identical if I had
to replace the Ising variables
00:59:50.312 --> 00:59:52.736
with just quantities
that I integrate
00:59:52.736 --> 00:59:56.648
all over the place,
provided that I weigh them
00:59:56.648 --> 00:59:58.136
with this factor.
00:59:58.136 --> 01:00:01.608
So really, the difference
between the Ising
01:00:01.608 --> 01:00:06.149
and what I have done
here can be captured
01:00:06.149 --> 01:00:12.940
by putting a weight for the
individual integration per site.
01:00:12.940 --> 01:00:16.970
So if I really want to
do Ising, the weight
01:00:16.970 --> 01:00:20.597
that I want to do
for the Ising-- let's
01:00:20.597 --> 01:00:24.384
do it this way--
for phi has to have
01:00:24.384 --> 01:00:30.524
a delta function at minus 1
and a delta function at plus 1.
01:00:30.524 --> 01:00:35.718
Rather than doing
that, I have calculated
01:00:35.718 --> 01:00:40.912
a w that corresponds
to the Gaussian
01:00:40.912 --> 01:00:50.420
where the weight for each phi
is basically a Gaussian weight.
01:00:50.420 --> 01:00:55.160
And if I really wanted to
do the Landau Ginzburg,
01:00:55.160 --> 01:01:01.720
all I would need to do is to
add here a phi to the 4th.
01:01:01.720 --> 01:01:05.640
The problem with this
Gaussian-- the phantom system
01:01:05.640 --> 01:01:09.070
that I have-- is
the same problem
01:01:09.070 --> 01:01:12.104
that we had with
the Gaussian model.
01:01:12.104 --> 01:01:16.344
It only gives me one side
of the phase transition.
01:01:16.344 --> 01:01:20.632
Because you see that I did
all of these calculations.
01:01:20.632 --> 01:01:24.412
All of these calculations
were consistent, as long
01:01:24.412 --> 01:01:30.980
as I was dealing with t
that was less than tc.
01:01:30.980 --> 01:01:34.610
Once I go to t that
is greater than tc,
01:01:34.610 --> 01:01:37.520
then this denominator that
I had became negative.
01:01:37.520 --> 01:01:39.370
It just doesn't make sense.
01:01:39.370 --> 01:01:41.220
Correlations negative
don't make sense.
01:01:41.220 --> 01:01:45.105
The log, the argument that
I have to calculate here,
01:01:45.105 --> 01:01:49.155
if t is
larger than 1 over 2d,
01:01:49.155 --> 01:01:53.440
then it doesn't make sense.
01:01:53.440 --> 01:01:56.531
And of course, the reason the
whole theory doesn't make sense
01:01:56.531 --> 01:01:58.628
is kind of related
to the instability
01:01:58.628 --> 01:02:01.414
that we have in
the Gaussian model.
01:02:01.414 --> 01:02:03.404
Essentially, in
the Gaussian model
01:02:03.404 --> 01:02:06.088
also, when t becomes
large enough,
01:02:06.088 --> 01:02:09.864
this phi squared is
not enough to remove
01:02:09.864 --> 01:02:14.158
the instability that you have
for the largest eigenvalue.
01:02:14.158 --> 01:02:17.504
Physically, what
that means is that we
01:02:17.504 --> 01:02:20.074
started with this taut string.
01:02:20.074 --> 01:02:23.476
And as we approached
the transition,
01:02:23.476 --> 01:02:26.320
the string became more flexible.
01:02:26.320 --> 01:02:29.944
And in principle, what
this instability is telling
01:02:29.944 --> 01:02:35.302
me is that you go beyond
the transition, to t greater
01:02:35.302 --> 01:02:39.320
than tc, and the string
becomes something
01:02:39.320 --> 01:02:44.492
that can go over and over
itself as many times,
01:02:44.492 --> 01:02:46.796
and gain entropy
further and further.
01:02:46.796 --> 01:02:49.100
So it will keep going forever.
01:02:49.100 --> 01:02:51.950
There is nothing to stop it.
01:02:51.950 --> 01:02:55.833
So the phantomness condition,
the cost that you pay for it,
01:02:55.833 --> 01:02:58.852
is that once you go
beyond the transition,
01:02:58.852 --> 01:03:00.636
you essentially
overwhelm yourself.
01:03:00.636 --> 01:03:04.870
There's just so much
that is going on.
01:03:04.870 --> 01:03:12.640
There is nothing
that you can do.
01:03:12.640 --> 01:03:17.080
So that's the story.
01:03:17.080 --> 01:03:23.550
Now, let's try to finally
understand some of the things
01:03:23.550 --> 01:03:29.040
that we had before, like this
other critical dimension of 4.
01:03:29.040 --> 01:03:31.280
Where did it come
from, et cetera?
01:03:31.280 --> 01:03:36.340
You are now in the position
to do things and understand
01:03:36.340 --> 01:03:36.970
things.
01:03:36.970 --> 01:03:43.466
First thing to
note is, let's try
01:03:43.466 --> 01:03:52.760
to understand what this
exponent nu equal to 1/2 means.
01:03:52.760 --> 01:03:59.900
So we said that if I think
about having information
01:03:59.900 --> 01:04:04.898
about my site at
the origin, that
01:04:04.898 --> 01:04:10.571
has to propagate so that further
and further neighbors start
01:04:10.571 --> 01:04:15.880
to know what the information
was at site sigma 0--
01:04:15.880 --> 01:04:22.900
that that information can
come through these paths that
01:04:22.900 --> 01:04:26.029
fluctuate, go
different distances,
01:04:26.029 --> 01:04:34.741
and eventually, let's say, reach
a boundary that is at size r.
01:04:34.741 --> 01:04:43.045
As we said, the contribution of
each path decays exponentially,
01:04:43.045 --> 01:04:49.580
but the number of paths
grows exponentially.
01:04:49.580 --> 01:04:53.792
And so for a particular
t that is smaller
01:04:53.792 --> 01:04:57.068
than the critical
value, I can roughly
01:04:57.068 --> 01:05:00.804
say that this falls
off like this,
01:05:00.804 --> 01:05:05.700
so that there is a
characteristic length, l bar.
01:05:05.700 --> 01:05:12.368
This characteristic l bar
is going to be minus 1
01:05:12.368 --> 01:05:15.152
over log of 2dt.
01:05:15.152 --> 01:05:23.592
And 2dt I can write as
2d tc plus t minus tc.
01:05:23.592 --> 01:05:28.926
2 d tc is, by construction, 1.
01:05:28.926 --> 01:05:37.308
So this is minus 1 over
log of 1 plus something
01:05:37.308 --> 01:05:47.320
like 2d times t minus tc, which is
t minus tc over tc.
01:05:47.320 --> 01:05:53.365
Now, log of 1 plus a small
number-- so if my t goes
01:05:53.365 --> 01:05:56.628
and approaches tc--
this log will behave
01:05:56.628 --> 01:05:59.436
like what I have over here.
01:05:59.436 --> 01:06:04.584
So you can see that this
diverges as t minus tc
01:06:04.584 --> 01:06:07.786
to the minus 1 power.
01:06:07.786 --> 01:06:15.255
I want it, I guess, to
be correct-- tc minus t,
01:06:15.255 --> 01:06:19.078
because t is less than tc.
01:06:19.078 --> 01:06:24.019
But the point is that
the divergence is linear.
01:06:24.019 --> 01:06:29.662
As I approach tc, the
length of these paths
01:06:29.662 --> 01:06:36.830
will grow inversely
to how close I am.
01:06:36.830 --> 01:06:39.485
Now what are these paths?
01:06:39.485 --> 01:06:44.795
I start from the origin,
and I randomly take steps.
01:06:44.795 --> 01:06:51.035
And I've said that the
typical paths that I will get
01:06:51.035 --> 01:06:54.490
will roughly have length l bar.
01:06:54.490 --> 01:06:58.338
How far have these paths
carried the information?
01:06:58.338 --> 01:07:03.226
These are random walks, so
the distance over which they
01:07:03.226 --> 01:07:06.304
have managed to carry
the information,
01:07:06.304 --> 01:07:10.921
xi, is going to be
like the square root
01:07:10.921 --> 01:07:13.570
of the length of these walks.
01:07:13.570 --> 01:07:18.658
And since the length of the
walks grows like t minus tc,
01:07:18.658 --> 01:07:24.478
this goes like tc minus
t to the minus 1/2 power.
01:07:24.478 --> 01:07:31.022
So the exponent nu of 1/2 that
we have been thinking about
01:07:31.022 --> 01:07:36.566
is none other than the 1/2
that you have for random walks,
01:07:36.566 --> 01:07:40.256
once you realize
that what is going on
01:07:40.256 --> 01:07:44.846
is that the length of the
paths that carry information
01:07:44.846 --> 01:07:51.167
essentially diverges
linearly on approaching this.
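In symbols, the chain of estimates just given (using 2d tc = 1, and writing the correlation length exponent as nu) is:

```latex
\bar{\ell} \;=\; -\frac{1}{\ln(2dt)}
\;=\; -\frac{1}{\ln\!\left(1 - \frac{t_c - t}{t_c}\right)}
\;\approx\; \frac{t_c}{t_c - t},
\qquad
\xi \;\sim\; \bar{\ell}^{\,1/2} \;\sim\; (t_c - t)^{-1/2}
\quad\Longrightarrow\quad \nu = \tfrac{1}{2}.
```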
01:07:51.167 --> 01:07:56.560
So that's one understanding.
01:07:56.560 --> 01:08:01.860
Now, you would say that this
is the Gaussian picture.
01:08:01.860 --> 01:08:06.274
Now I know that when
we calculated things
01:08:06.274 --> 01:08:14.722
to order of epsilon, we found
that nu was 1/2 plus something.
01:08:14.722 --> 01:08:16.839
It became larger.
01:08:16.839 --> 01:08:19.679
So what does that mean?
01:08:19.679 --> 01:08:26.509
Well, if you have these paths,
and the paths cannot cross each
01:08:26.509 --> 01:08:31.489
other-- it comes here, it
has to go further away,
01:08:31.489 --> 01:08:36.341
because they are really non
phantom-- then they will swell.
01:08:36.341 --> 01:08:40.400
So the exponent nu
that you expect to get
01:08:40.400 --> 01:08:42.655
will be larger than 1/2.
01:08:42.655 --> 01:08:48.520
So that's what's
captured in here.
01:08:48.520 --> 01:08:53.054
Well, how can I really
try to capture that more
01:08:53.054 --> 01:08:53.679
mathematically?
01:08:53.679 --> 01:08:57.928
Well, I say that in the
calculations that I did--
01:08:57.928 --> 01:09:01.675
let's say when I was calculating
the correlation functions sigma
01:09:01.675 --> 01:09:05.883
0 sigma r, in the
approximation of phantomness,
01:09:05.883 --> 01:09:11.160
I included all paths
that went from 0 to r.
01:09:11.160 --> 01:09:15.174
Among those there were paths
that were crossing themselves.
01:09:15.174 --> 01:09:21.013
So I really have to subtract
from that a path that
01:09:21.013 --> 01:09:23.719
comes and crosses itself.
01:09:23.719 --> 01:09:26.683
So I have to subtract that.
01:09:26.683 --> 01:09:31.623
I also had this condition
that I had the numerator
01:09:31.623 --> 01:09:35.236
and denominator that cancel
each other, which really means
01:09:35.236 --> 01:09:39.617
that I have to subtract
the possibility of my path
01:09:39.617 --> 01:09:45.169
intersecting with another
loop that is over here.
01:09:45.169 --> 01:09:53.399
And we can try to incorporate
these as corrections.
01:09:53.399 --> 01:09:57.239
But we've already done that,
because if I Fourier transform
01:09:57.239 --> 01:10:02.365
this object, I saw that it
is this 1 over q squared
01:10:02.365 --> 01:10:03.739
plus k squared.
01:10:03.739 --> 01:10:07.410
And then we were calculating
these u perturbative
01:10:07.410 --> 01:10:12.536
corrections, and we had diagrams
that kind of looked like this.
01:10:12.536 --> 01:10:22.480
Oops, I guess I want to
first draw the other diagram.
01:10:22.480 --> 01:10:26.920
And then we had a diagram
that was like this.
01:10:26.920 --> 01:10:30.489
You remember when we
were doing these phi
01:10:30.489 --> 01:10:33.081
to the 4th calculations,
the corrections
01:10:33.081 --> 01:10:38.484
that we had for the propagator,
which was related to the two
01:10:38.484 --> 01:10:42.524
point correlation function, were
precisely these diagrams, where
01:10:42.524 --> 01:10:47.634
we were essentially subtracting
factors that were set by u.
01:10:47.634 --> 01:10:52.250
Of course, the value
of u could be anything,
01:10:52.250 --> 01:10:56.275
and you can see that there
is really a one to one
01:10:56.275 --> 01:10:56.900
correspondence.
01:10:56.900 --> 01:11:00.752
Any of the diagrams that
you had before really
01:11:00.752 --> 01:11:04.179
captures the picture
of one of these paths
01:11:04.179 --> 01:11:08.337
trying to cross itself
that you have to subtract.
01:11:08.337 --> 01:11:13.732
And you can sort of put a one to
one mathematical correspondence
01:11:13.732 --> 01:11:15.420
between what is going on here.
01:11:15.420 --> 01:11:15.920
Yeah.
01:11:15.920 --> 01:11:18.104
AUDIENCE: So why
can't we have the path
01:11:18.104 --> 01:11:19.742
in the first
correction you drew?
01:11:19.742 --> 01:11:21.668
Because aren't we
allowed to have
01:11:21.668 --> 01:11:24.062
four bonds that
attach to one site
01:11:24.062 --> 01:11:26.960
when we're doing the
original expansion?
01:11:26.960 --> 01:11:29.687
PROFESSOR: OK, so I told
you at the beginning
01:11:29.687 --> 01:11:32.719
that you should keep track
of all of my mistakes.
01:11:32.719 --> 01:11:35.005
And that's a very subtle thing.
01:11:35.005 --> 01:11:39.199
So what you are asking is,
in the original Ising model,
01:11:39.199 --> 01:11:45.019
I can draw perfectly OK a
graph such as this that has
01:11:45.019 --> 01:11:47.444
an intersection such as this.
01:11:47.444 --> 01:11:51.684
As we will show next
time-- so bear with me--
01:11:51.684 --> 01:11:55.019
in calculating things while
in the phantom condition,
01:11:55.019 --> 01:11:59.149
this is counted three
times as much as it should.
01:11:59.149 --> 01:12:03.815
So I have to subtract that,
because a walk that comes here
01:12:03.815 --> 01:12:06.300
can either go
forward, up, or down.
01:12:06.300 --> 01:12:09.090
There is some degeneracy
there that essentially, this
01:12:09.090 --> 01:12:11.786
has done an over counting
that is important,
01:12:11.786 --> 01:12:15.980
and I have to correct
for when I do things more
01:12:15.980 --> 01:12:17.640
carefully next time around.
01:12:17.640 --> 01:12:18.140
Yes.
01:12:18.140 --> 01:12:20.072
AUDIENCE: When you did
the Gaussian model,
01:12:20.072 --> 01:12:22.559
we never had to put
any sort of requirement
01:12:22.559 --> 01:12:24.360
on the lattice being
a square lattice.
01:12:24.360 --> 01:12:25.340
PROFESSOR: No.
01:12:25.340 --> 01:12:27.491
AUDIENCE: Didn't we have
to do that here when
01:12:27.491 --> 01:12:28.690
you did those random walks?
01:12:28.690 --> 01:12:33.140
PROFESSOR: No, I only use the
square condition or hypercube
01:12:33.140 --> 01:12:36.522
condition in order
to be able to write
01:12:36.522 --> 01:12:37.926
this in general dimension.
01:12:37.926 --> 01:12:40.734
I could very well have
done triangular, FCC,
01:12:40.734 --> 01:12:44.648
or any other lattice.
01:12:44.648 --> 01:12:57.480
The expression here would
have been more complicated.
01:12:57.480 --> 01:13:06.560
So finally, we can also
ask, we have a feel
01:13:06.560 --> 01:13:11.100
from renormalization
group, et cetera,
01:13:11.100 --> 01:13:15.707
that the Gaussian exponents,
like nu equals to 1/2,
01:13:15.707 --> 01:13:18.940
are, in fact, good
provided that you
01:13:18.940 --> 01:13:20.875
are in sufficiently
high dimension--
01:13:20.875 --> 01:13:23.197
if you are above
four dimensions.
01:13:23.197 --> 01:13:33.384
Where did you see that
occurring in this picture?
01:13:33.384 --> 01:13:42.019
The answer is as follows.
01:13:42.019 --> 01:13:51.109
So basically, I have ignored the
possibility of intersections.
01:13:51.109 --> 01:13:58.652
So let's see when that
condition is roughly good.
01:13:58.652 --> 01:14:03.098
The kind of entities
that I have as I
01:14:03.098 --> 01:14:08.683
get closer and closer to
tc in the phantom case
01:14:08.683 --> 01:14:11.519
are these random walks.
01:14:11.519 --> 01:14:15.139
And we said that the
characteristic of a random walk
01:14:15.139 --> 01:14:19.934
is that if I have something
that carries l steps,
01:14:19.934 --> 01:14:30.296
that the typical size in
space that it covers scales
01:14:30.296 --> 01:14:35.006
like l to the 1/2.
01:14:35.006 --> 01:14:41.770
So we can recast
this as a dimension.
01:14:41.770 --> 01:14:49.240
Basically, we are used to
say linear objects having
01:14:49.240 --> 01:15:00.059
a mass that grows--
what do I want to do?
01:15:00.059 --> 01:15:07.717
Let's say that I
have a hypercube
01:15:07.717 --> 01:15:18.657
of size L. Let's actually
call it size R. Then
01:15:18.657 --> 01:15:22.816
the mass of this, or
the number of elements
01:15:22.816 --> 01:15:31.650
that constitute this object,
grows like R to the d.
01:15:31.650 --> 01:15:35.731
So if I take my random
walk, and think of it
01:15:35.731 --> 01:15:39.194
as something in which
every step has unit mass,
01:15:39.194 --> 01:15:44.034
you would say that the l
is proportional to mass
01:15:44.034 --> 01:15:48.404
so that the radius grows
like the number of elements
01:15:48.404 --> 01:15:51.979
to the 1/2 power or the
mass to the 1/2 power.
01:15:51.979 --> 01:15:55.081
So you would say
that the random walk,
01:15:55.081 --> 01:16:01.412
if I want to force it in terms
of a relationship between mass
01:16:01.412 --> 01:16:06.691
and radius, that mass
goes like radius squared.
01:16:06.691 --> 01:16:14.579
So in that sense, you can
say that the random walk
01:16:14.579 --> 01:16:28.170
has a fractal or
Hausdorff dimension of 2.
01:16:28.170 --> 01:16:32.967
So if you kind of
are very, very blind,
01:16:32.967 --> 01:16:38.300
you would say that this
is like a random walk.
01:16:38.300 --> 01:16:41.065
It's a two dimensional thing.
01:16:41.065 --> 01:16:42.730
It's a page.
01:16:42.730 --> 01:16:49.770
So now the question is, if I
have two geometrical entities,
01:16:49.770 --> 01:16:51.690
will they intersect?
01:16:51.690 --> 01:16:58.676
So if I have a plane and a
line in three dimensions,
01:16:58.676 --> 01:17:00.884
they will barely intersect.
01:17:00.884 --> 01:17:05.409
In four dimensions,
they won't intersect.
01:17:05.409 --> 01:17:10.557
If I have two surfaces that
are two dimensional, in three
01:17:10.557 --> 01:17:13.380
dimensions, they
intersect in a line.
01:17:13.380 --> 01:17:16.521
In four dimensions, they
would intersect in a point.
01:17:16.521 --> 01:17:19.701
And in five dimensions, they
won't generically intersect,
01:17:19.701 --> 01:17:24.579
like two lines generically don't
intersect in three dimensions.
01:17:24.579 --> 01:17:31.146
So if you ask, how bad
is it that I ignored
01:17:31.146 --> 01:17:36.699
the intersection of objects that
are inherently random walking
01:17:36.699 --> 01:17:40.124
in sufficiently
high dimensions, I
01:17:40.124 --> 01:17:43.549
would say the
answer geometrically
01:17:43.549 --> 01:17:50.189
would be that intersection
is generic if d
01:17:50.189 --> 01:17:56.989
is less than 2 df, which is 4.
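The codimension counting behind this can be sketched as follows (the generic-intersection rule is standard transversality, stated here as an aside): two objects of dimensions d1 and d2 placed generically in d ambient dimensions meet in a set of dimension d1 + d2 - d, and a negative value means no generic intersection.

```python
def intersection_dim(d1, d2, d):
    """Generic dimension of the intersection of d1- and d2-dimensional
    objects in d ambient dimensions; negative means they generically miss."""
    return d1 + d2 - d

# The examples from the lecture:
print(intersection_dim(2, 1, 3))   # plane and line in 3d: a point (0)
print(intersection_dim(2, 2, 3))   # two surfaces in 3d: a line (1)
print(intersection_dim(2, 2, 4))   # in 4d: a point (0)
print(intersection_dim(2, 2, 5))   # in 5d: -1, no generic intersection

# Two random walks, each of fractal dimension 2, generically intersect
# only when 2 + 2 - d >= 0, i.e. for d <= 4.
```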
01:17:56.989 --> 01:18:02.587
So we made a very
drastic assumption.
01:18:02.587 --> 01:18:08.054
But as long as we are above
four dimensions, it's OK.
01:18:08.054 --> 01:18:12.088
There's so much
space around that
01:18:12.088 --> 01:18:17.183
statistically, these
intersections, this non
01:18:17.183 --> 01:18:23.601
phantomness, is so entropically
unlikely that it never
01:18:23.601 --> 01:18:24.100
happens.
01:18:24.100 --> 01:18:28.033
You can ignore it, and
the results are OK.
01:18:28.033 --> 01:18:31.969
But you go to four
dimensions and below, you
01:18:31.969 --> 01:18:34.503
can't ignore it, because
generically, these things
01:18:34.503 --> 01:18:36.313
will intersect with each other.
01:18:36.313 --> 01:18:40.441
That's why these diagrams
are going to blow up on you,
01:18:40.441 --> 01:18:43.509
and give you some important
contribution that would swell,
01:18:43.509 --> 01:18:47.019
and give you a value of nu
that is larger than the 1/2
01:18:47.019 --> 01:18:52.739
that we have for random walks.
01:18:52.739 --> 01:18:57.119
So that's the essence of
where the Gaussian model stands,
01:18:57.119 --> 01:19:00.955
why we get nu equals
to 1/2, why we
01:19:00.955 --> 01:19:03.555
get nu's that are
larger than 1/2, what
01:19:03.555 --> 01:19:06.570
the meaning of these diagrams
is, why four dimensions
01:19:06.570 --> 01:19:07.392
is special.
01:19:07.392 --> 01:19:11.913
All of it really just comes
down to central limit theorem
01:19:11.913 --> 01:19:15.493
and knowing that the sum of
a large number of variables
01:19:15.493 --> 01:19:19.822
has a square root of n type
of variance and fluctuations.
01:19:19.822 --> 01:19:23.579
And it's all captured by that.
01:19:23.579 --> 01:19:29.177
But we wanted to really
solve the model exactly.
01:19:29.177 --> 01:19:35.322
It turns out that we can
make the conditions that
01:19:35.322 --> 01:19:39.586
were very hard to implement
in general dimensions
01:19:39.586 --> 01:19:43.329
work out correctly
in two dimensions.
01:19:43.329 --> 01:19:46.489
And so the next
lecture will show you
01:19:46.489 --> 01:19:49.649
what these mistakes
are, how to avoid them,
01:19:49.649 --> 01:19:54.600
and how to get the exact value
of this sum in two dimensions.