WEBVTT
00:00:00.000 --> 00:00:01.952
[SQUEAKING]
00:00:01.952 --> 00:00:03.404
[PAPER RUSTLING]
00:00:03.904 --> 00:00:04.880
[CLICKING]
00:00:18.080 --> 00:00:20.960
YUFEI ZHAO: Last time we started
talking about Roth's theorem,
00:00:20.960 --> 00:00:26.420
and we showed a Fourier
analytic proof of Roth's theorem
00:00:26.420 --> 00:00:28.760
in the finite field model.
00:00:28.760 --> 00:00:32.770
So Roth's theorem
in F3 to the N.
00:00:32.770 --> 00:00:37.670
And I want to today show
you how to modify that proof
00:00:37.670 --> 00:00:39.650
to work in integers.
00:00:39.650 --> 00:00:46.130
And this will be basically
Roth's original proof
00:00:46.130 --> 00:00:46.820
of his theorem.
00:00:56.440 --> 00:00:57.410
OK.
00:00:57.410 --> 00:01:00.860
So what we'll prove
today is the statement
00:01:00.860 --> 00:01:10.580
that the size of the largest
3AP-free subset of 1 through N
00:01:10.580 --> 00:01:21.080
is, at most, N divided
by log log N. OK,
00:01:21.080 --> 00:01:22.880
so we'll prove a
bound of this form.
00:01:26.450 --> 00:01:29.600
The strategy of this proof
will be very similar to the one
00:01:29.600 --> 00:01:30.920
that we had from last time.
00:01:30.920 --> 00:01:32.837
So let me review for you
what is the strategy.
00:01:44.680 --> 00:01:49.530
So from last time, the
proof had three main steps.
00:01:49.530 --> 00:01:52.920
In the first step, we
observed that if you
00:01:52.920 --> 00:02:02.170
are in the 3AP-free set then
there exists a large Fourier
00:02:02.170 --> 00:02:03.274
coefficient.
00:02:11.810 --> 00:02:13.730
From this Fourier
coefficient, we
00:02:13.730 --> 00:02:22.370
were able to extract
a large subspace where
00:02:22.370 --> 00:02:25.220
there is a density increment.
00:02:25.220 --> 00:02:28.910
I want to modify that
strategy so that we
00:02:28.910 --> 00:02:31.580
can work in the integers.
00:02:31.580 --> 00:02:34.190
Unlike in F3 to
the N, where things
00:02:34.190 --> 00:02:37.610
were fairly nice and clean,
because you have subspaces,
00:02:37.610 --> 00:02:39.530
you can take a
Fourier coefficient,
00:02:39.530 --> 00:02:41.900
pass it down to a subspace.
00:02:41.900 --> 00:02:43.760
There are no subspaces, right?
00:02:43.760 --> 00:02:46.980
There are no subspaces
in the integers.
00:02:46.980 --> 00:02:49.020
So we have to do something
slightly different,
00:02:49.020 --> 00:02:51.120
but in the same spirit.
00:02:51.120 --> 00:02:53.577
So you'll find a large
Fourier coefficient.
00:03:00.400 --> 00:03:10.370
And we will find that there
is density increment when
00:03:10.370 --> 00:03:14.410
you restrict not to
subspaces, but what
00:03:14.410 --> 00:03:17.335
could play the role of subspaces
when it comes to the integers?
00:03:19.900 --> 00:03:23.050
So I want something which
looks like a smaller
00:03:23.050 --> 00:03:26.260
version of the original space.
00:03:26.260 --> 00:03:29.280
So instead of it
being integers, if we
00:03:29.280 --> 00:03:33.780
restrict to a subprogression,
so to a smaller arithmetic
00:03:33.780 --> 00:03:34.970
progression.
00:03:34.970 --> 00:03:38.040
I will show that you can
restrict to a subprogression
00:03:38.040 --> 00:03:40.697
where you can obtain
density increment.
00:03:45.840 --> 00:03:50.070
So we'll restrict integers
to something smaller.
00:03:50.070 --> 00:04:04.680
And then, same as last time,
we can iterate this increment
00:04:04.680 --> 00:04:08.490
to obtain the
conclusion that you
00:04:08.490 --> 00:04:14.545
have an upper bound on the
size of this 3AP-free set.
00:04:14.545 --> 00:04:15.670
OK, so that's the strategy.
00:04:15.670 --> 00:04:18.310
So you see the same strategy
as the one we did last time,
00:04:18.310 --> 00:04:20.829
and many of the ingredients
will have parallels,
00:04:20.829 --> 00:04:23.830
but the execution will
be slightly different,
00:04:23.830 --> 00:04:26.750
especially in the
second step where,
00:04:26.750 --> 00:04:29.440
because we no longer
have subspaces, which
00:04:29.440 --> 00:04:30.880
are nice and clean,
so that's why
00:04:30.880 --> 00:04:33.070
we started with a
finite field model, just
00:04:33.070 --> 00:04:35.530
to show how things work in
a slightly easier setting.
00:04:35.530 --> 00:04:39.670
And today, we'll see how to
do the same kind of strategy
00:04:39.670 --> 00:04:42.890
here, where there is going
to be a bit more work.
00:04:42.890 --> 00:04:45.290
Not too much more,
but a bit more work.
00:04:45.290 --> 00:04:47.791
OK, so before we
start, any questions?
00:04:53.200 --> 00:04:54.920
All right.
00:04:54.920 --> 00:04:57.290
So last time I used the
proof of Roth's theorem
00:04:57.290 --> 00:04:59.245
as an excuse to introduce
Fourier analysis.
00:04:59.245 --> 00:05:01.620
And we're going to see basically
the same kind of Fourier
00:05:01.620 --> 00:05:05.420
analysis, but it's going to take
on a slightly different form,
00:05:05.420 --> 00:05:07.940
because we're not working
in F3 to the N. We're
00:05:07.940 --> 00:05:10.490
working inside the integers.
00:05:10.490 --> 00:05:13.870
And there's a general theory of
Fourier analysis on the group,
00:05:13.870 --> 00:05:16.472
on abelian groups.
00:05:16.472 --> 00:05:18.680
I don't want to go into that
theory, because that's--
00:05:18.680 --> 00:05:20.442
I want to focus on
the specific case,
00:05:20.442 --> 00:05:22.400
but the point is that
given an abelian group,
00:05:22.400 --> 00:05:27.220
you always have a dual
group of characters.
00:05:27.220 --> 00:05:29.800
And they play the role
of the Fourier transform.
00:05:29.800 --> 00:05:33.790
Specifically in
the case of Z, we
00:05:33.790 --> 00:05:35.890
have the following
Fourier transform.
00:05:41.390 --> 00:05:51.110
So the dual group of Z
turns out to be the torus.
00:05:51.110 --> 00:05:54.540
So real numbers mod one.
00:05:54.540 --> 00:06:03.450
And the Fourier
transform is defined
00:06:03.450 --> 00:06:14.910
as follows, starting with a
function on the integers, OK.
00:06:14.910 --> 00:06:17.010
If you'd like, let's say
it's finitely supported,
00:06:17.010 --> 00:06:18.730
just to make our
lives a bit easier.
00:06:18.730 --> 00:06:20.680
Don't have to deal
with technicalities.
00:06:20.680 --> 00:06:23.820
But in general, the
following formula holds.
00:06:23.820 --> 00:06:32.280
We have this Fourier
transform defined
00:06:32.280 --> 00:06:39.620
by setting f hat of theta
to be the following sum.
00:06:44.922 --> 00:06:50.030
OK, where this e is actually
somewhat standard notation,
00:06:50.030 --> 00:06:52.480
in additive combinatorics.
00:06:52.480 --> 00:06:56.400
It's e to the 2
pi i t, all right?
00:06:56.400 --> 00:07:02.350
So it goes a fraction, t,
around the complex unit circle.
00:07:02.350 --> 00:07:02.850
OK.
00:07:02.850 --> 00:07:05.507
So that's the Fourier
transform on the integers.
00:07:05.507 --> 00:07:08.090
OK, so you might have seen this
before under a different name.
00:07:08.090 --> 00:07:09.770
This is usually
called Fourier series.
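The board itself is not captured in the transcript, so here is one standard way to write the transform just described (the sign in the exponent is a convention; the opposite sign is equally common), for a finitely supported function f on the integers:

$$
\widehat{f}(\theta) \;=\; \sum_{x \in \mathbb{Z}} f(x)\, e(-x\theta),
\qquad e(t) := e^{2\pi i t},
\qquad \theta \in \mathbb{T} = \mathbb{R}/\mathbb{Z}.
$$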
00:07:13.790 --> 00:07:14.290
All right.
00:07:14.290 --> 00:07:16.332
You know, the notation
may be slightly different.
00:07:16.332 --> 00:07:19.050
OK, so that's what
we'll see today.
00:07:19.050 --> 00:07:22.760
And this Fourier transform plays
the same role as the Fourier
00:07:22.760 --> 00:07:29.390
transform from last time, which
was on the group F3 to the N.
00:07:29.390 --> 00:07:33.062
And just as in--
00:07:33.062 --> 00:07:37.020
so last time, we had a number
of important identities,
00:07:37.020 --> 00:07:39.900
and we'll have the same
kinds of identities here.
00:07:39.900 --> 00:07:41.970
So let me remind
you what they are.
00:07:41.970 --> 00:07:43.810
And the proofs are all
basically the same,
00:07:43.810 --> 00:07:45.102
so I won't show you the proofs.
00:07:54.130 --> 00:08:01.540
f hat of 0 is simply the
sum of f over the domain.
00:08:05.220 --> 00:08:16.470
We have this Plancherel Parseval
identity, which tells us
00:08:16.470 --> 00:08:22.620
that if you look at
the inner product,
00:08:22.620 --> 00:08:27.610
a bilinear form, in
the physical space,
00:08:27.610 --> 00:08:33.450
it equals the inner
product in the Fourier space.
00:08:42.620 --> 00:08:43.120
OK.
00:08:43.120 --> 00:08:45.250
So in the physical
space now, you sum.
00:08:45.250 --> 00:08:47.350
In the frequency space,
you take integral
00:08:47.350 --> 00:08:51.960
over the torus, or the
circle, in this case.
00:08:51.960 --> 00:08:54.570
It's a one-dimensional torus.
00:08:54.570 --> 00:08:58.470
There is also the Fourier
inversion formula,
00:08:58.470 --> 00:09:05.350
which now says that f of x
is the integral of f hat of theta
00:09:05.350 --> 00:09:09.780
times e of x theta, as you integrate
theta from 0 to 1.
00:09:09.780 --> 00:09:13.830
Again, on the torus,
on the circle.
00:09:13.830 --> 00:09:18.060
And third-- and finally, there
was this identity last time
00:09:18.060 --> 00:09:20.940
that related three-term
arithmetic progressions
00:09:20.940 --> 00:09:22.680
to the Fourier transform, OK?
00:09:22.680 --> 00:09:26.530
So this last one was
slightly not as--
00:09:26.530 --> 00:09:29.550
I mean, it's not as standard
as the first several, which are
00:09:29.550 --> 00:09:31.560
standard Fourier identities.
00:09:31.560 --> 00:09:34.360
But this one will
be useful to us.
00:09:34.360 --> 00:09:38.280
So the identity relating the
Fourier transform, the 3AP, now
00:09:38.280 --> 00:09:39.643
has the following form.
00:09:39.643 --> 00:09:47.098
OK, so if we define
lambda of f, g,
00:09:47.098 --> 00:09:55.790
and h to be the
following sum, which
00:09:55.790 --> 00:10:02.600
sums over all 3APs
in the integers,
00:10:02.600 --> 00:10:11.470
then one can write
this expression
00:10:11.470 --> 00:10:15.970
in terms of the Fourier
transform as follows.
00:10:26.750 --> 00:10:27.720
OK.
00:10:27.720 --> 00:10:28.220
All right.
00:10:28.220 --> 00:10:31.940
So comparing this formula to the
one that we saw from last time,
00:10:31.940 --> 00:10:34.490
it's the same formula,
where different domains,
00:10:34.490 --> 00:10:35.990
where you're summing
or integrating,
00:10:35.990 --> 00:10:37.160
but it's the same formula.
00:10:37.160 --> 00:10:39.000
And the proof is the same.
00:10:39.000 --> 00:10:40.070
So go look at the proof.
00:10:40.070 --> 00:10:40.903
It's the same proof.
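Since the board is not visible in the transcript, here is a reconstruction of the identities just listed, consistent with the spoken descriptions and the sign convention chosen above:

$$
\widehat{f}(0) = \sum_{x\in\mathbb{Z}} f(x),
\qquad
\sum_{x\in\mathbb{Z}} f(x)\,\overline{g(x)} = \int_0^1 \widehat{f}(\theta)\,\overline{\widehat{g}(\theta)}\,d\theta,
\qquad
f(x) = \int_0^1 \widehat{f}(\theta)\, e(x\theta)\, d\theta,
$$

$$
\Lambda(f,g,h) := \sum_{x,y\in\mathbb{Z}} f(x)\,g(x+y)\,h(x+2y)
\;=\; \int_0^1 \widehat{f}(\theta)\,\widehat{g}(-2\theta)\,\widehat{h}(\theta)\,d\theta.
$$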
00:10:43.710 --> 00:10:44.210
OK.
00:10:44.210 --> 00:10:47.810
So these are the key Fourier
things that we'll use.
00:10:47.810 --> 00:10:50.980
And then we'll try to
follow on those with--
00:10:50.980 --> 00:10:55.170
the same as the proof as last
time, and see where we can get.
00:10:55.170 --> 00:10:57.180
So let me introduce
one more notation.
00:10:57.180 --> 00:11:04.490
So I'll write lambda sub 3
of f to be lambda of f, f, f,
00:11:04.490 --> 00:11:05.030
three times.
00:11:07.509 --> 00:11:08.009
OK.
00:11:08.009 --> 00:11:09.980
So at this point, if you
understood the lecture
00:11:09.980 --> 00:11:13.220
from last time, none of
anything I've said so far
00:11:13.220 --> 00:11:14.450
should be surprising.
00:11:14.450 --> 00:11:16.280
We are working
integers, so we should
00:11:16.280 --> 00:11:19.170
look at the corresponding
Fourier transform in integers.
00:11:19.170 --> 00:11:21.680
And if you follow your
notes, this is all the things
00:11:21.680 --> 00:11:23.660
that we're going to use.
00:11:23.660 --> 00:11:25.970
OK, so what was one
of the first things
00:11:25.970 --> 00:11:29.270
we mentioned regarding
the Fourier transform
00:11:29.270 --> 00:11:30.830
from last time after this point?
00:11:40.250 --> 00:11:40.750
OK.
00:11:40.750 --> 00:11:42.075
AUDIENCE: The counting lemma.
00:11:42.075 --> 00:11:42.700
YUFEI ZHAO: OK.
00:11:42.700 --> 00:11:44.260
So let's do a counting lemma.
00:11:44.260 --> 00:11:49.780
So what should the
counting lemma say?
00:11:49.780 --> 00:11:52.700
Well, the spirit of the counting
lemma is that if you have two
00:11:52.700 --> 00:11:56.360
functions that are close to
each other-- and now, "close"
00:11:56.360 --> 00:11:58.940
means close in Fourier--
00:11:58.940 --> 00:12:03.910
then their corresponding number
of 3APs should be similar, OK?
00:12:03.910 --> 00:12:06.150
So that's what we want to say.
00:12:06.150 --> 00:12:14.740
And indeed, right, so
the counting lemma for us
00:12:14.740 --> 00:12:25.510
will say that if f and g
are functions on Z, and--
00:12:28.420 --> 00:12:47.375
such that their L2 norms are
both bounded by this M. OK,
00:12:47.375 --> 00:12:51.940
the sum of the squared absolute
value entries are both bounded.
00:12:51.940 --> 00:13:03.710
Then the difference
of their 3AP counts
00:13:03.710 --> 00:13:08.570
should not be so different
from each other if f
00:13:08.570 --> 00:13:10.790
and g are close in Fourier, OK?
00:13:10.790 --> 00:13:21.990
And that means that if all
the Fourier coefficients of f
00:13:21.990 --> 00:13:28.270
minus g are small,
then lambda 3 of f,
00:13:28.270 --> 00:13:32.290
which considers 3AP counts
in f, is close to that of g.
00:13:35.685 --> 00:13:38.110
OK.
00:13:38.110 --> 00:13:42.744
Same kind of 3AP counting
lemma from last time.
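In symbols, the counting lemma being stated is the following (a reconstruction of the board; the factor 3 reflects the three terms of the telescoping sum used in the proof below): if f and g are functions on Z with the sum of |f(x)| squared and the sum of |g(x)| squared both at most M, then

$$
\bigl|\Lambda_3(f) - \Lambda_3(g)\bigr| \;\le\; 3M \,\sup_{\theta\in[0,1]} \bigl|\widehat{f-g}(\theta)\bigr|.
$$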
00:13:42.744 --> 00:13:45.706
OK, so let's prove it, OK?
00:13:51.520 --> 00:13:53.620
As with the counting
lemma proofs
00:13:53.620 --> 00:13:56.350
you've seen several times
already in this course,
00:13:56.350 --> 00:14:00.070
we will prove it by first
writing this difference
00:14:00.070 --> 00:14:01.450
as a telescoping sum.
00:14:07.420 --> 00:14:12.920
The first term
being lambda of f minus g, f, f,
00:14:12.920 --> 00:14:25.045
and then lambda of g, f minus g, f, and
lambda of g, g, f minus g.
00:14:25.045 --> 00:14:27.790
OK, and we would like to show
that each of these terms
00:14:27.790 --> 00:14:34.273
is small if f minus g has
small Fourier coefficients.
00:14:39.690 --> 00:14:40.190
OK.
00:14:40.190 --> 00:14:41.900
So let's bound the first term.
00:14:50.077 --> 00:14:55.340
OK, so let me bound this first
term using the 3AP identity,
00:14:55.340 --> 00:14:57.950
relating 3AP to
Fourier coefficients,
00:14:57.950 --> 00:15:05.150
we can write this lambda
as the following integral
00:15:05.150 --> 00:15:07.118
over Fourier coefficients.
00:15:24.070 --> 00:15:25.730
And now, let me--
00:15:25.730 --> 00:15:27.550
OK, so what was the
trick last time?
00:15:27.550 --> 00:15:32.600
So we said let's pull
out one of these guys
00:15:32.600 --> 00:15:37.245
and then use triangle inequality
on the remaining factors.
00:15:37.245 --> 00:15:38.150
OK, so we'll do that.
00:15:56.410 --> 00:15:58.852
So far, so good.
00:15:58.852 --> 00:16:02.295
And now you see this integral.
00:16:02.295 --> 00:16:03.773
Apply Cauchy-Schwarz.
00:16:18.090 --> 00:16:20.890
OK, so apply Cauchy-Schwarz
to the first factor,
00:16:20.890 --> 00:16:25.533
you get this l2 sum,
this l2 integral.
00:16:25.533 --> 00:16:26.950
And then you apply
Cauchy-Schwarz
00:16:26.950 --> 00:16:28.295
to the second factor.
00:16:34.633 --> 00:16:35.550
You get that integral.
00:16:38.826 --> 00:16:40.920
OK, now what do we do?
00:16:44.912 --> 00:16:46.409
Yep?
00:16:46.409 --> 00:16:49.980
AUDIENCE: [INAUDIBLE]
00:16:49.980 --> 00:16:50.980
YUFEI ZHAO: OK, so yeah.
00:16:50.980 --> 00:16:54.700
So you see an l2 of Fourier,
the first automatic reaction
00:16:54.700 --> 00:16:56.890
should be to use a
Plancherel or Parseval, OK?
00:16:56.890 --> 00:17:04.589
So apply Plancherel identity
to each of these factors.
00:17:04.589 --> 00:17:13.520
We find that each
of those factors
00:17:13.520 --> 00:17:20.040
is equal to this l2 sum
in the physical space.
00:17:23.750 --> 00:17:25.609
OK, so this square
root, the same thing.
00:17:25.609 --> 00:17:27.980
Square root again.
00:17:27.980 --> 00:17:28.520
OK.
00:17:28.520 --> 00:17:34.750
And then we find
that because there
00:17:34.750 --> 00:17:36.720
was a hypothesis--
in the hypothesis,
00:17:36.720 --> 00:17:40.430
there was a bound M on
this sum of squares.
00:17:43.610 --> 00:17:46.090
You have that down there.
00:17:46.090 --> 00:17:48.080
And similarly, with
the other two terms.
00:18:01.526 --> 00:18:02.050
OK.
00:18:02.050 --> 00:18:03.511
So that proves the
counting lemma.
00:18:07.359 --> 00:18:08.802
Question?
00:18:08.802 --> 00:18:12.760
AUDIENCE: Last time, the
term on the right-hand side
00:18:12.760 --> 00:18:16.525
was the maximum over
non-zero frequency?
00:18:16.525 --> 00:18:17.730
YUFEI ZHAO: OK.
00:18:17.730 --> 00:18:18.230
OK.
00:18:18.230 --> 00:18:22.230
So the question is, last time
we had a counting lemma that
00:18:22.230 --> 00:18:24.120
looked slightly different.
00:18:24.120 --> 00:18:26.730
But I claimed they're all
really the same counting lemma.
00:18:26.730 --> 00:18:28.320
They're all the same proofs.
00:18:28.320 --> 00:18:30.810
If you run this
proof, it won't work.
00:18:30.810 --> 00:18:33.150
If you take what
we did last time,
00:18:33.150 --> 00:18:35.260
it's the same kind of proofs.
00:18:35.260 --> 00:18:36.760
So last time we had
a counting lemma
00:18:36.760 --> 00:18:44.120
where we had the same
f, f, f essentially.
00:18:44.120 --> 00:18:48.660
We now have-- I allow you
to essentially take three
00:18:48.660 --> 00:18:49.640
different things, and--
00:18:49.640 --> 00:18:52.980
OK, so both-- in
both cases, you're
00:18:52.980 --> 00:18:54.750
running through
this calculation,
00:18:54.750 --> 00:18:57.378
but they look
slightly different.
00:18:57.378 --> 00:19:02.555
AUDIENCE: [INAUDIBLE]
00:19:02.555 --> 00:19:03.430
YUFEI ZHAO: So, yeah.
00:19:03.430 --> 00:19:04.435
So I agree.
00:19:04.435 --> 00:19:05.810
It doesn't look
exactly the same,
00:19:05.810 --> 00:19:07.520
but if you think about
what's involved in the proof,
00:19:07.520 --> 00:19:08.520
they're the same proofs.
00:19:11.232 --> 00:19:12.630
OK.
00:19:12.630 --> 00:19:16.170
Any more questions?
00:19:16.170 --> 00:19:19.060
All right.
00:19:19.060 --> 00:19:20.670
So now, we have
this counting lemma,
00:19:20.670 --> 00:19:25.660
so let's start our proof of
Roth's theorem in the integers.
00:19:25.660 --> 00:19:27.820
As with last time, there
will be three steps,
00:19:27.820 --> 00:19:32.175
as mentioned up there, OK?
00:19:32.175 --> 00:19:37.590
In the first step, let us
show that if you are 3AP-free,
00:19:37.590 --> 00:19:40.130
then we can obtain a density--
00:19:40.130 --> 00:19:42.198
a large Fourier coefficient.
00:19:56.640 --> 00:19:59.200
Yeah, so in this course,
this counting lemma,
00:19:59.200 --> 00:20:01.450
we actually solved this--
00:20:01.450 --> 00:20:03.970
basically this kind of proof
for the first time when we
00:20:03.970 --> 00:20:07.000
discussed graph counting
lemma, back in the chapter
00:20:07.000 --> 00:20:09.810
on Szemerédi's regularity lemma.
00:20:09.810 --> 00:20:12.430
And sure, they all
look literally--
00:20:12.430 --> 00:20:14.740
not exactly the same,
but they're all really
00:20:14.740 --> 00:20:16.400
the same kind of proofs, right?
00:20:16.400 --> 00:20:17.050
So I want--
00:20:17.050 --> 00:20:19.540
I'm showing you the same thing
in many different guises.
00:20:19.540 --> 00:20:20.873
But they're all the same proofs.
00:20:23.690 --> 00:20:28.540
So if you are a set
that is 3AP-free--
00:20:37.630 --> 00:20:41.480
and as with last time,
I'm going to call
00:20:41.480 --> 00:20:46.550
alpha the density of A now
inside this progression,
00:20:46.550 --> 00:20:49.820
this length N progression.
00:20:49.820 --> 00:20:54.560
And suppose N is large enough.
00:20:59.460 --> 00:21:07.580
OK, so the conclusion now is
that there exists some theta
00:21:07.580 --> 00:21:19.170
such that if you look
at this sum over here
00:21:19.170 --> 00:21:23.930
as a sum over both integers--
00:21:23.930 --> 00:21:26.680
actually, let me do
the sum only from 1
00:21:26.680 --> 00:21:35.040
to uppercase N. Claim that--
00:21:35.040 --> 00:21:39.510
OK, so-- so it's saying
what this title says.
00:21:39.510 --> 00:21:43.205
If you are 3AP-free
and this N is large
00:21:43.205 --> 00:21:45.580
enough relative to the density,
you think of this density
00:21:45.580 --> 00:21:49.990
alpha is a constant, then
I can find a large Fourier
00:21:49.990 --> 00:21:50.860
coefficient.
00:21:50.860 --> 00:21:54.010
Now, there's a small
difference, and this
00:21:54.010 --> 00:21:56.140
is related to what you
were asking earlier,
00:21:56.140 --> 00:21:59.710
between how we set things up now
versus what happened last time.
00:21:59.710 --> 00:22:05.530
So last time, we just looked
for a Fourier coefficient
00:22:05.530 --> 00:22:10.080
corresponding to a non-zero r.
00:22:10.080 --> 00:22:13.340
Now, I'm not
restricting non-zero,
00:22:13.340 --> 00:22:15.700
but I don't start with
an indicator function.
00:22:15.700 --> 00:22:19.280
I start with the demeaned
indicator function.
00:22:19.280 --> 00:22:24.340
I take out the mean so that
the zeroth coefficient,
00:22:24.340 --> 00:22:28.450
so to speak, which corresponds
to the mean, is already 0.
00:22:28.450 --> 00:22:32.790
So you don't get to use
that for your coefficient.
00:22:32.790 --> 00:22:34.840
So if you didn't do
this, if you just
00:22:34.840 --> 00:22:36.420
tried to do this
last time, I mean,
00:22:36.420 --> 00:22:38.045
you can also do
exactly the same setup.
00:22:38.045 --> 00:22:40.180
But if you don't
demean it, then--
00:22:40.180 --> 00:22:42.400
if you don't have this
term, then this statement
00:22:42.400 --> 00:22:46.960
is trivially true, because I
can take theta equal to 0, OK?
00:22:46.960 --> 00:22:48.300
But I don't want that.
00:22:48.300 --> 00:22:52.900
I want an actual significant
Fourier improvement.
00:22:52.900 --> 00:22:53.930
So I take--
00:22:53.930 --> 00:22:58.540
I do this demean, and then
I consider its Fourier
00:22:58.540 --> 00:22:59.583
coefficient.
00:23:02.260 --> 00:23:02.760
OK.
00:23:02.760 --> 00:23:06.220
Any questions about
the statement?
00:23:06.220 --> 00:23:09.810
Yeah, so this demeaning is
really important, right?
00:23:09.810 --> 00:23:12.530
So that's something that's a
very common technique whenever
00:23:12.530 --> 00:23:14.140
you do this kind of analysis.
00:23:14.140 --> 00:23:16.510
So make sure you're--
00:23:16.510 --> 00:23:19.940
so that you're-- yeah, so you're
looking at functions with mean
00:23:19.940 --> 00:23:22.777
0.
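For reference, here is the Step 1 claim in symbols (a reconstruction; the constant 1/10 matches the "tenth of alpha squared N" that comes out at the end of the proof below): if A is a 3AP-free subset of {1, ..., N} with |A| = alpha N, and N is large enough in terms of alpha, then there exists theta with

$$
\Bigl|\sum_{x=1}^{N} \bigl(\mathbf{1}_A(x) - \alpha\bigr)\, e(x\theta)\Bigr| \;\ge\; \frac{\alpha^2 N}{10}.
$$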
00:23:22.777 --> 00:23:23.610
Let's see the proof.
00:23:28.190 --> 00:23:33.290
We have the
following information
00:23:33.290 --> 00:23:39.670
about all 3AP counts in A.
Because A is 3AP-free, OK,
00:23:39.670 --> 00:23:43.320
so what is the value of lambda
sub 3 of the indicator of A?
00:23:46.100 --> 00:23:49.440
Lambda sub 3, if you
look at the expression,
00:23:49.440 --> 00:23:55.770
it basically sums over all
3APs, but A has no 3APs,
00:23:55.770 --> 00:23:58.500
except for the trivial ones.
00:23:58.500 --> 00:24:01.020
So we'll only consider
the trivial 3APs,
00:24:01.020 --> 00:24:06.210
which contribute exactly
the size of A, which
00:24:06.210 --> 00:24:10.820
is alpha N from trivial 3APs.
00:24:14.870 --> 00:24:20.030
On the other hand, what
do we know about lambda 3
00:24:20.030 --> 00:24:25.256
of this interval from 1 to N?
00:24:25.256 --> 00:24:28.410
OK, so how many 3APs are there?
00:24:28.410 --> 00:24:33.640
OK, so roughly, it's going
to be about N squared over 2.
00:24:33.640 --> 00:24:38.470
And in fact, it will be
at least N squared over 2,
00:24:38.470 --> 00:24:41.860
because to generate
a 3AP, I just
00:24:41.860 --> 00:24:49.070
have to pick a first
term and a third term,
00:24:49.070 --> 00:24:51.760
and I'm OK as long as
they're the same parity.
00:24:58.195 --> 00:25:00.410
And then you have a 3AP.
00:25:00.410 --> 00:25:04.220
So the same parity
cuts you down by half,
00:25:04.220 --> 00:25:11.190
so you have at least N squared
over 2 3APs from 1 through N.
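As a quick sanity check on the "same parity" count just described, here is a small brute-force sketch (illustrative only):

```python
def lambda3_interval(N):
    """Number of pairs (x, y), with y any integer (possibly zero or negative),
    such that x, x+y, x+2y all lie in {1, ..., N} -- which is what lambda 3 of
    the indicator of the interval counts.  Equivalently, the number of ordered
    pairs (first term, third term) of the same parity, since the middle term
    is then forced."""
    count = 0
    for x in range(1, N + 1):
        for y in range(-(N - 1), N):
            if 1 <= x + y <= N and 1 <= x + 2 * y <= N:
                count += 1
    return count

for N in [10, 11, 50, 101]:
    print(N, lambda3_interval(N), N * N / 2)  # the count is always >= N^2/2
```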
00:25:11.190 --> 00:25:14.930
So now, let's look at
how to apply the counting
00:25:14.930 --> 00:25:16.480
lemma to this setting.
00:25:16.480 --> 00:25:18.420
So we have the counting
lemma up there,
00:25:18.420 --> 00:25:19.930
where I now want to apply it--
00:25:22.600 --> 00:25:28.960
so apply counting
to, on one hand,
00:25:28.960 --> 00:25:34.780
the indicator function of A
so we get the count of 3APs in A,
00:25:34.780 --> 00:25:43.770
but also compared to
the normalized indicator
00:25:43.770 --> 00:25:45.252
on the interval.
00:25:45.252 --> 00:25:46.940
OK, so maybe this is
a good point for me
00:25:46.940 --> 00:25:50.425
to pause and remind you that
the spirit of this whole proof
00:25:50.425 --> 00:25:54.950
is understanding structure
versus pseudorandomness, OK?
00:25:54.950 --> 00:25:57.680
So as was the case last time.
00:25:57.680 --> 00:26:03.620
So we want to understand, in
what ways is A pseudorandom?
00:26:03.620 --> 00:26:06.020
And here, "pseudorandom,"
just as with last time,
00:26:06.020 --> 00:26:08.540
means having small
Fourier coefficients,
00:26:08.540 --> 00:26:11.856
being Fourier uniform.
00:26:11.856 --> 00:26:16.850
If A is pseudorandom,
which here, means f and g
00:26:16.850 --> 00:26:18.830
are close to each other.
00:26:18.830 --> 00:26:20.420
That's what being
pseudorandom means,
00:26:20.420 --> 00:26:21.920
then the counting
lemma will tell us
00:26:21.920 --> 00:26:25.790
that f and g should
have similar AP counts.
00:26:25.790 --> 00:26:31.190
But A has basically no AP
count, so they should not
00:26:31.190 --> 00:26:35.970
be close to each other.
00:26:35.970 --> 00:26:38.610
So that's the strategy, to
show that A is not pseudorandom
00:26:38.610 --> 00:26:41.490
in this sense, and thereby
extracting a large Fourier
00:26:41.490 --> 00:26:43.240
coefficient.
00:26:43.240 --> 00:26:47.490
So we apply counting
to these two functions,
00:26:47.490 --> 00:26:49.350
and we obtain that.
00:26:54.860 --> 00:26:55.360
OK.
00:26:55.360 --> 00:27:03.000
So this quantity, which
corresponds to lambda 3 of g,
00:27:03.000 --> 00:27:17.300
minus alpha N. So these were
lambda 3 of g and lambda 3 of f.
00:27:17.300 --> 00:27:18.760
So it is upper-bounded.
00:27:18.760 --> 00:27:23.897
The difference is
upper-bounded by the--
00:27:28.010 --> 00:27:29.760
using the counting
lemma, we find
00:27:29.760 --> 00:27:31.500
that their difference
is upper-bounded
00:27:31.500 --> 00:27:32.670
by the following quantity.
00:27:43.760 --> 00:27:46.850
Namely, you look at the
difference between f and g
00:27:46.850 --> 00:27:52.082
and evaluate its maximum
Fourier coefficient.
00:27:52.082 --> 00:27:53.010
OK.
00:27:53.010 --> 00:27:58.760
So if A is pseudorandom,
meaning that it is Fourier uniform--
00:27:58.760 --> 00:28:04.840
this l infinity
norm is small, then
00:28:04.840 --> 00:28:10.010
I should expect lots
and lots of 3APs in A,
00:28:10.010 --> 00:28:12.390
but because that
is not the case,
00:28:12.390 --> 00:28:14.510
we should be able to
conclude that there is
00:28:14.510 --> 00:28:16.745
some large Fourier coefficient.
00:28:20.000 --> 00:28:21.090
All right, so thus--
00:28:33.131 --> 00:28:36.720
so rearranging the equation
above, we have that--
00:28:44.030 --> 00:28:45.270
so this should be a square.
00:28:51.480 --> 00:28:52.100
OK.
00:28:52.100 --> 00:28:54.260
So we have this expression here.
00:28:54.260 --> 00:28:56.690
And now we are--
00:28:56.690 --> 00:29:03.320
OK, so let me simplify
this expression slightly.
00:29:03.320 --> 00:29:07.550
And now we're using that N
is sufficiently large, OK?
00:29:07.550 --> 00:29:12.290
So we're using N is
sufficiently large.
00:29:12.290 --> 00:29:23.202
So this quantity is at least
a tenth of alpha squared N.
00:29:23.202 --> 00:29:24.910
OK, and that's the
conclusion, all right?
00:29:24.910 --> 00:29:28.937
So that's the conclusion
of this step here.
00:29:28.937 --> 00:29:29.770
What does this mean?
00:29:29.770 --> 00:29:33.600
This means there
exists some theta
00:29:33.600 --> 00:29:37.050
so that the Fourier
coefficient at theta
00:29:37.050 --> 00:29:38.750
is at least the
claimed quantity.
00:29:42.530 --> 00:29:43.834
Any questions?
00:29:51.420 --> 00:29:52.050
All right.
00:29:52.050 --> 00:29:53.960
So that finishes step 1.
00:29:53.960 --> 00:29:57.720
So now let me go on step 2.
00:29:57.720 --> 00:30:02.120
In step 2, we wish to show that
if you have a large Fourier
00:30:02.120 --> 00:30:11.956
coefficient, then one can
obtain a density increment.
00:30:21.200 --> 00:30:26.720
So last time, we were working
in a finite field vector space.
00:30:26.720 --> 00:30:30.470
A Fourier coefficient, OK,
so which is a dual vector,
00:30:30.470 --> 00:30:34.020
corresponds to some hyperplane.
00:30:34.020 --> 00:30:37.610
And having a large
Fourier coefficient
00:30:37.610 --> 00:30:40.380
then implies that
the density of A
00:30:40.380 --> 00:30:43.140
on the cosets of
those hyperplanes
00:30:43.140 --> 00:30:46.720
must be not all
close to each other.
00:30:46.720 --> 00:30:48.570
All right, so one
of the hyperplanes
00:30:48.570 --> 00:30:53.380
must have significantly
higher density than the rest.
00:30:53.380 --> 00:30:55.300
OK, so we want to do
something similar here,
00:30:55.300 --> 00:30:57.580
except we run into this
technical difficulty
00:30:57.580 --> 00:31:01.250
where there are no
subspaces anymore.
00:31:01.250 --> 00:31:05.243
So the Fourier character, namely
corresponding to this theta,
00:31:05.243 --> 00:31:06.160
is just a real number.
00:31:06.160 --> 00:31:07.510
It doesn't divide up your space.
00:31:07.510 --> 00:31:09.070
It doesn't divide
up your 1 through N
00:31:09.070 --> 00:31:11.050
very nicely into sub chunks.
00:31:11.050 --> 00:31:15.820
But we still want to use this
theta to chop up 1 through N
00:31:15.820 --> 00:31:20.860
into smaller spaces so
that we can iterate and do
00:31:20.860 --> 00:31:24.730
density increment.
00:31:24.730 --> 00:31:25.910
All right.
00:31:25.910 --> 00:31:28.920
So let's see what we can do.
00:31:28.920 --> 00:31:36.980
So given this theta,
what we would like to do
00:31:36.980 --> 00:31:50.260
is to partition this 1 through
N into subprogressions.
00:31:53.564 --> 00:32:07.030
OK, so chop up 1 through
N into sub APs such that
00:32:07.030 --> 00:32:12.230
if you evaluate for--
so this theta is fixed.
00:32:12.230 --> 00:32:17.990
So on each sub AP,
this function here
00:32:17.990 --> 00:32:28.000
is roughly constant
on each of your parts.
00:32:31.150 --> 00:32:34.000
Last time, we had this
Fourier character,
00:32:34.000 --> 00:32:37.120
and then we chopped it up
using these three hyperplanes.
00:32:37.120 --> 00:32:40.390
And each hyperplane,
the Fourier character
00:32:40.390 --> 00:32:43.078
is literally constant, OK?
00:32:43.078 --> 00:32:46.670
So you have-- and so
that's what we work with.
00:32:46.670 --> 00:32:49.450
And now, you cannot get
them to be exactly constant,
00:32:49.450 --> 00:32:53.020
but the next best thing we can
hope for is to get this Fourier
00:32:53.020 --> 00:32:56.097
character to be
roughly constant.
00:32:56.097 --> 00:32:58.430
OK, so we're going to do some
partitioning that allows us
00:32:58.430 --> 00:33:00.860
to achieve this approximately.
00:33:00.860 --> 00:33:03.740
And let me give
you some intuition
00:33:03.740 --> 00:33:04.910
about why this is true.
00:33:04.910 --> 00:33:07.570
And this is not exactly
a surprising fact.
00:33:07.570 --> 00:33:10.040
The intuition is
just that if you
00:33:10.040 --> 00:33:11.900
look at what this
function behaves like--
00:33:17.444 --> 00:33:19.950
all right, so what's
going on here?
00:33:19.950 --> 00:33:27.600
You are on the unit circle,
and you are jumping by theta.
00:33:31.288 --> 00:33:42.080
OK, so you just keep
jumping by theta and so on.
00:33:42.080 --> 00:33:49.370
And I want to show that I
can chop up my progression
00:33:49.370 --> 00:33:57.690
into a bunch of almost periodic
pieces, where in each part,
00:33:57.690 --> 00:34:01.370
I'm staying inside a small arc.
00:34:01.370 --> 00:34:06.560
So in the extreme case of this
where it is very easy to see
00:34:06.560 --> 00:34:18.159
is if x is some rational number,
a over b, with b fairly small,
00:34:18.159 --> 00:34:22.010
then we can--
00:34:22.010 --> 00:34:34.190
so then, this character is
actually constant on APs
00:34:34.190 --> 00:34:35.659
with common difference b.
00:34:43.063 --> 00:34:43.563
Yep?
00:34:43.563 --> 00:34:46.708
AUDIENCE: Is theta
supposed to be [INAUDIBLE]??
00:34:46.708 --> 00:34:48.875
YUFEI ZHAO: Ah, so theta,
yes. so theta-- thank you.
00:34:48.875 --> 00:34:50.270
So theta 2 pi.
00:34:54.174 --> 00:34:57.370
AUDIENCE: Like, is x
equal to your theta?
00:34:57.370 --> 00:34:58.078
YUFEI ZHAO: Yeah.
00:34:58.078 --> 00:34:59.054
Thank you.
00:34:59.054 --> 00:35:00.520
So theta equals-- yeah.
00:35:00.520 --> 00:35:06.180
So if theta is some rational
with some small denominator--
00:35:06.180 --> 00:35:13.420
so then you are literally
jumping in periodic steps
00:35:13.420 --> 00:35:15.690
on the unit circle.
00:35:15.690 --> 00:35:21.230
So if you partition 1 through N according
to the exact same periods,
00:35:21.230 --> 00:35:26.300
you have that this character
is exactly constant in each
00:35:26.300 --> 00:35:28.190
of your progressions.
00:35:28.190 --> 00:35:32.210
Now, in general, the theta
you get out of that proof
00:35:32.210 --> 00:35:35.400
might not have this
very nice form,
00:35:35.400 --> 00:35:38.340
but we can at
least approximately
00:35:38.340 --> 00:35:39.840
achieve the desired effect.
00:35:42.630 --> 00:35:43.130
OK.
00:35:46.890 --> 00:35:47.700
Any questions?
00:36:15.560 --> 00:36:16.260
OK.
00:36:16.260 --> 00:36:20.100
So to achieve approximately the
desired effect, what we'll do
00:36:20.100 --> 00:36:25.370
is to find something
so that b times theta
00:36:25.370 --> 00:36:29.485
is not quite an integer, but
very close to an integer.
00:36:29.485 --> 00:36:31.610
OK, so this, probably many
of you have seen before.
00:36:31.610 --> 00:36:37.470
It's a classic
pigeonhole-type result.
00:36:37.470 --> 00:36:41.510
It's usually attributed
to Dirichlet.
00:36:45.020 --> 00:36:54.990
So if you have theta, a
real number, and a delta,
00:36:54.990 --> 00:37:00.300
kind of a tolerance,
then there exists
00:37:00.300 --> 00:37:09.350
a positive integer d at
most 1 over delta such
00:37:09.350 --> 00:37:19.030
that d times theta is
very close to an integer.
00:37:19.030 --> 00:37:23.720
OK, so this norm
here is distance
00:37:23.720 --> 00:37:25.850
to the closest integer.
00:37:33.215 --> 00:37:38.390
All right, so the proof is
by pigeonhole principle.
00:37:38.390 --> 00:37:44.560
So if we let N be 1
over delta rounded down
00:37:44.560 --> 00:37:51.970
and consider the numbers 0,
theta, 2 theta, 3 theta, and so
00:37:51.970 --> 00:37:55.630
on, to N theta--
00:37:55.630 --> 00:38:08.040
so by pigeonhole, there
exists i theta and j theta, so
00:38:08.040 --> 00:38:12.660
two different terms
of the sequence such
00:38:12.660 --> 00:38:20.700
that they differ by
less than-- at most
00:38:20.700 --> 00:38:22.590
delta in their fractional parts.
00:38:30.260 --> 00:38:36.920
OK, so now take d to be
the difference between i and j.
00:38:36.920 --> 00:38:37.820
OK, and that works.
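Here is a small sketch of the pigeonhole argument just given (floating point, so only illustrative for moderate values of 1/delta; the bucket count ceil(1/delta) is a harmless rounding of the "d at most 1 over delta" in the statement):

```python
from math import ceil

def dirichlet(theta, delta):
    """Return a positive integer d <= ceil(1/delta) such that d*theta is
    within delta of an integer.  Pigeonhole: the m+1 fractional parts of
    0, theta, 2*theta, ..., m*theta fall into m intervals of width
    1/m <= delta, so two of them must collide."""
    m = ceil(1 / delta)
    seen = {}                       # interval index -> first k landing there
    for k in range(m + 1):
        b = int(((k * theta) % 1.0) * m)
        if b in seen:
            return k - seen[b]      # then ||d*theta|| < 1/m <= delta
        seen[b] = k

d = dirichlet(2 ** 0.5, 1e-3)
frac = (d * 2 ** 0.5) % 1.0
print(d, min(frac, 1.0 - frac))     # d*sqrt(2) is within 1e-3 of an integer
```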
00:38:48.060 --> 00:38:48.560
OK.
00:38:48.560 --> 00:38:52.720
So even though you don't
have exactly rational,
00:38:52.720 --> 00:38:55.210
you have approximately rational.
00:38:55.210 --> 00:38:56.200
So this is a--
00:38:56.200 --> 00:38:58.970
it's a simple rational
approximation statement.
00:38:58.970 --> 00:39:00.890
And using this
rational approximation,
00:39:00.890 --> 00:39:05.660
we can now try to do
the intuition here,
00:39:05.660 --> 00:39:10.490
pretending that we're working
with rational numbers, indeed.
00:39:16.442 --> 00:39:20.670
OK, so if we take
eta between 0 and 1
00:39:20.670 --> 00:39:34.850
and theta irrational and
suppose N is large enough--
00:39:34.850 --> 00:39:37.090
OK, so here, C
means there exists
00:39:37.090 --> 00:39:40.490
some sufficiently large--
00:39:40.490 --> 00:39:45.420
some constant C such that
the statement is true, OK?
00:39:45.420 --> 00:39:49.420
So you can think of C as, say,
a million here.
00:39:49.420 --> 00:39:51.160
That should be fine.
00:39:51.160 --> 00:39:55.740
So then there exists--
00:39:55.740 --> 00:40:12.330
so then one can partition
1 through N into sub-APs,
00:40:12.330 --> 00:40:14.960
which we'll call P i.
00:40:14.960 --> 00:40:24.250
And each having length
between cube root of N
00:40:24.250 --> 00:40:33.110
and twice the cube root of
N such that this character
00:40:33.110 --> 00:40:35.700
that we want to stay
roughly constant indeed
00:40:35.700 --> 00:40:39.550
does not change very much.
00:40:39.550 --> 00:40:47.610
If you look at two terms in
the same AP, in the sub-AP,
00:40:47.610 --> 00:41:00.670
then the value of this
character on each P sub i
00:41:00.670 --> 00:41:03.080
is roughly the same.
00:41:03.080 --> 00:41:08.920
So they don't vary by
more than eta on each P i.
00:41:08.920 --> 00:41:16.350
So here, we're partitioning this
1 through N into sub-APs
00:41:16.350 --> 00:41:20.355
so that this guy here
stays roughly constant.
00:41:25.940 --> 00:41:26.440
OK.
00:41:26.440 --> 00:41:29.470
Any questions?
00:41:29.470 --> 00:41:29.970
All right.
00:41:29.970 --> 00:41:33.728
So think about how
you might prove this.
00:41:33.728 --> 00:41:34.770
Let's take a quick break.
00:41:37.580 --> 00:41:41.160
So you see, we are basically
following the same strategy
00:41:41.160 --> 00:41:44.770
as the proof from last
time, but this second step,
00:41:44.770 --> 00:41:47.370
which we're on right now,
needs to be somewhat modified
00:41:47.370 --> 00:41:52.230
because you cannot cut this
space up into pieces where
00:41:52.230 --> 00:41:54.330
your character is constant.
00:41:54.330 --> 00:41:58.140
Well, if they're roughly
constant, then we're good to go,
00:41:58.140 --> 00:42:01.300
so that's what we're doing now.
00:42:01.300 --> 00:42:03.000
So let's prove the
statement up there.
00:42:15.350 --> 00:42:15.860
All right.
00:42:15.860 --> 00:42:18.950
So let's prove this
statement over here.
00:42:18.950 --> 00:42:37.600
So using Dirichlet's lemma, we
find that there exists some d.
00:42:37.600 --> 00:42:39.615
OK, so I'll write down
some number for now.
00:42:39.615 --> 00:42:40.490
Don't worry about it.
00:42:40.490 --> 00:42:45.090
It will come up shortly why I
write this specific quantity.
00:42:47.670 --> 00:43:06.330
So there exists some d which is
not too big, such that d theta
00:43:06.330 --> 00:43:09.830
is very close to an integer.
00:43:09.830 --> 00:43:12.540
So now, I'm literally
applying Dirichlet's lemma.
00:43:16.050 --> 00:43:17.680
OK.
00:43:17.680 --> 00:43:24.826
So given such d--
00:43:24.826 --> 00:43:27.640
so how big is this d?
00:43:27.640 --> 00:43:32.740
You see that because I assumed
that N is sufficiently large,
00:43:32.740 --> 00:43:48.800
if we choose that C large
enough, d is at most root N. So
00:43:48.800 --> 00:43:53.570
given such d, which
is at most root N,
00:43:53.570 --> 00:44:04.180
you can partition 1 through
N into subprogressions
00:44:04.180 --> 00:44:09.580
with common difference d.
00:44:09.580 --> 00:44:14.550
Essentially, look at--
let's do classes mod d.
00:44:14.550 --> 00:44:19.130
So they're all going to have
length basically N over d.
00:44:19.130 --> 00:44:25.140
And I chop them up a
little bit further to get--
00:44:25.140 --> 00:44:33.390
so pieces of length
between cube root of N
00:44:33.390 --> 00:44:40.990
and twice cube root of N.
00:44:40.990 --> 00:44:41.490
OK.
00:44:41.490 --> 00:44:44.520
So I'm going to make
sure that all of my APs
00:44:44.520 --> 00:44:48.000
are roughly the same length.
00:44:48.000 --> 00:44:56.090
And now, inside each
subprogression--
00:45:00.790 --> 00:45:06.610
let me call this subprogression
P prime, subprogression P,
00:45:06.610 --> 00:45:09.430
let's look at how much
this character value can
00:45:09.430 --> 00:45:13.119
vary inside this progression.
00:45:19.346 --> 00:45:20.304
All right.
00:45:26.010 --> 00:45:29.420
OK, so how much can this vary?
00:45:29.420 --> 00:45:36.330
Well, because theta is such
that d times theta is very close
00:45:36.330 --> 00:45:41.778
to an integer and the length
of each progression is not too
00:45:41.778 --> 00:45:42.570
large-- so here's--
00:45:42.570 --> 00:45:44.710
I want some control
on the length.
00:45:44.710 --> 00:45:49.020
So we find that the
maximum variation
00:45:49.020 --> 00:45:56.695
is, at most, the size of
P, the length of P, times--
00:45:59.580 --> 00:46:04.530
so this-- that
difference over there.
00:46:04.530 --> 00:46:08.920
So all of these are exponential,
so I can shift them.
00:46:08.920 --> 00:46:19.995
Well, the length of P is at
most twice cube root of N. And--
00:46:19.995 --> 00:46:22.160
OK, so what is this quantity?
00:46:22.160 --> 00:46:25.470
So the point is that if
this fractional part here
00:46:25.470 --> 00:46:29.080
is very close to an
integer, then e to that,
00:46:29.080 --> 00:46:31.560
e to the 2 pi times that
i times some number should
00:46:31.560 --> 00:46:36.630
be very close to 1, because
what is happening here?
00:46:36.630 --> 00:46:42.050
This is the distance
between those two points
00:46:42.050 --> 00:46:45.230
on the circle, which
is at most bounded
00:46:45.230 --> 00:46:46.490
by the length of the arc.
00:46:57.460 --> 00:47:02.530
OK, so the chord length is
at most the arc length.
00:47:02.530 --> 00:47:05.560
So now, you put
everything here together,
00:47:05.560 --> 00:47:09.230
and apply the bound
that we got on d theta.
00:47:09.230 --> 00:47:11.410
So this is the reason for
choosing that weird number
00:47:11.410 --> 00:47:13.960
up there.
00:47:13.960 --> 00:47:30.050
We find that the variation
within each progression
00:47:30.050 --> 00:47:31.903
is at most eta, right?
00:47:31.903 --> 00:47:33.320
So the variation
of this character
00:47:33.320 --> 00:47:36.175
within each progression
is not very large, OK?
00:47:36.175 --> 00:47:37.050
And that's the claim.
00:47:39.830 --> 00:47:41.780
Any questions?
00:47:41.780 --> 00:47:45.920
All right, so this is the
analogous claim to the one
00:47:45.920 --> 00:47:46.780
that we had--
00:47:46.780 --> 00:47:48.500
the one that we used
last time, where
00:47:48.500 --> 00:47:51.950
we said that the character
is constant on each coset
00:47:51.950 --> 00:47:52.915
of the hyperplane.
00:47:52.915 --> 00:47:55.430
They're not exactly constant,
but almost good enough.
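Putting the last two pieces together, here is a sketch of the whole partition. The tolerance delta fed into the pigeonhole lemma is an assumed choice (the exact quantity written on the board is not in the transcript); it is picked so that the chord-versus-arc bound, 2 pi times 2 N^(1/3) times the distance from d theta to the nearest integer, comes out at most eta:

```python
from math import ceil, pi
import cmath

def e(t):
    """e(t) = exp(2*pi*i*t), the character notation from the lecture."""
    return cmath.exp(2j * pi * t)

def dirichlet(theta, delta):
    """Pigeonhole as above: d <= ceil(1/delta) with d*theta near an integer."""
    m = ceil(1 / delta)
    seen = {}
    for k in range(m + 1):
        b = int(((k * theta) % 1.0) * m)
        if b in seen:
            return k - seen[b]
        seen[b] = k

def partition_into_subAPs(N, theta, eta):
    """Split {1, ..., N} into sub-APs on which x -> e(x*theta) varies by less
    than eta.  For N large enough each part has length between N^(1/3) and
    2*N^(1/3); for small N the parts may come out shorter."""
    L = ceil(N ** (1 / 3))
    delta = eta / (4 * pi * L)          # assumed tolerance: 2*pi*(2L)*delta = eta
    d = dirichlet(theta, delta)         # common difference of every part
    parts = []
    for r in range(1, d + 1):           # split each residue class mod d
        cls = list(range(r, N + 1, d))
        q = max(1, len(cls) // L)       # near-equal chunks of size in [L, 2L]
        for j in range(q):
            parts.append(cls[j * len(cls) // q:(j + 1) * len(cls) // q])
    return parts

theta, eta = 2 ** 0.5, 0.5
parts = partition_into_subAPs(100_000, theta, eta)
worst = max(abs(e(x * theta) - e(P[0] * theta)) for P in parts for x in P)
print(len(parts), min(map(len, parts)), max(map(len, parts)), worst)  # worst < eta
```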
00:48:05.610 --> 00:48:06.300
All right.
00:48:06.300 --> 00:48:10.020
So the goal of step 2
is to show an energy--
00:48:10.020 --> 00:48:13.020
show a density increment, that
if you have a large Fourier
00:48:13.020 --> 00:48:15.180
coefficient, then
we want to claim
00:48:15.180 --> 00:48:18.720
that the density
goes up significantly
00:48:18.720 --> 00:48:21.940
on some subprogression.
00:48:21.940 --> 00:48:24.910
And the next part,
the next lemma,
00:48:24.910 --> 00:48:27.280
will get us to that goal.
00:48:27.280 --> 00:48:29.920
And this part is very
similar to the one
00:48:29.920 --> 00:48:34.600
that we saw from last time,
but with this new partition
00:48:34.600 --> 00:48:35.980
in mind, like I said.
00:48:35.980 --> 00:48:47.860
If you have A that is
3AP-free with density alpha,
00:48:47.860 --> 00:48:57.990
and N is large
enough, then there
00:48:57.990 --> 00:49:08.980
exists some subprogression
P. So by subprogression,
00:49:08.980 --> 00:49:11.560
I just mean that I'm starting
with original progression
00:49:11.560 --> 00:49:15.340
1 through N, and I'm zooming
into some subprogression,
00:49:15.340 --> 00:49:22.160
with the length of P fairly
long, so the length of P
00:49:22.160 --> 00:49:29.430
is at least cube root
of N, and such that A,
00:49:29.430 --> 00:49:32.760
when restricted to
this subprogression,
00:49:32.760 --> 00:49:36.295
has a density increment.
00:49:41.995 --> 00:49:44.950
OK, so originally, the
density of A is alpha,
00:49:44.950 --> 00:49:47.300
so we're zooming into
some subprogression P,
00:49:47.300 --> 00:49:48.980
which is a pretty
long subprogression,
00:49:48.980 --> 00:49:50.870
where the density
goes up significantly
00:49:50.870 --> 00:49:53.360
from A to essentially A--
00:49:53.360 --> 00:49:56.330
from alpha to roughly
alpha plus alpha squared.
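In symbols, the lemma being stated is (a reconstruction; the alpha squared over 40 matches the constant that comes out at the end of the proof below): if A, a subset of {1, ..., N}, is 3AP-free with |A| = alpha N, and N is large enough in terms of alpha, then there is a sub-AP P of {1, ..., N} with

$$
|P| \ge N^{1/3}
\qquad\text{and}\qquad
|A \cap P| \ge \Bigl(\alpha + \frac{\alpha^2}{40}\Bigr)\,|P|.
$$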
00:50:00.240 --> 00:50:00.740
OK.
00:50:07.120 --> 00:50:10.660
So we start with
A, a 3AP-free set.
00:50:10.660 --> 00:50:25.110
So from step 1, there exists
some theta with large--
00:50:25.110 --> 00:50:27.932
so that corresponds to a
large Fourier coefficient.
00:50:31.790 --> 00:50:46.070
So this sum here is large.
00:50:48.782 --> 00:50:54.710
OK, and now we use--
00:50:54.710 --> 00:51:00.510
OK, so-- so step 1 obtains us,
you know, this consequence.
00:51:00.510 --> 00:51:14.780
And from this theta, now we
apply the lemma up there to--
00:51:14.780 --> 00:51:16.760
so we apply lemma
with, let's say,
00:51:16.760 --> 00:51:21.350
eta being alpha squared over 30.
00:51:21.350 --> 00:51:23.690
OK, so the exact constants
are not so important.
00:51:23.690 --> 00:51:31.010
But when we apply the
lemma to partition 1 through N
00:51:31.010 --> 00:51:37.720
into a bunch
of subprogressions,
00:51:37.720 --> 00:51:40.570
which we'll call P1 through Pk.
00:51:43.462 --> 00:51:48.080
And each of these
progressions have
00:51:48.080 --> 00:51:59.750
length between cube root of
N and twice cube root of N.
00:51:59.750 --> 00:52:04.070
And I want to understand what
happens to the density of A
00:52:04.070 --> 00:52:07.680
when restricted to
these progressions.
00:52:07.680 --> 00:52:14.030
So starting with this
inequality over here,
00:52:14.030 --> 00:52:17.150
which suggests to us that
there must be some deviation.
00:52:30.191 --> 00:52:34.700
OK, so starting
with what we saw.
00:52:34.700 --> 00:52:40.950
And now, inside each
progression this e x theta
00:52:40.950 --> 00:52:44.050
is roughly constant.
00:52:44.050 --> 00:52:46.760
So if you pretend they
are actually constant,
00:52:46.760 --> 00:52:54.230
I can break up the sum,
depending on where the x's lie.
00:52:54.230 --> 00:52:57.530
So i from 1 to k.
00:52:57.530 --> 00:53:01.492
And let me sum inside
each progression.
00:53:13.560 --> 00:53:17.610
So by triangle inequality, I
can upper bound the first sum
00:53:17.610 --> 00:53:23.540
by where I now cut the sum into
progression by progression.
00:53:23.540 --> 00:53:29.520
And on each progression, this
character is roughly constant.
00:53:29.520 --> 00:53:34.760
So let me take out the
maximum possible deviations
00:53:34.760 --> 00:53:36.440
from them being constant.
00:53:36.440 --> 00:53:48.525
So upper bound-- again, you'll
find that we can essentially
00:53:48.525 --> 00:53:49.025
pretend--
00:53:52.190 --> 00:53:55.410
all right, so if
each exponential
00:53:55.410 --> 00:53:57.450
is constant on each
subprogression,
00:53:57.450 --> 00:54:00.750
then I might as well
just have this sum here.
00:54:00.750 --> 00:54:03.510
But I lose a little bit, because
it's not exactly constant.
00:54:03.510 --> 00:54:04.680
It's almost constant.
00:54:04.680 --> 00:54:06.140
So I lose a little bit.
00:54:06.140 --> 00:54:11.350
And that little bit is this eta.
00:54:11.350 --> 00:54:13.440
So you lose that
little bit of eta.
00:54:13.440 --> 00:54:18.960
And so on each
progression, P i, you
00:54:18.960 --> 00:54:22.830
lose at most something that's
essentially alpha squared
00:54:22.830 --> 00:54:24.265
times the length of P i.
00:54:27.240 --> 00:54:29.160
OK.
00:54:29.160 --> 00:54:32.880
Now, you see, I've chosen
the error parameter
00:54:32.880 --> 00:54:37.260
so that everything
I've lost is not
00:54:37.260 --> 00:54:41.250
so much more than the
initial bound I began with.
00:54:41.250 --> 00:54:47.100
So in particular,
we see that even
00:54:47.100 --> 00:54:49.800
if we had pretended that the
characters were constant,
00:54:49.800 --> 00:54:54.090
on each progression we would
have still obtained some lower
00:54:54.090 --> 00:54:56.880
bound of the total deviation.
00:55:08.680 --> 00:55:09.860
OK.
00:55:09.860 --> 00:55:15.130
And what is this
quantity over here?
00:55:15.130 --> 00:55:18.880
Oh, you see, I'm
restricting each sum
00:55:18.880 --> 00:55:23.590
to each subprogression,
but the sum
00:55:23.590 --> 00:55:26.440
here, even though it's written
as a sum, it's really
00:55:26.440 --> 00:55:31.990
counting how many elements
of A are in that progression.
00:55:31.990 --> 00:55:35.710
So this sum over here
is the same thing.
00:55:35.710 --> 00:55:38.543
OK, so let me write
it in a new board.
00:55:43.480 --> 00:55:45.213
Oh, we don't need
step 1 anymore.
00:55:49.077 --> 00:55:51.270
All right.
00:55:51.270 --> 00:55:52.440
So what we have--
00:56:01.386 --> 00:56:05.431
OK, so left-hand side over
there is this quantity here,
00:56:05.431 --> 00:56:07.690
all right?
00:56:07.690 --> 00:56:15.890
We see that the right-hand side,
even though you have that sum,
00:56:15.890 --> 00:56:19.010
it is really just counting
how many elements of A
00:56:19.010 --> 00:56:23.360
are in each progression versus
how many you should expect
00:56:23.360 --> 00:56:28.750
based on the overall
density of A. OK,
00:56:28.750 --> 00:56:32.220
so that should look similar
to what we got last time.
00:56:32.220 --> 00:56:35.280
I know the intuition
should be that, well,
00:56:35.280 --> 00:56:42.210
if the average deviation
is large, then one of them,
00:56:42.210 --> 00:56:47.447
one of these terms, should
have the density increment.
00:56:51.190 --> 00:56:53.950
If you try to do the next
step somewhat naively,
00:56:53.950 --> 00:56:56.840
you run into an issue,
because it could be--
00:56:56.840 --> 00:56:59.290
now, here you have k terms.
00:56:59.290 --> 00:57:07.270
It could be that you have all
the densities except for one
00:57:07.270 --> 00:57:12.640
going up only slightly, and one
density dropping dramatically,
00:57:12.640 --> 00:57:15.580
in which case you may not
have a significant density
00:57:15.580 --> 00:57:18.210
increment, all right?
00:57:18.210 --> 00:57:20.840
So we want to show that
on some progression
00:57:20.840 --> 00:57:24.350
the density increases
significantly.
00:57:24.350 --> 00:57:26.870
So far, from this
inequality, we just
00:57:26.870 --> 00:57:29.720
know that there is some
subprogression where
00:57:29.720 --> 00:57:32.450
the density changes
significantly.
00:57:32.450 --> 00:57:34.850
But of course, the overall
density, the average density,
00:57:34.850 --> 00:57:36.120
should remain constant.
00:57:36.120 --> 00:57:38.950
So if some goes up,
others must go down.
00:57:38.950 --> 00:57:41.180
But if you just try to
do an averaging argument,
00:57:41.180 --> 00:57:42.890
you have to be careful, OK?
00:57:42.890 --> 00:57:45.290
So there was a trick last
time, which we didn't really
00:57:45.290 --> 00:57:49.910
need last time, but now,
it's much more useful, where
00:57:49.910 --> 00:57:52.220
I want to show
that if this holds,
00:57:52.220 --> 00:57:55.850
then some P i sees
a large energy--
00:57:55.850 --> 00:57:58.520
sees a large density increment.
00:57:58.520 --> 00:58:08.990
And to do that, let me rewrite
the sum as the following,
00:58:08.990 --> 00:58:11.660
so I keep the same expression.
00:58:14.880 --> 00:58:18.980
And I add a term, which
is the same thing,
00:58:18.980 --> 00:58:20.560
but without the absolute value.
00:58:29.434 --> 00:58:35.000
OK, so you see these
guys, they total to 0,
00:58:35.000 --> 00:58:40.350
so adding that term doesn't
change my expression.
00:58:40.350 --> 00:58:45.460
But now, the summand
is always non-negative.
00:58:45.460 --> 00:58:49.380
So it's either 0 or
twice this number,
00:58:49.380 --> 00:58:51.500
depending on the
sign of that number.
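In symbols, the trick just described: write x_i for the sum over x in P_i of (1_A(x) minus alpha). Since the P_i partition {1, ..., N} and A has density alpha, the x_i sum to zero, so

$$
\sum_{i=1}^{k} \bigl(|x_i| + x_i\bigr) \;=\; \sum_{i=1}^{k} |x_i|,
$$

and each summand on the left is either 0 or 2 x_i, hence nonnegative. So if the right-hand side is at least some c greater than 0, then some single index i has |x_i| + x_i at least c/k, and in particular x_i is at least c/(2k), which is a genuine density increment of A on P_i.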
00:58:54.610 --> 00:58:55.110
OK.
00:58:55.110 --> 00:58:57.600
So comparing left-hand
side and right-hand side,
00:58:57.600 --> 00:59:00.790
we see that there
must be some i.
00:59:00.790 --> 00:59:08.730
So hence, there exists some i
such that the left-hand side--
00:59:13.410 --> 00:59:15.270
the i-th term on
the left hand side
00:59:15.270 --> 00:59:18.272
is less than or equal
to the i-th term
00:59:18.272 --> 00:59:19.230
on the right-hand side.
00:59:24.940 --> 00:59:28.640
And in particular, that
term should be positive,
00:59:28.640 --> 00:59:31.460
so it implies--
00:59:31.460 --> 00:59:35.740
OK, so how can you
get this inequality?
00:59:35.740 --> 00:59:40.540
It implies simply that the
restriction of A to this P i
00:59:40.540 --> 00:59:50.810
is at least alpha plus alpha
squared over 40 times the length of P i.
00:59:50.810 --> 00:59:54.320
So this claim here just says
that on the i-th progression,
00:59:54.320 --> 00:59:58.570
there's a significant
density increment.
00:59:58.570 --> 01:00:02.342
If it were a decrement,
that term would have been 0.
01:00:02.342 --> 01:00:04.610
So remember that.
01:00:16.040 --> 01:00:17.280
OK.
01:00:17.280 --> 01:00:20.610
So this achieves what we
were looking for in step 2,
01:00:20.610 --> 01:00:24.720
namely to find that there
is a density increment
01:00:24.720 --> 01:00:28.041
on some long subprogression.
01:00:28.041 --> 01:00:30.630
OK, so now we can go to
step 3, which is basically
01:00:30.630 --> 01:00:35.460
the same as what we saw
last time, where now we
01:00:35.460 --> 01:00:38.929
want to iterate this
density increment.
01:00:47.911 --> 01:00:55.900
OK, so it's basically the
same argument as last time,
01:00:55.900 --> 01:01:00.300
but you start with
density alpha,
01:01:00.300 --> 01:01:04.140
and each step in the
iteration, the density
01:01:04.140 --> 01:01:05.600
goes up quite a bit.
01:01:09.740 --> 01:01:14.030
And we want to control
the total number of steps,
01:01:14.030 --> 01:01:20.704
knowing that the final
density is at most 1, always.
01:01:20.704 --> 01:01:23.370
OK.
01:01:23.370 --> 01:01:25.020
So how many steps can you take?
01:01:25.020 --> 01:01:28.080
Right, so this was the same
argument that we saw last time.
01:01:28.080 --> 01:01:38.450
We see that starting with
alpha, alpha 0 being alpha,
01:01:38.450 --> 01:01:41.810
it doubles after a certain
number of steps, right?
01:01:41.810 --> 01:01:45.590
So we double after--
01:01:45.590 --> 01:01:48.710
OK, so how many
steps do you need?
01:01:48.710 --> 01:01:53.430
Well, I want to get
from alpha to 2 alpha.
01:01:53.430 --> 01:02:00.960
So I need at most
alpha over 40 steps.
01:02:00.960 --> 01:02:03.770
OK, so last time I
was slightly sloppy.
01:02:03.770 --> 01:02:09.630
And so there's basically a
floor, upper floor down--
01:02:09.630 --> 01:02:11.060
rounding up or down situation.
01:02:11.060 --> 01:02:13.620
But I should add a plus 1.
01:02:13.620 --> 01:02:14.120
Yeah.
01:02:14.120 --> 01:02:16.120
AUDIENCE: Shouldn't
it be 40 over alpha?
01:02:16.120 --> 01:02:17.340
YUFEI ZHAO: 40-- thank you.
01:02:17.340 --> 01:02:19.930
40 over alpha, yeah.
01:02:19.930 --> 01:02:25.080
So you double after at
most that many steps.
01:02:25.080 --> 01:02:31.560
And then now, you're at
density at least 2 alpha.
01:02:31.560 --> 01:02:39.880
So we double after at most 20
over alpha steps, and so on.
01:02:39.880 --> 01:02:44.690
And we double at most--
01:02:44.690 --> 01:02:51.690
well, basically, log sub
2 of 1 over alpha times.
01:02:55.658 --> 01:02:58.600
OK, so anyway, putting
everything together,
01:02:58.600 --> 01:03:04.350
we see that the
total number of steps
01:03:04.350 --> 01:03:08.790
is at most on the
order of 1 over alpha.
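Here is a worst-case simulation of the step count just described (the increment alpha to alpha plus alpha squared over 40 is the one from the lemma's conclusion; everything else is illustrative):

```python
def increment_steps(alpha0):
    """Run the density increment in the worst case: each step the density
    goes up by exactly alpha^2/40, and the process must stop before the
    density exceeds 1.  The doubling argument bounds the step count by
    O(1/alpha0)."""
    alpha, steps = alpha0, 0
    while alpha <= 1:
        alpha += alpha * alpha / 40
        steps += 1
    return steps

for a in [0.5, 0.1, 0.05, 0.01]:
    s = increment_steps(a)
    print(a, s, round(s * a, 1))   # s * a stays bounded, so s = O(1/alpha)
```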
01:03:12.340 --> 01:03:15.490
When you stop, you can
only stop for one reason,
01:03:15.490 --> 01:03:18.210
because in the--
01:03:18.210 --> 01:03:18.710
yeah.
01:03:18.710 --> 01:03:19.880
So, yeah.
01:03:19.880 --> 01:03:24.790
So in step 1,
remember, the iteration
01:03:24.790 --> 01:03:28.762
said that the
process terminates.
01:03:28.762 --> 01:03:37.589
The process can always go
on, and then terminates
01:03:37.589 --> 01:03:43.170
if the length--
01:03:43.170 --> 01:03:45.590
so you're now at
step i, so let N i be
01:03:45.590 --> 01:03:54.580
the length of the progression,
at step i, is at most C times
01:03:54.580 --> 01:03:56.990
alpha i to the minus 12th.
01:03:56.990 --> 01:04:01.180
So we have a--
01:04:01.180 --> 01:04:04.980
right, so provided
that N is large enough,
01:04:04.980 --> 01:04:07.020
you can always pass
to a subprogression.
01:04:07.020 --> 01:04:09.540
And here, when you pass to
subprogression, of course,
01:04:09.540 --> 01:04:12.030
you can re-label
that subprogression.
01:04:12.030 --> 01:04:15.140
And it's now, you
know, 1 through N i.
01:04:15.140 --> 01:04:16.553
Right, so I can-- it's--
01:04:16.553 --> 01:04:17.970
all the progressions
are basically
01:04:17.970 --> 01:04:21.373
the same as the first
set of positive integers.
01:04:21.373 --> 01:04:23.850
I'm sorry, prefix of
the positive integers.
01:04:23.850 --> 01:04:26.590
So when we stop at
step i, we must
01:04:26.590 --> 01:04:31.290
have N sub i being at most
this quantity over here, which
01:04:31.290 --> 01:04:36.510
is at most C times the initial
density raised to the minus
01:04:36.510 --> 01:04:37.450
12th.
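In symbols: the density never decreases along the iteration, so alpha sub i is at least alpha, and therefore

\[
N_i \;\le\; C\,\alpha_i^{-12} \;\le\; C\,\alpha^{-12}.
\]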
01:04:37.450 --> 01:04:49.050
So therefore, the initial length
N of the space is bounded by--
01:04:49.050 --> 01:04:55.595
well, each time we went
down by a cube root at most.
01:04:55.595 --> 01:04:57.560
Right, so the fine--
01:04:57.560 --> 01:05:00.980
if you stop at step i,
then the initial length
01:05:00.980 --> 01:05:07.670
is at most N sub i to the
power 3 to the i--
01:05:07.670 --> 01:05:10.922
each time you're
doing a cube root.
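Written out, with N sub 0 equal to the initial length N, losing at most a cube root in each of the i steps gives

\[
N \;=\; N_0 \;\le\; N_1^{\,3} \;\le\; N_2^{\,3^2} \;\le\; \cdots \;\le\; N_i^{\,3^{i}}.
\]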
01:05:10.922 --> 01:05:13.135
OK, so you put
everything together.
01:05:20.280 --> 01:05:22.360
There are at most that many
iterations, and when you stop,
01:05:22.360 --> 01:05:24.781
the length is at most this.
01:05:24.781 --> 01:05:26.680
So you put them
together, and then you
01:05:26.680 --> 01:05:37.320
find that the N must be at
most double exponential in 1
01:05:37.320 --> 01:05:40.140
over the density.
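A rough way to combine the two bounds (at most on the order of 1 over alpha iterations, so i = O(1/alpha), and N sub i at most C alpha to the minus 12 when you stop; the exact constants are not important here):

\[
N \;\le\; N_i^{\,3^{i}}
\;\le\; \left( C\,\alpha^{-12} \right)^{3^{\,O(1/\alpha)}}
\;=\; \exp\!\Big( \exp\!\big( O(1/\alpha) \big) \Big),
\]

and taking logarithms twice gives alpha = O(1 / log log N).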
01:05:40.140 --> 01:05:47.670
In other words, the
density is at most on the order of 1
01:05:47.670 --> 01:05:55.665
over log log N, which is what
we claimed in Roth's theorem,
01:05:55.665 --> 01:05:57.060
so what we claimed up there.
01:06:01.460 --> 01:06:01.960
OK.
01:06:01.960 --> 01:06:03.228
So that finishes the proof.
01:06:12.210 --> 01:06:13.307
Any questions?
01:06:18.780 --> 01:06:22.230
So the message here is that it's
the same proof as last time,
01:06:22.230 --> 01:06:24.660
but we need to do
a bit more work.
01:06:24.660 --> 01:06:26.220
And none of this
work is difficult,
01:06:26.220 --> 01:06:27.780
but it is more technical.
01:06:27.780 --> 01:06:29.580
And that's often
the theme that you
01:06:29.580 --> 01:06:31.500
see in additive combinatorics.
01:06:31.500 --> 01:06:34.650
This is part of the reason
why the finite field model is
01:06:34.650 --> 01:06:38.130
a really nice playground,
because there, things
01:06:38.130 --> 01:06:41.850
tend to be cleaner,
but the ideas are often similar,
01:06:41.850 --> 01:06:43.140
or the same ideas.
01:06:43.140 --> 01:06:43.890
Not always.
01:06:43.890 --> 01:06:46.242
Next lecture, we'll
see one technique
01:06:46.242 --> 01:06:47.700
where there's a
dramatic difference
01:06:47.700 --> 01:06:50.940
between the finite field vector
space and over the integers.
01:06:50.940 --> 01:06:53.170
But for many things in
additive combinatorics,
01:06:53.170 --> 01:06:54.720
the finite field
vector space is just
01:06:54.720 --> 01:06:58.172
a nicer place to be in to try
all your ideas and techniques.
01:07:04.440 --> 01:07:09.090
Let me comment on some analogies
between these two approaches
01:07:09.090 --> 01:07:10.200
and compare the bounds.
01:07:13.090 --> 01:07:15.280
So on one hand--
01:07:15.280 --> 01:07:20.130
OK, so we saw last time
this proof in F3 to the N,
01:07:20.130 --> 01:07:26.770
and now in the integers
inside this interval of length
01:07:26.770 --> 01:07:33.280
N. So let me write
uppercase N in both cases
01:07:33.280 --> 01:07:38.205
to be the size of the
overall ambient space.
01:07:38.205 --> 01:07:43.310
OK, so what kind of bounds
do we get in both situations?
01:07:43.310 --> 01:07:46.050
So last time, for--
01:07:46.050 --> 01:07:51.720
in F3 to the N, we
got a bound which
01:07:51.720 --> 01:08:06.220
is of the order N over log N,
whereas today, the bound is
01:08:06.220 --> 01:08:08.560
somewhat worse.
01:08:08.560 --> 01:08:09.920
It's a little bit worse.
01:08:09.920 --> 01:08:13.690
Now we lose an extra log.
01:08:13.690 --> 01:08:16.810
So where do we lose an
extra log in this argument?
01:08:16.810 --> 01:08:19.060
So where does these
two argument--
01:08:19.060 --> 01:08:22.980
where do these two arguments
differ, quantitatively?
01:08:26.240 --> 01:08:26.740
Yep?
01:08:26.740 --> 01:08:29.569
AUDIENCE: When you're dividing
by 3 versus [INAUDIBLE]??
01:08:29.569 --> 01:08:31.560
YUFEI ZHAO: OK, so
you're dividing by--
01:08:31.560 --> 01:08:41.960
so here, in each
iteration, over here,
01:08:41.960 --> 01:08:46.399
your size of the iteration--
01:08:46.399 --> 01:08:48.300
I mean, each iteration
the size of the space
01:08:48.300 --> 01:08:53.279
goes down by a factor of
3, whereas over here, it
01:08:53.279 --> 01:08:56.979
could go down to a cube root.
01:08:56.979 --> 01:08:58.399
And that's precisely right.
01:08:58.399 --> 01:09:02.420
So that accounts for this
extra log in the bounds.
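A hedged side-by-side of the divergence, using the same O(1 over alpha) step count in both settings and ignoring the exact stopping thresholds: in F3 to the N each step only shrinks the space by a factor of 3, while in the integers each step can shrink the length all the way down to a cube root, so

\[
\mathbb{F}_3^n:\quad N_i \ge \frac{N}{3^{\,i}}
\;\Rightarrow\; N \le 3^{\,O(1/\alpha)}\cdot \mathrm{poly}(1/\alpha)
\;\Rightarrow\; \alpha \lesssim \frac{1}{\log N},
\]
\[
\{1,\dots,N\}:\quad N_i \ge N^{(1/3)^{i}}
\;\Rightarrow\; N \le \exp\!\big(\exp(O(1/\alpha))\big)
\;\Rightarrow\; \alpha \lesssim \frac{1}{\log\log N}.
\]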
01:09:05.630 --> 01:09:08.490
So while this is
a great analogy,
01:09:08.490 --> 01:09:10.760
it's not a perfect analogy.
01:09:10.760 --> 01:09:13.279
You see there is this
divergence here between the two
01:09:13.279 --> 01:09:14.790
situations.
01:09:14.790 --> 01:09:16.430
And so then you
might ask, is there
01:09:16.430 --> 01:09:21.240
some way to avoid the
loss, this extra log factor
01:09:21.240 --> 01:09:22.950
loss over here?
01:09:22.950 --> 01:09:25.260
Is there some way to
carry out the strategy
01:09:25.260 --> 01:09:28.689
that we did last
time in a way that
01:09:28.689 --> 01:09:31.300
is much more faithful to
that strategy of passing down
01:09:31.300 --> 01:09:33.060
to subspaces?
01:09:33.060 --> 01:09:36.090
So here, we pass
to progressions.
01:09:36.090 --> 01:09:40.649
And because we have to do this
extra pigeonhole-type argument,
01:09:40.649 --> 01:09:41.970
it was somewhat lost--
01:09:41.970 --> 01:09:48.770
we lost a power, which
translated into this extra log.
01:09:48.770 --> 01:09:51.910
So it turns out there
is some way to do this.
01:09:51.910 --> 01:09:53.600
So let me just
briefly mention what's
01:09:53.600 --> 01:09:55.640
the idea that is
involved, all right?
01:09:55.640 --> 01:09:59.930
So last time, we
went down from--
01:09:59.930 --> 01:10:02.660
so the main objects
that we were passing
01:10:02.660 --> 01:10:05.180
would start with a vector
space and pass down
01:10:05.180 --> 01:10:08.310
to a subspace, which is
also a vector space, right?
01:10:08.310 --> 01:10:17.490
So you can define subspaces in
F3 to the N by the following.
01:10:17.490 --> 01:10:24.310
So I can start with some set
of characters U, and I define--
01:10:24.310 --> 01:10:26.630
some set of characters
S, and I define
01:10:26.630 --> 01:10:40.822
U sub S to be basically the
orthogonal complement of S.
01:10:40.822 --> 01:10:42.972
OK, so this is a subspace.
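In symbols, with the convention from last time (my notation for what is on the board):

\[
U_S \;=\; S^{\perp} \;=\; \{\, x \in \mathbb{F}_3^n \;:\; r \cdot x = 0 \ \text{ for all } r \in S \,\},
\]

a subspace of codimension at most the size of S.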
01:10:42.972 --> 01:10:44.430
And these were the
kind of subspace
01:10:44.430 --> 01:10:48.168
that we saw last time, because
the S's or the R's that
01:10:48.168 --> 01:10:50.460
came out of the proof last
time, every time we saw one,
01:10:50.460 --> 01:10:51.390
we threw it in.
01:10:51.390 --> 01:10:56.330
We cut down to a smaller
subspace, and we repeat.
01:10:56.330 --> 01:10:59.018
But the progressions, they
don't really look like this.
01:10:59.018 --> 01:11:00.560
So the question is,
is there some way
01:11:00.560 --> 01:11:02.935
to do this argument so that
you end up with progressions,
01:11:02.935 --> 01:11:04.010
that look like that?
01:11:04.010 --> 01:11:06.300
And it turns out there is a way.
01:11:06.300 --> 01:11:08.150
And there are these
objects, which
01:11:08.150 --> 01:11:13.215
we'll see more later in this
course, called Bohr sets.
01:11:13.215 --> 01:11:18.030
OK, so they were
used by Bourgain
01:11:18.030 --> 01:11:23.220
to mimic this Meshulam argument
that we saw last time more
01:11:23.220 --> 01:11:26.010
faithfully into
the integers, where
01:11:26.010 --> 01:11:30.840
we're going to come up with some
set of integers that resemble--
01:11:30.840 --> 01:11:34.950
much more closely resemble
this notion of subspaces
01:11:34.950 --> 01:11:37.290
in the finite field setting.
01:11:37.290 --> 01:11:41.020
And for this, it's much
easier to work inside a group.
01:11:41.020 --> 01:11:42.900
So instead of working
in the integers,
01:11:42.900 --> 01:11:45.110
let's work inside Z mod nZ.
01:11:45.110 --> 01:11:47.580
So we can do Fourier
transform in Z mod nZ, so
01:11:47.580 --> 01:11:50.040
the discrete Fourier
analysis here.
01:11:50.040 --> 01:11:52.335
So in Z mod nZ, we define--
01:11:55.510 --> 01:11:59.260
so given a set S, let's
define this Bohr
01:11:59.260 --> 01:12:14.080
set to be the set of
elements of Z mod nZ such
01:12:14.080 --> 01:12:20.470
that if you look at what
really is supposed to resemble
01:12:20.470 --> 01:12:26.280
this thing over here, OK?
01:12:26.280 --> 01:12:34.280
If this quantity is
small for all s in S--
01:12:34.280 --> 01:12:36.930
OK, so we put that element
into this Bohr set.
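One standard way to write the definition down (the width parameter epsilon, i.e. what "small" means, is my guess for what was on the board):

\[
\operatorname{Bohr}(S, \varepsilon) \;=\; \left\{\, x \in \mathbb{Z}/n\mathbb{Z} \;:\; \left| e^{2\pi i x s / n} - 1 \right| \le \varepsilon \ \text{ for all } s \in S \,\right\},
\]

so membership asks every character indexed by S to be close to 1 at x, mimicking the condition r dot x = 0 from the finite field setting.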
01:12:39.670 --> 01:12:44.870
OK, so these sets, they function
much more like subspaces.
01:12:44.870 --> 01:12:48.890
So they are the
analog of subspaces
01:12:48.890 --> 01:12:53.420
inside Z mod nZ, which,
when n is prime,
01:12:53.420 --> 01:12:54.430
has no nontrivial proper subgroups.
01:12:54.430 --> 01:12:56.990
It has no natural
subspace structure.
01:12:56.990 --> 01:12:58.880
But by looking at
these Bohr sets,
01:12:58.880 --> 01:13:03.020
they provide a
natural way to set up
01:13:03.020 --> 01:13:05.330
this argument so that you can--
01:13:05.330 --> 01:13:07.910
but with many more
technicalities,
01:13:07.910 --> 01:13:13.630
repeat these kinds of arguments
more similar to last time,
01:13:13.630 --> 01:13:17.220
but passing not to
subspaces but to Bohr sets.
01:13:17.220 --> 01:13:22.270
And then with quite
a bit of extra work,
01:13:22.270 --> 01:13:32.370
one can obtain bounds of the
form N over a polylogarithm of N.
01:13:32.370 --> 01:13:36.510
So the current best bound
I mentioned last time
01:13:36.510 --> 01:13:40.080
is of this type, which is
through further refinements
01:13:40.080 --> 01:13:41.347
of this technique.
01:13:50.610 --> 01:13:52.230
The last thing I
want to mention today
01:13:52.230 --> 01:13:56.040
is, so far we've been
talking about 3APs.
01:13:56.040 --> 01:13:59.600
So what about four term
arithmetic progressions?
01:13:59.600 --> 01:14:04.998
OK, do any of the things that we
talk about here work for 4APs?
01:14:04.998 --> 01:14:07.290
And there's an analogy to be
made here compared to what
01:14:07.290 --> 01:14:09.085
we discussed with graphs.
01:14:09.085 --> 01:14:11.770
So in graphs, we had a triangle
counting lemma and a triangle
01:14:11.770 --> 01:14:12.930
removal lemma.
01:14:12.930 --> 01:14:15.210
And then we said
that to prove 4APs,
01:14:15.210 --> 01:14:18.390
we would need the hypergraph
version, the simplex removal
01:14:18.390 --> 01:14:20.580
lemma, hypergraph
regularity lemma.
01:14:20.580 --> 01:14:22.500
And that was much
more difficult.
01:14:22.500 --> 01:14:24.170
And that analogy
carries through,
01:14:24.170 --> 01:14:27.040
and the same kinds of
difficulties come up.
01:14:27.040 --> 01:14:30.700
So it can be done, but
you need something more.
01:14:30.700 --> 01:14:34.590
And the main message I
want you to take away
01:14:34.590 --> 01:14:42.940
is that, for 4APs-- while we
had a counting lemma that
01:14:42.940 --> 01:14:45.980
says that the
Fourier coefficients,
01:14:45.980 --> 01:14:54.540
so the Fourier transform,
controls 3AP counts,
01:14:54.540 --> 01:14:59.100
it turns out the same
is not true for 4APs.
01:14:59.100 --> 01:15:05.200
So the Fourier does
not control 4AP counts.
01:15:10.410 --> 01:15:12.340
Let me give you some--
01:15:12.340 --> 01:15:12.840
OK.
01:15:12.840 --> 01:15:14.910
So in fact, in the
homework for this week,
01:15:14.910 --> 01:15:19.870
there's a specific example of
a set where it has uniformly
01:15:19.870 --> 01:15:21.490
small Fourier coefficients.
01:15:21.490 --> 01:15:22.990
But that's the wrong
number of 4APs.
01:15:27.130 --> 01:15:28.500
So the following-- it is true--
01:15:28.500 --> 01:15:32.610
OK, so it is true that you
have Szemerédi's theorem in--
01:15:32.610 --> 01:15:34.770
let's just talk about
the finite field setting,
01:15:34.770 --> 01:15:36.840
where things are a
bit easier to discuss.
01:15:36.840 --> 01:15:42.510
So it is true that the size
of the biggest 4AP-free subset of F5
01:15:42.510 --> 01:15:45.180
to the N is a tiny
fraction-- it's a little-o of 1
01:15:45.180 --> 01:15:46.530
fraction of the entire space.
01:15:46.530 --> 01:15:50.460
OK, I use F5 here,
because in F3,
01:15:50.460 --> 01:15:53.690
it doesn't make sense
to talk about 4APs.
01:15:53.690 --> 01:15:58.990
So F5, but it doesn't really
matter which specific field.
01:15:58.990 --> 01:16:03.550
So you can prove this using
hypergraph removal, same proof,
01:16:03.550 --> 01:16:07.330
verbatim, that we saw earlier,
if you have hypergraph removal.
01:16:07.330 --> 01:16:10.610
But if you want to try to prove
it using Fourier analysis,
01:16:10.610 --> 01:16:15.640
well, it doesn't work quite
using the same strategy.
01:16:15.640 --> 01:16:17.590
But in fact, there
is a modification
01:16:17.590 --> 01:16:20.800
that would allow
you to make it work.
01:16:20.800 --> 01:16:23.670
But you need an extension
of Fourier analysis.
01:16:23.670 --> 01:16:30.790
And it is known as higher
order Fourier analysis, which
01:16:30.790 --> 01:16:35.470
was an important development in
modern additive combinatorics
01:16:35.470 --> 01:16:38.590
that initially arose
in Gowers' work
01:16:38.590 --> 01:16:41.190
where he gave a new proof
of Szemerédi's theorem.
01:16:41.190 --> 01:16:42.880
So Gowers didn't
work in this setting.
01:16:42.880 --> 01:16:44.020
He worked in the integers.
01:16:44.020 --> 01:16:47.140
But many of the ideas
originated from his paper,
01:16:47.140 --> 01:16:49.120
and then subsequently
developed by a lot
01:16:49.120 --> 01:16:51.250
of people in various settings.
01:16:51.250 --> 01:16:53.877
I just want to give you one
specific statement, what
01:16:53.877 --> 01:16:55.710
this higher-order Fourier
analysis looks like.
01:16:55.710 --> 01:16:58.450
So it's a fancy term,
and the statements often
01:16:58.450 --> 01:17:00.070
get very technical.
01:17:00.070 --> 01:17:04.140
But I just want to give you one
concrete thing to take away.
01:17:04.140 --> 01:17:05.790
All right, so for 4APs,
01:17:05.790 --> 01:17:08.220
higher-order Fourier
analysis, roughly--
01:17:08.220 --> 01:17:12.484
OK, so it also goes by the name
quadratic Fourier analysis.
01:17:16.436 --> 01:17:22.370
OK, so let me give you
a very specific instance
01:17:22.370 --> 01:17:23.570
of the theorem.
01:17:27.090 --> 01:17:31.710
And this can be sometimes
called an inverse theorem
01:17:31.710 --> 01:17:34.272
for quadratic Fourier analysis.
01:17:40.790 --> 01:17:48.700
OK, so for every delta,
there exists some c such
01:17:48.700 --> 01:17:51.490
that the following is true.
01:17:51.490 --> 01:18:03.390
If A is a subset of F5 to the
N with density alpha and such
01:18:03.390 --> 01:18:05.132
that it's--
01:18:05.132 --> 01:18:07.750
OK, so now lambda
sub 4, so this is
01:18:07.750 --> 01:18:10.380
the 4AP density,
so similar to 3AP,
01:18:10.380 --> 01:18:11.710
but now you write four terms.
01:18:11.710 --> 01:18:18.630
The 4AP density of A differs
from alpha to the fourth
01:18:18.630 --> 01:18:21.542
by at least delta.
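To be concrete about the hypothesis (writing out the "four terms" with the same normalization as the 3AP density; the notation is mine):

\[
\Lambda_4(A) \;=\; \mathbb{E}_{x, d \in \mathbb{F}_5^n}\, 1_A(x)\, 1_A(x+d)\, 1_A(x+2d)\, 1_A(x+3d),
\qquad
\left| \Lambda_4(A) - \alpha^4 \right| \;\ge\; \delta.
\]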
01:18:21.542 --> 01:18:25.050
OK, so for 3APs, then we said
that now A has a large Fourier
01:18:25.050 --> 01:18:26.750
coefficient, right?
01:18:29.700 --> 01:18:33.640
So for-- OK.
01:18:33.640 --> 01:18:37.530
For 4APs, that may not be
true, but the following
01:18:37.530 --> 01:18:38.375
is true, right?
01:18:38.375 --> 01:18:48.668
So then there exists a
non-zero quadratic polynomial
01:18:55.500 --> 01:19:08.620
f in n variables over F5
such that the indicator
01:19:08.620 --> 01:19:13.280
function of A correlates with
this quadratic exponential
01:19:13.280 --> 01:19:13.780
phase.
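In symbols, the conclusion should read roughly as follows (my notation; omega is a primitive fifth root of unity and c = c(delta) is the constant from the statement):

\[
\left| \, \mathbb{E}_{x \in \mathbb{F}_5^n}\, 1_A(x)\, \omega^{\,f(x)} \right| \;\ge\; c,
\qquad \omega = e^{2\pi i / 5},
\]

where f is the non-zero quadratic polynomial in n variables over F5. For 3APs, the analogous conclusion held with f linear, which is exactly a large Fourier coefficient.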
01:19:23.200 --> 01:19:24.833
So Fourier analysis,
the conclusion
01:19:24.833 --> 01:19:26.250
that we got from
counting lemma is
01:19:26.250 --> 01:19:28.440
that you have some
linear function
01:19:28.440 --> 01:19:33.650
f, such that this
quantity is large,
01:19:33.650 --> 01:19:35.630
this large Fourier coefficient.
01:19:35.630 --> 01:19:37.980
OK, so that is not true for 4APs.
01:19:37.980 --> 01:19:39.870
But what is true
is that now you can
01:19:39.870 --> 01:19:44.860
look at quadratic exponential
phases, and then it is true.
01:19:44.860 --> 01:19:48.720
So that's the content
of higher order Fourier.
01:19:48.720 --> 01:19:51.660
I mean, that's the example of
higher-order Fourier analysis.
01:19:51.660 --> 01:19:56.010
And you can imagine with
this type of result,
01:19:56.010 --> 01:19:58.200
and with quite a
bit more work, you
01:19:58.200 --> 01:20:01.560
can try to follow a
similar density increment
01:20:01.560 --> 01:20:07.110
strategy to prove
Szemerédi's theorem for 4APs.