WEBVTT
00:00:19.195 --> 00:00:19.820
YUFEI ZHAO: OK.
00:00:19.820 --> 00:00:22.760
I want to begin by giving
some comments regarding
00:00:22.760 --> 00:00:23.865
the Wikipedia assignment.
00:00:23.865 --> 00:00:26.480
So I sent out an email
about this last night.
00:00:26.480 --> 00:00:29.510
And so first of all, thank
you for your contributions
00:00:29.510 --> 00:00:32.280
to this assignment,
to Wikipedia.
00:00:32.280 --> 00:00:36.410
It plays a really important role
in educating a wider audience
00:00:36.410 --> 00:00:39.260
what this subject is about
because as many of you
00:00:39.260 --> 00:00:41.555
have experienced,
the first time--
00:00:41.555 --> 00:00:43.520
if you heard of
some term, you have
00:00:43.520 --> 00:00:45.560
no idea what it is,
you put it into Google,
00:00:45.560 --> 00:00:49.220
and often Wikipedia's entry
is one of the top results that
00:00:49.220 --> 00:00:50.120
come up.
00:00:50.120 --> 00:00:52.700
And what gets written
in there actually
00:00:52.700 --> 00:00:56.990
plays a fairly influential role
in educating a broader audience
00:00:56.990 --> 00:00:59.520
about what this topic is about.
00:00:59.520 --> 00:01:04.370
And so I want to emphasize
that this is not simply
00:01:04.370 --> 00:01:05.930
some homework assignment.
00:01:05.930 --> 00:01:08.300
It's something that is
a real contribution.
00:01:08.300 --> 00:01:10.610
And it's something
that contributes
00:01:10.610 --> 00:01:13.370
to the dissemination
of knowledge.
00:01:13.370 --> 00:01:15.860
And for that, it is really
important to do a good job,
00:01:15.860 --> 00:01:21.020
to do it right, to do it well,
so that next time someone--
00:01:21.020 --> 00:01:22.962
maybe even yourselves--
maybe you've
00:01:22.962 --> 00:01:24.920
forgotten what the subject
is about and go back
00:01:24.920 --> 00:01:28.750
and you want to look it up
again and remind yourself.
00:01:28.750 --> 00:01:32.420
You will have a useful
resource to look into.
00:01:32.420 --> 00:01:35.450
But also let's say someone
wants to find out what
00:01:35.450 --> 00:01:38.180
is extremal graph theory about?
00:01:38.180 --> 00:01:40.700
What is additive
combinatorics about?
00:01:40.700 --> 00:01:43.190
You want them to land on
the page that points you
00:01:43.190 --> 00:01:46.160
to the right type of
places, that points you
00:01:46.160 --> 00:01:50.870
to useful resources, that opens
doors so that you can explore
00:01:50.870 --> 00:01:51.680
further.
00:01:51.680 --> 00:01:55.480
And some of the
contributions, indeed,
00:01:55.480 --> 00:01:57.170
serve that purpose well.
00:01:57.170 --> 00:01:59.600
It opens doors to many things.
00:01:59.600 --> 00:02:01.520
And part of the spirit
of this assignment
00:02:01.520 --> 00:02:04.010
is for you to do
your own research,
00:02:04.010 --> 00:02:07.940
do your own literature search,
to learn more about a subject,
00:02:07.940 --> 00:02:11.060
more than what has been
taught in these lectures
00:02:11.060 --> 00:02:13.770
so that you can write
about it on Wikipedia.
00:02:13.770 --> 00:02:15.970
You can link to
more references, you
00:02:15.970 --> 00:02:20.460
know, show the world what
the subject is about.
00:02:20.460 --> 00:02:25.130
OK, continuing with
our program, so we
00:02:25.130 --> 00:02:28.160
spent the past few lectures
developing tools regarding
00:02:28.160 --> 00:02:31.220
the structure of set
addition so that we
00:02:31.220 --> 00:02:33.050
can prove Freiman's theorem.
00:02:33.050 --> 00:02:35.630
So that's been our goal
for the past few lectures.
00:02:35.630 --> 00:02:38.570
And today we'll finally
prove Freiman's theorem.
00:02:38.570 --> 00:02:40.690
But let me first remind
you the statement--
00:02:40.690 --> 00:02:43.300
so in Freiman's theorem, we
would like to show that if you
00:02:43.300 --> 00:02:44.580
are in a subset--
00:02:44.580 --> 00:02:47.780
if you're in the integers, you
have a set A that has bounded
00:02:47.780 --> 00:02:52.130
doubling-- doubling
constant at most k--
00:02:52.130 --> 00:02:55.430
then the set must be contained
in a small, generalized
00:02:55.430 --> 00:02:59.120
arithmetic progression,
with dimension and size
00:02:59.120 --> 00:03:03.750
only a constant
factor larger than a.
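For reference, the statement being recalled can be sketched as follows; this is the formulation used in earlier lectures, with d(K) and f(K) denoting constants depending on K only:

```latex
% Freiman's theorem: sets of bounded doubling lie in small GAPs.
% If $A \subseteq \mathbb{Z}$ satisfies $|A+A| \le K|A|$, then
\[
  A \subseteq Q \quad \text{for some GAP } Q \text{ with }
  \dim Q \le d(K) \quad \text{and} \quad |Q| \le f(K)\,|A| .
\]
```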
00:03:03.750 --> 00:03:08.190
We developed various tools the
past three lectures building up
00:03:08.190 --> 00:03:09.750
to various intermediate results.
00:03:09.750 --> 00:03:12.000
But we also collected this
very nice set of tools
00:03:12.000 --> 00:03:14.580
for proving Freiman's theorem.
00:03:14.580 --> 00:03:16.890
So let me review
some of them, which
00:03:16.890 --> 00:03:19.890
we'll encounter again today.
00:03:19.890 --> 00:03:24.510
The Plünnecke-Ruzsa inequality tells
you that if you have a set with
00:03:24.510 --> 00:03:27.690
small doubling, then the
further iterated sums are also
00:03:27.690 --> 00:03:28.860
controlled.
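In symbols, the inequality being recalled says (as stated earlier in the course):

```latex
% Pl\"unnecke--Ruzsa inequality: small doubling controls iterated sumsets.
\[
  |A+A| \le K|A|
  \;\Longrightarrow\;
  |mA - nA| \le K^{m+n}\,|A|
  \quad \text{for all integers } m, n \ge 0 .
\]
```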
00:03:28.860 --> 00:03:33.590
So I want you to think of these
parameters as k is a constant,
00:03:33.590 --> 00:03:36.040
so k to the some power
is still a constant,
00:03:36.040 --> 00:03:40.740
but also I don't really care
about polynomial changes in k.
00:03:40.740 --> 00:03:43.750
So I-- you know, we should
ignore polynomial changes in k
00:03:43.750 --> 00:03:46.900
and view this constant more or
less as the original k itself.
00:03:46.900 --> 00:03:51.140
So if the sum is-- if a plus a
is around the same size as a,
00:03:51.140 --> 00:03:56.470
then further iterations also do
not change the sizes very much.
00:03:56.470 --> 00:03:58.470
Ruzsa covering lemma: so
this was some statement
00:03:58.470 --> 00:04:03.550
that if x plus b looks like
it could be covered by copies
00:04:03.550 --> 00:04:06.580
of b, just in terms of their
sizes alone, then in fact,
00:04:06.580 --> 00:04:09.760
x could be covered by a
small number of translates
00:04:09.760 --> 00:04:11.560
of a slightly larger ball.
00:04:11.560 --> 00:04:14.110
But here B can be any set.
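In symbols, the covering statement being recalled is:

```latex
% Ruzsa covering lemma: if $X+B$ "looks coverable" by $K$ copies of $B$
% in terms of size alone, then $X$ is genuinely covered by $K$
% translates of $B-B$.
\[
  |X + B| \le K|B|
  \;\Longrightarrow\;
  X \subseteq T + B - B
  \quad \text{for some } T \subseteq X \text{ with } |T| \le K .
\]
```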
00:04:14.110 --> 00:04:16.870
We had a thing called
Ruzsa modeling lemma.
00:04:16.870 --> 00:04:18.370
In particular, a
consequence of it
00:04:18.370 --> 00:04:21.100
is that if a has small
doubling, then there
00:04:21.100 --> 00:04:23.290
exists a prime n
that's not too much
00:04:23.290 --> 00:04:26.450
bigger than the size of a,
and a very large proportion--
00:04:26.450 --> 00:04:29.440
a subset a prime containing
at least an eighth of a such
00:04:29.440 --> 00:04:32.080
that this subset
a prime is Freiman
00:04:32.080 --> 00:04:34.500
8 isomorphic to
a subset of z mod n.
00:04:34.500 --> 00:04:36.250
So even though you
start with a set that's
00:04:36.250 --> 00:04:38.350
potentially very
spread out, provided
00:04:38.350 --> 00:04:40.990
they have small
doubling, I can pick out
00:04:40.990 --> 00:04:43.540
a pretty large piece
of it and model it
00:04:43.540 --> 00:04:46.720
by something in a fairly
small cyclic group.
00:04:46.720 --> 00:04:48.530
And here the modeling
is 8 isomorphic,
00:04:48.530 --> 00:04:52.060
so it preserves sums
of up to eight terms.
00:04:55.060 --> 00:04:56.730
We had Bogolyubov's
lemma. So now we're
00:04:56.730 --> 00:04:59.310
inside a small cyclic group,
with a large subset
00:04:59.310 --> 00:05:01.200
of that small cyclic group.
00:05:01.200 --> 00:05:05.820
Then Bogolyubov's lemma says
that 2a minus 2a contains
00:05:05.820 --> 00:05:10.080
a large Bohr set-- a large
structured set within this cyclic
00:05:10.080 --> 00:05:11.210
group.
00:05:11.210 --> 00:05:14.270
And last time we showed that,
via the geometry of numbers--
00:05:14.270 --> 00:05:16.470
Minkowski's second
theorem-- one can deduce
00:05:16.470 --> 00:05:22.800
that every Bohr set of small
dimension and small width
00:05:22.800 --> 00:05:24.310
contains--
00:05:24.310 --> 00:05:31.980
of large width contains a
proper GAP that's pretty large.
00:05:31.980 --> 00:05:34.650
So putting these two together,
putting the last two things
00:05:34.650 --> 00:05:39.480
together, we obtain
that if you have
00:05:39.480 --> 00:05:47.690
a subset of the cyclic
group and n is prime--
00:05:47.690 --> 00:05:52.670
OK, so here in previous
statement n is prime.
00:05:52.670 --> 00:05:53.630
So n is prime.
00:05:56.591 --> 00:05:57.950
And a is pretty large.
00:06:00.680 --> 00:06:10.910
Then 2a minus 2a contains a
proper generalized arithmetic
00:06:10.910 --> 00:06:18.560
progression of dimension at
most alpha to the minus 2
00:06:18.560 --> 00:06:28.014
and size at least 1 over
40 to the d times n.
00:06:31.250 --> 00:06:33.510
So it's just starting
from the size of a.
00:06:33.510 --> 00:06:36.390
2a minus 2a contains
a pretty large GAP.
00:06:36.390 --> 00:06:38.640
So we're going to put
all of these ingredients
00:06:38.640 --> 00:06:40.380
together and show
that you can now
00:06:40.380 --> 00:06:46.580
contain the original
set, a, in a small GAP.
00:06:46.580 --> 00:06:51.130
So just from knowing that some
subset of it, 2a minus 2a,
00:06:51.130 --> 00:06:55.030
so think 2a prime minus 2a
prime, contains this large GAP.
00:06:55.030 --> 00:06:57.030
We're going to use it to
boost it up to a cover.
00:07:05.760 --> 00:07:08.160
So now let's prove
Freiman's theorem.
00:07:13.940 --> 00:07:16.024
Using the modeling lemma--
00:07:21.440 --> 00:07:24.740
using the modeling lemma--
the corollary of the modeling
00:07:24.740 --> 00:07:32.630
lemma-- we find that
since a plus a is size
00:07:32.630 --> 00:07:35.760
at most k times the
size of a, there
00:07:35.760 --> 00:07:45.510
exists some prime n at
most 2k to the 16 times the size of a.
00:07:45.510 --> 00:07:51.050
And so I'm just copying the
consequence of this modeling
00:07:51.050 --> 00:07:51.900
lemma.
00:07:51.900 --> 00:07:56.190
So I find a pretty large subset
a prime of a such that a prime is Freiman
00:07:56.190 --> 00:08:03.170
8 isomorphic to
a subset of z mod n.
00:08:09.550 --> 00:08:17.690
Now, applying the
final corollary
00:08:17.690 --> 00:08:29.220
with alpha being the density
of this a prime, which
00:08:29.220 --> 00:08:38.049
is at least an eighth of the size of a over
n, which is at least 1 over 16
00:08:38.049 --> 00:08:43.179
times k to the
power 16-- so all constants.
00:08:43.179 --> 00:08:48.050
We see that 2a prime minus--
00:08:48.050 --> 00:08:49.970
so let me actually-- let
me change the letters
00:08:49.970 --> 00:08:54.380
and call a prime b so I don't
have to keep on writing primes.
00:08:57.450 --> 00:09:02.090
So subset of a is called b.
00:09:02.090 --> 00:09:07.575
OK, so 2b minus 2b now
contains a large GAP.
00:09:12.850 --> 00:09:18.800
And the GAP has
dimension d bounded.
00:09:18.800 --> 00:09:22.370
So the dimension is bounded
by alpha to the minus 2.
00:09:22.370 --> 00:09:23.755
So it's some constant.
00:09:28.880 --> 00:09:32.490
And the size is pretty large.
00:09:32.490 --> 00:09:39.775
So the size is at least 1 over 40 to the d times n.
00:09:39.775 --> 00:09:41.150
If you only care
about constants,
00:09:41.150 --> 00:09:44.900
just remember that everything
that depends on k or d
00:09:44.900 --> 00:09:45.760
is a constant.
00:09:49.580 --> 00:09:50.080
OK.
00:09:53.920 --> 00:10:00.370
Because b is Freiman
8 isomorphic--
00:10:00.370 --> 00:10:06.340
b is Freiman 8 isomorphic to--
00:10:11.080 --> 00:10:13.450
ah, sorry.
00:10:13.450 --> 00:10:23.800
b is-- a prime is a subset of a
and b is the subset of z mod--
00:10:23.800 --> 00:10:28.540
b is a subset of z mod n that
a prime is 8 isomorphic to.
00:10:28.540 --> 00:10:37.330
So since b is 8 isomorphic
to a prime, every GAP in b--
00:10:40.330 --> 00:10:45.360
so if you think about what
8 isomorphism preserves,
00:10:45.360 --> 00:10:53.880
you find that if you look at 2b
minus 2b, it must be Freiman--
00:10:53.880 --> 00:11:00.760
2 isomorphic to 2a
prime minus 2a prime.
00:11:00.760 --> 00:11:02.730
So the point of
Freiman isomorphism
00:11:02.730 --> 00:11:07.565
is that we just want to preserve
enough additive structure.
00:11:07.565 --> 00:11:09.940
Well, we're not trying to preserve
all the additive structure,
00:11:09.940 --> 00:11:11.357
but just enough
additive structure
00:11:11.357 --> 00:11:13.330
to do what we need to do.
00:11:13.330 --> 00:11:16.150
And being able to preserve
an arithmetic progression,
00:11:16.150 --> 00:11:18.820
or more generally a generalized
arithmetic progression,
00:11:18.820 --> 00:11:23.400
requires you to preserve
Freiman 2 isomorphism.
00:11:23.400 --> 00:11:25.280
And that's where the 8 comes in.
00:11:25.280 --> 00:11:28.180
So I want to analyze
2b minus 2b and I
00:11:28.180 --> 00:11:30.400
want that to preserve
2 isomorphisms.
00:11:30.400 --> 00:11:36.000
So initially I want b to
preserve 8 isomorphisms.
00:11:36.000 --> 00:11:40.690
So 2b minus 2b is Freiman
isomorphic to 2a prime minus 2a
00:11:40.690 --> 00:11:42.070
prime.
00:11:42.070 --> 00:11:50.690
So the GAP, which we found
earlier in 2B minus 2B
00:11:50.690 --> 00:11:58.830
is mapped via this
Freiman isomorphism
00:11:58.830 --> 00:12:04.880
to a proper GAP,
which we'll call q,
00:12:04.880 --> 00:12:11.580
now sitting inside 2a
minus 2a, and preserving
00:12:11.580 --> 00:12:15.456
the same dimension and size.
00:12:15.456 --> 00:12:19.200
So Freiman isomorphisms
are good for preserving
00:12:19.200 --> 00:12:21.900
these partial additive
structures like GAPs.
00:12:21.900 --> 00:12:22.610
Yes?
00:12:22.610 --> 00:12:25.060
AUDIENCE: So are we using
this smaller structure to be
00:12:25.060 --> 00:12:26.040
[INAUDIBLE]?
00:12:33.357 --> 00:12:34.190
YUFEI ZHAO: Correct.
00:12:34.190 --> 00:12:37.430
So question is, we're using--
00:12:37.430 --> 00:12:41.630
so because we have to
pass to 2b minus 2b,
00:12:41.630 --> 00:12:44.600
we want 2b minus 2b to be
Freiman isomorphic to 2a
00:12:44.600 --> 00:12:45.710
prime minus 2a prime.
00:12:48.240 --> 00:12:52.730
So that's why in
the proof I want b
00:12:52.730 --> 00:12:55.500
to be 8 isomorphic to a prime.
00:12:55.500 --> 00:12:58.800
So you see, so I'm skipping
details of this step.
00:12:58.800 --> 00:13:02.330
But if you read the definition
of Freiman s isomorphism,
00:13:02.330 --> 00:13:04.111
you see that this
implication holds.
00:13:04.111 --> 00:13:06.465
AUDIENCE: [INAUDIBLE]
00:13:06.465 --> 00:13:07.090
YUFEI ZHAO: No.
00:13:07.090 --> 00:13:10.367
2 isomorphism is a
weaker condition.
00:13:13.110 --> 00:13:17.170
2 isomorphism just means that
you are preserving pairwise sums.
00:13:21.030 --> 00:13:25.870
So think about the definition
of Freiman 2 isomorphisms.
00:13:25.870 --> 00:13:30.160
In particular, if two sets
are Freiman 2 isomorphic
00:13:30.160 --> 00:13:33.160
and you have an arithmetic
progression in one,
00:13:33.160 --> 00:13:35.620
then that arithmetic progression
is also an arithmetic
00:13:35.620 --> 00:13:38.800
progression in the other.
00:13:38.800 --> 00:13:41.310
So it's just enough
additive structure
00:13:41.310 --> 00:13:43.860
to preserve things like
arithmetic progressions
00:13:43.860 --> 00:13:48.456
and generalized
arithmetic progressions.
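The definition being referenced can be written as follows (a sketch; see the earlier lecture for the precise statement):

```latex
% A Freiman $s$-isomorphism is a bijection $\phi \colon A \to B$ with
\[
  a_1 + \cdots + a_s = a_1' + \cdots + a_s'
  \;\Longleftrightarrow\;
  \phi(a_1) + \cdots + \phi(a_s) = \phi(a_1') + \cdots + \phi(a_s')
\]
% for all $a_i, a_i' \in A$. In particular, an $8$-isomorphism of $A'$
% induces a $2$-isomorphism of $2A' - 2A'$, which is enough additive
% structure to carry a proper GAP from one side to the other.
```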
00:13:48.456 --> 00:13:49.360
OK.
00:13:49.360 --> 00:13:56.020
So we found this large
GAP in 2a minus 2a.
00:13:56.020 --> 00:13:58.180
So this is very good.
00:13:58.180 --> 00:14:01.540
So we wanted to
contain a in the GAP.
00:14:01.540 --> 00:14:03.800
Seems that by now we're
doing something slightly
00:14:03.800 --> 00:14:04.900
in the opposite direction.
00:14:04.900 --> 00:14:10.917
We find a large GAP
within 2a minus 2a.
00:14:10.917 --> 00:14:12.500
But something we've
seen before, we're
00:14:12.500 --> 00:14:16.100
going to use this to boost
ourselves to a covering of a
00:14:16.100 --> 00:14:18.750
via the Ruzsa covering lemma.
00:14:18.750 --> 00:14:21.540
So once you find
this large structure,
00:14:21.540 --> 00:14:27.800
you can now try to take
translates of it to cover a.
00:14:27.800 --> 00:14:30.170
And this is-- if there's
any takeaway from the spirit
00:14:30.170 --> 00:14:31.970
of this proof is this idea.
00:14:31.970 --> 00:14:34.160
Even though I want to cover
the whole set, it's OK.
00:14:34.160 --> 00:14:37.220
I just find a large
structure within it
00:14:37.220 --> 00:14:39.140
and then I use
translaters to cover.
00:14:41.770 --> 00:14:43.450
How do we do this?
00:14:43.450 --> 00:14:50.350
So since q is
contained in 2a minus 2a,
00:14:50.350 --> 00:14:53.650
we find that q plus a is
contained in 3a minus 2a.
00:15:00.120 --> 00:15:07.110
Therefore, by Plünnecke-Ruzsa--
by Plünnecke-Ruzsa inequality,
00:15:07.110 --> 00:15:13.790
the size of q plus a is at
most the size of 3a minus 2a,
00:15:13.790 --> 00:15:17.720
which is, at most, k to the
fifth power times the size
00:15:17.720 --> 00:15:18.220
of a.
00:15:23.120 --> 00:15:27.060
And I claim that this
final quantity is also not
00:15:27.060 --> 00:15:34.120
so different from the
size of q, because--
00:15:34.120 --> 00:15:35.590
so all of these--
00:15:35.590 --> 00:15:37.672
I mean, the point
here is, we are
00:15:37.672 --> 00:15:39.130
doing all of these
transformations,
00:15:39.130 --> 00:15:41.830
passing down to subsets, putting
something bigger, putting--
00:15:41.830 --> 00:15:44.260
getting to something smaller,
but each time we only
00:15:44.260 --> 00:15:47.910
lose something that
is polynomial in k.
00:15:47.910 --> 00:15:49.700
We're not losing much more.
00:15:49.700 --> 00:15:52.450
I am also only losing
a constant factor.
00:15:52.450 --> 00:15:56.110
There is sometimes a bit
more than polynomial,
00:15:56.110 --> 00:16:01.280
but any case, we're losing only
a constant factor at each step.
00:16:01.280 --> 00:16:12.110
So in particular, n upper
bounds the size of a prime--
00:16:12.110 --> 00:16:17.610
n here is where we ended up
embedding, into z mod n--
00:16:17.610 --> 00:16:23.270
n is larger than the size of a prime, which
is at least a constant fraction
00:16:23.270 --> 00:16:31.390
of the size of a. And the size
of q is at least 1
00:16:31.390 --> 00:16:38.210
over 40 to the d times n.
00:16:38.210 --> 00:16:43.870
So we find that this bound--
00:16:43.870 --> 00:16:46.980
upper bound earlier on q plus a.
00:16:46.980 --> 00:16:56.360
We can write it in
terms of the size of q
00:16:56.360 --> 00:17:01.760
where k prime is-- you put
all of these numbers together.
00:17:01.760 --> 00:17:05.180
What it is specifically
doesn't matter so much,
00:17:05.180 --> 00:17:06.589
other than that
it is a constant.
00:17:11.829 --> 00:17:14.359
d is polynomial in k.
00:17:14.359 --> 00:17:18.050
So what we have here is
something that is exponential
00:17:18.050 --> 00:17:19.069
in a polynomial of k.
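One consistent way to collect the constants being described here (a sketch; the exact value of K prime is unimportant, only that it depends on k alone):

```latex
% Combining the bounds from this step of the proof:
\[
  |Q + A| \le |3A - 2A| \le k^{5}\,|A| ,
  \qquad
  |Q| \ge 40^{-d} N \ge 40^{-d}\,|A'| \ge 40^{-d}\,\tfrac{|A|}{8} ,
\]
\[
  \text{so } |Q + A| \le K'\,|Q|
  \quad \text{with } K' := 8 \cdot 40^{d}\, k^{5} ,
  \qquad d \le \alpha^{-2} \le 256\,k^{32} .
\]
```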
00:17:27.790 --> 00:17:31.110
OK, so now we're in a position
to apply the Ruzsa covering
00:17:31.110 --> 00:17:32.400
lemma.
00:17:32.400 --> 00:17:35.440
So look at that
statement up there.
00:17:35.440 --> 00:17:37.210
So what is it
saying? That a plus q
00:17:37.210 --> 00:17:41.310
looks like it could be covered
by q, just in terms of size.
00:17:41.310 --> 00:17:45.450
So I should expect to cover a
by a small number of translates
00:17:45.450 --> 00:17:46.590
of q minus q.
00:17:51.430 --> 00:17:57.690
So by the covering lemma, a is
contained in some x plus q
00:17:57.690 --> 00:18:05.630
minus q, for some x contained in
a, where the size of x
00:18:05.630 --> 00:18:07.640
is at most k prime.
00:18:11.350 --> 00:18:14.920
I claim-- so we've
covered a by something.
00:18:14.920 --> 00:18:18.740
q is a GAP.
00:18:18.740 --> 00:18:22.690
x is a bounded size
set, and I claim
00:18:22.690 --> 00:18:27.950
that this is the type of object
that we're happy to have.
00:18:27.950 --> 00:18:31.980
Just to spell out
some details, first
00:18:31.980 --> 00:18:45.480
note that x is contained in a
GAP of dimension the size of x (or one less)
00:18:45.480 --> 00:18:48.217
with length 2 in each direction.
00:18:54.560 --> 00:18:56.820
So add a new direction
for every element of x.
00:18:56.820 --> 00:19:00.450
It's wasteful, but
everything's constant.
00:19:00.450 --> 00:19:07.230
And recall that the
dimension of Q as a GAP is d.
00:19:07.230 --> 00:19:22.980
So x plus q minus q is
contained in a GAP of dimension.
00:19:22.980 --> 00:19:24.900
OK, so what's the dimension?
00:19:24.900 --> 00:19:28.890
So when I do q minus q,
it's like taking a box
00:19:28.890 --> 00:19:32.490
and doubling its dimension--
doubling its lengths.
00:19:32.490 --> 00:19:34.870
I'm not changing the
number of dimensions.
00:19:34.870 --> 00:19:37.530
So the dimension of
q minus q is still d.
00:19:37.530 --> 00:19:41.250
The dimension of x is,
at most, the size of x.
00:19:44.220 --> 00:19:46.850
All of these things
are constants.
00:19:46.850 --> 00:19:47.755
So we're happy.
00:19:47.755 --> 00:19:52.210
But to spell it out,
the constant here is--
00:19:52.210 --> 00:19:55.800
well, k prime is what
we wrote up there.
00:19:59.800 --> 00:20:04.040
So this is a constant.
00:20:04.040 --> 00:20:11.570
And the size-- so what
is the size of the GAP
00:20:11.570 --> 00:20:13.940
that contains this guy here?
00:20:13.940 --> 00:20:19.730
So I'm expanding x to a GAP
by adding a new direction
00:20:19.730 --> 00:20:22.340
for every element of x.
00:20:22.340 --> 00:20:25.250
And I might expand
that size a little bit.
00:20:25.250 --> 00:20:29.930
But the size of this GAP that
contains x is no more than 2
00:20:29.930 --> 00:20:33.100
to the power x--
00:20:33.100 --> 00:20:37.220
2 raised to the size of x.
00:20:37.220 --> 00:20:42.860
What is the size
of GAP q minus q?
00:20:42.860 --> 00:20:45.740
So q is the GAP of dimension d.
00:20:45.740 --> 00:20:48.050
And we know that a
GAP of dimension d
00:20:48.050 --> 00:20:52.710
has doubling constant
at most 2 to the d--
00:20:52.710 --> 00:20:56.960
2 to the d times the size of q.
00:20:56.960 --> 00:20:58.060
OK.
00:20:58.060 --> 00:21:03.580
And because q is
contained in 2a minus 2a,
00:21:03.580 --> 00:21:10.340
we find that the size of q is
at most the size of 2a minus 2a.
00:21:10.340 --> 00:21:12.830
And 2 to the x-- well, I
know what the size of x
00:21:12.830 --> 00:21:22.800
is bounded by, it's k prime
plus the size of x is--
00:21:22.800 --> 00:21:24.390
size of x is, at most, k prime.
00:21:24.390 --> 00:21:27.890
And then I have 2
to the d over here,
00:21:27.890 --> 00:21:34.150
so 2a minus 2a by
Plünnecke-Ruzsa is at most k
00:21:34.150 --> 00:21:36.490
to the 4 times the size of a.
00:21:36.490 --> 00:21:39.820
OK, you put everything together,
we find that this bound here
00:21:39.820 --> 00:21:44.790
is doubly exponential
in a polynomial of k.
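A sketch of the final bookkeeping being summarized, with the conventions from the proof above:

```latex
% The final containment: $A$ lies in a GAP of constant dimension and
% size at most a constant times $|A|$.
\[
  A \subseteq X + (Q - Q) ,
  \qquad \text{dimension} \le |X| + d \le K' + d ,
\]
\[
  \text{size} \le 2^{|X|}\,|Q - Q|
  \le 2^{K'} \cdot 2^{d}\,|Q|
  \le 2^{K'} \cdot 2^{d} \cdot k^{4}\,|A| .
\]
```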
00:21:47.920 --> 00:21:50.490
And that's it.
00:21:50.490 --> 00:21:52.240
This proves Freiman's theorem.
00:21:58.792 --> 00:22:03.785
Now, to recap-- we went
through several steps.
00:22:03.785 --> 00:22:07.050
So first, using
the modeling lemma,
00:22:07.050 --> 00:22:09.990
we know that if a set
a has small doubling,
00:22:09.990 --> 00:22:14.940
then we can pass a large
part of a to a relatively
00:22:14.940 --> 00:22:17.290
small cyclic group.
00:22:17.290 --> 00:22:20.710
We're going to work inside
our cyclic group.
00:22:20.710 --> 00:22:28.220
Using Bogolyubov's lemma and its
geometry of numbers corollary,
00:22:28.220 --> 00:22:30.890
we find that inside
the cyclic group,
00:22:30.890 --> 00:22:33.950
the corresponding set,
which we called b,
00:22:33.950 --> 00:22:37.330
is such that 2b minus
2b contains a large GAP.
00:22:40.390 --> 00:22:43.450
We pass that GAP back
to the original set
00:22:43.450 --> 00:22:49.290
a, because we are preserving
8 isomorphisms-- Freiman 8
00:22:49.290 --> 00:22:51.260
isomorphisms-- in the
Ruzsa modeling lemma,
00:22:51.260 --> 00:22:53.600
so we can pass to
the original set a
00:22:53.600 --> 00:22:58.770
and find the large
GAP in 2a minus 2a.
00:22:58.770 --> 00:23:02.880
Once we find this
large GAP 2a minus 2a,
00:23:02.880 --> 00:23:06.510
then we're going to
use the Ruzsa covering
00:23:06.510 --> 00:23:15.390
lemma to contain a inside a
small number of translates
00:23:15.390 --> 00:23:16.560
of this GAP.
00:23:20.060 --> 00:23:20.750
OK.
00:23:20.750 --> 00:23:23.450
You put all of these things
together and the appropriate
00:23:23.450 --> 00:23:27.310
bounds coming from
Plünnecke-Ruzsa inequalities
00:23:27.310 --> 00:23:30.218
and you get a final theorem.
00:23:30.218 --> 00:23:32.010
And this is the proof
of Freiman's theorem.
00:23:34.735 --> 00:23:36.185
AUDIENCE: [INAUDIBLE]
00:23:36.185 --> 00:23:36.810
YUFEI ZHAO: OK.
00:23:36.810 --> 00:23:40.420
The question is how
do you make it proper?
00:23:40.420 --> 00:23:44.320
Up until the step with
q, it is still proper.
00:23:44.320 --> 00:23:48.790
So the very last step
over here it is--
00:23:48.790 --> 00:23:51.610
you might have
destroyed properness.
00:23:51.610 --> 00:23:54.410
So this proof here doesn't
give you properness.
00:23:54.410 --> 00:23:58.000
So I mentioned at the beginning
that in Freiman's theorem,
00:23:58.000 --> 00:24:02.170
you can obtain properness
via additional arguments.
00:24:02.170 --> 00:24:03.660
So that I'm not going to show.
00:24:03.660 --> 00:24:08.125
There's some more work which is
related to geometry of numbers.
00:24:10.940 --> 00:24:13.825
So for example, you can look up
in the textbook by Tao and Vu,
00:24:13.825 --> 00:24:21.340
and see how to get from a GAP
to contain it in a proper GAP,
00:24:21.340 --> 00:24:25.780
without losing too
much in terms of size.
00:24:25.780 --> 00:24:28.532
So think about it this way--
00:24:28.532 --> 00:24:30.490
when do you have something
which is not proper?
00:24:30.490 --> 00:24:33.160
When you have-- when there's
some linear dependence--
00:24:33.160 --> 00:24:35.680
you have some integer
linear dependence.
00:24:35.680 --> 00:24:40.060
And in that case, you
kind of lost a dimension.
00:24:40.060 --> 00:24:42.940
When you have improperness, you
actually go down a dimension.
00:24:42.940 --> 00:24:46.270
But then you need
to salvage the size,
00:24:46.270 --> 00:24:48.830
make sure that the size
doesn't blow up too much.
00:24:48.830 --> 00:24:50.960
And so there are some
arguments to be done there.
00:24:50.960 --> 00:24:52.810
And we're not going
to do it here.
00:24:52.810 --> 00:24:54.960
AUDIENCE: Well, I
guess my [INAUDIBLE]
00:24:54.960 --> 00:24:58.880
do they change [INAUDIBLE]
within the proof,
00:24:58.880 --> 00:25:01.510
like say [INAUDIBLE]
q or whatever?
00:25:01.510 --> 00:25:05.060
Or do they use the same
proof, but later on,
00:25:05.060 --> 00:25:08.725
say that [INAUDIBLE]?
00:25:08.725 --> 00:25:09.600
YUFEI ZHAO: OK, good.
00:25:09.600 --> 00:25:11.340
Yeah, so the question
is, to get properness,
00:25:11.340 --> 00:25:13.882
do I have to modify the proof,
or can I use Freiman's theorem
00:25:13.882 --> 00:25:15.350
itself as a black box.
00:25:15.350 --> 00:25:17.020
So my understanding
is that I can
00:25:17.020 --> 00:25:20.422
use the statement as a black
box and obtain properness.
00:25:20.422 --> 00:25:21.880
But if you want to
get good bounds,
00:25:21.880 --> 00:25:24.250
maybe you have to go into
the proof, although that,
00:25:24.250 --> 00:25:24.820
I'm not sure.
00:25:27.450 --> 00:25:31.500
OK, any more questions?
00:25:31.500 --> 00:25:34.670
So this took a while.
00:25:34.670 --> 00:25:38.620
This was the most involved
proof we've done in this course
00:25:38.620 --> 00:25:40.630
so far, in proving
Freiman's theorem.
00:25:40.630 --> 00:25:43.500
We had to develop a
large number of tools.
00:25:43.500 --> 00:25:44.930
And we came up--
00:25:44.930 --> 00:25:47.140
so we eventually arrived at--
00:25:47.140 --> 00:25:48.260
it's a beautiful theorem.
00:25:48.260 --> 00:25:51.190
So this is a
fantastic result that
00:25:51.190 --> 00:25:55.030
gives you an inverse
structure, something--
00:25:55.030 --> 00:25:57.520
we know that GAP's
have small doubling.
00:25:57.520 --> 00:26:00.170
And conversely, if something
has a small doubling,
00:26:00.170 --> 00:26:03.650
it has to, in some
sense, look like a GAP.
00:26:03.650 --> 00:26:05.590
So you see that the
proof is quite involved
00:26:05.590 --> 00:26:08.780
and has a lot of
beautiful ideas.
00:26:08.780 --> 00:26:10.690
In the remainder
of today's lecture,
00:26:10.690 --> 00:26:15.250
I want to present some remarks
on additional extensions
00:26:15.250 --> 00:26:17.710
and generalizations
of Freiman's theorem.
00:26:17.710 --> 00:26:20.660
And while we're not
going to do any proofs,
00:26:20.660 --> 00:26:23.230
there's a lot of deep
and beautiful mathematics
00:26:23.230 --> 00:26:26.230
that are involved
in the subject.
00:26:26.230 --> 00:26:31.150
So I want to take you on a
tour through some more things
00:26:31.150 --> 00:26:34.480
that we can talk about when
it comes to Freiman's theorem.
00:26:34.480 --> 00:26:38.350
But first, let me
mention a few things
00:26:38.350 --> 00:26:42.040
that I mentioned very
quickly when we first
00:26:42.040 --> 00:26:43.990
introduced Freiman's
theorem, namely
00:26:43.990 --> 00:26:45.310
some remarks on the bounds.
00:26:52.120 --> 00:26:55.890
So the proof that we
just saw gives you
00:26:55.890 --> 00:27:00.550
a bound, which is basically
exponential in the dimension
00:27:00.550 --> 00:27:04.770
and doubly exponential
for the size blow-up.
00:27:04.770 --> 00:27:08.530
They're all constants, so if
you only care about constants,
00:27:08.530 --> 00:27:10.540
then this is just fine.
00:27:10.540 --> 00:27:13.790
But you may ask, are we
losing too much here?
00:27:13.790 --> 00:27:18.120
What is the right
type of dependence?
00:27:18.120 --> 00:27:20.610
So what is the right
type of dependence?
00:27:20.610 --> 00:27:23.030
So we saw an example.
00:27:23.030 --> 00:27:28.020
So we saw an example where
if you start with A being
00:27:28.020 --> 00:27:35.930
a highly dissociated set,
where there is basically
00:27:35.930 --> 00:27:40.370
no additive structure
within A, then you do need--
00:27:40.370 --> 00:27:49.600
so this example shows that
you cannot do better than
00:27:49.600 --> 00:27:50.900
polynomial--
00:27:50.900 --> 00:27:58.080
well, actually, than linear
in K, in the dimension,
00:27:58.080 --> 00:28:03.240
and exponential in
the size blow-up.
00:28:03.240 --> 00:28:06.030
So in particular, you
do need to blow up
00:28:06.030 --> 00:28:11.690
the size by some
exponential quantity in K.
00:28:11.690 --> 00:28:16.450
So here, K is roughly the size
of A over 2 in this example.
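A quick computational check of this example (a sketch using only the definition of the sumset; the set of powers of 2 plays the role of the highly dissociated set):

```python
def sumset(A, B):
    """All pairwise sums a + b for a in A, b in B."""
    return {a + b for a in A for b in B}

# A "highly dissociated" set: powers of 2. Every sum of two elements
# has a distinct binary representation, so |A + A| is as large as possible.
n = 20
A = {2**i for i in range(n)}

# |A + A| = n(n+1)/2, so the doubling constant K = |A+A|/|A| = (n+1)/2,
# i.e. roughly |A|/2 -- matching the example in the lecture.
assert len(sumset(A, A)) == n * (n + 1) // 2

# By contrast, an arithmetic progression has doubling constant below 2.
P = set(range(n))
assert len(sumset(P, P)) == 2 * n - 1
```

This is why the size blow-up in Freiman's theorem must be at least exponential in K: a GAP of bounded dimension containing such a set is forced to be much larger than the set itself.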
00:28:16.450 --> 00:28:19.130
And you can create
modifications of the example
00:28:19.130 --> 00:28:22.020
to keep K constant
and A getting larger.
00:28:22.020 --> 00:28:25.160
But the point is that you
cannot do better than this type
00:28:25.160 --> 00:28:27.740
of dependence simply
from that example.
00:28:27.740 --> 00:28:31.100
And it's conjectured
that that is the truth.
00:28:31.100 --> 00:28:33.590
We're almost there in
proving this conjecture,
00:28:33.590 --> 00:28:36.020
but not quite, although
the proof that we just gave
00:28:36.020 --> 00:28:41.400
is somewhat far, because you
lose an exponent in each bound.
00:28:41.400 --> 00:28:45.210
There is a refinement of the
final step in the argument,
00:28:45.210 --> 00:28:46.680
so let me comment on that.
00:28:46.680 --> 00:28:53.750
So we can refine the final
step or the final steps
00:28:53.750 --> 00:28:59.045
in the proof to get
polynomial bounds.
00:29:04.000 --> 00:29:10.690
And to get a polynomial
bounds, which
00:29:10.690 --> 00:29:13.570
is much more in
the right ballpark
00:29:13.570 --> 00:29:15.710
compared to what we got.
00:29:15.710 --> 00:29:18.820
And the idea is
basically over here,
00:29:18.820 --> 00:29:20.380
we used the Ruzsa
covering lemma.
00:29:20.380 --> 00:29:23.500
So we started with
that Q up there.
00:29:23.500 --> 00:29:30.660
So up until this point, you
should think of this step
00:29:30.660 --> 00:29:34.390
as everything coming from
Bogolyubov and its corollary.
00:29:34.390 --> 00:29:36.650
So that stays the same.
00:29:36.650 --> 00:29:39.830
And now the question is starting
with our Q, what would you use?
00:29:39.830 --> 00:29:45.340
How would you use this
Q to try to cover it?
00:29:45.340 --> 00:29:48.673
Well, what we do, we apply
Ruzsa covering lemma.
00:29:48.673 --> 00:29:50.840
Remember how the proof of
Ruzsa covering lemma goes.
00:29:50.840 --> 00:29:54.920
You take a maximal
set of translates,
00:29:54.920 --> 00:29:56.260
disjoint translates.
00:29:56.260 --> 00:29:58.010
And if you blow
everything up a factor 2,
00:29:58.010 --> 00:30:00.500
then you've got a cover.
00:30:00.500 --> 00:30:03.030
But it turns out to
be somewhat wasteful.
00:30:03.030 --> 00:30:06.860
And you see, there was a lot
of waste in going from x to 2
00:30:06.860 --> 00:30:08.760
to the x.
00:30:08.760 --> 00:30:12.550
So you could do that
step more slowly.
00:30:12.550 --> 00:30:21.100
So starting with Q,
cover now some, not all
00:30:21.100 --> 00:30:34.500
of A. So cover parts of A by
translates of Q minus Q, say.
00:30:34.500 --> 00:30:36.390
So we do Ruzsa
covering lemma, you
00:30:36.390 --> 00:30:40.110
don't cover the whole thing,
but nibble away, cover
00:30:40.110 --> 00:30:44.870
a little bit, and then look
at the thing that you get,
00:30:44.870 --> 00:30:51.230
which is that Q will become
some new thing, let's say Q1.
00:30:51.230 --> 00:31:00.530
And now cover more
by Q1 minus Q1.
00:31:00.530 --> 00:31:06.670
So apparently, if you do the
covering step more slowly,
00:31:06.670 --> 00:31:08.530
you can obtain better bounds.
00:31:08.530 --> 00:31:14.320
And that's enough to
save you this exponent,
00:31:14.320 --> 00:31:19.230
to go down to polynomial-type
bounds for Freiman's theorem.
00:31:19.230 --> 00:31:21.830
So I'm not giving details,
but this is roughly the idea.
00:31:21.830 --> 00:31:26.250
So you can modify the final
step to obtain this bound.
00:31:26.250 --> 00:31:35.410
The best bound so far is
due to Tom Sanders, who
00:31:35.410 --> 00:31:40.300
proved Freiman's
theorem for bounds
00:31:40.300 --> 00:31:46.540
on dimension that's
like K times poly log K,
00:31:46.540 --> 00:31:54.310
and the size blowup to be e
to the K times poly log K.
00:31:54.310 --> 00:31:57.310
So in other words, other than
this polylogarithmic factor,
00:31:57.310 --> 00:32:00.740
it's basically the right answer.
00:32:00.740 --> 00:32:04.220
And so this proof is
much more sophisticated.
00:32:04.220 --> 00:32:07.390
So it goes much more
in depth into analyzing
00:32:07.390 --> 00:32:09.530
the structure of set addition.
00:32:09.530 --> 00:32:12.250
So Sanders has a very
nice survey article
00:32:12.250 --> 00:32:14.500
called "The structure
of Set Addition"
00:32:14.500 --> 00:32:19.090
that analyzes some of the
modern techniques that
00:32:19.090 --> 00:32:21.180
are used to prove
these types of results.
00:32:28.480 --> 00:32:30.220
There is one more
issue, which I want
00:32:30.220 --> 00:32:33.360
to discuss at length in the
second half of this lecture,
00:32:33.360 --> 00:32:36.940
which is that you
might be very unhappy
00:32:36.940 --> 00:32:41.770
with this exponential blowup,
because if you think about what
00:32:41.770 --> 00:32:43.720
happens in these examples--
00:32:43.720 --> 00:32:46.440
I mean, not the examples, but
if you think about what happens,
00:32:46.440 --> 00:32:49.390
like the spirit of what
we're trying to say,
00:32:49.390 --> 00:32:53.390
Freiman's theorem is some
kind of an inverse theorem.
00:32:53.390 --> 00:32:57.410
And to go forward,
you're trying to say
00:32:57.410 --> 00:33:05.060
that if you have a GAP
of dimension d, then
00:33:05.060 --> 00:33:11.000
the size blowup is
like 2 to the d.
00:33:11.000 --> 00:33:22.130
So we want to say some structure
implies small doubling,
00:33:22.130 --> 00:33:25.200
and Freiman's theorem
tells the reverse, that you
00:33:25.200 --> 00:33:33.030
have small doubling, then
you obtain this structure.
00:33:33.030 --> 00:33:39.420
And it seems like you are losing.
00:33:39.420 --> 00:33:44.340
Getting from here to here, there
is a polynomial type of loss,
00:33:44.340 --> 00:33:46.290
whereas going from
here to here, it
00:33:46.290 --> 00:33:49.950
seems that we're incurring
some exponential type of loss.
00:33:49.950 --> 00:33:52.740
And it'll be nice to have
some kind of inverse theorem
00:33:52.740 --> 00:33:57.180
that also preserves these
relationships qualitatively.
00:33:57.180 --> 00:33:59.070
So that may not make
sense at this moment,
00:33:59.070 --> 00:34:00.903
but we'll get back to
it later in this lecture.
00:34:04.250 --> 00:34:05.660
Point is, there's
more, much more
00:34:05.660 --> 00:34:07.520
to be said about
the bounds here,
00:34:07.520 --> 00:34:09.530
even though right now
it looks as if they're
00:34:09.530 --> 00:34:12.860
very close to each other.
00:34:12.860 --> 00:34:15.980
One more thing that
I want to expand on
00:34:15.980 --> 00:34:19.300
is, we've stated and
proved Freiman's theorem
00:34:19.300 --> 00:34:21.409
in the integers.
00:34:21.409 --> 00:34:26.850
And you might ask, what
about in other groups?
00:34:26.850 --> 00:34:31.620
We also proved Freiman's
theorem in F2 to the m, or more
00:34:31.620 --> 00:34:33.989
generally, groups
of bounded exponent
00:34:33.989 --> 00:34:39.270
or bounded torsion, so abelian
groups of bounded exponent.
00:34:39.270 --> 00:34:46.600
For general abelian groups,
so Freiman's theorem
00:34:46.600 --> 00:34:54.900
in general abelian groups, you
might ask what happens here?
00:34:54.900 --> 00:34:58.230
And in some sense what is even
the statement of the theorem?
00:34:58.230 --> 00:35:01.120
So we want something
which combines,
00:35:01.120 --> 00:35:04.280
somehow, two different
types of behavior.
00:35:04.280 --> 00:35:07.670
On one hand, you have Z,
which is what we just did.
00:35:07.670 --> 00:35:10.810
And here the model
structures are GAPs.
00:35:10.810 --> 00:35:14.330
And on the other hand, we
have, which we also proved,
00:35:14.330 --> 00:35:17.570
things like F2 to the m,
where the model structures are
00:35:17.570 --> 00:35:18.200
subgroups.
00:35:22.520 --> 00:35:24.270
And there's a sense
in which these are not alike--
00:35:24.270 --> 00:35:26.400
the GAPs and subgroups.
00:35:26.400 --> 00:35:27.900
They have some
similar properties,
00:35:27.900 --> 00:35:31.350
but they're not really
like each other.
00:35:31.350 --> 00:35:33.500
So now if I give you
a general group, which
00:35:33.500 --> 00:35:36.740
might be some combination
of infinite torsion
00:35:36.740 --> 00:35:39.590
or very large torsion elements
versus very small torsion
00:35:39.590 --> 00:35:42.410
elements-- so for example,
take a Cartesian product
00:35:42.410 --> 00:35:43.820
of these groups.
00:35:43.820 --> 00:35:45.900
Is there a Freiman's theorem?
00:35:45.900 --> 00:35:48.780
And what does such
a theorem look like?
00:35:48.780 --> 00:35:51.340
What are the structures?
00:35:51.340 --> 00:35:54.080
What are the subsets
of bounded doubling?
00:35:58.040 --> 00:36:01.590
So that's kind of the thing
we want to think about.
00:36:01.590 --> 00:36:05.010
So it turns out for Freiman's
theorem in general abelian
00:36:05.010 --> 00:36:06.330
groups--
00:36:06.330 --> 00:36:07.570
so there is a theorem.
00:36:07.570 --> 00:36:10.950
So this theorem was
proved by Green and Ruzsa.
00:36:14.880 --> 00:36:17.730
So following a very similar
type of proof framework,
00:36:17.730 --> 00:36:21.330
although the individual
steps, in particular
00:36:21.330 --> 00:36:24.840
the modeling lemma
needs to be modified.
00:36:24.840 --> 00:36:27.180
And let me tell you
what the statement is.
00:36:27.180 --> 00:36:32.200
So the common generalization
of GAPs and subgroups is
00:36:32.200 --> 00:36:34.740
something called a
"co-set progression."
00:36:41.330 --> 00:36:46.790
So a coset
progression is a subset
00:36:46.790 --> 00:36:54.070
which is a direct sum
of the form P plus H,
00:36:54.070 --> 00:36:57.180
where P is a proper GAP.
00:36:59.845 --> 00:37:03.990
So the definition of GAP works
just fine in every abelian
00:37:03.990 --> 00:37:04.490
group.
00:37:04.490 --> 00:37:08.390
You start with the initial
point, a few directions,
00:37:08.390 --> 00:37:13.590
and you look at a grid
expansion of those directions.
00:37:13.590 --> 00:37:17.120
P is a proper GAP,
and H is a subgroup.
00:37:20.300 --> 00:37:26.100
And here, the direct sum
refers to the fact that every--
00:37:26.100 --> 00:37:34.330
so if P plus H equals P prime
plus H prime for some P and P
00:37:34.330 --> 00:37:39.310
prime in the set P, and H
and H prime in the set H,
00:37:39.310 --> 00:37:43.780
then P equals P prime
and H equals H prime.
00:37:43.780 --> 00:37:46.150
So every element
in here is written
00:37:46.150 --> 00:37:49.872
in a unique way as
some P plus some H. So
00:37:49.872 --> 00:37:51.330
that's what I mean
by "direct sum."
00:37:54.400 --> 00:37:58.930
For such an object, so
such a coset progression,
00:37:58.930 --> 00:38:09.640
I call its dimension to be
the dimension of the GAP, P.
00:38:09.640 --> 00:38:12.820
And its size in
this case, actually,
00:38:12.820 --> 00:38:16.930
is just the size of the set,
which is also the size of P
00:38:16.930 --> 00:38:23.620
times the size of
H. So the theorem
00:38:23.620 --> 00:38:36.960
is that if A is a subset of
an arbitrary abelian group
00:38:36.960 --> 00:38:49.650
and it has bounded doubling,
then A is contained
00:38:49.650 --> 00:39:03.930
in a co-set progression of
bounded dimension and size,
00:39:03.930 --> 00:39:10.596
bounded blowup of the size of A.
00:39:10.596 --> 00:39:14.400
And here, these constants
depend only on K; they are universal.
00:39:14.400 --> 00:39:17.360
They do not depend on the group.
00:39:17.360 --> 00:39:19.050
So there are some
specific numbers,
00:39:19.050 --> 00:39:20.480
functions you can write down.
00:39:20.480 --> 00:39:24.040
They do not depend on the group.
00:39:24.040 --> 00:39:29.600
So this theorem gives
you the characterization
00:39:29.600 --> 00:39:32.668
of subsets in general
abelian groups
00:39:32.668 --> 00:39:33.710
that have small doubling.
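The Green-Ruzsa theorem just stated can be summarized in one line; the names d(K) and f(K) for the two bounding functions are placeholders of mine, following the spoken description:

```latex
% Green--Ruzsa: A a finite subset of an arbitrary abelian group with
% doubling constant at most K.  Then A sits inside a coset progression
% P + H (P a proper GAP, H a subgroup, the sum direct) with
\[
  |A + A| \le K|A|
  \quad \Longrightarrow \quad
  A \subseteq P + H,
  \qquad \dim P \le d(K),
  \qquad |P + H| \le f(K)\,|A|,
\]
% where d(K) and f(K) depend only on K, never on the ambient group.
```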
00:39:36.610 --> 00:39:38.056
Any questions?
00:39:38.056 --> 00:39:39.808
Yes?
00:39:39.808 --> 00:39:40.690
AUDIENCE: [INAUDIBLE]
00:39:40.690 --> 00:39:41.510
YUFEI ZHAO: That's
a good question.
00:39:41.510 --> 00:39:43.177
So I think you could
go into their paper
00:39:43.177 --> 00:39:48.050
and see that you can get
polynomial type bounds.
00:39:48.050 --> 00:39:51.920
And I think Sanders'
results also
00:39:51.920 --> 00:39:57.020
work for this type of setting to
give you these type of bounds.
00:39:57.020 --> 00:39:59.960
But I-- yes, so you should
look into Sanders' paper,
00:39:59.960 --> 00:40:00.830
and he will explain.
00:40:00.830 --> 00:40:04.200
I think in Sanders' paper
he works in general abelian
00:40:04.200 --> 00:40:04.700
groups.
00:40:07.960 --> 00:40:10.640
The next question I
want to address is--
00:40:10.640 --> 00:40:15.520
well, what do you think
is the next question?
00:40:15.520 --> 00:40:20.370
Non-abelian groups, so Freiman's
theorem in non-abelian groups,
00:40:20.370 --> 00:40:25.250
or rather the Freiman problem
in non-abelian groups.
00:40:32.500 --> 00:40:35.710
So here's a basic
question-- if I give you
00:40:35.710 --> 00:40:41.390
a non-abelian group, what
subsets have bounded doubling?
00:40:41.390 --> 00:40:43.730
Of course, the examples
from abelian groups
00:40:43.730 --> 00:40:47.330
also work in non-abelian groups,
where you have subgroups,
00:40:47.330 --> 00:40:50.930
you have generalized
arithmetic progressions.
00:40:50.930 --> 00:40:54.290
But are there
genuinely new examples
00:40:54.290 --> 00:40:57.980
of sets in non-abelian groups
that have bounded doubling?
00:41:01.160 --> 00:41:03.370
So think about that, and
let's take a quick break.
00:41:07.790 --> 00:41:11.420
Can you think of examples
in non-abelian groups
00:41:11.420 --> 00:41:14.518
that have small doubling, that
do not come from the examples
00:41:14.518 --> 00:41:15.560
that we have seen before?
00:41:22.470 --> 00:41:24.090
So let me show you
one construction.
00:41:24.090 --> 00:41:26.690
And this is an
important construction
00:41:26.690 --> 00:41:27.740
for non-abelian groups.
00:41:34.662 --> 00:41:35.370
So it has a name.
00:41:35.370 --> 00:41:43.690
It's called a discrete
Heisenberg group,
00:41:43.690 --> 00:41:51.820
which is the matrix group
consisting of matrices that
00:41:51.820 --> 00:41:54.130
look like what I've written.
00:41:54.130 --> 00:41:56.740
So you have integer
entries above the diagonal,
00:41:56.740 --> 00:42:01.090
1 on the diagonal, and
0 below the diagonal.
00:42:01.090 --> 00:42:03.030
So let's do some
elementary matrix
00:42:03.030 --> 00:42:08.680
multiplication to see how group
multiplication in this group
00:42:08.680 --> 00:42:10.190
works.
00:42:10.190 --> 00:42:17.910
So if I have two such matrices,
I multiply them together.
00:42:17.910 --> 00:42:20.475
And then you see
that the diagonal
00:42:20.475 --> 00:42:22.240
is preserved, of course.
00:42:22.240 --> 00:42:28.800
But this entry over
here is simply addition.
00:42:28.800 --> 00:42:31.860
So this entry here
is just addition.
00:42:31.860 --> 00:42:34.140
This entry over here
is also addition.
00:42:37.040 --> 00:42:40.870
And the top right entry
is a bit more complicated.
00:42:40.870 --> 00:42:45.100
It's some addition, but
there's an additional twist.
00:42:52.230 --> 00:42:53.900
So this is how
matrix multiplication
00:42:53.900 --> 00:42:55.642
works in this group.
00:42:55.642 --> 00:42:57.350
I mean, this is how
matrix multiplication
00:42:57.350 --> 00:43:00.530
works, but in terms of
elements of this group, that's
00:43:00.530 --> 00:43:02.400
what happens.
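The multiplication rule just described is easy to check in a few lines of code. This is a minimal sketch: the convention of encoding the matrix with off-diagonal entries a, b and corner entry c as a triple (a, b, c) is my shorthand, not the lecture's notation.

```python
# Discrete Heisenberg group: represent the matrix
#   [[1, a, c],
#    [0, 1, b],
#    [0, 0, 1]]
# as the triple (a, b, c).  Multiplying two such matrices adds the
# two off-diagonal entries, while the top-right entry picks up the
# extra "twist" term a * b'.

def mult(g, h):
    a, b, c = g
    a2, b2, c2 = h
    return (a + a2, b + b2, c + c2 + a * b2)

def inverse(g):
    a, b, c = g
    return (-a, -b, -c + a * b)  # mult(g, inverse(g)) gives the identity

X = (1, 0, 0)  # generator with a 1 in the (1, 2) entry
Y = (0, 1, 0)  # generator with a 1 in the (2, 3) entry

print(mult(X, Y))  # (1, 1, 1): the twist appears
print(mult(Y, X))  # (1, 1, 0): order matters, so the group is non-abelian
commutator = mult(mult(X, Y), mult(inverse(X), inverse(Y)))
print(commutator)  # (0, 0, 1): commutators land in the center
```

Comparing the two products shows exactly the "almost abelian" behavior from the board: the two linear entries are insensitive to order, and only the corner entry records the non-commutativity.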
00:43:02.400 --> 00:43:05.300
So you see it's kind of
like an abelian group,
00:43:05.300 --> 00:43:09.720
but there's an extra twist,
so it's almost abelian--
00:43:09.720 --> 00:43:13.990
the first step you can
take away from abelian.
00:43:13.990 --> 00:43:16.220
And there's a way to
quantify this notion.
00:43:16.220 --> 00:43:17.470
It's called "nilpotency."
00:43:17.470 --> 00:43:20.040
And we'll get to
that in a second.
00:43:20.040 --> 00:43:25.070
But in particular, if you
set S to be the following
00:43:25.070 --> 00:43:25.850
generators--
00:43:32.490 --> 00:43:35.290
so if you take S to be
these four elements,
00:43:35.290 --> 00:43:40.720
and you ask what does the
r-th power of S look like,
00:43:40.720 --> 00:43:42.250
so I look at all
the elements which
00:43:42.250 --> 00:43:48.010
can be written as a product of
at most r elements from S,
00:43:48.010 --> 00:43:49.960
what do these
elements look like?
00:43:54.410 --> 00:43:55.350
What do you think?
00:43:55.350 --> 00:44:01.420
So if you look at
elements in here,
00:44:01.420 --> 00:44:06.232
how large can this entry,
the 1, comma, 2 entry be?
00:44:06.232 --> 00:44:07.080
r.
00:44:07.080 --> 00:44:10.140
So each time you do
addition, so it's at most r.
00:44:10.140 --> 00:44:13.090
So let me be a bit rough
here, and say it's big O of r.
00:44:13.090 --> 00:44:16.630
And likewise, the 2, comma,
3 entry is also big O of r.
00:44:16.630 --> 00:44:19.660
What about the top
right entry over here?
00:44:23.240 --> 00:44:25.380
So it grows like r
squared, because there is
00:44:25.380 --> 00:44:27.170
an extra multiplication term.
00:44:29.790 --> 00:44:33.060
So you can be much more
precise about the growth rate
00:44:33.060 --> 00:44:35.320
of these individual entries.
00:44:35.320 --> 00:44:39.150
But very roughly, it looks
like this ball over here.
00:44:39.150 --> 00:44:47.230
So the size of S,
the r-th ball of S,
00:44:47.230 --> 00:44:52.760
is roughly, it's on the
order of the 4th power of r.
00:44:55.650 --> 00:45:02.348
So in particular, the
doubling constant,
00:45:02.348 --> 00:45:05.960
if r is reasonably
large, is what?
00:45:10.440 --> 00:45:12.765
What happens when
we go from r to 2r?
00:45:12.765 --> 00:45:16.185
The size increases by
a factor of around 16.
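This growth claim can be checked numerically. A minimal sketch, reusing the triple encoding (a, b, c) for the matrices; the choice to count the identity in each ball is my convention:

```python
# Ball growth in the discrete Heisenberg group.  S consists of the four
# generators X, X^{-1}, Y, Y^{-1}; the r-th ball is every element
# writable as a product of at most r generators (identity included).

def mult(g, h):
    a, b, c = g
    a2, b2, c2 = h
    return (a + a2, b + b2, c + c2 + a * b2)

GENERATORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]

def ball_sizes(radius):
    """Breadth-first search outward from the identity; returns |S^r| for r = 0..radius."""
    ball = {(0, 0, 0)}
    frontier = {(0, 0, 0)}
    sizes = [1]
    for _ in range(radius):
        frontier = {mult(g, s) for g in frontier for s in GENERATORS} - ball
        ball |= frontier
        sizes.append(len(ball))
    return sizes

sizes = ball_sizes(16)
print(sizes[1], sizes[2])    # 5 17
print(sizes[16] / sizes[8])  # tends to 2^4 = 16 as the radius grows
```

The quadratic growth of the corner entry is what pushes the total from r^3 (which three independent entries of size O(r) would give) up to order r^4.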
00:45:21.610 --> 00:45:26.340
So that's an example of a
set in a non-abelian group
00:45:26.340 --> 00:45:28.900
with bounded doubling,
which is genuinely
00:45:28.900 --> 00:45:31.410
different from the examples
we have seen so far.
00:45:31.410 --> 00:45:32.974
So that's non-abelian.
00:45:32.974 --> 00:45:33.474
Yeah.
00:45:33.474 --> 00:45:37.370
AUDIENCE: [INAUDIBLE]
00:45:37.370 --> 00:45:41.220
YUFEI ZHAO: The question
is, is the size--
00:45:41.220 --> 00:45:44.740
we've shown the size is--
00:45:44.740 --> 00:45:47.280
I'm not being very
precise here, but you can
00:45:47.280 --> 00:45:48.790
do upper bound and lower bound.
00:45:48.790 --> 00:45:51.810
So size turns out to be
the order of r to the 4.
00:45:55.538 --> 00:45:57.330
So you want to show
that there are actually
00:45:57.330 --> 00:45:59.290
enough elements over here
that you can fill in,
00:45:59.290 --> 00:46:00.770
but I'll leave that to you.
00:46:08.218 --> 00:46:10.010
Can you build other
examples like this one?
00:46:13.370 --> 00:46:14.810
Yeah.
00:46:14.810 --> 00:46:16.240
AUDIENCE: How do
we know that this
00:46:16.240 --> 00:46:20.793
isn't similar to a coset,
the direct sum [INAUDIBLE]?
00:46:20.793 --> 00:46:22.210
YUFEI ZHAO: Question
is, how do we
00:46:22.210 --> 00:46:26.050
know this isn't like a coset
sum or a coset progression?
00:46:26.050 --> 00:46:29.830
For one thing, this
is not abelian.
00:46:29.830 --> 00:46:33.970
S, if you multiply entries
of S in different orders,
00:46:33.970 --> 00:46:36.830
you get different elements.
00:46:36.830 --> 00:46:40.030
So already in that way, it's
different from the examples
00:46:40.030 --> 00:46:41.135
that we have seen before.
00:46:41.135 --> 00:46:42.010
But no, you're right.
00:46:42.010 --> 00:46:44.740
So maybe we can write
this as a semi-direct product
00:46:44.740 --> 00:46:46.630
in terms of things
we have seen before.
00:46:46.630 --> 00:46:49.600
And it is, in some sense,
a semi-direct product,
00:46:49.600 --> 00:46:51.970
but it's a very special
kind of semi-direct product.
00:46:58.202 --> 00:47:00.660
From that example, you can
build bigger examples, of course
00:47:00.660 --> 00:47:03.060
with more entries in the matrix.
00:47:03.060 --> 00:47:06.480
But more generally,
these things are
00:47:06.480 --> 00:47:09.270
what are known as
"nilpotent groups."
00:47:09.270 --> 00:47:17.970
So that's an example
of a nilpotent group.
00:47:17.970 --> 00:47:20.430
And to remind you, the
definition of a nilpotent group
00:47:20.430 --> 00:47:23.700
is a group where the lower
central series eventually
00:47:23.700 --> 00:47:24.360
terminates.
00:47:32.790 --> 00:47:36.500
In particular, inside
that if you look at--
00:47:36.500 --> 00:47:41.100
so this is the commutator of
G, so look at all the elements
00:47:41.100 --> 00:47:44.208
of the form
x, y, x inverse,
00:47:44.208 --> 00:47:46.750
y inverse-- the group generated
by elements that can be written this way.
00:47:46.750 --> 00:47:48.240
So that's a subgroup.
00:47:48.240 --> 00:47:57.820
And if I repeat this
operation enough times,
00:47:57.820 --> 00:48:02.340
I eventually would
get just the identity.
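The construction just described is the lower central series; written out in standard notation (the subscripts are my labels for the terms):

```latex
% Lower central series: G_1 = G, and G_{k+1} = [G, G_k] is the subgroup
% generated by commutators of G with G_k.  G is nilpotent (of step s)
% if the series reaches the trivial group after finitely many steps:
\[
  G = G_1 \ge G_2 \ge G_3 \ge \cdots,
  \qquad
  G_{k+1} = [G, G_k]
          = \left\langle\, x y x^{-1} y^{-1} : x \in G,\ y \in G_k \,\right\rangle,
  \qquad
  G_{s+1} = \{1\}.
\]
% For the discrete Heisenberg group the series terminates after two
% steps: [G, G] is the center (only the corner entry survives), and
% [G, [G, G]] = {1}, so it is nilpotent of step 2.
```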
00:48:02.340 --> 00:48:03.940
And you can try it
on that group.
00:48:03.940 --> 00:48:09.150
If you take the commutator,
you essentially get rid
00:48:09.150 --> 00:48:15.630
of abelian-ness and you
move up the diagonal--
00:48:15.630 --> 00:48:19.630
taking a commutator,
you get rid of these--
00:48:19.630 --> 00:48:20.650
both of these two entries.
00:48:20.650 --> 00:48:23.790
So you get Z alone.
00:48:23.790 --> 00:48:26.095
If you do it one more time,
you zero out that entry.
00:48:33.710 --> 00:48:39.930
And so more generally, all
of these nilpotent groups
00:48:39.930 --> 00:48:49.680
have this phenomenon, have the
polynomial growth phenomenon.
00:48:55.180 --> 00:49:00.220
So if you take a set of
generators and look at a ball,
00:49:00.220 --> 00:49:01.680
and look at the
volume of the ball,
00:49:01.680 --> 00:49:04.210
how does the volume of the
ball grow with the radius?
00:49:04.210 --> 00:49:07.160
It grows like a polynomial.
00:49:07.160 --> 00:49:09.250
And so let me define that.
00:49:09.250 --> 00:49:21.330
So given G, a finitely generated
group, so generated by set S,
00:49:21.330 --> 00:49:36.560
we say that G has polynomial
growth if the size of S to the r
00:49:36.560 --> 00:49:40.350
grows like at most
a polynomial in r.
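In symbols, with S a finite symmetric generating set (the constants C and d are the quantified parameters):

```latex
% G, generated by the finite set S, has polynomial growth if the balls
% S^r = \{ s_1 s_2 \cdots s_k : k \le r,\ s_i \in S \} satisfy
\[
  |S^r| \le C\, r^{d} \qquad \text{for all } r \ge 1,
\]
% for some constants C, d > 0.  For example, Z^m has |S^r| = \Theta(r^m),
% and the discrete Heisenberg group has |S^r| = \Theta(r^4).
```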
00:49:48.470 --> 00:49:50.990
It's worth noting that
this definition is really
00:49:50.990 --> 00:49:54.740
a definition about
G. It does not depend
00:49:54.740 --> 00:49:56.487
on the choice of generators.
00:50:02.340 --> 00:50:04.790
You can have different choices,
generators for the group.
00:50:04.790 --> 00:50:07.640
But if it has polynomial
growth with respect
00:50:07.640 --> 00:50:10.210
to one set of generators,
then it's the same.
00:50:10.210 --> 00:50:12.260
It also has polynomial
growth with regards
00:50:12.260 --> 00:50:13.820
to every other set.
00:50:13.820 --> 00:50:18.970
So we've seen an example of
groups with polynomial growth.
00:50:18.970 --> 00:50:21.118
Abelian groups have
polynomial growth.
00:50:21.118 --> 00:50:22.660
So if you think of
polynomial growth,
00:50:22.660 --> 00:50:25.690
think lattice or Z to the m.
00:50:25.690 --> 00:50:27.930
So if you take a
growing ball, it
00:50:27.930 --> 00:50:34.480
has size growing like
r to the dimension.
00:50:34.480 --> 00:50:36.600
But nilpotent groups
are another example
00:50:36.600 --> 00:50:39.360
of groups with
polynomial growth.
00:50:39.360 --> 00:50:42.600
And these are, intuitively
at least for now, related
00:50:42.600 --> 00:50:44.707
to bounded doubling.
00:50:44.707 --> 00:50:47.040
If it's polynomial growth,
then it has bounded doubling.
00:50:49.580 --> 00:50:52.840
So is there a classification
of groups with bounded--
00:50:52.840 --> 00:50:54.740
with polynomial growth?
00:50:54.740 --> 00:50:57.400
So if I tell you a group--
so an infinite group always,
00:50:57.400 --> 00:51:00.400
because otherwise if finite,
then it maxes out already
00:51:00.400 --> 00:51:01.120
at some point.
00:51:01.120 --> 00:51:03.530
So I give you an infinite group.
00:51:03.530 --> 00:51:05.890
I tell you it has
polynomial growth.
00:51:05.890 --> 00:51:07.648
What can you tell
me about this group?
00:51:07.648 --> 00:51:09.190
Is there some
characterization that's
00:51:09.190 --> 00:51:12.133
an inverse of what
we've seen so far?
00:51:12.133 --> 00:51:13.050
And the answer is yes.
00:51:13.050 --> 00:51:16.060
And this is a famous and
deep result of Gromov.
00:51:19.290 --> 00:51:25.180
So Gromov's theorem on
groups of polynomial growth
00:51:25.180 --> 00:51:26.660
from the '80s.
00:51:26.660 --> 00:51:33.910
Gromov showed that
a finitely generated
00:51:33.910 --> 00:51:42.400
group has polynomial
growth if and only
00:51:42.400 --> 00:51:55.230
if it's virtually
nilpotent, where "virtually"
00:51:55.230 --> 00:52:01.200
is an adverb in
group theory where
00:52:01.200 --> 00:52:04.950
you have some property like
"abelian," or "solvable,"
00:52:04.950 --> 00:52:06.030
or whatever.
00:52:06.030 --> 00:52:16.490
So virtually P means that there
exists a finite index subgroup
00:52:16.490 --> 00:52:24.950
with property P. So
"virtually nilpotent"
00:52:24.950 --> 00:52:29.100
means there is a finite index
subgroup that is nilpotent.
00:52:29.100 --> 00:52:34.140
So it completely characterizes
groups of polynomial growth.
00:52:34.140 --> 00:52:37.080
So basically, all the
examples we've seen so far
00:52:37.080 --> 00:52:43.680
are representative, so up to
changing by a finite index
00:52:43.680 --> 00:52:45.780
subgroup, which as
you would expect,
00:52:45.780 --> 00:52:50.520
shouldn't change the
growth nature by so much.
00:52:50.520 --> 00:52:53.730
There are some analogies
to be made here with geometry.
00:52:53.730 --> 00:52:59.130
For example, you ask
in Euclidean space,
00:52:59.130 --> 00:53:03.560
how fast is the ball
of radius r growing?
00:53:03.560 --> 00:53:07.580
In dimension d, it
grows like r to the d.
00:53:07.580 --> 00:53:10.920
What about in the
hyperbolic space?
00:53:10.920 --> 00:53:13.530
Does anyone know how fast,
in a hyperbolic space,
00:53:13.530 --> 00:53:15.570
a ball of radius r grows?
00:53:19.130 --> 00:53:23.100
It's exponential in the radius.
00:53:23.100 --> 00:53:27.390
So for non-negatively
curved spaces,
00:53:27.390 --> 00:53:30.240
the balls grow polynomially.
00:53:30.240 --> 00:53:33.720
But for something that's
negatively curved,
00:53:33.720 --> 00:53:35.970
in particular the
hyperbolic space,
00:53:35.970 --> 00:53:39.940
the ball growth
might be exponential.
00:53:39.940 --> 00:53:43.190
You have a similar
phenomenon happening here.
00:53:43.190 --> 00:53:44.870
The opposite of
polynomial growth
00:53:44.870 --> 00:53:46.400
is, well, super
polynomial growth,
00:53:46.400 --> 00:53:53.050
but one specific example is
that of a free group, where
00:53:53.050 --> 00:53:56.730
there are no relations
between the generators.
00:53:56.730 --> 00:54:03.930
In that case, the balls,
they grow exponentially.
00:54:03.930 --> 00:54:08.550
So the balls grow
exponentially in the radius.
00:54:08.550 --> 00:54:11.470
Gromov's theorem
is a deep theorem.
00:54:11.470 --> 00:54:16.150
And its original proof
used some very hard tools
00:54:16.150 --> 00:54:17.830
coming from geometry.
00:54:17.830 --> 00:54:23.230
And Gromov developed a notion
of convergence of metric spaces,
00:54:23.230 --> 00:54:26.740
somewhat akin to our
discussion of graph limits.
00:54:26.740 --> 00:54:28.960
So starting with
discrete objects,
00:54:28.960 --> 00:54:32.890
he looked at some convergence
to some continuous objects,
00:54:32.890 --> 00:54:37.420
and then used some
very deep results
00:54:37.420 --> 00:54:41.740
from the classification
of locally compact groups
00:54:41.740 --> 00:54:44.410
to derive this result over here.
00:54:48.060 --> 00:54:50.840
So this proof has been
quite influential,
00:54:50.840 --> 00:54:53.930
and is related to
something called
00:54:53.930 --> 00:55:05.640
"Hilbert's fifth problem, which
concerns characterizations
00:55:05.640 --> 00:55:07.336
of Lie groups.
00:55:07.336 --> 00:55:09.620
So all of these are
inverse-type problems.
00:55:09.620 --> 00:55:11.690
I tell you some structure
has some property.
00:55:11.690 --> 00:55:15.770
Describe that structure.
00:55:15.770 --> 00:55:20.030
What does this all have to
do with Freiman's theorem?
00:55:20.030 --> 00:55:21.310
Already you see some relation.
00:55:21.310 --> 00:55:23.602
So there seems, at least
intuitively, some relationship
00:55:23.602 --> 00:55:26.270
between groups of polynomial
growth versus subsets
00:55:26.270 --> 00:55:27.650
of bounded doubling.
00:55:27.650 --> 00:55:31.320
One implies the other,
although not conversely.
00:55:31.320 --> 00:55:32.570
And they are indeed related.
00:55:32.570 --> 00:55:35.720
And this comes out of
some very recent work.
00:55:35.720 --> 00:55:37.790
I should also mention
that Gromov's theorem has
00:55:37.790 --> 00:55:40.730
been simplified
by Kleiner, who
00:55:40.730 --> 00:55:44.930
gave an important
simplification, a more
00:55:44.930 --> 00:55:47.080
elementary proof of
Gromov's theorem.
00:55:49.790 --> 00:55:52.160
So let's talk about
the non-abelian version
00:55:52.160 --> 00:55:53.330
of Freiman's theorem.
00:55:59.630 --> 00:56:02.270
We would like some
result that says
00:56:02.270 --> 00:56:10.220
that is it true that every
set, most every set of--
00:56:10.220 --> 00:56:12.705
so previously, we
had small doubling.
00:56:12.705 --> 00:56:15.080
You want to have some similar
notion, although it may not
00:56:15.080 --> 00:56:19.330
be exactly small doubling,
but let me not be very precise
00:56:19.330 --> 00:56:24.043
and to say, "small doubling."
00:56:24.043 --> 00:56:26.210
In literature, these things
are sometimes also known
00:56:26.210 --> 00:56:27.770
as "approximate groups."
00:56:32.027 --> 00:56:34.930
So if you look this up, you will
get to the relevant literature
00:56:34.930 --> 00:56:36.250
on the subject.
00:56:36.250 --> 00:56:39.160
Most every set of small doubling
in some non-abelian group
00:56:39.160 --> 00:56:45.670
behaves like one of these
known examples, something
00:56:45.670 --> 00:56:58.160
which is some combination of
subgroups and nilpotent balls.
00:57:03.300 --> 00:57:06.240
So these combinations are
sometimes known as "co-set
00:57:06.240 --> 00:57:07.729
nilprogressions."
00:57:15.560 --> 00:57:18.910
So this was something
that was only
00:57:18.910 --> 00:57:21.190
explored in the
past 10 years or so
00:57:21.190 --> 00:57:25.540
in a series of very
difficult works.
00:57:25.540 --> 00:57:27.940
Previously, it had
been known, and still
00:57:27.940 --> 00:57:30.190
was being investigated for
various special classes
00:57:30.190 --> 00:57:32.800
of matrix groups or
special classes of groups
00:57:32.800 --> 00:57:35.850
like solvable
groups and whatnot,
00:57:35.850 --> 00:57:38.900
that are more explicit
or easier to handle
00:57:38.900 --> 00:57:40.730
or closer to the abelian analog.
00:57:43.580 --> 00:57:50.370
There was important
work of Hrushovski,
00:57:50.370 --> 00:57:55.090
which was published
about 10 years ago,
00:57:55.090 --> 00:57:58.320
who showed using model
theory techniques,
00:57:58.320 --> 00:58:05.700
so using methods from
logic, that a weak version
00:58:05.700 --> 00:58:12.970
of Freiman's theorem is
true for non-abelian groups.
00:58:12.970 --> 00:58:22.842
And later on, Breuillard, Green,
and Tao, building on Hrushovski's
00:58:22.842 --> 00:58:24.800
work-- so this actually
came quite a bit later,
00:58:24.800 --> 00:58:26.730
even though the journal
publication dates are the same
00:58:26.730 --> 00:58:27.380
year--
00:58:27.380 --> 00:58:31.430
so they were able to build
on Hrushovski's work,
00:58:31.430 --> 00:58:35.480
greatly expanding on it, and
going back to some of the older
00:58:35.480 --> 00:58:40.070
techniques coming from Hilbert's
fifth problem, and as a result,
00:58:40.070 --> 00:58:43.580
proved an inverse
structure theorem
00:58:43.580 --> 00:58:46.760
that gave some kind of
answer to this question
00:58:46.760 --> 00:58:48.830
of non-abelian Freiman.
00:58:48.830 --> 00:58:51.380
So we now do have
some theorem which
00:58:51.380 --> 00:58:54.320
is like Freiman's theorem
for abelian groups that
00:58:54.320 --> 00:58:57.560
says in a non-abelian group,
if you have something that
00:58:57.560 --> 00:59:02.060
resembles small doubling, then
the set must, in some sense,
00:59:02.060 --> 00:59:06.680
look like a combination of
subgroups and nilpotent balls.
00:59:06.680 --> 00:59:09.350
But let me not be
precise at all.
00:59:09.350 --> 00:59:10.940
The methods here
build on Hrushovski.
00:59:10.940 --> 00:59:14.930
And Hrushovski used model
theory, which is kind of--
00:59:14.930 --> 00:59:19.640
it's something
where-- in particular,
00:59:19.640 --> 00:59:21.560
one feature of all
of these proofs
00:59:21.560 --> 00:59:22.910
is that they give no bounds.
00:59:26.950 --> 00:59:29.650
Similar to what we've seen
earlier in the course,
00:59:29.650 --> 00:59:33.790
in proofs that involved
compactness, what happens
00:59:33.790 --> 00:59:36.055
here is that the arguments
use ultrafilters.
00:59:39.420 --> 00:59:43.190
So there are these constructions
from mathematical logic.
00:59:43.190 --> 00:59:46.340
And like compactness,
they give no bounds.
00:59:46.340 --> 00:59:49.720
So it remains an open problem
to prove Freiman's theorem
00:59:49.720 --> 00:59:52.420
for non-abelian groups
with some concrete bounds.
00:59:52.420 --> 00:59:53.197
Question.
00:59:53.197 --> 00:59:57.230
AUDIENCE: [INAUDIBLE]
nilpotent ball?
00:59:57.230 --> 00:59:59.480
YUFEI ZHAO: What
is a nilpotent ball?
00:59:59.480 --> 01:00:01.610
I don't want to give
a precise definition,
01:00:01.610 --> 01:00:04.610
but roughly speaking,
it's balls that
01:00:04.610 --> 01:00:07.790
come out of those
types of constructions.
01:00:07.790 --> 01:00:09.610
So you take a
nilpotent subgroup.
01:00:09.610 --> 01:00:11.030
You take a nilpotent group.
01:00:11.030 --> 01:00:13.400
You look at an image
of a nilpotent group
01:00:13.400 --> 01:00:23.640
into your group, and then look
at the image of that ball, so
01:00:23.640 --> 01:00:29.130
something that looks like one
of the previous constructions.
01:00:29.130 --> 01:00:31.495
So that's all I want to say
about non-abelian extensions
01:00:31.495 --> 01:00:32.370
of Freiman's theorem.
01:00:32.370 --> 01:00:33.598
Any questions?
01:00:33.598 --> 01:00:35.140
AUDIENCE: Would you
say one more time
01:00:35.140 --> 01:00:37.408
what you mean by
"approximate group?"
01:00:37.408 --> 01:00:38.700
YUFEI ZHAO: So what I mean by--
01:00:38.700 --> 01:00:41.670
you can look in the papers and
see the precise definitions,
01:00:41.670 --> 01:00:47.700
but roughly speaking,
it's that if you have--
01:00:47.700 --> 01:00:51.090
there are different kinds of
definitions and most of them
01:00:51.090 --> 01:00:51.930
are equivalent.
01:00:51.930 --> 01:00:54.120
But one version is
that you have a set A
01:00:54.120 --> 01:01:05.940
such that A times A is coverable
by K translates of A,
01:01:05.940 --> 01:01:08.760
so it's a bit more than
just the size information,
01:01:08.760 --> 01:01:11.770
but it's actually related
to size information.
01:01:11.770 --> 01:01:16.060
So we've already seen in
this course how many of these
01:01:16.060 --> 01:01:18.060
different notions can go
back and forth from one
01:01:18.060 --> 01:01:21.040
to the other, covering
to size, and whatnot.
01:01:27.110 --> 01:01:28.880
The final thing I
want to discuss today
01:01:28.880 --> 01:01:31.780
is one of the most
central open problems
01:01:31.780 --> 01:01:35.230
in additive combinatorics going
back to the abelian version.
01:01:35.230 --> 01:01:37.930
So this is known as the
"polynomial Freiman-Ruzsa
01:01:37.930 --> 01:01:38.933
conjecture."
01:01:48.600 --> 01:01:52.050
So we would like some
kind of a Freiman theorem
01:01:52.050 --> 01:01:56.370
that preserves the constants
up to polynomial changes
01:01:56.370 --> 01:01:59.788
without losing an exponent.
01:01:59.788 --> 01:02:02.980
Now, from earlier
discussions, I showed you
01:02:02.980 --> 01:02:09.950
that the bounds that we almost
proved are close to the truth.
01:02:09.950 --> 01:02:11.710
You do need some kind
of exponential loss
01:02:11.710 --> 01:02:14.975
in the blowup size of the GAP.
01:02:14.975 --> 01:02:16.600
But it turns out
those kind of examples
01:02:16.600 --> 01:02:18.100
are slightly misleading.
01:02:18.100 --> 01:02:20.800
So let's look at the examples
of the constructions again.
01:02:20.800 --> 01:02:26.500
So if A-- so just for
simplicity in exposition,
01:02:26.500 --> 01:02:31.550
I'm going to stick with F2
to the n, at least initially.
01:02:31.550 --> 01:02:41.090
So if A is an independent
set of size n,
01:02:41.090 --> 01:02:45.660
then K, being the
doubling constant of A,
01:02:45.660 --> 01:02:48.650
is roughly like n over 2.
01:02:48.650 --> 01:02:52.145
And yet the subgroup
that contains A
01:02:52.145 --> 01:03:00.110
has size 2 to the something
on the order of K, times the size of A.
01:03:00.110 --> 01:03:04.830
So you necessarily incur an
exponential loss over here.
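As a quick sanity check on this independent-set example, here is a small Python sketch (with a hypothetical choice n = 10, not from the lecture) computing the doubling constant and the size of the generated subgroup:

```python
def doubling_constant(A):
    """|A+A| / |A| for a set A of F_2 vectors, with + meaning coordinatewise XOR."""
    sums = {tuple(x ^ y for x, y in zip(a, b)) for a in A for b in A}
    return len(sums) / len(A)

n = 10
# A = the n standard basis vectors e_1, ..., e_n of F_2^n (an independent set)
A = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

K = doubling_constant(A)
# A+A is the zero vector plus the C(n,2) sums e_i + e_j, so K is roughly n/2
print(K)       # 4.6 when n = 10
# but the subgroup generated by A is all of F_2^n: size 2^n, exponential in K
print(2 ** n)  # 1024
```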
01:03:04.830 --> 01:03:08.740
Now, you might complain that the
size of A here is basically K.
01:03:08.740 --> 01:03:11.570
But of course, I can
blow up this example
01:03:11.570 --> 01:03:17.600
by considering what happens
if you take each element here,
01:03:17.600 --> 01:03:19.610
and blow it up into
an entire subspace.
01:03:25.110 --> 01:03:28.150
So the e's are the
coordinate vectors.
01:03:28.150 --> 01:03:32.340
So now I'm sitting inside
F2 to the m plus n.
01:03:32.340 --> 01:03:33.690
And that gives me this set.
01:03:36.800 --> 01:03:39.220
The doubling constant is
still the same as before.
01:03:44.900 --> 01:03:54.270
And yet, we see that the
subgroup generated by A
01:03:54.270 --> 01:03:56.310
still has this
exponential blowup
01:03:56.310 --> 01:04:00.630
in this constant, exponential
in the doubling constant.
01:04:00.630 --> 01:04:04.050
But now you see in
this example here,
01:04:04.050 --> 01:04:07.320
even though the
subgroup generated by A
01:04:07.320 --> 01:04:11.190
can be much larger than A, so
everything's still constant, so
01:04:11.190 --> 01:04:14.100
much larger as
a function of the doubling
01:04:14.100 --> 01:04:18.740
constant, A has a
very large structure.
01:04:18.740 --> 01:04:25.070
So A contains a
very large subspace.
01:04:30.625 --> 01:04:32.685
By "subspace," I
mean affine subspace.
01:04:36.300 --> 01:04:40.920
And the subspace here is
comparable to the size
01:04:40.920 --> 01:04:41.850
of A itself.
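This blown-up example can be checked numerically (a small sketch with hypothetical parameters m = 3, n = 6; the set is A = F_2^m x {0, e_1, ..., e_n} inside F_2^{m+n}, as described above):

```python
import itertools

m, n = 3, 6  # hypothetical small parameters for illustration

V = list(itertools.product((0, 1), repeat=m))  # the subspace F_2^m
# 0 together with the coordinate vectors e_1, ..., e_n of F_2^n
E = [(0,) * n] + [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]

# blow up each of 0, e_1, ..., e_n into a full coset of V inside F_2^{m+n}
A = {v + t for v in V for t in E}              # tuple concatenation: (v, t)

sums = {tuple(x ^ y for x, y in zip(a, b)) for a in A for b in A}
K = len(sums) / len(A)
print(K)       # still roughly n/2 (22/7 here): unchanged by the blow-up
# A contains the subspace V x {0}, of size |A|/(n+1): comparable to |A|,
# losing only a factor polynomial in K
assert {v + (0,) * n for v in V} <= A
```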
01:04:44.900 --> 01:04:47.980
So you might
wonder, if you don't
01:04:47.980 --> 01:04:52.540
care about containing A
inside a single subspace,
01:04:52.540 --> 01:04:56.920
can you do much better
in terms of bounds?
01:04:56.920 --> 01:04:59.170
And that's the content of
the polynomial Freiman-Ruzsa
01:04:59.170 --> 01:05:01.030
conjecture.
01:05:01.030 --> 01:05:07.030
The PFR conjecture
for F2 to the m
01:05:07.030 --> 01:05:13.609
says that if you have
a subset of F2 to the m
01:05:13.609 --> 01:05:21.010
and A plus A is size at
most K times the size of A,
01:05:21.010 --> 01:05:28.840
then there exists a
subspace V of size
01:05:28.840 --> 01:05:44.880
at most the size of A such that V contains
a large proportion of A.
01:05:44.880 --> 01:05:46.350
And the large here--
01:05:46.350 --> 01:05:50.220
we only lose something that is
polynomial in these doubling
01:05:50.220 --> 01:05:51.162
constants.
01:05:54.120 --> 01:05:55.540
So that's the case.
01:05:55.540 --> 01:05:57.300
It's over here.
01:05:57.300 --> 01:06:04.160
So instead of containing A
inside an entire subspace,
01:06:04.160 --> 01:06:07.850
I just want to contain a large
fraction of A in a subspace.
01:06:07.850 --> 01:06:09.740
And the conjecture
is that I do not
01:06:09.740 --> 01:06:12.725
need to incur exponential
losses in the constants.
01:06:12.725 --> 01:06:15.520
AUDIENCE: So V is
an affine subspace?
01:06:15.520 --> 01:06:18.390
YUFEI ZHAO: V is--
01:06:18.390 --> 01:06:20.552
question is, V is
an affine subspace.
01:06:20.552 --> 01:06:22.260
You can think of V as
an affine subspace.
01:06:22.260 --> 01:06:23.910
You can think of
V as a subspace.
01:06:23.910 --> 01:06:26.179
It doesn't actually matter
in this formulation.
01:06:35.170 --> 01:06:36.860
There's an equivalent
formulation
01:06:36.860 --> 01:06:41.660
which you might like better,
where you might complain,
01:06:41.660 --> 01:06:43.382
initially, PFR is initially--
01:06:43.382 --> 01:06:44.840
Freiman's theorem
is about covering
01:06:44.840 --> 01:06:49.005
A. And now we've only
covered a part of A.
01:06:49.005 --> 01:06:50.880
But of course, we saw
from earlier arguments,
01:06:50.880 --> 01:06:53.160
you can use Ruzsa's
covering lemma to go
01:06:53.160 --> 01:06:56.890
from covering a part of
A to covering all of A.
01:06:56.890 --> 01:07:00.370
Indeed, it is the case
that this formulation
01:07:00.370 --> 01:07:04.240
is equivalent to the
formulation that if A
01:07:04.240 --> 01:07:14.300
is in F2 to the n and A plus
A has size at most K times the size of A,
01:07:14.300 --> 01:07:23.150
then there exists some subspace
V with the size of V no larger
01:07:23.150 --> 01:07:28.010
than the size of
A, such that A can
01:07:28.010 --> 01:07:43.100
be covered by polynomial
in K many cosets of V.
01:07:43.100 --> 01:07:46.130
We see that here.
01:07:46.130 --> 01:07:52.790
Here A has doubling constant K,
which is around the same as n.
01:07:52.790 --> 01:07:58.760
And even though I cannot
contain A by a single subspace
01:07:58.760 --> 01:08:04.710
of roughly the same size, I
can use K different translates
01:08:04.710 --> 01:08:14.440
to cover A. Any questions?
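A small check of this covering version on the blow-up example from the board (same hypothetical parameters m = 3, n = 6 as before, an illustration rather than anything from the lecture):

```python
import itertools

m, n = 3, 6  # hypothetical small parameters
V = list(itertools.product((0, 1), repeat=m))
E = [(0,) * n] + [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
A = {v + t for v in V for t in E}  # the blown-up set: F_2^m x {0, e_1, ..., e_n}

# W = V x {0} is a subspace no bigger than A, and the n+1 (roughly 2K)
# translates (0, t) + W for t in {0, e_1, ..., e_n} cover A exactly
W = [v + (0,) * n for v in V]
cover = {tuple(x ^ y for x, y in zip(w, (0,) * m + t)) for w in W for t in E}
assert A <= cover and len(E) == n + 1  # poly(K) many cosets of W suffice
```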
01:08:19.340 --> 01:08:22.460
So I want to leave it
to you as an exercise
01:08:22.460 --> 01:08:27.790
to prove that these two versions
are equivalent to each other.
01:08:27.790 --> 01:08:29.154
It's not too hard.
01:08:29.154 --> 01:08:31.510
It's something if I had
more time, I would show you.
01:08:31.510 --> 01:08:37.279
It uses the Ruzsa covering lemma
to prove this equivalence.
01:08:39.859 --> 01:08:41.220
The nice thing about the--
01:08:41.220 --> 01:08:44.180
so the polynomial Freiman-Ruzsa
conjecture, PFR conjecture,
01:08:44.180 --> 01:08:46.609
is considered a
central conjecture
01:08:46.609 --> 01:08:51.140
in additive combinatorics,
because it has many equivalent
01:08:51.140 --> 01:08:54.590
formulations and relates
to many problems that
01:08:54.590 --> 01:08:56.990
are central to the subject.
01:08:56.990 --> 01:08:59.720
So we would like some kind of an
inverse theorem that gives you
01:08:59.720 --> 01:09:01.220
these polynomial bounds.
01:09:01.220 --> 01:09:06.640
And I'll mention a couple of
these equivalent formulations.
01:09:06.640 --> 01:09:10.200
Here is an equivalent
formulation
01:09:10.200 --> 01:09:16.700
which is rather attractive,
where instead of considering
01:09:16.700 --> 01:09:18.859
subsets, we're
going to formulate
01:09:18.859 --> 01:09:22.354
something that has to do with
approximate homomorphisms.
01:09:27.310 --> 01:09:32.010
So the statement,
still a conjecture,
01:09:32.010 --> 01:09:37.950
is that if F is a function
from a Boolean space
01:09:37.950 --> 01:09:45.380
to another Boolean space
such that F is approximately
01:09:45.380 --> 01:09:52.727
a homomorphism in the sense that
the set of possible errors--
01:09:52.727 --> 01:09:54.480
so if it's actually
a homomorphism,
01:09:54.480 --> 01:09:56.460
then this quantity is
always equal to 0--
01:09:56.460 --> 01:09:59.760
but it's approximately a
homomorphism in the sense
01:09:59.760 --> 01:10:11.140
that the set of such errors
is bounded by K in size,
01:10:11.140 --> 01:10:14.730
the conclusion, the conjecture
claims that then there
01:10:14.730 --> 01:10:24.650
exists an actual homomorphism,
an actual linear map G,
01:10:24.650 --> 01:10:32.810
such that F is
very close to G, as
01:10:32.810 --> 01:10:40.450
in that the set of possible
discrepancies between F and G
01:10:40.450 --> 01:10:52.890
is bounded, where you only
lose at most a polynomial in K.
01:10:52.890 --> 01:10:55.600
So if you are an approximate
homomorphism in this sense,
01:10:55.600 --> 01:11:00.500
then you are actually very
close to an actual linear map.
01:11:00.500 --> 01:11:06.050
Now, it is not too hard to prove
a much quantitatively weaker
01:11:06.050 --> 01:11:07.580
version of this statement.
01:11:07.580 --> 01:11:15.985
So I claim that it is trivial
to show an upper bound of at most 2
01:11:15.985 --> 01:11:20.860
to the K over here.
01:11:20.860 --> 01:11:22.300
So think about that.
01:11:22.300 --> 01:11:25.390
So if I give you
an F, I can just
01:11:25.390 --> 01:11:30.050
think about what the values
of F are on the basis,
01:11:30.050 --> 01:11:32.660
and extend it to a linear map.
01:11:35.570 --> 01:11:44.160
Then this set is necessarily
contained in the span of that set,
01:11:44.160 --> 01:11:51.150
so has size at most 2 to the K.
But it's open to show you only
01:11:51.150 --> 01:12:01.870
have to lose a polynomial in K.
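That trivial 2-to-the-K argument can be sketched in code (a toy illustration with hypothetical parameters n = 4, m = 3, and artificial sparse noise standing in for an approximate homomorphism):

```python
import itertools
import random

random.seed(0)
n, m = 4, 3  # hypothetical small dimensions
cube = list(itertools.product((0, 1), repeat=n))

def add(u, v):
    # in F_2^k, both addition and subtraction are coordinatewise XOR
    return tuple(a ^ b for a, b in zip(u, v))

# a toy approximate homomorphism: a random linear map plus sparse noise
M = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
def linear(x):
    return tuple(sum(M[i][j] * x[j] for j in range(n)) % 2 for i in range(m))

noisy = {random.choice(cube): tuple(random.randint(0, 1) for _ in range(m))
         for _ in range(2)}
def f(x):
    return add(linear(x), noisy.get(x, (0,) * m))

# the error set {f(x+y) - f(x) - f(y)}: small, since f is nearly linear
D = {add(f(add(x, y)), add(f(x), f(y))) for x in cube for y in cube}

# g = the linear map agreeing with f on the basis e_1, ..., e_n
basis = [tuple(1 if j == i else 0 for j in range(n)) for i in range(n)]
cols = [f(e) for e in basis]
def g(x):
    out = (0,) * m
    for xi, c in zip(x, cols):
        if xi:
            out = add(out, c)
    return out

def span(S):
    sp = {(0,) * m}
    for d in S:
        sp = sp | {add(v, d) for v in sp}
    return sp

# the discrepancy set {f(x) - g(x)} lies in the span of D,
# so it has at most 2^|D| elements: the trivial exponential bound
disc = {add(f(x), g(x)) for x in cube}
assert disc <= span(D) and len(span(D)) <= 2 ** len(D)
```

Writing x in the basis and telescoping, f(x) minus g(x) is a sum of elements of D, which is exactly why the discrepancy set sits inside the span of D.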
01:12:01.870 --> 01:12:06.890
There is also a version of
the polynomial Freiman-Ruzsa
01:12:06.890 --> 01:12:12.500
conjecture which is related to
things we've discussed earlier
01:12:12.500 --> 01:12:15.970
regarding Szemeredi's theorem.
01:12:15.970 --> 01:12:20.680
And in fact, the polynomial
Freiman-Ruzsa conjecture kind
01:12:20.680 --> 01:12:24.820
of came back into
popularity partly because
01:12:24.820 --> 01:12:28.400
of Gowers' proof of
Szemeredi's theorem
01:12:28.400 --> 01:12:31.260
that used many of these tools.
01:12:31.260 --> 01:12:34.250
So let me state it here.
01:12:34.250 --> 01:12:41.320
So we've seen some statement
like this in an earlier
01:12:41.320 --> 01:12:48.670
lecture, but not very precisely
or not precisely in this form.
01:12:48.670 --> 01:12:53.600
And I won't define for
you all the notation here,
01:12:53.600 --> 01:12:57.050
but hopefully, you get a rough
sense of what it's about.
01:12:57.050 --> 01:12:59.200
So we want some kind
of an inverse statement
01:12:59.200 --> 01:13:03.670
for what's known as a
"quadratic uniformity norm,"
01:13:03.670 --> 01:13:05.380
"quadratic Gowers'
uniformity norm."
01:13:12.700 --> 01:13:17.020
So recall back to our discussion
of the proof of Roth's theorem,
01:13:17.020 --> 01:13:20.050
the Fourier analytic
proof of Roth's theorem.
01:13:20.050 --> 01:13:21.820
We want to say that--
01:13:21.820 --> 01:13:26.590
but now think about not
three APs, but four APs.
01:13:26.590 --> 01:13:32.910
So we want to know if you have a
function F on the Boolean cube,
01:13:32.910 --> 01:13:43.667
and this function
is 1 bounded, and--
01:13:43.667 --> 01:13:45.500
I'm going to write down
some notation, which
01:13:45.500 --> 01:13:47.190
we are not going to define--
01:13:47.190 --> 01:13:54.880
but the Gowers' U3 norm
is at least some delta.
01:13:54.880 --> 01:13:58.570
So this is something which
is related to 4-AP counts.
01:13:58.570 --> 01:14:00.868
So in particular, if
this number is small,
01:14:00.868 --> 01:14:03.160
then you have a counting
lemma for four-term arithmetic
01:14:03.160 --> 01:14:06.510
progressions.
01:14:06.510 --> 01:14:17.330
If this is true, then there
exists a quadratic polynomial q
01:14:17.330 --> 01:14:27.520
in n variables over F2
such that your function
01:14:27.520 --> 01:14:34.660
F correlates with this
quadratic exponential in q.
01:14:34.660 --> 01:14:44.750
And the correlation here is
something where you only lose
01:14:44.750 --> 01:14:48.320
a polynomial in the parameters.
01:14:48.320 --> 01:14:49.830
So previously, I
quoted something
01:14:49.830 --> 01:14:55.410
where you lose something that's
only a constant in delta,
01:14:55.410 --> 01:14:56.400
and that is true.
01:14:56.400 --> 01:14:57.410
That is known.
01:14:57.410 --> 01:15:00.940
But we believe, so
it's conjecture,
01:15:00.940 --> 01:15:03.650
that you only lose a
polynomial in these parameters.
01:15:03.650 --> 01:15:05.300
So this type of
statement-- remember,
01:15:05.300 --> 01:15:07.800
in our proof of Roth's theorem,
something like this came up.
01:15:07.800 --> 01:15:10.120
So something like this
came up as a crucial step
01:15:10.120 --> 01:15:11.440
in the proof of Roth's theorem.
01:15:11.440 --> 01:15:17.680
If you have something where
you look at counting lemma,
01:15:17.680 --> 01:15:19.180
and you exhibit
something like this,
01:15:19.180 --> 01:15:22.110
then you can exhibit a
large Fourier character.
01:15:22.110 --> 01:15:24.130
And in higher order
Fourier analysis,
01:15:24.130 --> 01:15:27.850
something like this
corresponds to having
01:15:27.850 --> 01:15:29.541
a large Fourier transform.
01:15:32.310 --> 01:15:34.440
It turns out that all
of these formulations
01:15:34.440 --> 01:15:36.540
of polynomial
Freiman-Ruzsa conjecture
01:15:36.540 --> 01:15:40.410
are equivalent to each other.
01:15:40.410 --> 01:15:45.420
And they're all equivalent
in a very quantitative sense,
01:15:45.420 --> 01:15:53.840
so up to polynomial
changes in the bounds.
01:15:57.563 --> 01:15:58.980
So in particular,
if you can prove
01:15:58.980 --> 01:16:01.230
some bound for some version,
then that automatically
01:16:01.230 --> 01:16:04.320
leads to bounds for
the other versions.
01:16:04.320 --> 01:16:06.870
The proof of equivalences
is not trivial,
01:16:06.870 --> 01:16:12.210
but it's also not
too complicated.
01:16:12.210 --> 01:16:15.630
It takes some work, but
it's not too complicated.
01:16:15.630 --> 01:16:19.680
The best bounds for the
polynomial Freiman-Ruzsa
01:16:19.680 --> 01:16:22.500
conjecture, and hence for
all of these versions,
01:16:22.500 --> 01:16:26.100
is again due to Tom Sanders.
01:16:26.100 --> 01:16:45.360
And he proved a version of PFR
with quasi-polynomial bounds,
01:16:45.360 --> 01:16:48.310
where by "quasi-polynomial
bounds," I mean,
01:16:48.310 --> 01:16:53.430
for instance over
here, instead of K.
01:16:53.430 --> 01:17:02.650
He proved it for something which
is like e to the poly log K,
01:17:02.650 --> 01:17:07.980
so like K to the log K,
or rather K to a poly log K.
01:17:07.980 --> 01:17:11.410
So it's almost polynomial,
but not quite there.
01:17:14.180 --> 01:17:17.570
And it's considered a
central open problem
01:17:17.570 --> 01:17:22.010
to better understand the
polynomial Freiman-Ruzsa
01:17:22.010 --> 01:17:24.150
conjecture.
01:17:24.150 --> 01:17:26.010
And we believe that
this is something
01:17:26.010 --> 01:17:30.330
that could lead to a lot
of important new tools
01:17:30.330 --> 01:17:32.340
and techniques that are
relevant to the rest
01:17:32.340 --> 01:17:35.016
of additive combinatorics.
01:17:35.016 --> 01:17:37.330
Yeah.
01:17:37.330 --> 01:17:39.790
AUDIENCE: Using the fact that
all of these are equivalent,
01:17:39.790 --> 01:17:43.000
is it possible to get a proof
of Freiman's theorem using
01:17:43.000 --> 01:17:46.062
the bound of 2 to the K to
be approximate [INAUDIBLE]??
01:17:46.062 --> 01:17:47.520
YUFEI ZHAO: OK, so
the question is,
01:17:47.520 --> 01:17:51.890
we know that that up
there has 2 to the K,
01:17:51.890 --> 01:17:56.550
so you're asking can
you use this 2 to the K
01:17:56.550 --> 01:18:00.790
to get some bound
for polynomial,
01:18:00.790 --> 01:18:03.130
for something like this?
01:18:03.130 --> 01:18:04.380
And the answer is yes.
01:18:04.380 --> 01:18:07.250
So you can use that proof
to go through some proofs
01:18:07.250 --> 01:18:09.030
and get here.
01:18:09.030 --> 01:18:12.870
I don't remember how this
equivalence goes, but remember
01:18:12.870 --> 01:18:16.920
that the proof of Freiman's
theorem for F2 to the n
01:18:16.920 --> 01:18:19.260
wasn't so hard.
01:18:19.260 --> 01:18:22.500
So we didn't use
very many tools.
01:18:22.500 --> 01:18:24.870
Unfortunately, I don't
have time to tell you
01:18:24.870 --> 01:18:29.070
the formulations of polynomial
Freiman-Ruzsa conjecture
01:18:29.070 --> 01:18:33.330
over the integers, and also
over arbitrary abelian groups.
01:18:33.330 --> 01:18:36.480
But there are formulations
over the integers,
01:18:36.480 --> 01:18:40.140
and that's one that people
care just as much about.
01:18:40.140 --> 01:18:42.990
And there are also different
equivalent versions,
01:18:42.990 --> 01:18:47.457
but things are a bit
nicer in the Boolean case.
01:18:47.457 --> 01:18:48.894
Yeah.
01:18:48.894 --> 01:18:50.331
AUDIENCE: You said [INAUDIBLE]?
01:18:52.496 --> 01:18:54.621
YUFEI ZHAO: I'm sorry, can
you repeat the question?
01:18:54.621 --> 01:18:58.389
AUDIENCE: [INAUDIBLE].
01:18:58.389 --> 01:18:59.472
Yeah, what does that mean?
01:18:59.472 --> 01:19:01.431
YUFEI ZHAO: Are you asking
what does this mean?
01:19:01.431 --> 01:19:02.163
AUDIENCE: Yeah.
01:19:02.163 --> 01:19:04.580
YUFEI ZHAO: So this is what's
called a "Gowers' uniformity
01:19:04.580 --> 01:19:06.090
norm."
01:19:06.090 --> 01:19:13.820
So something I encourage
you to look up.
01:19:13.820 --> 01:19:17.790
In fact, there is an unassigned
problem in the problem set
01:19:17.790 --> 01:19:21.150
that's related to the
Gowers' uniformity norm
01:19:21.150 --> 01:19:24.610
U2, which just
relates to Fourier analysis.
01:19:24.610 --> 01:19:29.500
But U3 is related to 4-APs
and quadratic Fourier analysis.
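As a pointer toward that connection, the U2 norm can be computed straight from its definition and checked against the Fourier identity: the fourth power of the U2 norm equals the sum of the fourth powers of the Fourier coefficients. A small numerical sketch (with a hypothetical random plus-or-minus-1 function, n = 4):

```python
import itertools
import random

random.seed(1)
n = 4
cube = list(itertools.product((0, 1), repeat=n))
f = {x: random.choice((-1.0, 1.0)) for x in cube}  # a 1-bounded function

def add(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

# definition: ||f||_{U2}^4 = E_{x,h1,h2} f(x) f(x+h1) f(x+h2) f(x+h1+h2)
u2_fourth = sum(f[x] * f[add(x, h1)] * f[add(x, h2)] * f[add(add(x, h1), h2)]
                for x in cube for h1 in cube for h2 in cube) / len(cube) ** 3

# Fourier side: f_hat(r) = E_x f(x) (-1)^{r.x}, and ||f||_{U2}^4 = sum_r f_hat(r)^4
def fhat(r):
    return sum(f[x] * (-1) ** sum(a * b for a, b in zip(r, x))
               for x in cube) / len(cube)

assert abs(u2_fourth - sum(fhat(r) ** 4 for r in cube)) < 1e-9
```

The U3 norm replaces the two difference directions h1, h2 by three, which is why it detects correlation with quadratic rather than linear phases.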