WEBVTT
00:00:00.080 --> 00:00:02.430
The following content is
provided under a Creative
00:00:02.430 --> 00:00:03.820
Commons license.
00:00:03.820 --> 00:00:06.050
Your support will help
MIT OpenCourseWare
00:00:06.050 --> 00:00:10.150
continue to offer high quality
educational resources for free.
00:00:10.150 --> 00:00:12.690
To make a donation or to
view additional materials
00:00:12.690 --> 00:00:16.600
from hundreds of MIT courses,
visit MIT OpenCourseWare
00:00:16.600 --> 00:00:17.262
at ocw.mit.edu.
00:00:25.845 --> 00:00:28.880
PROFESSOR: All right, today we
are going to learn to count.
00:00:28.880 --> 00:00:34.460
One, two, three-- In an
algorithmic sense of course,
00:00:34.460 --> 00:00:37.320
and prove hardness of
counting style problems.
00:00:37.320 --> 00:00:41.830
Or more generally, any problem
where the output is an integer.
00:00:41.830 --> 00:00:45.370
Like approximation
algorithms, we
00:00:45.370 --> 00:00:48.630
need to define a slightly
stronger version of our NP
00:00:48.630 --> 00:00:49.215
style problem.
00:00:49.215 --> 00:00:52.200
It's not going to be
a decision problem.
00:00:52.200 --> 00:00:54.360
The remaining type
is a search problem.
00:01:03.380 --> 00:01:05.379
In a search problem you're
trying to find a solution.
00:01:05.379 --> 00:01:07.070
This is just like an
optimization problem,
00:01:07.070 --> 00:01:08.980
but with no objective function.
00:01:08.980 --> 00:01:12.250
Today all solutions
are created equal.
00:01:12.250 --> 00:01:16.510
So we just want to know
how many there are.
00:01:16.510 --> 00:01:19.320
So we're given an
instance of the problem.
00:01:19.320 --> 00:01:21.555
We want to produce a solution.
00:01:24.234 --> 00:01:25.650
Whereas in the
decision problem we
00:01:25.650 --> 00:01:29.530
want to know whether
a solution exists.
00:01:29.530 --> 00:01:35.234
With a search problem we
want to find a solution.
00:01:35.234 --> 00:01:36.150
Almost the same thing.
00:01:56.040 --> 00:01:58.490
So that would be a search
problem in general.
00:01:58.490 --> 00:02:02.900
For an NP search problem you can
recognize what is an instance.
00:02:02.900 --> 00:02:04.490
And you could
recognize solutions
00:02:04.490 --> 00:02:07.290
to instances in polynomial time.
00:02:07.290 --> 00:02:10.050
Given an instance and a
solution you can say,
00:02:10.050 --> 00:02:11.295
yes that's a valid solution.
00:02:14.620 --> 00:02:16.687
OK for such a search
problem there, of course,
00:02:16.687 --> 00:02:18.270
is the corresponding
decision problem.
00:02:18.270 --> 00:02:21.080
Which is, does there
exist a solution?
00:02:21.080 --> 00:02:25.890
If you can solve this,
then you can solve that.
00:02:25.890 --> 00:02:29.160
You could solve the
decision problem in NP
00:02:29.160 --> 00:02:32.460
by guessing a
solution and so on.
00:02:32.460 --> 00:02:37.010
So this is intricately related
to the notion of a certificate
00:02:37.010 --> 00:02:38.130
for an NP problem.
00:02:38.130 --> 00:02:40.250
The idea is, solutions
are certificates.
00:02:40.250 --> 00:02:41.890
But when we say a
problem is in NP,
00:02:41.890 --> 00:02:44.080
we say there is some way
to define certificate
00:02:44.080 --> 00:02:47.730
so that this kind of
problem can be set up.
00:02:47.730 --> 00:02:49.360
And the goal here--
the point here
00:02:49.360 --> 00:02:51.675
is to solidify a specific
notion of certificate.
00:02:51.675 --> 00:02:53.700
We can't just use any
one, because we're
00:02:53.700 --> 00:02:55.550
going to count them.
If you formulate
00:02:55.550 --> 00:02:56.966
the certificates
in different ways
00:02:56.966 --> 00:02:58.380
you'll get different counts.
00:02:58.380 --> 00:03:03.350
But in general,
every NP problem can
00:03:03.350 --> 00:03:09.210
be converted into an NP search
problem in at least one way.
00:03:13.720 --> 00:03:15.880
But each notion of
certificate gives you
00:03:15.880 --> 00:03:18.510
a notion of the search problem.
00:03:18.510 --> 00:03:22.700
OK, in some complexity context
these are called NP relations,
00:03:22.700 --> 00:03:24.450
the way you specify
what a certificate is.
00:03:24.450 --> 00:03:27.680
But I think this is the more
algorithmic perspective.
00:03:27.680 --> 00:03:29.150
All right, so
given such a search
00:03:29.150 --> 00:03:31.145
problem we turn it into
a counting problem.
00:03:38.800 --> 00:03:41.860
So say the search
problem is called A.
00:03:41.860 --> 00:03:46.800
Then the counting problem will
be sharp A, not hashtag
00:03:46.800 --> 00:03:52.410
A. Some people call it
number A. The problem is,
00:03:52.410 --> 00:03:59.305
count the number of solutions
for a given instance.
00:04:09.770 --> 00:04:12.780
OK so in particular, you
detect whether the number
00:04:12.780 --> 00:04:14.540
is zero, or not zero.
00:04:14.540 --> 00:04:16.850
So this is strictly
harder in some sense
00:04:16.850 --> 00:04:20.260
than the decision problem,
does there exist a solution.
00:04:20.260 --> 00:04:23.670
And we will see some problems
where the search problem is
00:04:23.670 --> 00:04:26.330
polynomial, but the
corresponding counting problem
00:04:26.330 --> 00:04:30.200
is actually hard.
00:04:30.200 --> 00:04:31.770
I can't say NP
hard, there's going
00:04:31.770 --> 00:04:34.310
to be a new notion of hardness.
00:04:34.310 --> 00:04:36.760
So some examples.
00:04:36.760 --> 00:04:40.630
Pretty much every problem we've
defined as a decision problem
00:04:40.630 --> 00:04:44.110
had a search problem in mind.
00:04:44.110 --> 00:04:47.607
So something like SAT, you need
to satisfy all of the things.
00:04:47.607 --> 00:04:49.190
So there's no objective
function here,
00:04:49.190 --> 00:04:50.750
but you want to know
how many different ways,
00:04:50.750 --> 00:04:53.200
how many different variable
assignments are there that
00:04:53.200 --> 00:04:55.350
satisfy the given formula?
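The counting problem being described here can be sketched by brute force. This is a minimal illustration only (the function name and the DIMACS-style clause encoding are my own, not from the lecture), and it is exponential in the number of variables, which is exactly the point of studying hardness of counting:

```python
from itertools import product

def count_sat(num_vars, clauses):
    """Count satisfying assignments of a CNF formula by brute force.

    Clauses use DIMACS-style literals: integer i means variable i is
    true, -i means it is false. Exponential in num_vars, so only
    usable on tiny formulas.
    """
    count = 0
    for bits in product([False, True], repeat=num_vars):
        # a clause is satisfied if some literal in it is true
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2): satisfied exactly when x2 is true
print(count_sat(2, [[1, 2], [-1, 2]]))  # -> 2
```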
00:04:55.350 --> 00:05:00.070
Or take your favorite
pencil and paper puzzle,
00:05:00.070 --> 00:05:06.720
we'll be looking at
Shakashaka today, again.
00:05:06.720 --> 00:05:08.540
How many different
solutions are there?
00:05:08.540 --> 00:05:10.190
You'd like, when
designing a puzzle,
00:05:10.190 --> 00:05:12.370
usually you want to
know that it's unique.
00:05:12.370 --> 00:05:14.920
So it'd be nice if you could
count the number of solutions
00:05:14.920 --> 00:05:16.930
and show that it's one.
00:05:16.930 --> 00:05:20.700
These problems are going to turn
out to be very hard of course.
00:05:20.700 --> 00:05:24.540
So let's define a
notion of hardness.
00:05:24.540 --> 00:05:28.325
Sharp P is going to be the
class of all of these counting
00:05:28.325 --> 00:05:28.825
problems.
00:05:42.290 --> 00:05:44.095
This is the sort of certificate.
00:05:44.095 --> 00:05:44.720
Yeah, question?
00:05:44.720 --> 00:05:46.040
AUDIENCE: Just for the
puzzle application,
00:05:46.040 --> 00:05:47.965
is it going to turn out
that distinguishing if there's
00:05:47.965 --> 00:05:50.048
one solution versus
more than one solution is as hard
00:05:50.048 --> 00:05:51.898
as just counting the total number?
00:05:51.898 --> 00:05:56.350
PROFESSOR: It's
not quite as hard,
00:05:56.350 --> 00:06:00.320
but we will show that
distinguishing one
00:06:00.320 --> 00:06:04.430
from more than one is very
hard, is NP complete actually.
00:06:04.430 --> 00:06:07.514
That's a decision problem; we
could show that it's NP complete.
00:06:07.514 --> 00:06:09.180
So normally we think
of zero versus one,
00:06:09.180 --> 00:06:11.060
but it turns out one
versus two is not--
00:06:11.060 --> 00:06:15.870
or one versus more than one
is about the same difficulty.
00:06:15.870 --> 00:06:19.290
Counting is even
harder I would say.
00:06:19.290 --> 00:06:20.730
But it's bad news all around.
00:06:20.730 --> 00:06:22.210
There's different
notions of bad.
00:06:24.720 --> 00:06:29.810
Cool, so this is the sort
of certificate perspective.
00:06:29.810 --> 00:06:32.190
With NP I had given you
two different definitions:
00:06:32.190 --> 00:06:35.880
a certificate perspective, and
a non-deterministic computation
00:06:35.880 --> 00:06:36.770
perspective.
00:06:36.770 --> 00:06:40.290
You can do the same
computational perspective here.
00:06:40.290 --> 00:06:47.290
You could say sharp P
is the set of problems,
00:06:47.290 --> 00:06:51.165
solved by a polynomial-time--
00:06:55.660 --> 00:06:57.995
Call it a non-deterministic
counting algorithm.
00:07:03.297 --> 00:07:05.380
We don't need Turing
machines for this definition,
00:07:05.380 --> 00:07:08.040
although that was of course
the original definition.
00:07:08.040 --> 00:07:11.320
So take your favorite
non-deterministic algorithm
00:07:11.320 --> 00:07:14.920
as usual for NP, it makes
guesses at various points,
00:07:14.920 --> 00:07:16.580
multiple branches.
00:07:16.580 --> 00:07:20.720
With the NP algorithm the way
we'd execute it on an NP style
00:07:20.720 --> 00:07:22.970
machine is that we would
see whether there's
00:07:22.970 --> 00:07:24.910
any path that led to a yes.
00:07:24.910 --> 00:07:27.790
Again, it's going to output
yes or no at the end.
00:07:27.790 --> 00:07:30.930
In this case, what the computer
does, the sharp P style
00:07:30.930 --> 00:07:34.290
computer, is it-- conceptually
it runs all the branches.
00:07:34.290 --> 00:07:37.310
It counts the number of yeses
and returns that number.
00:07:37.310 --> 00:07:41.210
So even though the algorithm is
designed to return yes or no,
00:07:41.210 --> 00:07:44.030
when it executes it
actually outputs a number.
00:07:44.030 --> 00:07:47.190
The original paper
says magically.
00:07:47.190 --> 00:07:50.014
It's just as magic as an NP
machine, but a little-- even
00:07:50.014 --> 00:07:50.930
a little more magical.
00:07:53.630 --> 00:07:57.080
OK so you-- I mean if you're not
comfortable with that we just
00:07:57.080 --> 00:08:00.330
use this definition, same thing.
00:08:00.330 --> 00:08:07.727
This is all work done by Les
Valiant in the late '70s.
00:08:07.727 --> 00:08:08.310
These notions.
00:08:22.710 --> 00:08:24.320
So it's pretty clear,
it's pretty easy
00:08:24.320 --> 00:08:26.410
to show all these
problems are in sharp P,
00:08:26.410 --> 00:08:28.951
because they were-- the
corresponding decision problems
00:08:28.951 --> 00:08:29.450
were in NP.
00:08:29.450 --> 00:08:34.110
We convert them into
sharp P algorithms.
00:08:34.110 --> 00:08:37.210
Now let's think about hardness
with respect to sharp P.
00:08:37.210 --> 00:08:38.990
As usual we want
it to mean as hard
00:08:38.990 --> 00:08:40.409
as all problems in that class.
00:08:40.409 --> 00:08:45.270
Meaning that we can reduce all
those problems to our problem.
00:08:45.270 --> 00:08:48.520
And the question is, by
what kind of reductions?
00:08:48.520 --> 00:08:51.785
And here we're going to allow
very powerful reductions.
00:08:56.972 --> 00:08:58.430
We've talked about
these reductions
00:08:58.430 --> 00:09:01.320
but never actually been
allowed to use them.
00:09:01.320 --> 00:09:03.550
Multicall, Cook-style
reductions.
00:09:06.080 --> 00:09:08.770
I think in general, this
is a common approach
00:09:08.770 --> 00:09:14.530
for FNP, which is Function
NP-- NP-style function problems.
00:09:14.530 --> 00:09:17.520
Which you can also think of
this as kind of-- the counting
00:09:17.520 --> 00:09:19.850
version is, the
output is a value.
00:09:19.850 --> 00:09:24.050
So you have a function instead
of just a decision question.
00:09:24.050 --> 00:09:28.190
When the output is something,
some number, in this case.
00:09:28.190 --> 00:09:30.790
You might have to manipulate
that number at the end.
00:09:30.790 --> 00:09:33.180
And so at the very least,
you need to make a call
00:09:33.180 --> 00:09:35.360
and do some stuff before
you return your modified
00:09:35.360 --> 00:09:37.290
answer in your reduction.
00:09:37.290 --> 00:09:39.870
But in fact we're going
to allow full tilt,
00:09:39.870 --> 00:09:43.790
you could make multiple calls
to some hypothetical solution
00:09:43.790 --> 00:09:48.757
to your problem in order to
solve all problems in sharp P.
00:09:48.757 --> 00:09:50.840
And we'll actually use
multicall a bunch of times.
00:09:50.840 --> 00:09:53.140
We won't always need multicall.
00:09:53.140 --> 00:09:55.850
Often we'll be able to get
away with a much simpler kind
00:09:55.850 --> 00:09:56.910
of reduction.
00:09:56.910 --> 00:09:58.570
Let me tell you that kind now.
00:10:01.990 --> 00:10:03.881
But in general we allow
arbitrary multicall.
00:10:03.881 --> 00:10:04.380
Yeah?
00:10:04.380 --> 00:10:07.820
AUDIENCE: You're not limited
in the number of multicalls?
00:10:07.820 --> 00:10:08.550
PROFESSOR: Right.
00:10:08.550 --> 00:10:10.580
You could do polynomial
number of multicalls.
00:10:10.580 --> 00:10:14.030
As before, reduction
should be polynomial time.
00:10:14.030 --> 00:10:18.070
But, so you're basically
given an algorithm.
00:10:18.070 --> 00:10:24.160
That's usually called an Oracle
that solves your problem,
00:10:24.160 --> 00:10:28.960
solves your problem B. And
you want to solve-- yeah,
00:10:28.960 --> 00:10:33.030
you want to solve A by
multiple calls to B. So
00:10:33.030 --> 00:10:35.370
what, we'll see a bunch
of examples of that.
00:10:35.370 --> 00:10:39.370
Here's a more familiar
style of reduction.
00:10:39.370 --> 00:10:42.060
And often, for a lot of problems
we can get away with this.
00:10:42.060 --> 00:10:43.950
But especially a lot
of the early proofs
00:10:43.950 --> 00:10:45.380
needed the multicall.
00:10:45.380 --> 00:10:47.380
And as you'll see you can
do lots of cool tricks
00:10:47.380 --> 00:10:50.500
with multicall
using number theory.
00:10:53.060 --> 00:10:56.790
So parsimonious reduction.
00:10:56.790 --> 00:10:58.855
This is for NP search problems.
00:11:03.470 --> 00:11:09.080
This is going to be a lot like
an NP reduction, regular style.
00:11:09.080 --> 00:11:16.470
So again we convert an instance
x of A, via a function f,
00:11:16.470 --> 00:11:22.700
into an instance x prime of B.
00:11:26.880 --> 00:11:32.030
And that function should
be computable in poly time.
00:11:34.970 --> 00:11:37.386
So far just like
an NP reduction.
00:11:40.060 --> 00:11:43.450
And usually we would say,
for a search problem,
00:11:43.450 --> 00:11:45.630
we would say there
exists a solution
00:11:45.630 --> 00:11:48.740
for x, if and only if, there
exists a solution for x prime.
00:11:48.740 --> 00:11:53.670
That would be the direct analog
of an NP style reduction.
00:11:53.670 --> 00:11:55.670
But we're going to ask
for a stronger condition.
00:11:55.670 --> 00:12:01.040
Which is that the number
of solutions to problem
00:12:01.040 --> 00:12:10.890
A to instance x, equals the
number of solutions of problem B
00:12:10.890 --> 00:12:13.540
to instance x prime.
00:12:13.540 --> 00:12:16.050
OK so in particular, if this
one's greater than or equal to one,
00:12:16.050 --> 00:12:18.520
this one will be greater than
or equal to one, and vice versa.
00:12:18.520 --> 00:12:20.829
So this is stronger
than an NP style
00:12:20.829 --> 00:12:22.870
reduction for the
corresponding decision problem.
00:12:27.820 --> 00:12:29.780
Yeah, so I even wrote that down.
00:12:29.780 --> 00:12:34.100
This implies that the decision
problems have the same answer.
00:12:42.100 --> 00:12:45.435
So in particular, this implies
that we have an NP reduction.
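The forced-variable idea behind these parsimonious reductions can be shown in miniature. This is a toy sketch of my own, not a reduction from the lecture: introducing a new variable y that is forced equal to x1 (via the clauses y-or-not-x1 and not-y-or-x1) leaves the solution count unchanged, which is the parsimony condition:

```python
from itertools import product

def count_solutions(num_vars, clauses):
    # brute-force CNF solution counter, DIMACS-style literals
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=num_vars)
    )

def copy_variable(num_vars, clauses):
    """Toy parsimonious transformation: add a fresh variable y with
    clauses forcing y <-> x1. Since y is determined once x1 is
    chosen, the number of solutions is exactly preserved."""
    y = num_vars + 1
    return num_vars + 1, clauses + [[1, -y], [-1, y]]

n, cls = 2, [[1, 2]]              # x1 or x2: 3 satisfying assignments
n2, cls2 = copy_variable(n, cls)
assert count_solutions(n, cls) == count_solutions(n2, cls2)
```

This mirrors the crossover-gadget argument: when every gadget-internal variable is forced by the original variables, the transformation is parsimonious.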
00:12:52.020 --> 00:12:59.540
So in particular if A,
the decision version of A
00:12:59.540 --> 00:13:02.680
is NP hard, then the decision
version of B is NP hard.
00:13:02.680 --> 00:13:10.400
But more interesting is
that, if A is sharp P hard,
00:13:10.400 --> 00:13:12.130
then B is sharp P hard.
00:13:17.820 --> 00:13:21.080
But also this holds for NP.
00:13:21.080 --> 00:13:25.130
For the decision versions
of A and B. Sorry,
00:13:25.130 --> 00:13:30.505
sharp A and sharp B. OK this
is a subtle distinction.
00:13:33.600 --> 00:13:36.710
For sharp P hardness we're
talking about the counting
00:13:36.710 --> 00:13:38.630
problems, and we're
talking about making calls
00:13:38.630 --> 00:13:40.067
to other counting solutions.
00:13:40.067 --> 00:13:42.400
Then doing things with those
numbers and who knows what,
00:13:42.400 --> 00:13:44.290
making many calls.
00:13:44.290 --> 00:13:45.930
With parsimonious
reduction we're
00:13:45.930 --> 00:13:48.990
thinking about the
non-counting version.
00:13:48.990 --> 00:13:50.830
Just the search problem.
00:13:50.830 --> 00:13:55.050
And so we're not worried about
counting solutions directly,
00:13:55.050 --> 00:13:57.300
I mean it-- what's nice
about parsimonious reductions
00:13:57.300 --> 00:13:59.040
is they look just
like NP reductions
00:13:59.040 --> 00:14:01.250
for the regular old problems.
00:14:01.250 --> 00:14:03.240
We just need this
extra property,
00:14:03.240 --> 00:14:07.180
parsimony, that the number
of solutions to the instance
00:14:07.180 --> 00:14:09.210
is preserved through
the transformation.
00:14:09.210 --> 00:14:14.350
And a lot of the proofs
that we've covered follow--
00:14:14.350 --> 00:14:17.160
have this property and
will be good for us.
00:14:17.160 --> 00:14:19.410
If we can get our
source problems hard,
00:14:19.410 --> 00:14:22.430
then we'll get a lot of
target problems hard as well.
00:14:25.390 --> 00:14:34.100
Well let me tell you about one
more version which I made up.
00:14:34.100 --> 00:14:35.305
C-monious reductions.
00:14:43.550 --> 00:14:45.060
This is my attempt
at understanding
00:14:45.060 --> 00:14:47.760
the etymology of parsimonious.
00:14:47.760 --> 00:14:53.880
Which is something like little
money, being very thrifty.
00:14:53.880 --> 00:14:56.030
So this is having a
little bit more money.
00:14:56.030 --> 00:14:58.047
You have C dollars.
00:14:58.047 --> 00:14:59.880
But you have to be very
consistent about it.
00:14:59.880 --> 00:15:01.920
I should probably add
some word that means
00:15:01.920 --> 00:15:03.110
uniform in the middle there.
00:15:03.110 --> 00:15:08.390
But I want the number of
solutions of x prime to equal
00:15:08.390 --> 00:15:13.315
c times the number
of solutions to x.
00:15:16.110 --> 00:15:19.080
C a fixed constant.
00:15:19.080 --> 00:15:22.510
And it has to be the
same for every x.
00:15:25.700 --> 00:15:28.020
This would be just as good
from a sharp P perspective.
00:15:28.020 --> 00:15:31.690
Because if I could solve
B and I wanted to solve A,
00:15:31.690 --> 00:15:35.680
I would convert A to B, run
the thing, then divide by C.
00:15:35.680 --> 00:15:39.830
There will never be a remainder
and then I have my answer to A.
00:15:39.830 --> 00:15:41.690
As long as C's not zero.
00:15:41.690 --> 00:15:45.930
C should be an integer here.
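The divide-by-C recovery just described can be sketched with a toy C-monious transformation of my own (not from the lecture): adding one fresh unconstrained variable exactly doubles the solution count, so C equals 2 and dividing recovers the original count with no remainder:

```python
from itertools import product

def count_solutions(num_vars, clauses):
    # brute-force CNF solution counter, DIMACS-style literals
    return sum(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in product([False, True], repeat=num_vars)
    )

def add_free_variable(num_vars, clauses):
    """Toy 2-monious transformation: a fresh variable appearing in no
    clause doubles the solution count for every instance (c = 2)."""
    return num_vars + 1, clauses

n, cls = 2, [[1, 2]]                 # 3 satisfying assignments
n2, cls2 = add_free_variable(n, cls)
c = 2
assert count_solutions(n2, cls2) == c * count_solutions(n, cls)
# to solve the original counting problem: count for B, divide by c
assert count_solutions(n2, cls2) // c == count_solutions(n, cls)
```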
00:15:45.930 --> 00:15:51.790
We will see a bunch of
C-monious reductions.
00:15:51.790 --> 00:15:58.110
I guess, yeah it doesn't have
to be totally independent of x.
00:15:58.110 --> 00:16:00.480
It can depend on things like n.
00:16:00.480 --> 00:16:03.870
Something that we can
compute easily I guess.
00:16:03.870 --> 00:16:06.030
Shouldn't be too dependent on x.
00:16:06.030 --> 00:16:10.340
All right, let's do-- let's
look at some examples.
00:16:10.340 --> 00:16:23.030
So, going to make a list here
of sharp P complete problems.
00:16:25.830 --> 00:16:29.560
And we'll start with versions
of SAT because we like SAT.
00:16:29.560 --> 00:16:36.120
So I'm just going to tell
you that sharp 3SAT is hard.
00:16:36.120 --> 00:16:40.490
First sharp SAT is hard, and
the usual proof of SAT hardness
00:16:40.490 --> 00:16:45.460
shows sharp P completeness
for sharp SAT.
00:16:45.460 --> 00:16:49.340
And if you're careful about
the conversion from SAT to 3CNF
00:16:49.340 --> 00:16:51.290
you can get sharp
3SAT.
00:16:51.290 --> 00:16:53.240
It's tedious and not
terribly interesting.
00:16:53.240 --> 00:16:57.660
So I will skip that one.
00:16:57.660 --> 00:17:01.120
So what about Planar 3SAT?
00:17:03.710 --> 00:17:06.970
I stared at this diagram
many times for a while.
00:17:06.970 --> 00:17:12.130
This is lecture seven for
replacing a crossing in a 3SAT
00:17:12.130 --> 00:17:14.210
thing with this picture.
00:17:14.210 --> 00:17:17.950
And all of this argument and
this table in particular,
00:17:17.950 --> 00:17:20.599
was concluding
that the variables
00:17:20.599 --> 00:17:21.900
are forced in this scenario.
00:17:21.900 --> 00:17:25.089
If you know what
A and B are-- so
00:17:25.089 --> 00:17:26.650
once you choose
what A and B are,
00:17:26.650 --> 00:17:28.650
these two have to
be copies of A and B
00:17:28.650 --> 00:17:31.760
and then it ended up
that A-- We proved
00:17:31.760 --> 00:17:34.680
that A1 equaled A2
and B1 equaled B2,
00:17:34.680 --> 00:17:36.860
and then these
variables were all
00:17:36.860 --> 00:17:38.820
determined by these formulas.
00:17:38.820 --> 00:17:41.440
And so once you know A and
B all the variable settings
00:17:41.440 --> 00:17:42.220
are forced.
00:17:42.220 --> 00:17:45.180
Which means you preserve
the number of solutions.
00:17:45.180 --> 00:17:50.330
So planar sharp 3SAT
is sharp P complete.
00:17:53.360 --> 00:17:56.200
I'd like to pretend that there's
some debate within the sharp P
00:17:56.200 --> 00:18:00.640
community about whether the
sharp goes here or here.
00:18:00.640 --> 00:18:03.580
I kind of prefer it here,
but I've seen it over here,
00:18:03.580 --> 00:18:04.240
so you know.
00:18:07.650 --> 00:18:10.140
I don't think I've even
seen that mentioned but I'm
00:18:10.140 --> 00:18:12.530
sure it's out there somewhere.
00:18:12.530 --> 00:18:16.550
So let's flip through our
other planar hardness proofs.
00:18:16.550 --> 00:18:19.920
This is planar monotone
rectilinear 3SAT.
00:18:19.920 --> 00:18:21.790
Mainly the monotone aspect.
00:18:21.790 --> 00:18:24.170
We wanted there to
be all the variables
00:18:24.170 --> 00:18:26.440
to be positive or
negative in every clause.
00:18:26.440 --> 00:18:29.820
And so we had this trick
for forcing this thing
00:18:29.820 --> 00:18:31.070
to be not equal to this thing.
00:18:31.070 --> 00:18:34.790
Basically copying the variable
with a flip in its truth value.
00:18:34.790 --> 00:18:36.610
But again everything
here is forced.
00:18:36.610 --> 00:18:38.662
The variables we
make are guaranteed
00:18:38.662 --> 00:18:40.620
to be copies or negations
of other variables.
00:18:40.620 --> 00:18:42.550
So we preserve the
number of solutions.
00:18:42.550 --> 00:18:43.790
So this is hard too.
00:19:00.260 --> 00:19:03.060
OK, what else?
00:19:03.060 --> 00:19:05.210
I think we can-- I
just made this up
00:19:05.210 --> 00:19:07.870
--but we can add the
dash three as usual
00:19:07.870 --> 00:19:10.000
by replacing each variable
with a little cycle.
00:19:12.520 --> 00:19:14.530
What about planar 1-in-3 SAT?
00:19:14.530 --> 00:19:17.240
So we had, we actually had
two ways to prove this.
00:19:17.240 --> 00:19:23.180
This is one of the
reductions and we
00:19:23.180 --> 00:19:26.960
check that this set
of SAT clauses, these
00:19:26.960 --> 00:19:29.477
are the SAT clauses that implement
this 1-in-3 SAT clause.
00:19:29.477 --> 00:19:32.060
And I stared at this for a while
and its kind of hard to tell.
00:19:32.060 --> 00:19:34.420
So I just wrote a program
to enumerate all cases
00:19:34.420 --> 00:19:36.970
and found there's
exactly one case where
00:19:36.970 --> 00:19:39.790
this is not parsimonious.
00:19:39.790 --> 00:19:45.760
And that's when this is
false and these two are true.
00:19:45.760 --> 00:19:47.900
And because of these
negations you can either
00:19:47.900 --> 00:19:49.400
solve the internal
things like this,
00:19:49.400 --> 00:19:51.380
or you could flip all
of the internal nodes
00:19:51.380 --> 00:19:53.550
and that will also be satisfied.
00:19:53.550 --> 00:19:57.600
Now this is bad news, because
all the other cases there
00:19:57.600 --> 00:20:00.279
is a unique solution over here.
00:20:00.279 --> 00:20:02.320
But in this case there
are exactly two solutions.
00:20:02.320 --> 00:20:03.945
If it was two everywhere
we'd be happy,
00:20:03.945 --> 00:20:05.500
that would be twomonious.
00:20:05.500 --> 00:20:07.540
If it was one everywhere
we'd be happier--
00:20:07.540 --> 00:20:08.910
that would be parsimonious.
00:20:08.910 --> 00:20:11.210
But because it's a
mixture of one and two
00:20:11.210 --> 00:20:12.975
we have approximately
preserved the counts
00:20:12.975 --> 00:20:13.980
within a factor of two.
00:20:13.980 --> 00:20:15.890
But that's not good
enough for sharp P,
00:20:15.890 --> 00:20:19.240
we need exact preservation.
00:20:19.240 --> 00:20:20.960
So this is no good.
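The kind of exhaustive gadget check being described -- enumerate every setting of the gadget's external variables and count internal completions -- can be sketched like this. The gadget and function below are invented for illustration, not the lecture's actual program:

```python
from itertools import product

def completions_per_case(n_ext, n_int, clauses):
    """For each assignment of the external variables (1..n_ext),
    count assignments of the internal variables
    (n_ext+1..n_ext+n_int) satisfying the gadget's clauses.

    The gadget is parsimonious iff every count is 0 or 1, and
    c-monious iff every nonzero count equals the same constant c.
    Literals are DIMACS-style signed integers.
    """
    counts = {}
    for ext in product([False, True], repeat=n_ext):
        k = 0
        for internal in product([False, True], repeat=n_int):
            bits = ext + internal
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                   for c in clauses):
                k += 1
        counts[ext] = k
    return counts

# toy "not-equal" gadget: external x1, internal x2, forced x2 != x1
gadget = [[1, 2], [-1, -2]]
# every external case has exactly one completion: parsimonious
print(completions_per_case(1, 1, gadget))
```

A mixture of counts (some cases 1, some cases 2, as in the gadget on the board) shows up immediately as unequal values in the returned dictionary.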
00:20:20.960 --> 00:20:23.200
Luckily we had another
proof which was actually
00:20:23.200 --> 00:20:27.400
a stronger result, Planar
Positive Rectilinear 1-in-3SAT.
00:20:27.400 --> 00:20:29.280
This was a version
with no negation,
00:20:29.280 --> 00:20:32.580
and this one does work.
00:20:32.580 --> 00:20:36.200
There's first of all
this, the not equal gadget
00:20:36.200 --> 00:20:37.220
and the equal gadget.
00:20:37.220 --> 00:20:40.850
I don't want to go through them,
but again A forced to be zero,
00:20:40.850 --> 00:20:43.780
C was forced to be one,
which forces B and D to be
00:20:43.780 --> 00:20:45.250
zero in this picture.
00:20:45.250 --> 00:20:48.930
So all is good.
00:20:48.930 --> 00:20:50.550
And again parsimonious there.
00:20:50.550 --> 00:20:53.400
And then this one was too
complicated to think about,
00:20:53.400 --> 00:20:55.413
so I again wrote a
program to check-- try
00:20:55.413 --> 00:20:59.120
all the cases and
every choice of XYZ
00:20:59.120 --> 00:21:02.040
over here that's satisfied,
this is a reduction from 3SAT.
00:21:02.040 --> 00:21:03.870
So if at least one
of these is true,
00:21:03.870 --> 00:21:06.420
there will be exactly
one solution over here.
00:21:06.420 --> 00:21:09.555
And just as before after--
if zero of them are true,
00:21:09.555 --> 00:21:10.930
then they'll be
no solution here.
00:21:10.930 --> 00:21:13.400
That we already knew.
00:21:13.400 --> 00:21:14.860
So good news.
00:21:14.860 --> 00:21:17.110
I should check whether that's
mentioned in their paper
00:21:17.110 --> 00:21:20.830
but it proves planar positive
rectilinear 1-in-3SAT
00:21:20.830 --> 00:21:22.270
is sharp P complete.
00:21:42.460 --> 00:21:43.720
Sharp goes here, or here.
00:21:46.260 --> 00:21:48.470
OK, so lots of fun results.
00:21:48.470 --> 00:21:50.680
We get a lot of results
just from-- by looking
00:21:50.680 --> 00:21:51.360
at old proofs.
00:21:51.360 --> 00:21:54.930
Now they're not
all going to work,
00:21:54.930 --> 00:21:57.650
but I have one more
example that does work.
00:21:57.650 --> 00:22:00.410
Shakashaka, remember the puzzle?
00:22:00.410 --> 00:22:01.850
It's a Nikoli puzzle.
00:22:01.850 --> 00:22:03.800
Every square,
every white square
00:22:03.800 --> 00:22:07.250
can be filled in with a black
thing, but adjacent to a two,
00:22:07.250 --> 00:22:09.870
there should be
exactly two of those.
00:22:09.870 --> 00:22:12.050
And you want all of the
resulting white regions
00:22:12.050 --> 00:22:14.450
to be rectangles
possibly rotated.
00:22:14.450 --> 00:22:19.520
And we had this reduction
from planar 3SAT,
00:22:19.520 --> 00:22:22.480
and this basically,
this type of wire.
00:22:22.480 --> 00:22:26.674
And there's exactly two
ways to solve a wire.
00:22:26.674 --> 00:22:27.840
One for true, one for false.
00:22:27.840 --> 00:22:29.423
So once you know
what the variable is,
00:22:29.423 --> 00:22:31.240
you're forced what
to do, there is also
00:22:31.240 --> 00:22:33.700
a parity-shifting gadget
and splits and turns.
00:22:33.700 --> 00:22:36.530
But again, exactly two
ways to solve everything.
00:22:36.530 --> 00:22:39.300
So parsimonious, and then
the clause everything
00:22:39.300 --> 00:22:41.260
was basically
forced just-- you're
00:22:41.260 --> 00:22:44.100
forced whether to have
these square diamonds.
00:22:44.100 --> 00:22:45.700
And you just
eliminated the one case
00:22:45.700 --> 00:22:47.440
where the clause
is not satisfied.
00:22:47.440 --> 00:22:50.550
So there's really no flexibility
here, one way to solve it.
00:22:50.550 --> 00:22:52.310
And so it's a
parsimonious reduction
00:22:52.310 --> 00:22:54.450
and indeed in the
paper we mentioned
00:22:54.450 --> 00:22:57.330
this implies sharp P
completeness of counting
00:22:57.330 --> 00:22:59.770
Shakashaka solutions.
00:22:59.770 --> 00:23:02.010
Cool.
00:23:02.010 --> 00:23:04.900
Here's an example
that doesn't work.
00:23:04.900 --> 00:23:07.770
A little different,
Hamiltonicity or I
00:23:07.770 --> 00:23:11.130
guess I want to count the
number of Hamiltonian cycles.
00:23:11.130 --> 00:23:16.510
The natural counting
version of Hamiltonicity.
00:23:16.510 --> 00:23:19.080
So we had two proofs for
this, neither of them
00:23:19.080 --> 00:23:22.370
work in a sharp P sense.
00:23:22.370 --> 00:23:24.720
This one, remember
the idea was that you
00:23:24.720 --> 00:23:28.980
would traverse back and
forth one way or the other
00:23:28.980 --> 00:23:30.295
to get all of these nodes.
00:23:30.295 --> 00:23:31.670
That was a variable,
that's fine.
00:23:31.670 --> 00:23:33.425
There're exactly
two ways to do that.
00:23:33.425 --> 00:23:35.700
But then the clause,
the clause had
00:23:35.700 --> 00:23:39.290
to be satisfied by at least
one of the three variables.
00:23:39.290 --> 00:23:41.220
And if it's satisfied
for example,
00:23:41.220 --> 00:23:42.920
by all three variables,
then it could
00:23:42.920 --> 00:23:44.650
be this node is
picked up like this,
00:23:44.650 --> 00:23:46.934
or the node could be
picked up this way,
00:23:46.934 --> 00:23:48.600
or the node could be
picked up this way.
00:23:48.600 --> 00:23:50.700
So there are three
different solutions even
00:23:50.700 --> 00:23:53.130
for one fixed
variable assignment.
00:23:53.130 --> 00:23:55.370
So that's bad, we're
not allowed to do that.
00:23:55.370 --> 00:23:57.860
It'd be fine if every one was
three, but some will be one,
00:23:57.860 --> 00:23:59.870
some will be two,
some will be three.
00:23:59.870 --> 00:24:01.786
That's going to be some
weird product of those
00:24:01.786 --> 00:24:03.200
over the clauses.
00:24:03.200 --> 00:24:04.770
So that doesn't work.
00:24:04.770 --> 00:24:06.075
We had this other proof.
00:24:09.850 --> 00:24:13.430
This was a notation
for this gadget which
00:24:13.430 --> 00:24:16.610
forced either this directed
edge or this directed edge
00:24:16.610 --> 00:24:20.910
to be used, but not both.
00:24:20.910 --> 00:24:22.410
It's an x or.
00:24:22.410 --> 00:24:24.390
So that's, remember
what these things meant
00:24:24.390 --> 00:24:26.990
and that we have the
variable true or false.
00:24:26.990 --> 00:24:28.740
And then we connected
them to the clauses,
00:24:28.740 --> 00:24:31.860
then separately we
had a crossover.
00:24:31.860 --> 00:24:34.940
But the trouble
is in the clauses,
00:24:34.940 --> 00:24:38.810
because again, the idea was that
the variable chose this guy,
00:24:38.810 --> 00:24:40.500
this one was forbidden.
00:24:40.500 --> 00:24:42.910
That's actually the
good case I think.
00:24:42.910 --> 00:24:46.390
If the variable chose
this, chose this one,
00:24:46.390 --> 00:24:48.340
then this one must be included.
00:24:48.340 --> 00:24:52.661
That's bad news, if you
followed this, and then this,
00:24:52.661 --> 00:24:54.910
and then this, then you cut
off this part of the graph
00:24:54.910 --> 00:24:56.980
and you don't get one
Hamiltonian cycle.
00:24:56.980 --> 00:25:02.050
You want at least one variable
to allow you to go left here
00:25:02.050 --> 00:25:05.230
and then you can go and grab
all this stuff and come back.
00:25:05.230 --> 00:25:08.000
But again if multiple
variables satisfy this thing,
00:25:08.000 --> 00:25:10.800
any one of them could
grab the left rectangle.
00:25:10.800 --> 00:25:14.115
And so they get multiple
solutions, not parsimonious.
00:25:16.630 --> 00:25:18.990
But parts of this
proof are useful
00:25:18.990 --> 00:25:22.240
and they are used to make
a parsimonious proof.
00:25:22.240 --> 00:25:25.550
So the part that was useful is
this XOR gadget, and the way
00:25:25.550 --> 00:25:27.200
to implement crossovers.
00:25:27.200 --> 00:25:32.390
So just remember that
you can build XORs,
00:25:32.390 --> 00:25:35.670
and that you can cross them
over using a bunch of XORs.
00:25:35.670 --> 00:25:38.610
So only XOR connections,
these notations,
00:25:38.610 --> 00:25:41.010
can be crossed in this view.
00:25:41.010 --> 00:25:42.870
We're going to
build more gadgets
00:25:42.870 --> 00:25:46.195
and this is a proof
by Seta in 2002.
00:25:46.195 --> 00:25:50.120
It was a bachelor's
thesis actually in Japan.
00:25:50.120 --> 00:25:53.199
And so here you see
redrawn the XOR gadget.
00:25:53.199 --> 00:25:54.990
Here it's going to be
for undirected graph,
00:25:54.990 --> 00:25:58.290
same structure works.
00:25:58.290 --> 00:26:01.110
And this is doing the
crossover using XORs.
00:26:01.110 --> 00:26:05.710
So he's denoting XORs as this
big X connected to two things.
00:26:05.710 --> 00:26:09.270
Now given that we can
build an or gadget, which
00:26:09.270 --> 00:26:11.880
says that either we use this
edge or we use this edge,
00:26:11.880 --> 00:26:14.400
or both.
00:26:14.400 --> 00:26:17.450
Here we're not using an XOR,
but this is the graph.
00:26:17.450 --> 00:26:21.640
And the key is this has
to be done uniquely.
00:26:21.640 --> 00:26:24.330
That's in particular
the point of these dots.
00:26:24.330 --> 00:26:27.450
This looks asymmetric,
it's kind of weird.
00:26:27.450 --> 00:26:33.630
For example, if they're both
in, then this guy can do this,
00:26:33.630 --> 00:26:35.890
and this guy can do that.
00:26:35.890 --> 00:26:36.970
But it's not symmetric.
00:26:36.970 --> 00:26:40.594
You couldn't flip-- this guy
can't grab these points, that
00:26:40.594 --> 00:26:42.510
would be a second solution
which would be bad,
00:26:42.510 --> 00:26:43.850
but we missed these points.
00:26:43.850 --> 00:26:47.370
So this guy has to stay
down here if he's in at all.
00:26:47.370 --> 00:26:49.250
And then this guy's
the only one who
00:26:49.250 --> 00:26:52.120
can grab those extra points.
00:26:52.120 --> 00:26:57.565
Or if just the top guy's
in then you do this.
00:27:00.320 --> 00:27:06.080
And if just the bottom guy's
in I think it's symmetric.
00:27:06.080 --> 00:27:10.040
That is symmetric,
and it's unique.
00:27:10.040 --> 00:27:11.350
Good.
00:27:11.350 --> 00:27:16.130
If there's an implication-- so
this says if this edge is in,
00:27:16.130 --> 00:27:20.210
then that edge must be
in the Hamiltonian cycle
00:27:20.210 --> 00:27:23.310
and this is
essentially by copying.
00:27:23.310 --> 00:27:25.290
And we just have to
grab an extra edge
00:27:25.290 --> 00:27:28.780
and add this little extra
thing just for copying value.
00:27:28.780 --> 00:27:33.330
So if this one is in, then
this edge must not be used,
00:27:33.330 --> 00:27:36.147
which means this edge if
it's used must go straight.
00:27:36.147 --> 00:27:37.480
In particular, this is not used.
00:27:37.480 --> 00:27:39.520
And we have an or that
means that this one must
00:27:39.520 --> 00:27:41.930
be forced by a property of or.
00:27:41.930 --> 00:27:47.950
On the other hand, if this is
not set, this one must be used.
00:27:47.950 --> 00:27:51.150
So I guess this must be
an edge that was already
00:27:51.150 --> 00:27:53.640
going to be used for something.
00:27:53.640 --> 00:27:56.350
So that edge is just going
to get diverted down here,
00:27:56.350 --> 00:27:59.390
and then the or doesn't
constrain us at all.
00:27:59.390 --> 00:28:03.390
Because zero or one, this one
is one so the or is happy.
00:28:03.390 --> 00:28:05.200
So that's an implication.
00:28:05.200 --> 00:28:08.666
It's going to be a little more
subtle how we combine these.
00:28:08.666 --> 00:28:12.700
This is the tricky gadget.
00:28:12.700 --> 00:28:15.390
I sort of understand
it, but it has
00:28:15.390 --> 00:28:17.110
a lot of details to
check, especially
00:28:17.110 --> 00:28:18.770
on the uniqueness front.
00:28:18.770 --> 00:28:20.560
But this is a three-way
or which we're
00:28:20.560 --> 00:28:24.480
using for a clause--
a 3SAT clause.
00:28:24.480 --> 00:28:27.420
We want at least one
of these three edges
00:28:27.420 --> 00:28:29.230
to be in the Hamiltonian cycle.
00:28:29.230 --> 00:28:34.570
And so here we use XORs, ORs,
implications, and more XORs.
00:28:34.570 --> 00:28:36.260
I'll show you the
intended solution,
00:28:36.260 --> 00:28:39.670
assuming I can remember it.
00:28:39.670 --> 00:28:43.940
So let's say for example, this
edge is in the Hamiltonian
00:28:43.940 --> 00:28:45.750
cycle.
00:28:45.750 --> 00:28:54.130
Then we're going to do
something like go over here,
00:28:54.130 --> 00:28:57.250
come back around like this.
00:28:57.250 --> 00:28:57.980
And then
00:28:57.980 --> 00:29:00.584
AUDIENCE: Did you just
violate the XOR already?
00:29:00.584 --> 00:29:01.820
PROFESSOR: This XOR?
00:29:01.820 --> 00:29:02.565
AUDIENCE: Yeah.
00:29:02.565 --> 00:29:03.690
PROFESSOR: No, I went here.
00:29:03.690 --> 00:29:04.610
AUDIENCE: And then you went up.
00:29:04.610 --> 00:29:06.150
PROFESSOR: Oh, then
I went up, good.
00:29:06.150 --> 00:29:08.970
So actually I have to
do this and around.
00:29:08.970 --> 00:29:11.230
Yes, so all these constraints
are thrown in, basically
00:29:11.230 --> 00:29:12.271
to force it to be unique.
00:29:12.271 --> 00:29:14.890
Without them you could--
still it would still work,
00:29:14.890 --> 00:29:17.620
but it wouldn't be parsimonious.
00:29:17.620 --> 00:29:19.620
OK, let's see, this--
00:29:19.620 --> 00:29:20.620
AUDIENCE: In the middle.
00:29:20.620 --> 00:29:22.700
PROFESSOR: --doesn't
constrain me.
00:29:22.700 --> 00:29:23.700
This one?
00:29:23.700 --> 00:29:24.846
AUDIENCE: Yeah.
00:29:24.846 --> 00:29:27.270
PROFESSOR: Good, go up there.
00:29:27.270 --> 00:29:32.650
So I did exactly one of those
and then I need to grab these.
00:29:32.650 --> 00:29:34.060
OK, so that's one picture.
00:29:34.060 --> 00:29:37.300
I think it's not
totally symmetric.
00:29:37.300 --> 00:29:39.390
Again, so you have to
check all three of them.
00:29:39.390 --> 00:29:42.580
And you have to check, for
example-- it's not so hard.
00:29:42.580 --> 00:29:45.270
Like if this guy was also in,
I could have just gone here
00:29:45.270 --> 00:29:47.610
and this guy would
pick up those nodes.
00:29:47.610 --> 00:29:50.900
So as long as, in general, as
long as at least one of them
00:29:50.900 --> 00:29:53.450
is on you're OK.
00:29:53.450 --> 00:29:55.496
And furthermore,
if they're all on,
00:29:55.496 --> 00:29:57.120
there's still a unique
way to solve it.
00:29:57.120 --> 00:29:58.430
And I'm not going
to go through that.
00:29:58.430 --> 00:30:00.340
But it's thanks to all
of these constraints
00:30:00.340 --> 00:30:04.110
they're cutting out
multiple solutions.
00:30:04.110 --> 00:30:08.210
OK, so now we just had
to put these together,
00:30:08.210 --> 00:30:09.830
this is pretty easy.
00:30:09.830 --> 00:30:11.410
It's pretty much
like the old proof.
00:30:11.410 --> 00:30:14.010
Again, we have-- we
represent variables
00:30:14.010 --> 00:30:18.250
by having these double edges
in a Hamiltonian cycle.
00:30:18.250 --> 00:30:21.580
You're going to choose
one or the other.
00:30:21.580 --> 00:30:24.200
And then we have this exclusive
or, forcing y and y bar
00:30:24.200 --> 00:30:25.950
to choose opposite choices.
00:30:25.950 --> 00:30:27.950
And then there are these
exclusive ors to say,
00:30:27.950 --> 00:30:29.908
if you chose that one,
then you can't choose it
00:30:29.908 --> 00:30:31.222
for this clause.
00:30:31.222 --> 00:30:32.930
And then the clauses
are just represented
00:30:32.930 --> 00:30:34.190
by those three-way ors.
00:30:34.190 --> 00:30:36.190
So this overall
structure is the same.
00:30:36.190 --> 00:30:38.340
The crossovers
are done as before
00:30:38.340 --> 00:30:41.150
and it's really these
gadgets that have changed.
00:30:41.150 --> 00:30:43.420
And they're complicated,
but it's parsimonious.
00:30:43.420 --> 00:30:45.700
So with a little more work,
00:30:45.700 --> 00:30:48.630
we get the number of
Hamiltonian cycles
00:30:48.630 --> 00:30:54.400
in planar max degree three
graphs is sharp P complete.
00:31:16.920 --> 00:31:20.030
So that's nice.
00:31:20.030 --> 00:31:24.390
And from that, we
get Slitherlink.
00:31:24.390 --> 00:31:27.684
So this is-- I've sort of been
hiding these facts from you.
00:31:27.684 --> 00:31:29.350
When we originally
covered these proofs,
00:31:29.350 --> 00:31:33.240
these papers actually talk
about-- This one doesn't quite
00:31:33.240 --> 00:31:35.640
talk about sharp P, but it
is also sharp P complete.
00:31:35.640 --> 00:31:37.960
Counting the number of
solutions to Slitherlink
00:31:37.960 --> 00:31:40.207
is sharp P complete.
00:31:40.207 --> 00:31:41.040
This was the puzzle.
00:31:41.040 --> 00:31:45.530
Again, each number constrains how
many adjacent edges are used.
00:31:45.530 --> 00:31:49.740
And the proof was a
reduction from planar max
00:31:49.740 --> 00:31:52.810
degree three Hamiltonian cycle.
00:31:52.810 --> 00:31:54.740
And at the time I
said, oh you could just
00:31:54.740 --> 00:31:56.420
assume it's a grid graph.
00:31:56.420 --> 00:31:59.880
And then you just need
the required gadget,
00:31:59.880 --> 00:32:01.550
which is the b part.
00:32:01.550 --> 00:32:02.970
Just need this gadget.
00:32:02.970 --> 00:32:05.010
This was a gadget because
it had these ones.
00:32:05.010 --> 00:32:08.045
It meant it had to be
traversed like a regular vertex
00:32:08.045 --> 00:32:10.380
in Hamiltonian cycle
and it turns out
00:32:10.380 --> 00:32:13.260
there was a way to traverse
it straight or with turns.
00:32:13.260 --> 00:32:15.610
And then we could block
off edges wherever there
00:32:15.610 --> 00:32:16.860
wasn't supposed to be an edge.
00:32:16.860 --> 00:32:19.390
And so if you're reducing from
Hamiltonicity in grid graphs
00:32:19.390 --> 00:32:22.720
that was the whole
proof and we were happy.
00:32:22.720 --> 00:32:25.280
Now we don't know whether
Hamiltonicity in grid
00:32:25.280 --> 00:32:26.586
graphs is sharp P complete.
00:32:30.796 --> 00:32:32.170
To prove that we
would need to be
00:32:32.170 --> 00:32:34.040
able to put bipartite
in here, and I
00:32:34.040 --> 00:32:35.610
don't know of a proof of that.
00:32:35.610 --> 00:32:37.390
Good open problem.
00:32:37.390 --> 00:32:40.450
So this was the reason that they
have this other gadget, which
00:32:40.450 --> 00:32:45.140
was to make Hamiltonian-- to
make these white vertices that
00:32:45.140 --> 00:32:46.920
don't have to be traversed.
00:32:46.920 --> 00:32:49.390
They're just implementing
an edge basically.
00:32:49.390 --> 00:32:51.600
So just the black vertices
have to be traversed.
00:32:51.600 --> 00:32:54.660
We needed that for drawing
things on the grid.
00:32:54.660 --> 00:32:57.790
But if you just don't put in
the ones and add in these zeros
00:32:57.790 --> 00:33:01.320
again you can traverse
it or not at all.
00:33:01.320 --> 00:33:03.340
And this one, as
you can read here,
00:33:03.340 --> 00:33:06.400
says this would be
bad to have happen.
00:33:06.400 --> 00:33:08.150
In fact you can rule
it out, it will never
00:33:08.150 --> 00:33:10.390
happen in the
situations we want.
00:33:10.390 --> 00:33:13.280
Because the white vertices
only have two neighbors
00:33:13.280 --> 00:33:15.010
that are traversable.
00:33:18.310 --> 00:33:22.240
Cool, and then furthermore, all
of these solutions are unique.
00:33:22.240 --> 00:33:24.540
There is exactly one
way to go straight.
00:33:24.540 --> 00:33:26.830
There's exactly one way to
turn right, exactly one way
00:33:26.830 --> 00:33:27.980
to turn left.
00:33:27.980 --> 00:33:30.490
That's the subtle thing that
was not revealed before.
00:33:30.490 --> 00:33:34.240
But if you stare at it long
enough you will be convinced.
00:33:34.240 --> 00:33:36.590
So this is a
parsimonious reduction
00:33:36.590 --> 00:33:38.940
from planar max degree
three Hamiltonian
00:33:38.940 --> 00:33:40.840
cycle to Slitherlink.
00:33:40.840 --> 00:33:43.870
So counting solutions
in Slitherlink is hard.
00:33:43.870 --> 00:33:48.860
That was the fully
worked out example.
00:33:48.860 --> 00:33:50.412
Any questions?
00:33:50.412 --> 00:33:50.912
Yeah.
00:33:50.912 --> 00:33:53.200
AUDIENCE: So are there
examples of problems in P
00:33:53.200 --> 00:33:56.380
whose counting versions
are sharp P complete?
00:33:56.380 --> 00:33:57.060
PROFESSOR: Yes.
00:33:57.060 --> 00:34:00.401
And that will be the next topic.
00:34:00.401 --> 00:34:02.650
Well it's going to be a
little while til we get there.
00:34:02.650 --> 00:34:04.440
But I'm going to prove
things and then we
00:34:04.440 --> 00:34:06.390
will get to an answer,
but the answer is yes.
00:34:10.531 --> 00:34:11.322
AUDIENCE: Question.
00:34:11.322 --> 00:34:12.530
PROFESSOR: Yeah.
00:34:12.530 --> 00:34:14.874
AUDIENCE: So, for if
we wanted-- if we're
00:34:14.874 --> 00:34:16.540
like thinking about
a problem that we're
00:34:16.540 --> 00:34:18.329
trying to prove something
sharp P hard and we start thinking
00:34:18.329 --> 00:34:20.912
maybe stop, we would just show
it's in P by finding an algorithm.
00:34:20.912 --> 00:34:25.500
Is there a nice way to show
that our problem is not sharp P hard?
00:34:25.500 --> 00:34:28.900
PROFESSOR: Well you usually
would say that sharp,
00:34:28.900 --> 00:34:31.300
that problem is in P.
00:34:31.300 --> 00:34:33.830
You find a polynomial
counting algorithm.
00:34:33.830 --> 00:34:37.190
And there are lots of examples of
polynomial counting algorithms
00:34:37.190 --> 00:34:38.840
especially on like trees.
00:34:38.840 --> 00:34:40.771
Typical thing is
your dynamic program.
00:34:40.771 --> 00:34:42.520
So like maybe you want
to know-- let's say
00:34:42.520 --> 00:34:44.179
you have a rooted binary
tree and for each node
00:34:44.179 --> 00:34:45.889
you could flip it
this way or not.
00:34:45.889 --> 00:34:48.250
How many different ways
are there to do that?
00:34:48.250 --> 00:34:50.699
And maybe have some
constraints on how that's done.
00:34:50.699 --> 00:34:53.122
Then you just try it
flipped, and try it not.
00:34:53.122 --> 00:34:54.580
You do dynamic
programming and then
00:34:54.580 --> 00:34:57.937
you multiply the two
solution sizes together,
00:34:57.937 --> 00:34:59.520
and you get the
overall solution size.
00:34:59.520 --> 00:35:01.380
So you basically
do combinatorics
00:35:01.380 --> 00:35:03.130
and if there are
independent choices you
00:35:03.130 --> 00:35:06.540
multiply, if they're
opposing choices you add,
00:35:06.540 --> 00:35:07.410
that kind of thing.
00:35:07.410 --> 00:35:10.070
And from that you get polynomial
time counting algorithms.
00:35:10.070 --> 00:35:14.040
In tree-like things that often
works, and in bounded treewidth.
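The multiply-independent, add-alternative rule described here can be made concrete with a standard example (my own illustration, not one from the lecture): counting independent sets in a rooted tree by dynamic programming, in linear time.

```python
def count_independent_sets(children, root=0):
    # children maps a node to its list of children in a rooted tree.
    def dp(v):
        # Returns (# independent sets in v's subtree with v excluded,
        #          # independent sets with v included).
        excluded, included = 1, 1
        for c in children.get(v, []):
            c_excl, c_incl = dp(c)
            excluded *= c_excl + c_incl  # child unconstrained: add the two
                                         # alternatives, multiply subtrees
            included *= c_excl           # v in the set: child must be out
        return excluded, included
    return sum(dp(root))

# Path 0-1-2: the independent sets are {}, {0}, {1}, {2}, {0,2}.
print(count_independent_sets({0: [1], 1: [2]}))  # 5
```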
00:35:14.040 --> 00:35:16.562
AUDIENCE: Do you know of NP
hard problems whose counting
00:35:16.562 --> 00:35:19.567
problems are not sharp P hard?
00:35:19.567 --> 00:35:21.108
I guess this technique
wouldn't work.
00:35:21.108 --> 00:35:23.890
PROFESSOR: I would say
generally most problems
00:35:23.890 --> 00:35:26.190
that are hard to decide
are hard to count.
00:35:26.190 --> 00:35:29.120
And where NP hard
implies sharp P hard.
00:35:29.120 --> 00:35:33.329
I don't think there's a
hard theorem in that there's
00:35:33.329 --> 00:35:35.620
nothing that really says--
meta-theorem that says that,
00:35:35.620 --> 00:35:36.578
but that's the feeling.
00:35:39.142 --> 00:35:40.600
It'd be nice, then
we wouldn't have
00:35:40.600 --> 00:35:42.070
to do all the parsimonious work.
00:35:45.700 --> 00:35:50.350
All right so it's time for a
little bit of linear algebra.
00:35:54.410 --> 00:35:57.270
Let me remind you, I
guess linear algebra's not
00:35:57.270 --> 00:35:59.350
a pre-req for this class,
but probably you've
00:35:59.350 --> 00:36:01.489
seen the determinant
of a matrix and used it--
00:36:01.489 --> 00:36:03.530
if it's zero then it's
non-invertible blah, blah,
00:36:03.530 --> 00:36:04.784
blah.
00:36:04.784 --> 00:36:06.200
Let me remind you
of a definition.
00:36:11.980 --> 00:36:13.640
And we rarely use
matrix notation.
00:36:13.640 --> 00:36:17.540
So let me remind you
of the usual one.
00:36:17.540 --> 00:36:19.050
n by n, square matrix.
00:36:19.050 --> 00:36:21.130
This is a polynomial
time problem.
00:36:21.130 --> 00:36:25.330
It is-- but I'm going to define
it in an exponential way.
00:36:25.330 --> 00:36:28.410
But you probably know a
polynomial time algorithm.
00:36:28.410 --> 00:36:31.937
This is not an algorithms class
so you don't need to know it.
00:36:34.760 --> 00:36:37.010
But it's based on Gaussian
elimination, the usual one.
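The lecture doesn't cover that algorithm; as a rough sketch of the idea (my own illustration, not course material), Gaussian elimination tracks the determinant through row operations in O(n^3) arithmetic steps:

```python
from fractions import Fraction

def det_gauss(A):
    # O(n^3) determinant via Gaussian elimination, using exact
    # rationals to avoid floating-point error.
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    result = Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)       # no pivot: singular, determinant 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            result = -result         # a row swap flips the sign
        result *= M[col][col]        # product of pivots
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= factor * M[col][c]
    return result

print(det_gauss([[1, 2], [3, 4]]))  # -2
```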
00:36:56.370 --> 00:37:00.790
So you look at all
permutation matrices,
00:37:00.790 --> 00:37:02.580
all n by n permutation
matrices which
00:37:02.580 --> 00:37:05.860
you can think of as a
permutation pi on the numbers 1
00:37:05.860 --> 00:37:06.880
through n.
00:37:06.880 --> 00:37:09.780
And you look at i
comma pi of i, that
00:37:09.780 --> 00:37:11.410
defines the permutation matrix.
00:37:11.410 --> 00:37:14.500
You take the product
of the matrix
00:37:14.500 --> 00:37:20.140
values-- if you superimpose
the permutation matrix on that,
00:37:20.140 --> 00:37:23.995
on the given matrix A.
00:37:23.995 --> 00:37:28.040
You take that product
you possibly negate it
00:37:28.040 --> 00:37:30.690
if the sign of your
permutation was negative,
00:37:30.690 --> 00:37:35.960
if it does an even number-- an
odd number of transpositions
00:37:35.960 --> 00:37:37.990
then this will be negative.
00:37:37.990 --> 00:37:40.780
Otherwise, it'd be positive
and you add those up.
00:37:40.780 --> 00:37:42.840
So of course, an exponential
number of permutations--
00:37:42.840 --> 00:37:45.730
you wouldn't want to do this as
an algorithm but turns out it
00:37:45.730 --> 00:37:49.745
can be done in polynomial time.
00:37:49.745 --> 00:37:51.120
The reason for
talking about this
00:37:51.120 --> 00:37:56.690
is by analogy I want the
notion of permanent, of an n
00:37:56.690 --> 00:37:57.450
by n matrix.
00:37:57.450 --> 00:37:59.190
The same thing, but
with this removed.
00:38:02.370 --> 00:38:06.620
The permanent of A is the
sum over all permutations pi,
00:38:06.620 --> 00:38:13.900
of the product from i equals
one to n of ai, pi of i.
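To make the two formulas concrete, here they are transcribed directly into brute-force code (my own illustration; exponential time, so only for small matrices):

```python
from itertools import permutations
from math import prod

def parity_sign(pi):
    # Sign of a permutation: -1 if it has an odd number of
    # inversions (equivalently, transpositions), else +1.
    n = len(pi)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if pi[i] > pi[j])
    return -1 if inversions % 2 else 1

def determinant(A):
    n = len(A)
    return sum(parity_sign(pi) * prod(A[i][pi[i]] for i in range(n))
               for pi in permutations(range(n)))

def permanent(A):
    # Exactly the same sum, with the sign removed.
    n = len(A)
    return sum(prod(A[i][pi[i]] for i in range(n))
               for pi in permutations(range(n)))

A = [[1, 2], [3, 4]]
print(determinant(A), permanent(A))  # -2 10
```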
00:38:13.900 --> 00:38:15.870
Now this may not look
like a counting problem.
00:38:15.870 --> 00:38:17.900
It turns out it is a
counting problem, sort of,
00:38:17.900 --> 00:38:19.232
a weighted counting problem.
00:38:19.232 --> 00:38:21.315
We will get back to counting
problems in a moment.
00:38:21.315 --> 00:38:24.740
This is related to the number
of perfect matchings in a graph.
00:38:24.740 --> 00:38:26.520
But at this point
it's just-- it's
00:38:26.520 --> 00:38:27.800
a quantity we want to compute.
00:38:27.800 --> 00:38:29.630
This is a function of a matrix.
00:38:29.630 --> 00:38:33.600
And computing this function
is sharp P complete.
00:38:33.600 --> 00:38:34.100
Yeah.
00:38:34.100 --> 00:38:37.260
AUDIENCE: Could it just be
saying minus 1 to the sign of pi?
00:38:40.276 --> 00:38:42.310
PROFESSOR: I don't
know, it depends.
00:38:42.310 --> 00:38:46.695
If you call the number of
transpositions mod two so then
00:38:46.695 --> 00:38:48.045
it's zero or one.
00:38:48.045 --> 00:38:48.920
You know what I mean.
00:38:53.210 --> 00:38:56.040
All right.
00:38:56.040 --> 00:38:58.017
So the claim is, permanent
is sharp P complete.
00:38:58.017 --> 00:38:59.100
We're going to prove this.
00:38:59.100 --> 00:39:01.970
This was the original problem
proved sharp P complete.
00:39:01.970 --> 00:39:05.230
Well other than
sharp 3SAT I guess.
00:39:05.230 --> 00:39:05.780
Same paper.
00:39:09.680 --> 00:39:15.130
Great so let me give
you a little intuition
00:39:15.130 --> 00:39:16.740
of what the permanent is.
00:39:16.740 --> 00:39:20.496
We'd like a definition
that's not so algebraic.
00:39:20.496 --> 00:39:23.980
At least I would like one-- more
graph-theoretic would be nice.
00:39:23.980 --> 00:39:25.810
So here's what
we're going to do.
00:39:25.810 --> 00:39:30.120
We're going to convert A, our
matrix, into a weighted graph.
00:39:42.130 --> 00:39:49.520
And then let me go
to the other board.
00:40:00.580 --> 00:40:02.251
How do we convert
into a weighted graph,
00:40:02.251 --> 00:40:03.250
weighted directed graph?
00:40:03.250 --> 00:40:08.420
Well, the weight from
vertex i to vertex j is a_ij.
00:40:08.420 --> 00:40:10.950
The obvious transformation
if it's zero then
00:40:10.950 --> 00:40:13.450
there's not going to be an edge,
although it doesn't matter.
00:40:13.450 --> 00:40:15.325
You could leave the edge
in with weight zero,
00:40:15.325 --> 00:40:17.004
it will be the same.
00:40:17.004 --> 00:40:18.920
Because what we're
interested in, in the claim
00:40:18.920 --> 00:40:28.430
is the permanent of the
matrix equals the sum,
00:40:28.430 --> 00:40:42.217
of the product of edge
weights, over all cycle covers
00:40:42.217 --> 00:40:46.040
of the graph.
00:40:46.040 --> 00:40:49.240
OK, this is really just the same
thing, but a little bit easier
00:40:49.240 --> 00:40:50.400
to think about.
00:40:50.400 --> 00:40:53.330
So a cycle cover is kind
of like a Hamiltonian cycle
00:40:53.330 --> 00:40:55.440
but there could be
multiple cycles.
00:40:55.440 --> 00:40:58.600
So at every vertex you
should enter and leave.
00:40:58.600 --> 00:41:01.430
And so you have sort of in
degree one and out degree one.
00:41:01.430 --> 00:41:04.980
So you end up decomposing the
graph into vertex-disjoint
00:41:04.980 --> 00:41:08.940
cycles which hit every vertex.
00:41:08.940 --> 00:41:11.250
That's a cycle cover.
00:41:11.250 --> 00:41:14.190
Important note here, we do
have loops in the graph.
00:41:14.190 --> 00:41:17.140
We can have loops--
if a_ii is not zero,
00:41:17.140 --> 00:41:18.370
then you have a loop.
00:41:18.370 --> 00:41:22.840
So the idea is to look
at every cycle cover
00:41:22.840 --> 00:41:25.630
and just take the product
of the edge weights
00:41:25.630 --> 00:41:28.550
that are edges in
the cycles and then
00:41:28.550 --> 00:41:30.270
add that up over all
cycle covers.
00:41:30.270 --> 00:41:32.210
So it's the same thing
if you stare at it
00:41:32.210 --> 00:41:34.135
long enough because
you're going from I.
00:41:34.135 --> 00:41:35.680
I mean this is
basically the cycle
00:41:35.680 --> 00:41:39.410
decomposition of the permutation
if you know permutation theory.
00:41:39.410 --> 00:41:40.910
So if you don't,
don't worry this is
00:41:40.910 --> 00:41:43.880
your definition of permanent.
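Here is a small sketch of the cycle-cover reading (my own illustration; the matrix entries are made up): each permutation pi picks the edge set {i -> pi(i)}, which is exactly a cycle cover, so summing edge-weight products over permutations is the same as summing over cycle covers.

```python
from itertools import permutations
from math import prod

def permanent(W):
    # W[i][j] is the weight of directed edge i -> j (0 = no edge).
    # Each permutation pi is a cycle cover: edge i -> pi(i) at every
    # vertex, so in-degree and out-degree are both one.
    n = len(W)
    return sum(prod(W[i][pi[i]] for i in range(n))
               for pi in permutations(range(n)))

def cycles(pi):
    # Decompose a permutation into the vertex-disjoint cycles of
    # the cycle cover it induces.
    seen, result = set(), []
    for start in range(len(pi)):
        v, cycle = start, []
        while v not in seen:
            seen.add(v)
            cycle.append(v)
            v = pi[v]
        if cycle:
            result.append(cycle)
    return result

# Directed graph with a self-loop of weight 1 at every vertex.
W = [[1, 2, 0],
     [0, 1, 3],
     [4, 0, 1]]
# Only two covers have nonzero product: three self-loops (1*1*1)
# and the 3-cycle 0->1->2->0 (2*3*4), so the permanent is 1 + 24.
print(permanent(W))       # 25
print(cycles((1, 2, 0)))  # [[0, 1, 2]]
```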
00:41:43.880 --> 00:41:46.840
So we're going to prove this
problem is sharp P complete.
00:41:55.360 --> 00:42:05.100
And we're going to prove
it by a C-monious reduction
00:42:05.100 --> 00:42:07.380
from sharp 3SAT.
00:42:10.718 --> 00:42:14.742
AUDIENCE: You said this is just
the original thing introducing
00:42:14.742 --> 00:42:15.242
that?
00:42:15.242 --> 00:42:16.530
PROFESSOR: Yes.
00:42:16.530 --> 00:42:20.130
AUDIENCE: Did they not call it--
you made up the term C-monious.
00:42:20.130 --> 00:42:22.010
What did they call it?
00:42:22.010 --> 00:42:27.260
PROFESSOR: I think they
just called it a reduction.
00:42:27.260 --> 00:42:29.820
I think pretty sure, they
just called it reduction.
00:42:29.820 --> 00:42:32.060
And for them-- and they
said at the beginning,
00:42:32.060 --> 00:42:34.990
--reduction means
multi-call reduction.
00:42:34.990 --> 00:42:36.850
So they're thinking about that.
00:42:36.850 --> 00:42:40.190
But it turns out to be
a C-monious reduction.
00:42:40.190 --> 00:42:42.120
C will not be
literally constant,
00:42:42.120 --> 00:42:46.140
but it will be a function
of the problem size.
00:42:46.140 --> 00:42:48.020
And this is the reduction.
00:42:48.020 --> 00:42:51.910
So as usual, we have a clause
gadget, and a variable gadget,
00:42:51.910 --> 00:42:53.770
and then there's
this shaded thing
00:42:53.770 --> 00:42:58.776
which is this matrix, which
you can think of as a graph.
00:42:58.776 --> 00:43:00.650
But in this case will
be easier to just think
00:43:00.650 --> 00:43:03.180
of as a black box matrix.
00:43:03.180 --> 00:43:07.990
OK all of the edges in these
pictures have weight one.
00:43:07.990 --> 00:43:09.580
And then these
edges are special,
00:43:09.580 --> 00:43:12.730
and here you have some
loops in the center.
00:43:12.730 --> 00:43:14.980
No one else has a loop.
00:43:14.980 --> 00:43:18.500
So the high-level
idea is if you're
00:43:18.500 --> 00:43:22.680
thinking of a cycle
cover in a vertex
00:43:22.680 --> 00:43:25.140
because you--
sorry, in a variable
00:43:25.140 --> 00:43:28.400
because you've got a vertex
here and a vertex here you
00:43:28.400 --> 00:43:29.650
have to cover them somehow.
00:43:29.650 --> 00:43:33.290
And the intent is that you
either cover this one this way,
00:43:33.290 --> 00:43:35.470
or you cover this
one that way, those
00:43:35.470 --> 00:43:38.620
would be the true and the false.
00:43:38.620 --> 00:43:42.100
And then from the
clause's perspective
00:43:42.100 --> 00:43:46.031
we need to understand, so then
these things are connected.
00:43:46.031 --> 00:43:47.655
This thing would go
here and this thing
00:43:47.655 --> 00:43:50.440
would go here, and
generally connect variables
00:43:50.440 --> 00:43:53.530
to clauses in the obvious way.
00:43:53.530 --> 00:43:56.515
And in general, for every
occurrence of the positive form
00:43:56.515 --> 00:43:59.420
and the negative form, sorry
positive or negative form
00:43:59.420 --> 00:44:02.500
you'll have one of these blobs
that connects to the clause.
00:44:02.500 --> 00:44:04.920
So overall architecture
should be clear.
00:44:04.920 --> 00:44:07.100
What does this gadget do?
00:44:07.100 --> 00:44:13.550
It has some nifty properties--
let me write them down.
00:44:13.550 --> 00:44:17.790
So this matrix is
called x in the paper.
00:44:17.790 --> 00:44:23.299
So first of all, permanent
of x equals zero.
00:44:23.299 --> 00:44:24.840
I'm just going to
state these, I mean
00:44:24.840 --> 00:44:27.120
you could check them by
doing the computation.
00:44:27.120 --> 00:44:32.090
So we're interested in cycle
covers whose products are not
00:44:32.090 --> 00:44:32.590
zero.
00:44:32.590 --> 00:44:34.850
Otherwise they don't
contribute to the sum.
00:44:34.850 --> 00:44:39.310
So I could also add in non-zero.
00:44:39.310 --> 00:44:41.860
Meaning the product is non-zero.
00:44:41.860 --> 00:44:46.840
OK so if we had a cycle cover
that just-- where the cycle
00:44:46.840 --> 00:44:50.040
cover just locally solved this
thing by traversing these four
00:44:50.040 --> 00:44:52.170
vertices all by themselves.
00:44:52.170 --> 00:44:56.162
Then that cycle would
have permanent zero.
00:44:56.162 --> 00:44:57.870
And then the permanent
of the whole cycle
00:44:57.870 --> 00:45:01.090
cover is the product
of those things.
00:45:01.090 --> 00:45:02.780
And so the overall
thing would be zero.
00:45:02.780 --> 00:45:04.220
So if you look at
a nonzero cycle
00:45:04.220 --> 00:45:06.400
cover you can't just
leave these isolated,
00:45:06.400 --> 00:45:08.180
you have to visit them.
00:45:08.180 --> 00:45:11.384
You have to enter
them and leave them.
00:45:11.384 --> 00:45:13.800
Now the question is, where
could you enter and leave them?
00:45:13.800 --> 00:45:16.710
This is maybe not totally
clear from the drawing
00:45:16.710 --> 00:45:21.230
but, the intent is that
the first vertex in-- which
00:45:21.230 --> 00:45:24.980
corresponds to row one, column
one here, is the left side.
00:45:24.980 --> 00:45:29.580
And the column four, row
four is the right side.
00:45:29.580 --> 00:45:31.940
I claim that you have
to enter at one of those
00:45:31.940 --> 00:45:33.960
and leave at the other.
00:45:33.960 --> 00:45:36.040
Why is that true?
00:45:36.040 --> 00:45:37.040
For a couple reasons.
00:45:37.040 --> 00:45:42.950
One is that the permanent
of x with row and column
00:45:42.950 --> 00:45:51.552
one removed is zero,
and so is the permanent
00:45:51.552 --> 00:45:54.210
of x with row and
column four removed.
00:45:58.810 --> 00:46:04.660
OK vertex one and four
out of this x matrix
00:46:04.660 --> 00:46:06.650
are the only places you
could enter and leave.
00:46:06.650 --> 00:46:08.950
But it's possible you enter
and then immediately leave.
00:46:08.950 --> 00:46:10.842
So you just touch
the thing and leave.
00:46:10.842 --> 00:46:12.800
That would correspond to
leaving behind a three
00:46:12.800 --> 00:46:14.330
by three sub-matrix.
00:46:14.330 --> 00:46:16.070
Either by deleting
this row and column,
00:46:16.070 --> 00:46:18.236
or by deleting this row and
column if you just visit
00:46:18.236 --> 00:46:19.549
this vertex and leave.
00:46:19.549 --> 00:46:20.840
Those also have permanent zero.
00:46:20.840 --> 00:46:23.210
So if you're looking at
a nonzero cycle cover
00:46:23.210 --> 00:46:24.740
you can't do that.
00:46:24.740 --> 00:46:27.360
So together those mean
that you enter one of them
00:46:27.360 --> 00:46:29.530
and leave at the other.
00:46:29.530 --> 00:46:43.380
And furthermore, if you look
at the permanent of x with rows
00:46:43.380 --> 00:46:49.980
and columns one and four
removed, both removed,
00:46:49.980 --> 00:46:51.600
that's also zero.
00:46:51.600 --> 00:46:55.550
So the permanent of
this sub-matrix is zero
00:46:55.550 --> 00:46:58.360
and therefore you can't
just enter here, jump here,
00:46:58.360 --> 00:46:59.280
and leave.
00:46:59.280 --> 00:47:02.894
Which means finally you have
to traverse all four vertices.
00:47:02.894 --> 00:47:04.810
You enter at one of them,
traverse everything,
00:47:04.810 --> 00:47:06.470
and leave at the other.
00:47:06.470 --> 00:47:10.960
So basically this
is a forced edge.
00:47:10.960 --> 00:47:15.050
If you touch here you have to
then traverse and leave there,
00:47:15.050 --> 00:47:17.470
in any cycle cover.
00:47:17.470 --> 00:47:20.200
So we're used to seeing forced
edges in Hamiltonian cycle.
00:47:20.200 --> 00:47:22.430
This is sort of a
stronger form of it.
00:47:22.430 --> 00:47:26.300
That's cool now one catch.
00:47:26.300 --> 00:47:29.230
So if you do that, if you
enter let's say vertex one
00:47:29.230 --> 00:47:33.510
and leave at vertex four, you
will end up-- your contribution
00:47:33.510 --> 00:47:38.020
to the cycle will end up
being the permanent of x
00:47:38.020 --> 00:47:41.930
minus row one and column four.
00:47:41.930 --> 00:47:44.810
Or symmetrically
with four and one.
00:47:44.810 --> 00:47:49.740
You'd like this to be
one, but it's four.
00:47:49.740 --> 00:47:54.410
So there are four ways to
traverse the forced edge.
00:47:54.410 --> 00:47:56.910
But because it's
always four, or zero,
00:47:56.910 --> 00:47:58.520
and then it doesn't
contribute at all.
00:47:58.520 --> 00:48:01.410
It's always going to be four
so this will be a nice uniform
00:48:01.410 --> 00:48:03.862
blow up in our
number of solutions.
00:48:03.862 --> 00:48:05.320
C is not going to
be four, but it's
00:48:05.320 --> 00:48:08.400
going to be four to some power.
00:48:08.400 --> 00:48:10.310
It's going to be
four to the power,
00:48:10.310 --> 00:48:12.800
the number of those gadgets.
00:48:12.800 --> 00:48:14.247
So because you can
predict that, I
00:48:14.247 --> 00:48:16.830
mean in the reduction you know
exactly how many there are-- it doesn't
00:48:16.830 --> 00:48:19.875
depend on the solution,
it's only dependent on how
00:48:19.875 --> 00:48:20.750
you built this thing.
00:48:20.750 --> 00:48:22.670
So at the end, we're
going to divide by four
00:48:22.670 --> 00:48:31.210
to the power-- it's I think five
times the number of clauses.
00:48:31.210 --> 00:48:36.215
So C is going to be four to
the power of five times number
00:48:36.215 --> 00:48:37.590
of clauses because
there are five
00:48:37.590 --> 00:48:41.054
of these gadgets per clause.
00:48:41.054 --> 00:48:43.220
So at the end we'll just
divide by that and be done.
00:48:43.220 --> 00:48:46.503
AUDIENCE: Would it be 10
times because you got it
00:48:46.503 --> 00:48:48.848
on the variable side too?
00:48:48.848 --> 00:48:50.650
PROFESSOR: Yes 10
times, thank you.
00:48:50.650 --> 00:48:52.755
AUDIENCE: Eight-- two
of those don't actually
00:48:52.755 --> 00:48:53.630
connect to variables.
00:48:53.630 --> 00:48:55.171
PROFESSOR: Two of
them do not connect
00:48:55.171 --> 00:48:57.642
to variables, yeah eight.
00:48:57.642 --> 00:48:59.333
Eight times the
number of clauses.
00:49:02.650 --> 00:49:07.090
All right so now it's
just a matter of-- now
00:49:07.090 --> 00:49:10.570
that you understand
what this is now
00:49:10.570 --> 00:49:12.980
you could sort of see how
information is communicated.
00:49:12.980 --> 00:49:15.940
Because if the variable
chooses the true setting
00:49:15.940 --> 00:49:17.780
it must visit these edges.
00:49:17.780 --> 00:49:20.070
And once it touches here
it has to leave here
00:49:20.070 --> 00:49:22.430
and this is an edge
going the wrong way.
00:49:22.430 --> 00:49:26.480
So you can't try to traverse--
from here you cannot touch
00:49:26.480 --> 00:49:28.651
the clauses down below.
00:49:28.651 --> 00:49:30.400
Once you touch here
you have to go here
00:49:30.400 --> 00:49:33.370
and then you must
leave here and so on.
00:49:33.370 --> 00:49:35.350
But you leave this
one behind and it
00:49:35.350 --> 00:49:38.070
must be traversed by the
clause and vice versa.
00:49:38.070 --> 00:49:42.090
If I choose this one then these
must be visited by the clauses.
00:49:42.090 --> 00:49:44.250
So from the clause
perspective as he
00:49:44.250 --> 00:49:47.140
said there are five of these
gadgets, but only three of them
00:49:47.140 --> 00:49:47.810
are connected.
00:49:47.810 --> 00:49:53.340
So these guys are forced, and
there's a bunch of edges here
00:49:53.340 --> 00:49:54.392
and it's a case analysis.
00:49:54.392 --> 00:49:55.600
So here's the short version.
00:49:55.600 --> 00:49:58.350
Let's just do a
couple of examples.
00:49:58.350 --> 00:50:00.410
If none of these
have been traversed
00:50:00.410 --> 00:50:03.300
by the variable then
the-- pretty much
00:50:03.300 --> 00:50:05.820
you have to go straight through.
00:50:05.820 --> 00:50:08.030
But then you're in trouble.
00:50:08.030 --> 00:50:11.000
Because there's no pointer
back to the beginning.
00:50:11.000 --> 00:50:14.840
You can only go back this far.
00:50:14.840 --> 00:50:17.590
So if none of-- if you have
to traverse all these things
00:50:17.590 --> 00:50:20.350
it's not possible
with the cycle cover.
00:50:20.350 --> 00:50:22.740
But if any one of them
has been traversed.
00:50:22.740 --> 00:50:25.130
So for example, if this
one has been traversed then
00:50:25.130 --> 00:50:28.280
we'll jump over it,
visit these guys,
00:50:28.280 --> 00:50:31.430
jump back here, visit this
guy, and jump back there.
00:50:31.430 --> 00:50:35.340
And if you check that's the
only way to do it, it's unique.
00:50:35.340 --> 00:50:38.190
And similarly if any one of
them has been covered,
00:50:38.190 --> 00:50:40.690
or if all three of them have
been covered, or if two of them
00:50:40.690 --> 00:50:42.148
have been covered
in all cases this
00:50:42.148 --> 00:50:44.880
is a unique way from
the clause perspective
00:50:44.880 --> 00:50:46.155
to connect all the things.
00:50:46.155 --> 00:50:47.530
Now in reality,
there are four ways
00:50:47.530 --> 00:50:48.680
to traverse each
of these things.
00:50:48.680 --> 00:50:50.346
So the whole thing
grows by this product
00:50:50.346 --> 00:50:53.050
but it doesn't matter
which cycle they appear in.
00:50:53.050 --> 00:50:56.880
We'll always scale the number
of solutions by factor of four.
00:50:56.880 --> 00:51:02.510
So it's a nice uniform
scaling-- c-monious.
00:51:02.510 --> 00:51:03.960
And that's the permanent.
00:51:03.960 --> 00:51:05.410
Pretty cool.
00:51:05.410 --> 00:51:07.410
Pretty cool proof.
00:51:07.410 --> 00:51:10.340
Now one not so nice
thing about this proof
00:51:10.340 --> 00:51:16.640
is that it involves the numbers negative
one, zero, one, two, and three.
00:51:16.640 --> 00:51:18.650
For reasons we will
see in a moment
00:51:18.650 --> 00:51:21.260
it'd be really nice to
just use zero and one.
00:51:21.260 --> 00:51:24.520
And it turns out
that's possible,
00:51:24.520 --> 00:51:27.240
but it's kind of annoying.
00:51:27.240 --> 00:51:28.000
It's nifty though.
00:51:28.000 --> 00:51:30.416
I think this will demonstrate
a bunch of fun number theory
00:51:30.416 --> 00:51:32.120
things you could do.
00:51:32.120 --> 00:51:34.660
So next goal is
zero one permanent.
00:51:34.660 --> 00:51:36.830
Now the matrix is
just zeros and ones.
00:51:36.830 --> 00:51:41.770
This is sharp P complete.
00:51:41.770 --> 00:51:47.508
I'll tell you the reason this
is particularly interesting.
00:51:47.508 --> 00:51:49.300
Getting rid of
the weights makes
00:51:49.300 --> 00:51:57.710
it more clearly
a counting problem.
00:51:57.710 --> 00:52:00.440
So first of all, it's the
number of cycle covers
00:52:00.440 --> 00:52:03.300
in the corresponding graph.
00:52:03.300 --> 00:52:05.190
No more weights it's
just-- if it's a zero
00:52:05.190 --> 00:52:06.100
there's not an edge.
00:52:06.100 --> 00:52:07.933
Since you can't traverse
there if it's a one
00:52:07.933 --> 00:52:10.120
you can traverse there
and then every cycle cover
00:52:10.120 --> 00:52:12.660
will have product one.
00:52:12.660 --> 00:52:14.350
So it's just counting them.
00:52:14.350 --> 00:52:18.750
But it also happens to be the
number of perfect matchings
00:52:18.750 --> 00:52:20.190
in a bipartite graph.
00:52:33.380 --> 00:52:36.048
Which bipartite graph?
00:52:36.048 --> 00:52:39.480
The bipartite graph where
one side of the bi-partition
00:52:39.480 --> 00:52:43.600
is the rows and the other side
is the columns of the matrix.
00:52:47.620 --> 00:52:50.290
And you have an edge i j.
00:52:55.240 --> 00:53:02.770
If and only if A i j-- sorry,
A is maybe not
00:53:02.770 --> 00:53:05.270
the best terminology
here-- where i
00:53:05.270 --> 00:53:09.660
is a vertex of v one and
j is a vertex of v two.
00:53:09.660 --> 00:53:12.390
Whenever A i j equals 1; if
it's zero there's no edge.
00:53:15.170 --> 00:53:19.530
It's a little confusing because
the matrix is not symmetric
00:53:19.530 --> 00:53:23.040
so you might say well, how does
this make an undirected graph?
00:53:23.040 --> 00:53:25.890
Because you could have an edge
from i to j, but not from j to i.
00:53:25.890 --> 00:53:28.640
That's OK because i is
interpreted in the left space
00:53:28.640 --> 00:53:31.010
and j is interpreted
in the right space.
00:53:31.010 --> 00:53:33.730
So the asymmetric
case would be the case
00:53:33.730 --> 00:53:36.831
that there's an edge here to
here, but not an edge from here
00:53:36.831 --> 00:53:37.330
to here.
00:53:37.330 --> 00:53:38.850
That's perfectly OK.
00:53:38.850 --> 00:53:41.930
This is one, two, three,
for a three by three matrix
00:53:41.930 --> 00:53:45.620
we get a six vertex
graph, not three vertex.
00:53:45.620 --> 00:53:49.094
That's what allows you to be
asymmetric, and a loop-- what
00:53:49.094 --> 00:53:51.010
we were normally thinking
of as a loop just means
00:53:51.010 --> 00:53:53.500
a horizontal edge.
00:53:53.500 --> 00:53:57.000
OK so it turns out if you
look at this graph, which
00:53:57.000 --> 00:53:59.750
is a different graph from
what we started with,
00:53:59.750 --> 00:54:01.950
the number of perfect
matchings in that graph
00:54:01.950 --> 00:54:05.824
is equal to the number of cycle
covers in the other graph,
00:54:05.824 --> 00:54:06.740
in the directed graph.
00:54:06.740 --> 00:54:08.880
So this one's undirected
and bipartite.
00:54:08.880 --> 00:54:12.090
This one was directed
and not bipartite.
00:54:12.090 --> 00:54:15.380
And the rough intuition
is that we're just
00:54:15.380 --> 00:54:17.684
kind of pulling
the vertices apart
00:54:17.684 --> 00:54:20.100
into two versions, the left
version and the right version.
00:54:20.100 --> 00:54:23.010
If you imagine there being
a connection from one
00:54:23.010 --> 00:54:26.740
right to one left and similarly
from two right to two left
00:54:26.740 --> 00:54:28.157
directed.
00:54:28.157 --> 00:54:30.740
And three right to three left,
hey it looks like cycles again.
00:54:30.740 --> 00:54:33.507
You went here, then you went
around, then you went here,
00:54:33.507 --> 00:54:34.340
then you went there.
00:54:34.340 --> 00:54:36.920
That was a cycle and
similarly this is a cycle.
00:54:36.920 --> 00:54:39.390
So they're the same thing.
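To make the row/column correspondence concrete, here is a minimal brute-force check, not from the lecture: the permanent of a 0/1 matrix, summed over permutations, counts exactly the perfect matchings of the bipartite graph (equivalently, the cycle covers of the directed graph). The example matrix is made up for illustration.

```python
from itertools import permutations

def permanent(A):
    """Brute-force permanent: sum over permutations sigma of
    the products A[i][sigma(i)]."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

# Rows = left vertices, columns = right vertices.  A permutation
# with product 1 picks one edge per row and per column, i.e. a
# perfect matching (or a cycle cover of the directed graph).
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
print(permanent(A))  # 2 perfect matchings
```

Each nonzero term of the sum is one matching, which is why the 0/1 permanent is exactly this count.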
00:54:39.390 --> 00:54:41.710
This is cool because
perfect matchings
00:54:41.710 --> 00:54:44.150
are interesting in
particular perfect matchings
00:54:44.150 --> 00:54:46.470
are easy to find
in polynomial time.
00:54:46.470 --> 00:54:49.481
But counting them
is sharp P complete.
00:54:49.481 --> 00:54:50.730
Getting back to that question.
00:54:53.960 --> 00:54:57.990
So that's why we care about zero
one permanent-- or one reason
00:54:57.990 --> 00:54:58.890
to care about it.
00:54:58.890 --> 00:55:00.312
Let's prove that it's hard.
00:55:16.050 --> 00:55:19.350
And here we'll use a number
theory, pretty basic number
00:55:19.350 --> 00:55:19.850
theory.
00:55:28.600 --> 00:55:30.320
So first claim is
that computing the
00:55:30.320 --> 00:55:35.600
permanent of a matrix, general
matrix, not zero one, modulo
00:55:35.600 --> 00:55:37.840
a given integer r is hard.
00:55:42.260 --> 00:55:44.200
r has to be an input
for this problem.
00:55:44.200 --> 00:55:47.055
It's not like computing it
mod three is necessarily as hard,
00:55:47.055 --> 00:55:50.977
but computing it modulo
anything is hard.
00:55:50.977 --> 00:55:53.560
And here we're going to finally
use some multicall reduction
00:55:53.560 --> 00:55:54.910
power.
00:55:54.910 --> 00:55:58.340
So the idea is, suppose you
could solve this problem then
00:55:58.340 --> 00:56:00.551
I claim I can solve permanent.
00:56:00.551 --> 00:56:01.700
Anyone know how?
00:56:01.700 --> 00:56:03.200
AUDIENCE: Chinese
remainder theorem?
00:56:03.200 --> 00:56:15.927
PROFESSOR: Chinese
remainder theorem
00:56:15.927 --> 00:56:18.010
OK, maybe you don't know the
Chinese remainder theorem
00:56:18.010 --> 00:56:19.030
but you should know it.
00:56:19.030 --> 00:56:21.490
It's good.
00:56:21.490 --> 00:56:24.740
So the idea is
we're going to set r
00:56:24.740 --> 00:56:33.970
to be all primes up to how big?
00:56:33.970 --> 00:56:38.230
Or we can stop when the
product of all the primes
00:56:38.230 --> 00:56:44.060
that we've considered is bigger
than the potential permanent.
00:56:44.060 --> 00:56:49.190
And the permanent is at
most m to the n times
00:56:49.190 --> 00:56:52.350
n factorial, where
m is the largest
00:56:52.350 --> 00:56:53.640
absolute value in the matrix.
00:57:00.350 --> 00:57:03.480
That's one upper bound,
doesn't really matter.
00:57:03.480 --> 00:57:05.110
But the point is,
it is computable
00:57:05.110 --> 00:57:07.560
and if you take the
logarithm it's not so big.
00:57:07.560 --> 00:57:09.490
All these numbers
are at least two.
00:57:09.490 --> 00:57:11.550
All primes are at least two.
00:57:11.550 --> 00:57:15.900
So the number of primes
we'll have to consider
00:57:15.900 --> 00:57:19.480
is at most log base 2 of that.
00:57:19.480 --> 00:57:25.150
So this means the
number of primes,
00:57:25.150 --> 00:57:29.000
so that's log of m to
the n times n factorial,
00:57:29.000 --> 00:57:32.700
and this is roughly--
I'll put a total here.
00:57:32.700 --> 00:57:36.400
This is like n
log m-- that's exact--
00:57:36.400 --> 00:57:42.200
plus n log n that's approximate
but it's minus order n
00:57:42.200 --> 00:57:44.530
so it won't hurt us.
00:57:44.530 --> 00:57:46.400
So that's good,
that's a polynomial.
00:57:46.400 --> 00:57:50.040
Log m is a reasonable thing,
m was given to us as a number
00:57:50.040 --> 00:57:52.680
so log m is part of the input.
00:57:52.680 --> 00:57:57.860
So that's part of our input, and n is
obviously good-- it's the dimension
00:57:57.860 --> 00:57:58.640
of the matrix.
00:57:58.640 --> 00:58:01.274
So this is a polynomial
number of calls.
00:58:01.274 --> 00:58:02.940
We're going to compute
the permanent mod
00:58:02.940 --> 00:58:05.580
r for all these things and
our Chinese remainder theorem
00:58:05.580 --> 00:58:08.850
tells you if you
know a number which
00:58:08.850 --> 00:58:12.920
is the permanent modulo
all primes whose products--
00:58:12.920 --> 00:58:16.770
Well if you know a number
modulo a bunch of primes then
00:58:16.770 --> 00:58:21.550
you can figure out the number
modulo the product of the primes.
00:58:21.550 --> 00:58:24.290
And if we set the product to
be bigger than the number could
00:58:24.290 --> 00:58:27.730
be, then knowing it
modulo that is actually
00:58:27.730 --> 00:58:29.500
knowing the number itself.
00:58:29.500 --> 00:58:31.820
So then we can
reconstruct the permanent.
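The multicall idea just described can be sketched as follows -- this is an illustrative toy, not the lecture's reduction: treat `permanent_mod` as the hypothetical mod-r oracle, pick primes until their product exceeds the upper bound m^n * n!, and recombine with the Chinese remainder theorem. The 2x2 matrix is made up.

```python
from itertools import permutations
from math import prod, factorial

def permanent_mod(A, r):
    """Stand-in for the assumed mod-r oracle (here just brute force)."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n))
               for s in permutations(range(n))) % r

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise-coprime moduli."""
    M = prod(moduli)
    x = 0
    for a, m in zip(residues, moduli):
        Mi = M // m
        x += a * Mi * pow(Mi, -1, m)  # pow(Mi, -1, m) = modular inverse
    return x % M

A = [[2, 1], [3, 4]]           # true permanent = 2*4 + 1*3 = 11
m, n = 4, 2
bound = m**n * factorial(n)    # permanent is at most m^n * n!
primes, p = [], 2
while prod(primes, start=1) <= bound:   # stop once the product exceeds the bound
    primes.append(p)
    # next prime (Bertrand: one exists below 2p); fine for tiny numbers
    p = next(q for q in range(p + 1, 2 * p + 1)
             if all(q % d for d in range(2, q)))
vals = [permanent_mod(A, q) for q in primes]
print(crt(vals, primes))  # 11
```

Because the product of the chosen primes exceeds every value the permanent could take, the CRT residue determines the permanent exactly.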
00:58:35.540 --> 00:58:37.420
So here we're really
using multicall.
00:58:45.984 --> 00:58:47.400
I think that's the
first time I've
00:58:47.400 --> 00:58:50.150
used Chinese remainder theorem
in a class that I've taught.
00:58:53.101 --> 00:58:53.600
Good stuff.
00:58:57.837 --> 00:59:00.295
AUDIENCE: Do you know why it's
called the Chinese remainder
00:59:00.295 --> 00:59:00.860
theorem?
00:59:00.860 --> 00:59:02.830
PROFESSOR: I assume
it was proved
00:59:02.830 --> 00:59:06.040
by a Chinese mathematician
but I-- anyone know?
00:59:06.040 --> 00:59:07.291
AUDIENCE: Sun Tzu.
00:59:07.291 --> 00:59:08.460
PROFESSOR: Sun Tzu.
00:59:08.460 --> 00:59:12.645
OK I've heard of him.
00:59:12.645 --> 00:59:14.650
AUDIENCE: Not the guy
who did Art of War.
00:59:14.650 --> 00:59:15.400
PROFESSOR: Oh, OK.
00:59:15.400 --> 00:59:17.800
I don't know him then.
00:59:17.800 --> 00:59:21.100
Another Sun Tzu.
00:59:21.100 --> 00:59:23.970
Cool, yeah so permanent
mod r is hard.
00:59:23.970 --> 00:59:32.400
Next claim is that the zero
one permanent mod r is hard.
00:59:34.932 --> 00:59:36.390
The whole reason
we're going to mod
00:59:36.390 --> 00:59:38.473
r-- I mean it's interesting
to know that computing
00:59:38.473 --> 00:59:40.680
permanent mod r is just
as hard as permanent
00:59:40.680 --> 00:59:43.710
but it's kind of a standard thing.
00:59:43.710 --> 00:59:45.610
The reason I wanted
to go to mod r
00:59:45.610 --> 00:59:47.800
was to get rid of
these negative numbers.
00:59:47.800 --> 00:59:49.730
Negative numbers are annoying.
00:59:49.730 --> 00:59:54.960
Once I go to mod any r
negative numbers flip around
00:59:54.960 --> 00:59:58.480
and I can make everything
positive or non-negative.
00:59:58.480 --> 01:00:01.680
Now I can go back
to gadget land.
01:00:01.680 --> 01:00:04.220
So suppose I have an
edge of weight five,
01:00:04.220 --> 01:00:06.270
this will work for any
number greater than one.
01:00:06.270 --> 01:00:08.380
If it's one I don't touch it.
01:00:08.380 --> 01:00:11.680
If it's greater than one I'm
going to make this thing.
01:00:14.900 --> 01:00:19.360
And the claim is there
are exactly five ways
01:00:19.360 --> 01:00:20.390
to use this edge.
01:00:20.390 --> 01:00:22.470
Either you don't use it,
or you do use it.
01:00:22.470 --> 01:00:26.290
But if you do use it,
exactly five ways to use it.
01:00:26.290 --> 01:00:29.229
If you make this traversal,
there are five ways to do it.
01:00:29.229 --> 01:00:30.770
Remember this has
to be a cycle cover
01:00:30.770 --> 01:00:32.860
so we have to visit everything.
01:00:32.860 --> 01:00:36.450
If you come up here, can't go
that way gotta go straight.
01:00:36.450 --> 01:00:38.295
Can't go that way,
got to go straight.
01:00:38.295 --> 01:00:40.580
Hmm, OK.
01:00:40.580 --> 01:00:42.830
So in fact, the rest
of this cycle cover
01:00:42.830 --> 01:00:44.910
must be entirely
within this picture.
01:00:44.910 --> 01:00:47.210
And so for example, you
could say that's a cycle
01:00:47.210 --> 01:00:49.830
and then that's a
cycle, these are loops.
01:00:49.830 --> 01:00:51.950
And then that's a
cycle and then that's
01:00:51.950 --> 01:00:55.390
a cycle and then that's a
cycle and then that's a cycle.
01:00:55.390 --> 01:00:57.620
And whoops I'm left
with one vertex.
01:00:57.620 --> 01:01:00.930
So in fact I have to choose
exactly one of the loops,
01:01:00.930 --> 01:01:03.160
put that in and the
rest can be covered.
01:01:03.160 --> 01:01:06.202
For example, if I choose this
guy then this will be a cycle,
01:01:06.202 --> 01:01:07.910
this'll be a cycle,
this will be a cycle.
01:01:07.910 --> 01:01:10.490
Perfect parity this will be
a cycle, this will be a cycle,
01:01:10.490 --> 01:01:11.750
and this will be a cycle.
01:01:11.750 --> 01:01:12.730
Can't choose two loops.
01:01:12.730 --> 01:01:14.271
If I chose like
these two loops then
01:01:14.271 --> 01:01:15.960
this guy would be isolated.
01:01:15.960 --> 01:01:18.140
So you have to choose
exactly one loop
01:01:18.140 --> 01:01:19.710
and there are
exactly five of them.
01:01:19.710 --> 01:01:23.890
In general you make-- there'd be
k of them if you want weight k,
01:01:23.890 --> 01:01:25.550
and you can simulate.
01:01:25.550 --> 01:01:29.030
If you don't use this edge then
there's actually only one way
01:01:29.030 --> 01:01:29.610
to do it.
01:01:29.610 --> 01:01:33.050
You have to go all the
way around like this.
01:01:33.050 --> 01:01:33.762
So that's cool.
01:01:33.762 --> 01:01:35.970
If you don't use the edge,
didn't mess with anything.
01:01:35.970 --> 01:01:38.097
If you use the edge you
get a weight of five.
01:01:38.097 --> 01:01:40.180
Multiplicative, because
this is independent of all
01:01:40.180 --> 01:01:42.170
the other such choices.
01:01:42.170 --> 01:01:47.490
So you can simulate a
high-weight edge, modulo r.
01:01:47.490 --> 01:01:49.990
In general you can simulate
a non-negative weight edge
01:01:49.990 --> 01:01:51.460
like this.
01:01:51.460 --> 01:01:54.030
Just these are all weight one
obviously and absent edges
01:01:54.030 --> 01:01:55.310
are zero.
01:01:55.310 --> 01:01:57.940
So we can convert
in the modulo r
01:01:57.940 --> 01:02:01.840
setting there are no
negative numbers essentially.
01:02:01.840 --> 01:02:03.885
So we can just do that
simulation, everything
01:02:03.885 --> 01:02:05.820
will work mod r.
01:02:05.820 --> 01:02:07.110
So we use this gadget.
01:02:16.530 --> 01:02:19.300
Now finally we can prove
zero one permanent is hard.
01:02:24.500 --> 01:02:25.010
Why?
01:02:25.010 --> 01:02:27.093
Because if we could compute
the zero one permanent
01:02:27.093 --> 01:02:28.730
we can compute it mod r.
01:02:28.730 --> 01:02:32.290
Just compute the permanent,
take the answer mod r and you
01:02:32.290 --> 01:02:34.250
solve zero one permanent mod r.
01:02:34.250 --> 01:02:37.410
So this is what I might
call a one-call reduction.
01:02:37.410 --> 01:02:39.550
It's not our usual notion
of one-call reduction
01:02:39.550 --> 01:02:42.990
because we're doing
stuff at the end.
01:02:42.990 --> 01:02:45.520
But I'm going to use
one-call in this setting.
01:02:50.700 --> 01:02:53.630
Just call it-- in fact we don't
even have to change the input.
01:02:53.630 --> 01:02:55.600
Then we take the output
and compute it mod r.
01:02:55.600 --> 01:02:56.100
Question.
01:02:56.100 --> 01:02:59.920
AUDIENCE: So in
that the weight you
01:02:59.920 --> 01:03:01.348
can have really high weight.
01:03:01.348 --> 01:03:04.522
But then your
[INAUDIBLE] nodes so then
01:03:04.522 --> 01:03:08.387
when you use that wouldn't n?
01:03:08.387 --> 01:03:09.220
PROFESSOR: Oh I see.
01:03:09.220 --> 01:03:10.845
If the weights were
exponentially large
01:03:10.845 --> 01:03:12.350
this would be bad news.
01:03:12.350 --> 01:03:13.940
But they're not
exponentially large
01:03:13.940 --> 01:03:17.000
because we know they're
all at most three.
01:03:17.000 --> 01:03:20.570
They're between
negative one and three.
01:03:20.570 --> 01:03:26.760
Oh but negative
one-- Yes, OK good
01:03:26.760 --> 01:03:28.300
I need the prime number theorem.
01:03:28.300 --> 01:03:28.800
Thank you.
01:03:28.800 --> 01:03:32.220
So we have numbers
like negative one, and then
01:03:32.220 --> 01:03:34.150
we're mapping them
modulo some prime.
01:03:34.150 --> 01:03:38.090
Now negative one is actually
the prime minus one.
01:03:38.090 --> 01:03:43.440
So indeed, we do get weights
that are as large as r.
01:03:43.440 --> 01:03:47.870
So this is-- r here we're
assuming is encoded in unary.
01:03:51.610 --> 01:03:55.710
It turns out we can afford
to encode the primes in unary
01:03:55.710 --> 01:03:59.130
because the number
of primes that we use
01:03:59.130 --> 01:04:04.370
is this polynomial thing,
a weakly polynomial thing.
01:04:04.370 --> 01:04:09.670
And by the prime number theorem,
the prime with that index
01:04:09.670 --> 01:04:11.000
is roughly that big.
01:04:11.000 --> 01:04:12.950
It's a log factor larger.
01:04:12.950 --> 01:04:16.550
So the primes are going
to-- the actual value
01:04:16.550 --> 01:04:21.860
of the prime is going to
be something like n log
01:04:21.860 --> 01:04:32.270
m, times log base e of n log m.
01:04:32.270 --> 01:04:35.470
By the prime number theorem and
that's again weakly polynomial.
01:04:35.470 --> 01:04:37.520
And so we can assume
ours encoded in unary.
01:04:41.450 --> 01:04:45.643
Wow, we get to use the prime
number theorem that's fun.
01:04:45.643 --> 01:04:48.226
AUDIENCE: Is that the first time
you had to use prime numbers?
01:04:48.226 --> 01:04:50.511
PROFESSOR: Probably.
01:04:50.511 --> 01:04:52.760
Pretty sure I've used prime
number theorem if and only
01:04:52.760 --> 01:04:55.630
if I've used Chinese
remainder theorem.
01:04:55.630 --> 01:04:56.620
They go hand in hand.
01:04:59.430 --> 01:04:59.930
Clear?
01:04:59.930 --> 01:05:02.564
So this was kind
of a roundabout way,
01:05:02.564 --> 01:05:04.730
but we ended up getting rid
of the negative numbers.
01:05:04.730 --> 01:05:06.230
Luckily it still worked.
01:05:06.230 --> 01:05:08.210
And now zero
one permanent is hard,
01:05:08.210 --> 01:05:10.890
therefore counting the number
of perfect matchings in a given
01:05:10.890 --> 01:05:12.890
bipartite graph is also
going to be hard.
01:05:12.890 --> 01:05:15.464
These problems were equivalent,
like they're identical.
01:05:15.464 --> 01:05:17.380
So you could-- this was
a reduction in one way
01:05:17.380 --> 01:05:20.980
but that's equally
reducible both ways.
01:05:20.980 --> 01:05:23.430
So you can reduce this
one, reduce this one
01:05:23.430 --> 01:05:27.306
to perfect matchings
or vice versa.
01:05:27.306 --> 01:05:27.805
All right.
01:05:31.047 --> 01:05:32.630
Here's some more fun
things we can do.
01:05:50.000 --> 01:05:51.480
Guess I can add to my list here.
01:05:51.480 --> 01:05:55.600
But in particular we
have zero one permanent.
01:06:00.200 --> 01:06:04.141
So there are other ways you
might be interested in counting
01:06:04.141 --> 01:06:04.640
matchings.
01:06:12.010 --> 01:06:15.380
So, so far we know it's hard
to count perfect matchings.
01:06:15.380 --> 01:06:20.774
So in a balanced bipartite graph
we had n vertices on the left,
01:06:20.774 --> 01:06:21.940
and n vertices on the right.
01:06:21.940 --> 01:06:23.790
We're going to use that.
01:06:23.790 --> 01:06:26.790
Then the hope is that
there's matching a size n.
01:06:26.790 --> 01:06:29.129
But in general, you could
just count maybe those
01:06:29.129 --> 01:06:29.920
don't exist at all.
01:06:29.920 --> 01:06:32.060
Maybe we just want to
count maximal matchings.
01:06:32.060 --> 01:06:34.280
These are not maximum matchings.
01:06:34.280 --> 01:06:37.340
Maximal, meaning you can't
add any more edges to them.
01:06:37.340 --> 01:06:40.800
They're locally maximum.
01:06:40.800 --> 01:06:43.140
So that's going to
be a bigger number,
01:06:43.140 --> 01:06:45.880
it's going to be always
bigger than zero.
01:06:45.880 --> 01:06:47.850
Because the empty
matching, well you
01:06:47.850 --> 01:06:49.308
could start with
the empty matching
01:06:49.308 --> 01:06:52.000
and add things you'll get at
least one maximal matching.
01:06:52.000 --> 01:06:54.150
But this is also
sharp P complete,
01:06:54.150 --> 01:06:56.090
and you can prove
it using some tricks
01:06:56.090 --> 01:07:01.380
from bipartite perfect
matching, counting.
01:07:04.580 --> 01:07:06.070
Don't have a ton
of time so I'll go
01:07:06.070 --> 01:07:08.840
through this relatively quick.
01:07:08.840 --> 01:07:12.380
We're going to take each vertex
and convert it, basically
01:07:12.380 --> 01:07:14.920
make n copies of it.
01:07:14.920 --> 01:07:18.760
And when we have an edge
that's going to turn
01:07:18.760 --> 01:07:23.940
into a biclique between them.
01:07:23.940 --> 01:07:25.800
Why did I draw
such a big one?
01:07:29.970 --> 01:07:32.650
This was supposed to be n and n.
01:07:32.650 --> 01:07:34.220
OK so just blow up every edge.
01:07:34.220 --> 01:07:37.850
My intent is to make matchings
become more plentiful.
01:07:40.480 --> 01:07:47.552
So in general, if I used to
have a matching of size i,
01:07:47.552 --> 01:07:54.310
I end up converting it into n
factorial to the i-th power,
01:07:54.310 --> 01:08:04.460
distinct matchings
of size n times i.
01:08:04.460 --> 01:08:07.480
OK because if I
use an edge here,
01:08:07.480 --> 01:08:09.790
now I get to put in a
arbitrary perfect matching
01:08:09.790 --> 01:08:14.160
in this biclique and
they're n factorial of them.
01:08:14.160 --> 01:08:17.229
Cool, why did I do that?
01:08:17.229 --> 01:08:19.410
Because now I
suppose that I knew
01:08:19.410 --> 01:08:22.627
how to count the number
of maximal matchings.
01:08:22.627 --> 01:08:24.710
So they're going to be
matchings of various sizes.
01:08:24.710 --> 01:08:27.740
From that I want to extract the
number of perfect matchings.
01:08:27.740 --> 01:08:31.170
Sounds impossible, but
when you make things giant
01:08:31.170 --> 01:08:33.785
like this they kind of-- all
the matchings kind of separate.
01:08:33.785 --> 01:08:37.029
It's like biology or something.
01:08:37.029 --> 01:08:37.560
Chemistry.
01:08:37.560 --> 01:08:43.260
So let's see, it
shows how much I know.
01:08:43.260 --> 01:08:49.790
Number of maximal
matchings is going
01:08:49.790 --> 01:08:55.420
to be sum i equals
zero to n over 2.
01:08:55.420 --> 01:08:58.800
That's the possible
sizes of the matchings.
01:08:58.800 --> 01:09:00.370
Of the old matchings
I should say.
01:09:00.370 --> 01:09:05.670
Number of original
maximal matchings
01:09:05.670 --> 01:09:15.290
in the input graph of size i,
times n factorial to the i.
01:09:15.290 --> 01:09:16.830
OK this is just rewriting.
01:09:16.830 --> 01:09:19.189
I'm just this thing, but
I'm summing over all i.
01:09:19.189 --> 01:09:21.939
So I have however many matchings
I used to have a size i,
01:09:21.939 --> 01:09:24.010
but then I multiply them
by n factorial to the i,
01:09:24.010 --> 01:09:25.426
because these are
all independent.
01:09:29.970 --> 01:09:33.319
And this thing,
the original number
01:09:33.319 --> 01:09:36.770
of matchings in the worst case,
I mean the largest it could be
01:09:36.770 --> 01:09:38.529
is when everything
is a biclique.
01:09:38.529 --> 01:09:47.275
And then we have n over two
factorial maximal matchings
01:09:47.275 --> 01:09:47.775
originally.
01:09:50.569 --> 01:09:54.650
And this is smaller than that.
01:09:54.650 --> 01:09:58.920
And therefore we can pull
this apart as a number modulo
01:09:58.920 --> 01:10:00.650
n factorial.
01:10:00.650 --> 01:10:05.020
And each digit is the number of
maximal matchings of each size.
01:10:05.020 --> 01:10:07.630
And we look at
the last digit, we
01:10:07.630 --> 01:10:11.800
get the number of
maximal matchings of size n
01:10:11.800 --> 01:10:16.140
over two also known as the
number of perfect matchings.
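The digit-reading step can be sketched in a few lines -- a toy illustration, not a real reduction: since every coefficient is at most (n/2)! which is less than n!, the blown-up count behaves like a number written in base n!, and the counts of each size can be read off as digits. The counts below are hypothetical, not from an actual graph.

```python
from math import factorial

def digits_in_base(total, base, num_digits):
    """Read off c_i from total = sum c_i * base**i,
    valid when every c_i < base (so no digit carries)."""
    out = []
    for _ in range(num_digits):
        total, c = divmod(total, base)
        out.append(c)
    return out

# Hypothetical counts: c_i maximal matchings of size i, each < n!.
n = 4
counts = [1, 5, 3]            # sizes 0, 1, 2  (all < 4! = 24)
base = factorial(n)
blown_up = sum(c * base**i for i, c in enumerate(counts))
print(digits_in_base(blown_up, base, len(counts)))  # [1, 5, 3]
```

The last digit is the count for the largest size -- in the reduction, the number of perfect matchings.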
01:10:16.140 --> 01:10:16.640
Question?
01:10:16.640 --> 01:10:18.970
AUDIENCE: So that
second max is maximal?
01:10:18.970 --> 01:10:20.340
PROFESSOR: Yes this is maximal.
01:10:24.680 --> 01:10:26.360
Other questions?
01:10:26.360 --> 01:10:29.000
So again a kind of number
system trick
01:10:29.000 --> 01:10:33.270
by blowing up-- well
by making each of these
01:10:33.270 --> 01:10:36.130
get multiplied by a
different huge number
01:10:36.130 --> 01:10:39.070
we can pull them apart,
separate them-- In logarithm
01:10:39.070 --> 01:10:42.595
these numbers are not huge
so it is actually plausible
01:10:42.595 --> 01:10:43.850
you could compute this thing.
01:10:46.699 --> 01:10:48.490
And a tree graph you
definitely could do it
01:10:48.490 --> 01:10:51.950
by various multiplications
at each level.
01:10:51.950 --> 01:10:53.300
All right.
01:10:53.300 --> 01:10:58.475
So that's nice, let's do
another one of that flavor.
01:11:01.163 --> 01:11:02.704
AUDIENCE: So I was
actually wondering
01:11:02.704 --> 01:11:04.745
can you find all those
primes in polynomial time?
01:11:04.745 --> 01:11:07.820
PROFESSOR: You could use
the Sieve of Eratosthenes.
01:11:07.820 --> 01:11:09.836
AUDIENCE: Is that
like going to be--
01:11:09.836 --> 01:11:12.460
PROFESSOR: We can handle pseudo
poly here because as we argued,
01:11:12.460 --> 01:11:14.230
the primes are not that big.
01:11:14.230 --> 01:11:16.650
So we could just
spend-- we could just
01:11:16.650 --> 01:11:18.650
do Sieve of Eratosthenes,
we can afford
01:11:18.650 --> 01:11:21.620
to spend a linear time
on the largest prime.
01:11:21.620 --> 01:11:22.725
Or quadratic in that even.
01:11:22.725 --> 01:11:24.100
You could do the
naive algorithm.
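A minimal sketch of the sieve just mentioned -- pseudo-polynomial is fine here, since the primes we need are only polynomially large:

```python
def primes_up_to(limit):
    """Sieve of Eratosthenes: mark off multiples of each prime."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(limit + 1) if sieve[p]]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Running this up to the bound from the prime number theorem takes time linear-ish in the largest prime, which is affordable here.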
01:11:29.510 --> 01:11:33.280
We're not doing real hardcore
number theory here.
01:11:33.280 --> 01:11:37.250
I could tell where you
have to be more careful.
01:11:37.250 --> 01:11:40.190
All right.
01:11:40.190 --> 01:11:43.492
So here's another question.
01:11:43.492 --> 01:11:45.700
What if I just want to count
the number of matchings?
01:11:45.700 --> 01:11:46.691
No condition?
01:11:50.930 --> 01:11:54.990
This is going to use
almost the reverse trick.
01:11:54.990 --> 01:11:57.800
So no maximal constraint
just all matchings,
01:11:57.800 --> 01:11:59.050
including the empty matching.
01:11:59.050 --> 01:12:01.100
Count them all.
01:12:01.100 --> 01:12:04.970
We'll see why in a moment,
this is quite natural.
01:12:04.970 --> 01:12:09.660
So I claim I can do a multicall
reduction to bipartite
01:12:09.660 --> 01:12:11.780
number of maximal matchings.
01:12:11.780 --> 01:12:15.110
And here are my calls
for every graph g,
01:12:15.110 --> 01:12:18.190
I'm going to make
a graph g prime.
01:12:18.190 --> 01:12:21.020
Where if I have a
vertex I'm going to add
01:12:21.020 --> 01:12:25.130
k extra nodes connected
just to that vertex.
01:12:25.130 --> 01:12:28.022
So these are leaves
and so if this vertex
01:12:28.022 --> 01:12:30.230
was unmatched over here,
then I have k different ways
01:12:30.230 --> 01:12:32.580
to match it over here.
01:12:32.580 --> 01:12:36.840
And my intent is to measure this
number of matchings of size n
01:12:36.840 --> 01:12:39.080
over two minus k.
01:12:39.080 --> 01:12:40.060
That would be the hope.
01:12:44.500 --> 01:12:53.110
So if I have M r matchings
of size r over here,
01:12:53.110 --> 01:12:58.820
these will get mapped
to M r times k plus one
01:12:58.820 --> 01:13:09.110
to the r matchings--
no, not of that size.
01:13:09.110 --> 01:13:10.040
Matchings over here.
01:13:16.654 --> 01:13:18.820
Because for each one I can
either leave it unmatched
01:13:18.820 --> 01:13:20.620
or I could add this edge,
or I could add this edge,
01:13:20.620 --> 01:13:22.078
or I could add this
edge and that's
01:13:22.078 --> 01:13:24.060
true for every unmatched vertex.
01:13:24.060 --> 01:13:28.062
Sorry, this is not size r, this
is size n over two minus r.
01:13:28.062 --> 01:13:30.660
R is going to be the
number of leftover guys.
01:13:30.660 --> 01:13:33.610
Then they kind of
pops out over here.
01:13:33.610 --> 01:13:36.010
So I'm going to run
this algorithm-- I'm
01:13:36.010 --> 01:13:43.040
going to compute the number of
matchings in Gk for all k up
01:13:43.040 --> 01:13:49.250
to like n over two plus one.
01:13:49.250 --> 01:14:00.930
So I'm going to do
this, and what I end up
01:14:00.930 --> 01:14:07.390
computing is the number of
matchings in each Gk, which is
01:14:07.390 --> 01:14:10.022
going to be the sum over r,
01:14:10.022 --> 01:14:15.885
r from zero to n over two,
of M sub r-- the original number
01:14:15.885 --> 01:14:19.830
of matchings of size
n over two minus r--
01:14:19.830 --> 01:14:22.510
times k plus one to the r.
01:14:22.510 --> 01:14:28.170
Now this is a polynomial
in k plus one.
01:14:28.170 --> 01:14:33.000
And if I evaluate a polynomial
at its degree plus one
01:14:33.000 --> 01:14:35.500
different points I can
reconstruct the coefficients
01:14:35.500 --> 01:14:36.720
of the polynomial.
01:14:36.720 --> 01:14:39.160
And therefore get all of
these and in particular
01:14:39.160 --> 01:14:44.980
get m zero, which is the
number of perfect matchings.
01:14:44.980 --> 01:14:45.980
Ta da.
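The interpolation trick just described can be sketched in code. This is an illustrative script, not from the lecture: the helper names are my own, and for simplicity it attaches k pendant leaves to every vertex, so the matching count of G_k is a polynomial of degree up to n in x = k + 1 and we evaluate at n + 1 points (the lecture's construction gets away with n/2 + 1). The constant term of that polynomial, read off by Lagrange extrapolation to x = 0, is the number of perfect matchings.

```python
from fractions import Fraction

def count_matchings(edges):
    """Count all matchings (including the empty one) by branching on the
    first edge: either skip it, or take it and drop incident edges."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    take = count_matchings([e for e in rest if u not in e and v not in e])
    skip = count_matchings(rest)
    return take + skip

def add_leaves(edges, vertices, k):
    """G_k: attach k pendant leaves to every vertex (leaf names made up)."""
    return edges + [(v, ("leaf", v, i)) for v in vertices for i in range(k)]

def count_perfect_matchings(edges, vertices):
    """A matching of G leaving u vertices unmatched yields (k+1)^u matchings
    of G_k, so N(k) is a polynomial of degree <= n in x = k+1 whose constant
    term counts perfect matchings.  Evaluate at n+1 points, interpolate."""
    n = len(vertices)
    xs = list(range(1, n + 2))  # x = k+1 for k = 0..n
    ys = [count_matchings(add_leaves(edges, vertices, x - 1)) for x in xs]
    c0 = Fraction(0)  # Lagrange-extrapolate N(x) to x = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = Fraction(yi)
        for j, xj in enumerate(xs):
            if j != i:
                term *= Fraction(-xj, xi - xj)
        c0 += term
    return int(c0)

# 4-cycle: matchings are the empty one, 4 single edges, 2 perfect ones.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(count_matchings(cycle))                        # 7
print(count_perfect_matchings(cycle, [0, 1, 2, 3]))  # 2
```

For the 4-cycle, N(x) = x^4 + 4x^2 + 2, and N(0) = 2 recovers the two perfect matchings, matching the brute-force count.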
01:14:58.130 --> 01:14:58.990
Having too much fun.
01:15:06.450 --> 01:15:07.190
Enough matchings.
01:15:10.700 --> 01:15:13.720
We have all these versions
of SAT, which are hard.
01:15:13.720 --> 01:15:17.230
But in fact there are
even funnier versions
01:15:17.230 --> 01:15:19.360
like positive 2SAT.
01:15:22.850 --> 01:15:25.600
Positive 2SAT is
really easy to decide,
01:15:25.600 --> 01:15:29.510
but it turns out if I add a
sharp it is sharp P complete.
01:15:29.510 --> 01:15:31.402
This is sort of like
max 2SAT, but here we
01:15:31.402 --> 01:15:32.860
have to satisfy
all the constraints,
01:15:32.860 --> 01:15:34.985
and you want to count how
many different ways there
01:15:34.985 --> 01:15:37.090
are to solve it.
01:15:37.090 --> 01:15:39.350
This is the same thing
as vertex cover remember.
01:15:39.350 --> 01:15:41.550
Vertex cover's the
same as positive 2SAT.
01:15:41.550 --> 01:15:43.730
So also sharp
vertex cover's hard
01:15:43.730 --> 01:15:46.570
and therefore also
sharp clique is hard.
01:15:46.570 --> 01:15:49.300
So this is actually a
parsimonious reduction
01:15:49.300 --> 01:15:51.275
from bipartite matching.
01:15:51.275 --> 01:15:53.390
That's why I wanted
to get there.
01:15:53.390 --> 01:16:00.710
So for every edge we're going
to make a variable, x of e,
01:16:00.710 --> 01:16:05.500
which is true if the edge
is not in the matching.
01:16:05.500 --> 01:16:11.230
And so then whenever I have
two incident edges e and f,
01:16:11.230 --> 01:16:14.440
we're going to have a clause
which is either I don't use e,
01:16:14.440 --> 01:16:16.610
or I don't use f.
01:16:16.610 --> 01:16:18.424
That's 2SAT, done.
01:16:23.570 --> 01:16:24.220
Satisfying?
01:16:24.220 --> 01:16:25.050
It's parsimonious.
01:16:25.050 --> 01:16:27.330
So satisfying assignments
will be one to one
01:16:27.330 --> 01:16:29.730
with bipartite matchings.
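That parsimonious reduction is small enough to sketch directly. This is an illustrative script, not from the lecture, with function names of my own choosing: one variable per edge (true means the edge is left out), and a clause (x_e or x_f) for every pair of edges sharing a vertex. Brute-force counting on a small graph confirms the one-to-one correspondence with matchings.

```python
from itertools import combinations, product

def matching_to_2sat(edges):
    """Parsimonious reduction sketch: variable x_e per edge (True = edge
    excluded), clause (x_e OR x_f) for every incident pair e, f."""
    return [(e, f) for e, f in combinations(edges, 2) if set(e) & set(f)]

def count_satisfying(edges, clauses):
    """Brute-force sharp positive 2SAT: count satisfying assignments."""
    count = 0
    for bits in product([False, True], repeat=len(edges)):
        value = dict(zip(edges, bits))
        if all(value[e] or value[f] for e, f in clauses):
            count += 1
    return count

def count_all_matchings(edges):
    """Brute-force matching count (including the empty matching)."""
    total = 0
    for r in range(len(edges) + 1):
        for sub in combinations(edges, r):
            ends = [v for e in sub for v in e]
            if len(ends) == len(set(ends)):
                total += 1
    return total

cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
clauses = matching_to_2sat(cycle)
print(count_satisfying(cycle, clauses), count_all_matchings(cycle))  # 7 7
```

The counts agree because an assignment satisfies every clause exactly when the excluded-edge complement contains no two incident edges, i.e., is a matching.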
01:16:29.730 --> 01:16:31.660
So this is why you should care.
01:16:31.660 --> 01:16:35.600
And if we instead reduce
from-- did I erase it?
01:16:35.600 --> 01:16:40.450
Bipartite maximal
matchings up here,
01:16:40.450 --> 01:16:43.220
then I get that
counting the number
01:16:43.220 --> 01:16:51.530
of maximally true assignments
that satisfy the 2SAT clauses
01:16:51.530 --> 01:16:54.430
is also hard.
01:16:54.430 --> 01:16:58.120
For what it's worth, a
different way of counting that.
01:16:58.120 --> 01:16:58.820
(BREAK IN VIDEO)
01:16:58.820 --> 01:17:01.710
Maximal means you can't set
any more variables true.
01:17:01.710 --> 01:17:02.210
Yeah.
01:17:02.210 --> 01:17:05.978
AUDIENCE: Since the edges in
your positive 2SAT reduction
01:17:05.978 --> 01:17:13.154
are the variables that are
true when the edge is not
01:17:13.154 --> 01:17:14.122
in a matching,
01:17:14.122 --> 01:17:15.962
I think it's maximally false.
01:17:15.962 --> 01:17:16.920
And not maximally true.
01:17:16.920 --> 01:17:19.740
Also for maximally true, you
could just set everything true.
01:17:19.740 --> 01:17:20.810
PROFESSOR: Yes.
01:17:20.810 --> 01:17:22.390
Right, right, right, sorry.
01:17:22.390 --> 01:17:23.223
I see, you're right.
01:17:23.223 --> 01:17:25.450
So it's minimal solutions
for a 2SAT, thank you.
01:17:28.700 --> 01:17:40.650
OK, in fact, it's known that
three-regular bipartite planar
01:17:40.650 --> 01:17:41.720
sharp vertex cover is hard.
01:17:46.262 --> 01:17:47.220
I won't prove this one.
01:17:47.220 --> 01:17:50.230
But in particular you
can make this planar,
01:17:50.230 --> 01:17:52.420
although it doesn't look
like we have positive here.
01:17:52.420 --> 01:17:54.461
And you can also make it
bipartite, which doesn't
01:17:54.461 --> 01:17:55.810
mean a lot in 2SAT land.
01:17:55.810 --> 01:17:59.830
But it means-- makes a lot of
sense in vertex cover land.
01:17:59.830 --> 01:18:01.620
In my zero remaining
minutes I want
01:18:01.620 --> 01:18:03.030
to mention one more concept.
01:18:05.736 --> 01:18:09.560
To go back to the
original question
01:18:09.560 --> 01:18:13.400
of uniqueness for puzzles.
01:18:13.400 --> 01:18:16.290
And you'll see this in the
literature if you look at it.
01:18:16.290 --> 01:18:18.730
A lot of people won't
even mention sharp P,
01:18:18.730 --> 01:18:21.690
but they'll mention ASP.
01:18:21.690 --> 01:18:25.880
ASP is slightly weaker but
for most intents and purposes
01:18:25.880 --> 01:18:28.210
essentially identical
notion to sharp P,
01:18:28.210 --> 01:18:30.300
from a hardness perspective.
01:18:30.300 --> 01:18:33.270
In general, if I
have a problem--
01:18:33.270 --> 01:18:35.500
a search problem
like we started with.
01:18:35.500 --> 01:18:38.650
The ASP version of
that search problem is:
01:18:38.650 --> 01:18:43.180
I give you a solution and I want
to know, is there another one?
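The definition just given can be illustrated in a few lines. This is a toy script of my own, not from the lecture: an ASP oracle for positive 2SAT done by brute force, i.e., given a satisfying assignment, decide whether a different one exists. It only demonstrates what the ASP question asks, not an efficient algorithm.

```python
from itertools import product

def another_solution(variables, clauses, given):
    """ASP for positive 2SAT, by brute force: given a satisfying
    assignment, decide whether a *different* satisfying one exists."""
    for bits in product([False, True], repeat=len(variables)):
        cand = dict(zip(variables, bits))
        if cand != given and all(cand[a] or cand[b] for a, b in clauses):
            return True
    return False

# Clauses (x or y), (y or z); all-true is one satisfying assignment,
# and {x: False, y: True, z: False} is another, so the answer is True.
vs = ["x", "y", "z"]
cl = [("x", "y"), ("y", "z")]
print(another_solution(vs, cl, {"x": True, "y": True, "z": True}))  # True
```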
01:18:43.180 --> 01:18:44.930
This is a little
different from everything
01:18:44.930 --> 01:18:48.085
we've seen because I actually
give you a starting point.
01:18:48.085 --> 01:18:49.960
There's some problems
where this helps a lot.
01:18:49.960 --> 01:18:54.110
For example, if the solution
is never unique, like coloring:
01:18:54.110 --> 01:18:55.890
given a k
coloring, I can give you
01:18:55.890 --> 01:18:58.310
another one-- I'll just
swap colors one and two.
01:18:58.310 --> 01:19:01.890
Or more subtly, if I give you
a Hamiltonian cycle in a three
01:19:01.890 --> 01:19:02.610
regular graph.
01:19:02.610 --> 01:19:06.860
There's always a way to switch
it and make another one.
01:19:06.860 --> 01:19:09.940
So ASP is sometimes easy.
01:19:09.940 --> 01:19:13.630
But you would basically
use the same reductions.
01:19:13.630 --> 01:19:16.700
If you can find a
parsimonious reduction that
01:19:16.700 --> 01:19:19.310
also has the property-- so in
a parsimonious reduction
01:19:19.310 --> 01:19:22.490
there is a one to one
bijection between solutions
01:19:22.490 --> 01:19:23.900
to x and solutions to x prime.
01:19:23.900 --> 01:19:27.382
If you could also compute
that bijection, if you can
01:19:27.382 --> 01:19:29.090
convert a solution
from one to the other.
01:19:29.090 --> 01:19:31.080
Which we could do in every
single proof we've seen
01:19:31.080 --> 01:19:32.496
and even the ones
we haven't seen.
01:19:32.496 --> 01:19:35.000
You can always compute that
bijection between solutions
01:19:35.000 --> 01:19:38.110
of A and solutions of B.
And so what that means is,
01:19:38.110 --> 01:19:46.550
if I can solve ASP for B, consider
the ASP version of A
01:19:46.550 --> 01:19:47.890
where I give you a solution.
01:19:47.890 --> 01:19:50.610
If I can convert that solution
into a solution for my B
01:19:50.610 --> 01:19:52.800
problem, and then with
the parsimonious reduction
01:19:52.800 --> 01:19:58.830
I also get a B instance, then
I can solve the ASP problem
01:19:58.830 --> 01:20:00.560
for B. That's a
decision problem.
01:20:00.560 --> 01:20:02.000
Is there another solution?
01:20:02.000 --> 01:20:04.480
If yes, then A will also
have another solution,
01:20:04.480 --> 01:20:07.420
because it's parsimonious
the numbers will be the same.
01:20:07.420 --> 01:20:11.050
We need only a weaker version
of parsimony: we just need "one"
01:20:11.050 --> 01:20:13.011
and "more than one"
to be kept the same.
01:20:13.011 --> 01:20:15.260
If this one was unique then
this one should be unique.
01:20:15.260 --> 01:20:18.040
If this one wasn't unique then
this one should be not unique.
01:20:18.040 --> 01:20:20.180
If I can solve B
then I can solve A.
01:20:20.180 --> 01:20:24.070
Or if I can solve ASP B
then I can solve ASP A.
01:20:24.070 --> 01:20:27.750
A lot of these proofs are
done-- a so-called ASP reduction
01:20:27.750 --> 01:20:30.660
is usually done by a
parsimonious reduction.
01:20:30.660 --> 01:20:33.740
And so in particular,
this concept was introduced
01:20:33.740 --> 01:20:36.410
for puzzles like Slitherlink--
that, I think,
01:20:36.410 --> 01:20:40.160
was one of the early
ASP completeness proofs.
01:20:40.160 --> 01:20:42.310
And they were interested
in this because the idea
01:20:42.310 --> 01:20:43.935
is if you're designing
a puzzle usually
01:20:43.935 --> 01:20:46.390
you design a puzzle
with a solution in mind.
01:20:46.390 --> 01:20:48.780
But then you need to make sure
there's no other solution.
01:20:48.780 --> 01:20:52.380
So you have exactly
this set up and if you
01:20:52.380 --> 01:20:55.020
want a formal definition of ASP
completeness it's in the notes.
01:20:55.020 --> 01:21:00.430
But it's not that difficult.
01:21:00.430 --> 01:21:03.602
The key point here is if you're
going to prove sharp P or ASP
01:21:03.602 --> 01:21:06.060
completeness you might as well
prove the other one as well.
01:21:06.060 --> 01:21:09.400
Get twice the results for
basically the same reduction.
01:21:09.400 --> 01:21:11.460
All the versions of
3SAT and 1-in-3SAT,
01:21:11.460 --> 01:21:14.790
and all those things, those
were all parsimonious.
01:21:14.790 --> 01:21:17.730
And all of those are
ASP complete as well.
01:21:17.730 --> 01:21:19.330
But once you get
into C-monious reductions you're
01:21:19.330 --> 01:21:23.470
no longer preserving one
versus more than one.
01:21:23.470 --> 01:21:26.252
So from a counting perspective--
in multicall reductions,
01:21:26.252 --> 01:21:28.460
where you can divide
that factor off--
01:21:28.460 --> 01:21:29.320
that's fine.
01:21:29.320 --> 01:21:32.490
But from an ASP completeness
perspective you're not fine.
01:21:32.490 --> 01:21:36.010
So in fact all the
stuff that we just
01:21:36.010 --> 01:21:40.870
did with the permanent,
matchings, and sharp 2SAT--
01:21:40.870 --> 01:21:41.880
they're sharp P complete.
01:21:41.880 --> 01:21:43.625
They may not be ASP complete.
01:21:46.230 --> 01:21:48.080
But you'll notice
the puzzles that we
01:21:48.080 --> 01:21:54.940
reduced-- like, whatever,
Shakashaka is one of them.
01:21:54.940 --> 01:21:59.210
That was from a version of
3SAT, and the other one we had
01:21:59.210 --> 01:22:00.710
was from a version
of Hamiltonicity.
01:22:03.500 --> 01:22:05.130
This guy.
01:22:05.130 --> 01:22:07.090
These are all-- which
was also from 3SAT.
01:22:07.090 --> 01:22:10.250
So these are all
ASP complete also.
01:22:10.250 --> 01:22:12.850
But if you use the
weirder things like 2SAT
01:22:12.850 --> 01:22:14.170
you don't get that.
01:22:14.170 --> 01:22:17.070
So usually in one proof you can
get NP hardness, ASP hardness,
01:22:17.070 --> 01:22:19.127
and sharp P hardness.
01:22:19.127 --> 01:22:20.960
But if you're going to
go from these weirder
01:22:20.960 --> 01:22:22.777
problems with C-monious
reductions then
01:22:22.777 --> 01:22:24.860
you only get sharp P
hardness, because they're not
01:22:24.860 --> 01:22:26.900
even NP-hard.
01:22:26.900 --> 01:22:28.230
Cool, all right.
01:22:28.230 --> 01:22:30.000
Thank you.