Jukka Suomela · June 2020

If we want to show that some computational problem X is hard to solve, classic complexity theory and the theory of distributed computing seem to take very different routes, for example:

- **Classic complexity theory** (e.g. solving X with Turing machines): Introduce the complexity classes NP and NP-complete. Prove that X is NP-complete. Then we know that X is not solvable in polynomial time, assuming P ≠ NP.
- **Distributed complexity theory** (e.g. solving X in the LOCAL model): Directly prove that X requires Θ(n) rounds using information-theoretic arguments.

In particular, the classic route that we follow in the context of Turing machines relies heavily on complexity classes and assumptions related to them, while the route that we follow in the theory of distributed computing usually makes no reference to any complexity classes, as we can directly prove unconditional lower bounds.

Inspired by this, Stefan Mengel asked me an excellent question: **if we can prove unconditional lower bounds, what is the point of introducing complexity classes?**

I think complexity classes do play a key role (and an increasingly important role) in the theory of distributed computing, but their **role is different** from their role in classic complexity theory. Let me try to explain this a bit better with the help of some examples.

Let us first look at classes that, like P, try to capture *resource-bounded computation in the “usual” models of computing*. An example might be “the class of LCL problems solvable in $O(\log^* n)$ rounds in the deterministic LOCAL model”.

Here one difference is that in distributed computing, **these kinds of classes are often “discovered” instead of “defined”**. For example, we first prove that every LCL problem is either solvable in $O(\log^* n)$ rounds or requires $\Omega(\log n)$ rounds, with nothing in between. This way we learn that there is a big gap in the complexity spectrum, and it follows that it is meaningful to make a distinction between “problems solvable in $O(\log^* n)$ rounds” and “problems requiring $\Omega(\log n)$ rounds”.

In some philosophical sense, one could argue that e.g. class P is a **“man-made construction”** that only exists because someone defined it (and here choosing the right definition is extremely important, so that we have classes that “behave nicely” and capture lots of interesting problems). However, the class of LCL problems solvable in $O(\log^* n)$ rounds in the LOCAL model in a sense exists as a **“mathematical fact”** — once you fix the LOCAL model and the notion of LCL problems (more about this soon!), then the division into complexity classes $O(\log^* n)$ vs. $\Omega(\log n)$ follows regardless of whether we happen to like those classes.

Let us then look at classes that, like NP, refer to *resource-bounded computation in “unusual” models of computing*, e.g. nondeterministic models of computing.

One could argue that the class of LCL problems is an example of such a complexity class in the distributed world. It is true that one could define LCL problems in a purely “syntactic” manner: an LCL problem is a finite set of labeled constant-radius neighborhoods. But one can also define the class LCL in a “semantic” manner: LCL problems are the problems that can be solved in O(1) rounds in the *nondeterministic* LOCAL model.
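To make the “syntactic” definition above concrete, here is a small sketch (my own illustration, not from the post) that encodes proper 3-coloring, a classic LCL problem, as a finite set of allowed labeled radius-1 neighborhoods, together with the purely local check that a candidate labeling is valid. All names here are illustrative, not standard notation.

```python
# A sketch of the "syntactic" definition of an LCL problem, using proper
# 3-coloring on graphs of maximum degree 3 as an example.
from itertools import combinations_with_replacement

COLORS = (0, 1, 2)
MAX_DEG = 3  # LCL problems are defined on bounded-degree graphs

# The LCL problem itself: a finite set of allowed labeled radius-1 views,
# each view being (center's label, sorted tuple of neighbors' labels).
ALLOWED_VIEWS = {
    (c, neigh)
    for c in COLORS
    for d in range(MAX_DEG + 1)
    for neigh in combinations_with_replacement(COLORS, d)
    if all(n != c for n in neigh)  # proper coloring: no neighbor shares c
}

def check_labeling(adj, labels):
    """Every node inspects only its own radius-1 neighborhood; verifying a
    candidate solution is a purely local, constant-radius check."""
    return all(
        (labels[v], tuple(sorted(labels[u] for u in adj[v]))) in ALLOWED_VIEWS
        for v in adj
    )

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(check_labeling(triangle, {0: 0, 1: 1, 2: 2}))  # True: proper coloring
print(check_labeling(triangle, {0: 0, 1: 0, 2: 1}))  # False: nodes 0, 1 clash
```

The finiteness of `ALLOWED_VIEWS` is exactly what the bounded degree and the constant label set buy us, and the verifier only ever looks a constant distance away, which is the “semantic” characterization in disguise.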

I would say that the class LCL is a “man-made construction”, similar to e.g. class NP. It is something that “behaves well”, contains lots of interesting problems that we’d like to understand, etc., but we cannot derive it as a mathematical fact that naturally arises as soon as we define e.g. the LOCAL model.

Here perhaps the main difference between the two worlds is in **how we use these definitions**.

In the classic setting, we would like to understand, e.g., whether class NP is the same class as P. And while this is still a huge open problem, one can at least use these classes to introduce assumptions such as P ≠ NP, introduce a notion of completeness (more about that soon), and prove conditional hardness results.

In the distributed world, we already trivially know that some LCL problems are hard and some are easy, so there isn’t that much need for conditional hardness results, and the idea of defining “LCL-complete” problems does not seem to make much sense. Nevertheless, we seek to prove something about the complexity of LCL problems in general. The gap result mentioned above is one example of such a theorem. Here it was crucial that we had a very well-chosen class LCL as a starting point: make it a bit too broad and you can’t prove anything useful about all LCLs; make it a bit too narrow and the result becomes uninteresting, as the class doesn’t contain the problems people care about. Choose it just right, and the class of problems introduced in the early 1990s can still play a key role in our understanding of distributed computational complexity in 2020.

In classic complexity theory, the notion of *completeness* plays a key role. We can either start with a class defined in terms of resource-bounded computation, such as NP, or we can start with a class defined using a canonical problem, such as graph isomorphism. Then we fix a notion of reductions, and this way define classes like NP-complete and GI-complete. But these are usually used in the context of conditional hardness results, so why would we need anything similar for the LOCAL model if we can prove unconditional results?

One reason is simply this: even if it is possible to prove tight lower bounds for many problems in the LOCAL model, **it doesn’t mean that we can do it easily** (or at all) for a specific problem of interest. Indeed, classifying the distributed complexity of a given LCL problem in the LOCAL model is undecidable, so there is an excuse for us when we fail to prove a tight lower bound…

So here something similar to the case of the graph isomorphism problem often happens: we encounter an elegant, elementary-looking problem X for which we can’t prove tight bounds, and we gradually learn that there seem to be many other problems that we can connect to X through local reductions. Eventually we have a class of “X-complete” problems that contains lots of problems that we would very much like to understand.

I think such classes are a **very important driving force** that guides research effort in the right direction. However, they also tend to be **relatively short-lived**. Often these classes are not even formally defined; they are primarily used in coffee-room discussions as a source of inspiration, but sometimes they also appear in published articles.

The class of P-SLOCAL-complete problems is perhaps a good illustration of such a class. It played a key role in the recent rapid development of the theory of distributed computing, and there were even some papers with results of the form “problem X is P-SLOCAL-complete”, very much like what we see in the context of NP-completeness, but then it was discovered that P-SLOCAL = P-LOCAL. The class of P-SLOCAL-complete problems served a very useful role, but we do not need it any more. SLOCAL as a model of computing, however, is likely to have a permanent place as a key concept in our field.

I would like to emphasize that what I wrote above is primarily written with models similar to the LOCAL model in mind. If we switch to something like the **congested clique** model, then it is close enough to classic computation (especially circuit complexity) that unconditional lower bounds are something we don’t even expect to have. Therefore, work on computational complexity there looks much more similar to what you see on the classical side, and complexity classes and the notion of completeness play a similar role.