# Space hierarchy theorem

In computational complexity theory, the space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space n log n than in space n. The somewhat weaker analogous theorems for time are the time hierarchy theorems.

The foundation for the hierarchy theorems lies in the intuition that with either more time or more space comes the ability to compute more functions (or decide more languages). The hierarchy theorems are used to demonstrate that the time and space complexity classes form a hierarchy where classes with tighter bounds contain fewer languages than those with more relaxed bounds. Here we define and prove the space hierarchy theorem.

The space hierarchy theorems rely on the concept of space-constructible functions. The deterministic and nondeterministic space hierarchy theorems state that for all space-constructible functions f(n),

${\displaystyle {\mathsf {SPACE}}\left(o(f(n))\right)\subsetneq {\mathsf {SPACE}}(f(n))}$ ,

where SPACE stands for either DSPACE or NSPACE, and o refers to the little o notation.

## Statement

Formally, a function ${\displaystyle f:\mathbb {N} \longrightarrow \mathbb {N} }$ is space-constructible if ${\displaystyle f(n)\geq \log ~n}$ and there exists a Turing machine which computes the function ${\displaystyle f(n)}$ in space ${\displaystyle O(f(n))}$ when starting with an input ${\displaystyle 1^{n}}$ , where ${\displaystyle 1^{n}}$ represents a string of n consecutive 1s. Most of the common functions that we work with are space-constructible, including polynomials, exponents, and logarithms.
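As a concrete illustration (a Python sketch, not part of the original construction), the function ${\displaystyle \lfloor \log _{2}n\rfloor +1}$ is space-constructible because a machine can scan the input ${\displaystyle 1^{n}}$ once while maintaining a binary counter on its worktape; the counter occupies only ${\displaystyle O(\log n)}$ cells:

```python
def binary_counter_space(input_str):
    """Sketch of why floor(log2 n) + 1 is space-constructible:
    scan the input 1^n once, maintaining a binary counter on a
    'worktape' list. The counter never exceeds O(log n) cells."""
    counter = [0]  # worktape: least-significant bit first
    for ch in input_str:
        assert ch == "1"
        # increment the binary counter in place
        i = 0
        while i < len(counter) and counter[i] == 1:
            counter[i] = 0
            i += 1
        if i == len(counter):
            counter.append(1)
        else:
            counter[i] = 1
    return counter

# counting 1^10 needs only 4 worktape cells (10 = 1010 in binary)
bits = binary_counter_space("1" * 10)
```

The input is read but never written, so only the counter cells count toward the space usage, matching the definition above.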

For every space-constructible function ${\displaystyle f:\mathbb {N} \longrightarrow \mathbb {N} }$ , there exists a language L that is decidable in space ${\displaystyle O(f(n))}$ but not in space ${\displaystyle o(f(n))}$ .

## Proof

The goal here is to define a language that can be decided in space ${\displaystyle O(f(n))}$ but not space ${\displaystyle o(f(n))}$ . Here we define the language L:

${\displaystyle L=\{~(\langle M\rangle ,10^{k}):M{\mbox{ does not accept }}(\langle M\rangle ,10^{k}){\mbox{ using space }}\leq f(|\langle M\rangle ,10^{k}|)~\}}$

Now, for any machine M that decides a language in space ${\displaystyle o(f(n))}$ , L will differ from the language of M on at least one input. Namely, for some large enough k, M will use space ${\displaystyle \leq f(|\langle M\rangle ,10^{k}|)}$ on ${\displaystyle (\langle M\rangle ,10^{k})}$ , so by the definition of L the two languages will disagree on that input.

On the other hand, L is in ${\displaystyle {\mathsf {SPACE}}(f(n))}$ . The algorithm for deciding the language L is as follows:

1. On an input x, compute ${\displaystyle f(|x|)}$ using space-constructibility, and mark off ${\displaystyle f(|x|)}$ cells of tape. Whenever an attempt is made to use more than ${\displaystyle f(|x|)}$ cells, reject.
2. If x is not of the form ${\displaystyle \langle M\rangle ,10^{k}}$ for some TM M, reject.
3. Simulate M on input x for at most ${\displaystyle 2^{f(|x|)}}$ steps (using ${\displaystyle f(|x|)}$ space). If the simulation tries to use more than ${\displaystyle f(|x|)}$ space or more than ${\displaystyle 2^{f(|x|)}}$ operations, then reject.
4. If M accepted x during this simulation, then reject; otherwise, accept.

Note on step 3: Execution is limited to ${\displaystyle 2^{f(|x|)}}$ steps in order to avoid the case where M does not halt on the input x. That is, the case where M consumes space of only ${\displaystyle O(f(|x|))}$ as required, but runs for infinite time. A machine restricted to ${\displaystyle f(|x|)}$ cells has only ${\displaystyle 2^{O(f(|x|))}}$ distinct configurations, so any sufficiently long computation must repeat a configuration and therefore loop forever.
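The bounded simulation in steps 1–3 can be sketched in Python (an illustrative toy, not the actual construction: the real algorithm decodes ⟨M⟩ from the input and inverts its answer in step 4, whereas here `delta` is a hard-coded toy transition table):

```python
def bounded_simulate(delta, tape, space_bound, step_bound):
    """Simulate a deterministic single-tape machine under hard
    space and step limits, mirroring steps 1-3 of the proof.
    delta: (state, symbol) -> (state, symbol, move), move in {-1, 0, +1};
    'accept' and 'reject' are halting states. Returns 'accept',
    'reject', or 'reject-bound' when a resource bound is exceeded."""
    tape = list(tape)
    state, head, steps = "start", 0, 0
    while state not in ("accept", "reject"):
        if steps >= step_bound:
            return "reject-bound"  # step 3: too many operations -> reject
        if head >= space_bound:
            return "reject-bound"  # step 1: marked-off cells exceeded -> reject
        if head >= len(tape):
            tape.append("_")       # extend with a blank cell
        state, tape[head], move = delta[(state, tape[head])]
        head = max(0, head + move)
        steps += 1
    return state

# toy machine: scans right over 1s and accepts at the first blank
delta = {("start", "1"): ("start", "1", +1),
         ("start", "_"): ("accept", "_", 0)}
```

Exceeding either the marked-off space or the step budget causes an immediate reject, exactly as in steps 1 and 3; with a generous budget the toy machine accepts, while a space bound of 2 cells is exceeded and forces rejection.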

The above proof holds for the deterministic case, whereas we must make some changes for the nondeterministic case. The crucial point is that while on a deterministic TM we may easily invert acceptance and rejection (crucial for step 4), this is not possible on a nondeterministic machine.

For the case of NPSPACE we will first redefine L:

${\displaystyle L=\{~(\langle M\rangle ,10^{k}):M{\mbox{ accepts }}(\langle M\rangle ,10^{k}){\mbox{ using space }}\leq f(|\langle M\rangle ,10^{k}|)~\}}$

Now, we need to change the algorithm to accept L by modifying step 4 to:

• If M accepted x during this simulation, then accept; otherwise, reject.

We will now prove by contradiction that L cannot be decided by a TM using ${\displaystyle o(f(n))}$ cells. Assume L can be decided by some TM M using ${\displaystyle o(f(n))}$ cells. Then, by the Immerman–Szelepcsényi theorem, ${\displaystyle {\overline {L}}}$ can also be decided by a TM (which we will call ${\displaystyle {\overline {M}}}$ ) using ${\displaystyle o(f(n))}$ cells. The following case analysis yields a contradiction, so the assumption must be false:

1. If ${\displaystyle w=(\langle {\overline {M}}\rangle ,10^{k})}$ (for some large enough k) is not in ${\displaystyle {\overline {L}}}$ then M will accept it, therefore ${\displaystyle {\overline {M}}}$ rejects w, therefore w is in ${\displaystyle {\overline {L}}}$ (contradiction).
2. If ${\displaystyle w=(\langle {\overline {M}}\rangle ,10^{k})}$ (for some large enough k) is in ${\displaystyle {\overline {L}}}$ then M will reject it, therefore ${\displaystyle {\overline {M}}}$ accepts w, therefore w is not in ${\displaystyle {\overline {L}}}$ (contradiction).

## Comparison and improvements

The space hierarchy theorem is stronger than the analogous time hierarchy theorems in several ways:

• It only requires s(n) to be at least log n instead of at least n.
• It can separate classes with any asymptotic difference, whereas the time hierarchy theorem requires them to be separated by a logarithmic factor.
• It only requires the function to be space-constructible, not time-constructible.

It seems to be easier to separate classes in space than in time. Indeed, whereas the time hierarchy theorem has seen little remarkable improvement since its inception, the nondeterministic space hierarchy theorem has seen at least one important improvement by Viliam Geffert in his 2003 paper "Space hierarchy theorem revised". This paper made several generalizations of the theorem:

• It relaxes the space-constructibility requirement. Instead of merely separating the union classes ${\displaystyle {\mathsf {DSPACE}}(O(s(n)))}$ and ${\displaystyle {\mathsf {DSPACE}}(o(s(n)))}$ , it separates ${\displaystyle {\mathsf {DSPACE}}(f(n))}$ from ${\displaystyle {\mathsf {DSPACE}}(g(n))}$ where ${\displaystyle f(n)}$ is an arbitrary ${\displaystyle O(s(n))}$ function and g(n) is a computable ${\displaystyle o(s(n))}$ function. These functions need not be space-constructible or even monotone increasing.
• It identifies a unary language, or tally language, which is in one class but not the other. In the original theorem, the separating language was arbitrary.
• It does not require ${\displaystyle s(n)}$ to be at least log n; it can be any nondeterministically fully space-constructible function.

## Refinement of space hierarchy

If space is measured as the number of cells used regardless of alphabet size, then ${\displaystyle {\mathsf {SPACE}}(f(n))={\mathsf {SPACE}}(O(f(n)))}$ because one can achieve any linear compression by switching to a larger alphabet. However, by measuring space in bits, a much sharper separation is achievable for deterministic space. Instead of being defined up to a multiplicative constant, space is now defined up to an additive constant. However, because any constant amount of external space can be saved by storing the contents into the internal state, we still have ${\displaystyle {\mathsf {SPACE}}(f(n))={\mathsf {SPACE}}(f(n)+O(1))}$ .
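The linear-compression argument can be sketched as packing k tape cells into a single cell over the enlarged alphabet ${\displaystyle \Sigma ^{k}}$ (an illustrative Python model; it assumes `_` is the blank symbol and does not occur inside the tape contents):

```python
def compress_tape(tape, k):
    """Linear tape compression: pack k symbols of the alphabet Sigma
    into one symbol of Sigma^k (represented as a k-tuple), so that
    ceil(len(tape) / k) cells suffice for the same contents."""
    pad = "_" * ((-len(tape)) % k)  # pad with blanks to a multiple of k
    padded = tape + pad
    return [tuple(padded[i:i + k]) for i in range(0, len(padded), k)]

def decompress_tape(packed):
    """Recover the original tape, stripping the blank padding."""
    return "".join("".join(cell) for cell in packed).rstrip("_")

# 7 cells over {0,1} become 3 cells over {0,1,_}^3
packed = compress_tape("0110100", 3)
```

A simulating machine reads and writes one packed cell where the original wrote k cells, which is why cell-counted space is only defined up to a multiplicative constant.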

In the following, assume that f is space-constructible and that SPACE denotes deterministic space.

• For a wide variety of sequential computational models, including for Turing machines, SPACE(f(n)-ω(log(f(n)+n))) ⊊ SPACE(f(n)). This holds even if SPACE(f(n)-ω(log(f(n)+n))) is defined using a different computational model than ${\displaystyle {\mathsf {SPACE}}(f(n))}$ because the different models can simulate each other with ${\displaystyle O(\log(f(n)+n))}$ space overhead.
• For certain computational models, we even have SPACE(f(n)-ω(1)) ⊊ SPACE(f(n)). In particular, this holds for Turing machines if we fix the alphabet, the number of heads on the input tape, the number of heads on the worktape (using a single worktape), and add delimiters for the visited portion of the worktape (that can be checked without increasing space usage). SPACE(f(n)) does not depend on whether the worktape is infinite or semi-infinite. We can also have a fixed number of worktapes if f(n) is either a SPACE-constructible tuple giving the per-tape space usage, or a SPACE(f(n)-ω(log(f(n))))-constructible number giving the total space usage (not counting the overhead for storing the length of each tape).

The proof is similar to the proof of the space hierarchy theorem, but with two complications: The universal Turing machine has to be space-efficient, and the reversal has to be space-efficient. One can generally construct universal Turing machines with ${\displaystyle O(\log(space))}$ space overhead, and under appropriate assumptions, just ${\displaystyle O(1)}$ space overhead (which may depend on the machine being simulated). For the reversal, the key issue is how to detect if the simulated machine rejects by entering an infinite (space-constrained) loop. Simply counting the number of steps taken would increase space consumption by about ${\displaystyle f(n)}$ . At the cost of a potentially exponential time increase, loops can be detected space-efficiently as follows:[1]

Modify the machine to erase everything and go to a specific configuration A on success. Use depth-first search to determine whether A is reachable in the space bound from the starting configuration. The search starts at A and goes over configurations that lead to A. Because of determinism, this can be done in place and without going into a loop.

We can also determine whether the machine exceeds a space bound (as opposed to looping within the space bound) by iterating over all configurations about to exceed the space bound and checking (again using depth-first search) whether the initial configuration leads to any of them.
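The backward search over a deterministic configuration graph can be sketched as follows (a toy Python model, not the in-place DFS of the proof: configurations are plain values, `succ` is the machine's one-step map, and an explicit visited set stands in for the memory-free traversal that determinism makes possible):

```python
def leads_to(succ, start, target):
    """Decide whether `start` eventually reaches `target` in a
    deterministic configuration graph, by searching backward from
    `target` over predecessors. succ maps each configuration to its
    unique successor; a visited set guards against cycles (the proof's
    in-place DFS avoids this extra memory by exploiting determinism)."""
    preds = {}
    for c, s in succ.items():  # build the predecessor lists
        preds.setdefault(s, []).append(c)
    stack, seen = [target], set()
    while stack:
        c = stack.pop()
        if c == start:
            return True
        if c in seen:
            continue
        seen.add(c)
        stack.extend(p for p in preds.get(c, []) if p != target)
    return False

# toy graph: 0 -> 1 -> 2 -> A, plus a loop 3 -> 4 -> 3 that never reaches A
succ = {0: 1, 1: 2, 2: "A", 3: 4, 4: 3}
```

Starting configuration 0 reaches the accepting configuration A, while configuration 3 is trapped in a space-bounded loop and is correctly reported as never reaching A.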

## Corollaries

### Corollary 1

For any two functions ${\displaystyle f_{1}}$ , ${\displaystyle f_{2}:\mathbb {N} \longrightarrow \mathbb {N} }$ , where ${\displaystyle f_{1}(n)}$ is ${\displaystyle o(f_{2}(n))}$ and ${\displaystyle f_{2}}$ is space-constructible, ${\displaystyle {\mathsf {SPACE}}(f_{1}(n))\subsetneq {\mathsf {SPACE}}(f_{2}(n))}$ .

This corollary lets us separate various space complexity classes. For any natural number k, the function ${\displaystyle n^{k}}$ is space-constructible. Therefore for any two natural numbers ${\displaystyle k_{1}<k_{2}}$ we can prove ${\displaystyle {\mathsf {SPACE}}(n^{k_{1}})\subsetneq {\mathsf {SPACE}}(n^{k_{2}})}$ . We can extend this idea to real numbers in the following corollary. This demonstrates the detailed hierarchy within the PSPACE class.

### Corollary 2

For any two nonnegative real numbers ${\displaystyle a_{1}<a_{2}}$ , ${\displaystyle {\mathsf {SPACE}}(n^{a_{1}})\subsetneq {\mathsf {SPACE}}(n^{a_{2}})}$ .

### Corollary 3

${\displaystyle {\mathsf {NL}}\subsetneq {\mathsf {PSPACE}}}$ .

#### Proof

Savitch's theorem shows that ${\displaystyle {\mathsf {NL}}\subseteq {\mathsf {SPACE}}(\log ^{2}n)}$ , while the space hierarchy theorem shows that ${\displaystyle {\mathsf {SPACE}}(\log ^{2}n)\subsetneq {\mathsf {SPACE}}(n)}$ . Thus we get this corollary, along with the fact that TQBF ${\displaystyle \notin {\mathsf {NL}}}$ , since TQBF is PSPACE-complete.

This could also be proven using the non-deterministic space hierarchy theorem to show that NL ⊊ NPSPACE, and using Savitch's theorem to show that PSPACE = NPSPACE.

### Corollary 4

${\displaystyle {\mathsf {PSPACE}}\subsetneq {\mathsf {EXPSPACE}}}$ .

This last corollary shows the existence of decidable problems that are intractable. In other words, their decision procedures must use more than polynomial space.

### Corollary 5

There are problems in PSPACE requiring an arbitrarily large exponent to solve; therefore PSPACE does not collapse to ${\displaystyle {\mathsf {DSPACE}}(n^{k})}$ for any constant k.