# μ-recursive function

In mathematical logic and computer science, the general recursive functions (often shortened to recursive functions) or μ-recursive functions are a class of partial functions from natural numbers to natural numbers that are "computable" in an intuitive sense. In computability theory, it is shown that the μ-recursive functions are precisely the functions that can be computed by Turing machines[1][3] (this is one of the theorems that supports the Church–Turing thesis). The μ-recursive functions are closely related to primitive recursive functions, and their inductive definition (below) builds upon that of the primitive recursive functions. However, not every μ-recursive function is a primitive recursive function; the most famous counterexample is the Ackermann function.

Other equivalent classes of functions are the λ-recursive functions and the functions that can be computed by Markov algorithms.

The subset of all total recursive functions with values in {0,1} is known in computational complexity theory as the complexity class R.

## Definition

The μ-recursive functions (or general recursive functions) are partial functions that take finite tuples of natural numbers and return a single natural number. They are the smallest class of partial functions that includes the initial functions and is closed under composition, primitive recursion, and the μ operator.

The smallest class of functions including the initial functions and closed under composition and primitive recursion (i.e. without minimisation) is the class of primitive recursive functions. While all primitive recursive functions are total, this is not true of partial recursive functions; for example, the minimisation of the successor function is undefined. The primitive recursive functions are a subset of the total recursive functions, which are a subset of the partial recursive functions. For example, the Ackermann function can be proven to be total recursive but not primitive recursive.
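The Ackermann function just mentioned can be written as a short program, which makes its status concrete: the recursion is well-founded, so it terminates on every input (the function is total), but its values grow faster than any primitive recursive function. A standard Python transcription:

```python
def ackermann(m, n):
    """Total on all pairs of naturals, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    # The nested self-application is what escapes primitive recursion.
    return ackermann(m - 1, ackermann(m, n - 1))
```

For example, `ackermann(2, 3)` returns 9 and `ackermann(3, 3)` returns 61; already `ackermann(4, 2)` has 19,729 decimal digits and is far beyond direct evaluation by this naive recursion.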

Primitive or "basic" functions:

1. Constant functions ${\displaystyle C_{n}^{k}}$: For each natural number ${\displaystyle n\,}$ and every ${\displaystyle k\,}$:
${\displaystyle C_{n}^{k}(x_{1},\ldots ,x_{k}){\stackrel {\mathrm {def} }{=}}n}$
Alternative definitions use instead a zero function as a primitive function that always returns zero, and build the constant functions from the zero function, the successor function, and the composition operator.
2. Successor function S:
${\displaystyle S(x){\stackrel {\mathrm {def} }{=}}x+1\,}$
3. Projection function ${\displaystyle P_{i}^{k}}$ (also called the identity function): For all natural numbers ${\displaystyle i,k}$ such that ${\displaystyle 1\leq i\leq k}$:
${\displaystyle P_{i}^{k}(x_{1},\ldots ,x_{k}){\stackrel {\mathrm {def} }{=}}x_{i}\,.}$
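The three kinds of initial functions can be sketched in a few lines of Python (the helper names `constant`, `successor`, and `projection` are my own, chosen to mirror the definitions above):

```python
def constant(k, n):
    """C_n^k: the k-ary function that always returns n."""
    return lambda *xs: n

def successor(x):
    """S(x) = x + 1."""
    return x + 1

def projection(k, i):
    """P_i^k(x_1, ..., x_k) = x_i (1-based index, as in the definition)."""
    return lambda *xs: xs[i - 1]
```

For instance, `constant(7, 13)(1, 2, 3, 4, 5, 6, 7)` returns 13, and `projection(3, 2)(5, 6, 7)` returns 6.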

Operators (the domain of a function defined by an operator is the set of the values of the arguments such that every function application that must be done during the computation provides a well-defined result):

1. Composition operator ${\displaystyle \circ \,}$ (also called the substitution operator): Given an m-ary function ${\displaystyle h(x_{1},\ldots ,x_{m})\,}$ and m k-ary functions ${\displaystyle g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k})}$:
${\displaystyle h\circ (g_{1},\ldots ,g_{m}){\stackrel {\mathrm {def} }{=}}f,\quad {\text{where}}\quad f(x_{1},\ldots ,x_{k})=h(g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k})).}$
This means that ${\displaystyle f(x_{1},\ldots ,x_{k})}$ is defined only if ${\displaystyle g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k}),}$ and ${\displaystyle h(g_{1}(x_{1},\ldots ,x_{k}),\ldots ,g_{m}(x_{1},\ldots ,x_{k}))}$ are all defined.
2. Primitive recursion operator ${\displaystyle \rho \,}$: Given the k-ary function ${\displaystyle g(x_{1},\ldots ,x_{k})\,}$ and the (k+2)-ary function ${\displaystyle h(y,z,x_{1},\ldots ,x_{k})\,}$:
{\displaystyle {\begin{aligned}\rho (g,h)&{\stackrel {\mathrm {def} }{=}}f\quad {\text{where the (k+1)-ary function }}f{\text{ is defined by}}\\f(0,x_{1},\ldots ,x_{k})&=g(x_{1},\ldots ,x_{k})\\f(S(y),x_{1},\ldots ,x_{k})&=h(y,f(y,x_{1},\ldots ,x_{k}),x_{1},\ldots ,x_{k})\,.\end{aligned}}}
This means that ${\displaystyle f(y,x_{1},\ldots ,x_{k})}$ is defined only if ${\displaystyle g(x_{1},\ldots ,x_{k})}$ and ${\displaystyle h(z,f(z,x_{1},\ldots ,x_{k}),x_{1},\ldots ,x_{k})}$ are defined for all ${\displaystyle z<y.}$
3. Minimization operator ${\displaystyle \mu \,}$: Given a (k+1)-ary function ${\displaystyle f(y,x_{1},\ldots ,x_{k})\,}$, the k-ary function ${\displaystyle \mu (f)}$ is defined by:
{\displaystyle {\begin{aligned}\mu (f)(x_{1},\ldots ,x_{k})=z{\stackrel {\mathrm {def} }{\iff }}\ f(i,x_{1},\ldots ,x_{k})&>0\quad {\text{for}}\quad i=0,\ldots ,z-1\quad {\text{and}}\\f(z,x_{1},\ldots ,x_{k})&=0\quad \end{aligned}}}
Intuitively, minimisation seeks, beginning the search from 0 and proceeding upwards, the smallest argument that causes the function to return zero; if there is no such argument, or if one encounters an argument for which f is not defined, then the search never terminates, and ${\displaystyle \mu (f)}$ is not defined for the argument ${\displaystyle (x_{1},\ldots ,x_{k}).}$
(Note: While some textbooks use the μ-operator as defined here [4][5], others like [6][7] demand that the μ-operator be applied only to total functions ${\displaystyle f}$. Although this restricts the μ-operator as compared to the definition given here, the class of μ-recursive functions remains the same, which follows from Kleene's normal form theorem [4][5]. The only difference is that it becomes undecidable whether a specific definition text satisfies the requirements given for the base functions and operators, since it is not semi-decidable (hence undecidable) whether a computable (i.e. μ-recursive) function is total [6].)
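The three operators can likewise be sketched in Python (the function names are my own; partiality shows up exactly as in the definition, namely as non-termination of the μ search):

```python
def compose(h, *gs):
    """h ∘ (g1, ..., gm): f(x...) = h(g1(x...), ..., gm(x...))."""
    return lambda *xs: h(*(g(*xs) for g in gs))

def primitive_recursion(g, h):
    """rho(g, h): f(0, x...) = g(x...); f(y+1, x...) = h(y, f(y, x...), x...)."""
    def f(y, *xs):
        acc = g(*xs)
        for z in range(y):          # unfold the recursion iteratively
            acc = h(z, acc, *xs)
        return acc
    return f

def mu(f):
    """mu(f)(x...): least z with f(z, x...) = 0, searching upward from 0."""
    def m(*xs):
        z = 0
        while f(z, *xs) != 0:       # diverges if no such z exists
            z += 1
        return z
    return m

# Addition built by primitive recursion, as in the definition:
# add(0, x) = x;  add(y+1, x) = S(add(y, x)).
add = primitive_recursion(lambda x: x, lambda y, z, x: z + 1)

# A genuinely partial function: half(x) = x/2 is defined only for even x,
# since the mu search runs forever when 2z never equals x.
half = mu(lambda z, x: abs(2 * z - x))
```

Here `add(3, 4)` returns 7 and `half(10)` returns 5, while `half(7)` never terminates, illustrating how minimisation introduces partiality.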

The strong equality operator ${\displaystyle \simeq }$ can be used to compare partial μ-recursive functions. This is defined for all partial functions f and g so that

${\displaystyle f(x_{1},\ldots ,x_{k})\simeq g(x_{1},\ldots ,x_{l})}$

holds if and only if for any choice of arguments either both functions are defined and their values are equal or both functions are undefined.
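Because undefinedness means non-termination, strong equality over all arguments is not decidable in general. As an illustration only, the sketch below adopts the convention (my own, not from the source) that a partial function signals "undefined" by returning `None`, and checks strong equality at a single argument tuple:

```python
def strong_equal_at(f, g, args):
    """f(args) ≃ g(args): both undefined, or both defined with equal values."""
    fv, gv = f(*args), g(*args)
    return fv == gv   # None == None covers the "both undefined" case

# Two partial "half" functions that agree everywhere, including where
# both are undefined (odd arguments).
half1 = lambda x: x // 2 if x % 2 == 0 else None
half2 = lambda x: (x + x) // 4 if x % 2 == 0 else None
```

With this convention, `strong_equal_at(half1, half2, (7,))` holds because both sides are undefined at 7, whereas comparing `half1` against a total function fails there.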

## Equivalence with other models of computability

In the equivalence of models of computability, a parallel is drawn between Turing machines that do not terminate for certain inputs and an undefined result for that input in the corresponding partial recursive function. The unbounded search operator is not definable by the rules of primitive recursion as those do not provide a mechanism for "infinite loops" (undefined values).

## Normal form theorem

A normal form theorem due to Kleene says that for each k there are primitive recursive functions ${\displaystyle U(y)\!}$ and ${\displaystyle T(y,e,x_{1},\ldots ,x_{k})\!}$ such that for any μ-recursive function ${\displaystyle f(x_{1},\ldots ,x_{k})\!}$ with k free variables there is an e such that

${\displaystyle f(x_{1},\ldots ,x_{k})\simeq U(\mu y\,T(y,e,x_{1},\ldots ,x_{k}))}$.

The number e is called an index or Gödel number for the function f.[8]:52-53 A consequence of this result is that any μ-recursive function can be defined using a single instance of the μ operator applied to a (total) primitive recursive function.

Minsky (1967) observes (as does Boolos-Burgess-Jeffrey (2002) pp. 94–95) that the U defined above is in essence the μ-recursive equivalent of the universal Turing machine:

To construct U is to write down the definition of a general-recursive function U(n, x) that correctly interprets the number n and computes the appropriate function of x. To construct U directly would involve essentially the same amount of effort, and essentially the same ideas, as we have invested in constructing the universal Turing machine. (italics in original, Minsky (1967) p. 189)

## Symbolism

A number of different symbolisms are used in the literature. An advantage of using a symbolism is that a derivation of a function by "nesting" the operators one inside the other is easier to write in a compact form. In the following we will abbreviate the string of parameters x1, ..., xn as x:

• Constant function: Kleene uses " ${\displaystyle C_{q}^{n}(\mathbf {x} )=q}$ " and Boolos-Burgess-Jeffrey (2002) (B-B-J) use the abbreviation " ${\displaystyle \operatorname {const} _{n}(\mathbf {x} )=n}$ ":
e.g. ${\displaystyle C_{13}^{7}(r,s,t,u,v,w,x)=13}$
e.g. ${\displaystyle \operatorname {const} _{13}(r,s,t,u,v,w,x)=13}$
• Successor function: Kleene uses x' and S for "successor". As "successor" is considered to be primitive, most texts use the apostrophe as follows:
${\displaystyle S(a)=a+1{\stackrel {\mathrm {def} }{=}}a'}$, where ${\displaystyle 1{\stackrel {\mathrm {def} }{=}}0'}$, ${\displaystyle 2{\stackrel {\mathrm {def} }{=}}0''}$, etc.
• Identity function: Kleene (1952) uses " ${\displaystyle U_{i}^{n}}$ " to indicate the identity function over the variables ${\displaystyle x_{i}}$; B-B-J use the identity function ${\displaystyle \operatorname {id} _{i}^{n}}$ over the variables ${\displaystyle x_{1}}$ to ${\displaystyle x_{n}}$:
${\displaystyle U_{i}^{n}(\mathbf {x} )=\operatorname {id} _{i}^{n}(\mathbf {x} )=x_{i}}$
e.g. ${\displaystyle U_{3}^{7}=\operatorname {id} _{3}^{7}(r,s,t,u,v,w,x)=t}$
• Composition (Substitution) operator: Kleene uses a bold-face ${\displaystyle \mathbf {S} _{n}^{m}}$ (not to be confused with his S for "successor"!). The superscript m refers to the mth function ${\displaystyle f_{m}}$, whereas the subscript n refers to the nth variable ${\displaystyle x_{n}}$:
If we are given ${\displaystyle h(\mathbf {x} )=g(f_{1}(\mathbf {x} ),\ldots ,f_{m}(\mathbf {x} ))}$
${\displaystyle h(\mathbf {x} )=\mathbf {S} _{n}^{m}(g,f_{1},\ldots ,f_{m})}$
In a similar manner, but without the sub- and superscripts, B-B-J write:
${\displaystyle h(\mathbf {x} )=\operatorname {Cn} [g,f_{1},\ldots ,f_{m}](\mathbf {x} )}$
• Primitive recursion: Kleene uses the symbol " ${\displaystyle \mathbf {R} ^{n}({\text{base step, induction step}})}$ " where n indicates the number of variables; B-B-J use " ${\displaystyle \operatorname {Pr} ({\text{base step, induction step}})(\mathbf {x} )}$ ". Given:
• base step: ${\displaystyle h(0,\mathbf {x} )=f(\mathbf {x} )}$, and
• induction step: ${\displaystyle h(y+1,\mathbf {x} )=g(y,h(y,\mathbf {x} ),\mathbf {x} )}$
Example: primitive recursion definition of a + b:
• base step: ${\displaystyle f(0,a)=a=U_{1}^{1}(a)}$
• induction step: ${\displaystyle f(b',a)=(f(b,a))'=g(b,f(b,a),a)=g(b,c,a)=c'=S(U_{2}^{3}(b,c,a))}$
${\displaystyle \mathbf {R} ^{2}\{U_{1}^{1}(a),S(U_{2}^{3}(b,c,a))\}}$
${\displaystyle \operatorname {Pr} \{U_{1}^{1}(a),S(U_{2}^{3}(b,c,a))\}}$

Example: Kleene gives an example of how to perform the recursive derivation of f(b, a) = b + a (notice the reversal of variables a and b). He starts with 3 initial functions:

1. ${\displaystyle S(a)=a'}$
2. ${\displaystyle U_{1}^{1}(a)=a}$
3. ${\displaystyle U_{2}^{3}(b,c,a)=c}$
4. ${\displaystyle g(b,c,a)=S(U_{2}^{3}(b,c,a))=c'}$
5. base step: ${\displaystyle h(0,a)=U_{1}^{1}(a)}$
induction step: ${\displaystyle h(b',a)=g(b,h(b,a),a)}$

He arrives at:

${\displaystyle a+b=\mathbf {R} ^{2}[U_{1}^{1},\mathbf {S} _{1}^{3}(S,U_{2}^{3})]}$
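Kleene's derivation above transcribes directly into a short Python sketch (the function names mirror his notation; this is an illustration of the derivation, not his formalism):

```python
def S(a):                          # successor: S(a) = a'
    return a + 1

def U11(a):                        # identity U_1^1(a) = a
    return a

def U23(b, c, a):                  # projection U_2^3(b, c, a) = c
    return c

def g(b, c, a):                    # g(b, c, a) = S(U_2^3(b, c, a)) = c'
    return S(U23(b, c, a))

def f(b, a):                       # primitive recursion: f(b, a) = b + a
    if b == 0:
        return U11(a)              # base step: h(0, a) = U_1^1(a)
    return g(b - 1, f(b - 1, a), a)  # induction step: h(b', a) = g(b, h(b, a), a)
```

Evaluating `f(3, 4)` unwinds the induction step three times before hitting the base step, returning 7 as expected.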

## References

3. Turing, Alan Mathison (Dec 1937). "Computability and λ-Definability". *Journal of Symbolic Logic*. 2 (4): 153–163. JSTOR 2268280. Proof outline on p. 153: ${\displaystyle \lambda {\mbox{-definable}}}$ ${\displaystyle {\stackrel {triv}{\implies }}}$ ${\displaystyle \lambda {\mbox{-}}K{\mbox{-definable}}}$ ${\displaystyle {\stackrel {160}{\implies }}}$ ${\displaystyle {\mbox{Turing computable}}}$ ${\displaystyle {\stackrel {161}{\implies }}}$ ${\displaystyle \mu {\mbox{-recursive}}}$ ${\displaystyle {\stackrel {Kleene}{\implies }}}$[2] ${\displaystyle \lambda {\mbox{-definable}}}$