# Classical capacity

In quantum information theory, the classical capacity of a quantum channel is the maximum rate at which classical data can be sent over it with vanishing error probability in the limit of many uses of the channel. Holevo, Schumacher, and Westmoreland proved the following lower bound on the classical capacity of any quantum channel ${\displaystyle {\mathcal {N}}}$:

${\displaystyle \chi ({\mathcal {N}})=\max _{\rho ^{XA}}I(X;B)_{{\mathcal {N}}(\rho )}}$

where ${\displaystyle \rho ^{XA}}$ is a classical-quantum state of the following form:

${\displaystyle \rho ^{XA}=\sum _{x}p_{X}(x)\vert x\rangle \langle x\vert ^{X}\otimes \rho _{x}^{A},}$

${\displaystyle p_{X}(x)}$ is a probability distribution, and each ${\displaystyle \rho _{x}^{A}}$ is a density operator that can be input to the channel ${\displaystyle {\mathcal {N}}}$.

## Achievability using sequential decoding

We briefly review the HSW coding theorem (the statement of the achievability of the Holevo information rate ${\displaystyle I(X;B)}$ for communicating classical data over a quantum channel). We first review the minimal amount of quantum mechanics needed for the theorem. We then cover quantum typicality, and finally we prove the theorem using a recent sequential decoding technique.

## Review of quantum mechanics

In order to prove the HSW coding theorem, we need only a few basic facts from quantum mechanics. First, a quantum state is a positive semi-definite operator with unit trace, known as a density operator. Usually, we denote it by ${\displaystyle \rho }$, ${\displaystyle \sigma }$, ${\displaystyle \omega }$, etc. The simplest model for a quantum channel is known as a classical-quantum channel:

${\displaystyle x\mapsto \rho _{x}.}$

The meaning of the above notation is that inputting the classical letter ${\displaystyle x}$ at the transmitting end leads to a quantum state ${\displaystyle \rho _{x}}$ at the receiving end. It is the task of the receiver to perform a measurement to determine the input of the sender. If the states ${\displaystyle \rho _{x}}$ are perfectly distinguishable from one another (i.e., if they have orthogonal supports such that ${\displaystyle \mathrm {Tr} \,\left\{\rho _{x}\rho _{x^{\prime }}\right\}=0}$ for ${\displaystyle x\neq x^{\prime }}$), then the channel is a noiseless channel, and we are interested in situations for which this is not the case. If the states ${\displaystyle \rho _{x}}$ all commute with one another, then the situation is effectively identical to that of a classical channel, so we are also not interested in these situations. The situation in which we are interested is therefore that in which the states ${\displaystyle \rho _{x}}$ have overlapping support and do not commute.
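As a concrete illustration of this distinction, the following sketch (using NumPy; the qubit states are chosen purely for illustration) computes the overlap ${\displaystyle \mathrm {Tr} \,\left\{\rho _{x}\rho _{x^{\prime }}\right\}}$ for an orthogonal pair and for a non-orthogonal pair:

```python
# Two pure qubit states: one pair orthogonal (noiseless cq channel),
# one pair with overlapping supports (the interesting case).
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ketp = (ket0 + ket1) / np.sqrt(2)      # |+> overlaps with |0>

rho0 = np.outer(ket0, ket0)            # density operators |x><x|
rho1 = np.outer(ket1, ket1)
rhop = np.outer(ketp, ketp)

print(np.trace(rho0 @ rho1))           # 0.0 -> perfectly distinguishable
print(np.trace(rho0 @ rhop))           # ~0.5 -> overlapping supports
```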

The most general way to describe a quantum measurement is with a positive operator-valued measure (POVM). We usually denote the elements of a POVM as ${\displaystyle \left\{\Lambda _{m}\right\}_{m}}$. These operators should satisfy positivity and completeness in order to form a valid POVM:

${\displaystyle \Lambda _{m}\geq 0\ \ \ \ \forall m}$
${\displaystyle \sum _{m}\Lambda _{m}=I.}$

The probabilistic interpretation of quantum mechanics states that if someone measures a quantum state ${\displaystyle \rho }$ using a measurement device corresponding to the POVM ${\displaystyle \left\{\Lambda _{m}\right\}}$, then the probability ${\displaystyle p\left(m\right)}$ for obtaining outcome ${\displaystyle m}$ is equal to

${\displaystyle p\left(m\right)={\text{Tr}}\left\{\Lambda _{m}\rho \right\},}$

and the post-measurement state is

${\displaystyle \rho _{m}^{\prime }={\frac {1}{p\left(m\right)}}{\sqrt {\Lambda _{m}}}\rho {\sqrt {\Lambda _{m}}},}$

if the person measuring obtains outcome ${\displaystyle m}$. These rules are sufficient for us to consider classical communication schemes over classical-quantum (cq) channels.
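These measurement rules can be made concrete with a small numeric sketch (the state and the two-outcome POVM below are arbitrary choices for illustration):

```python
# Born rule and post-measurement state for a two-outcome POVM {Lam0, I - Lam0}.
import numpy as np

rho = np.array([[0.75, 0.25],
                [0.25, 0.25]])         # a valid density operator (unit trace, PSD)

Lam0 = np.array([[0.9, 0.0],
                 [0.0, 0.1]])          # 0 <= Lam0 <= I
Lam1 = np.eye(2) - Lam0                # completeness: Lam0 + Lam1 = I

p0 = np.trace(Lam0 @ rho).real         # p(m) = Tr{Lambda_m rho}
sq = np.diag(np.sqrt(np.diag(Lam0)))   # operator sqrt (Lam0 is diagonal here)
post = sq @ rho @ sq / p0              # renormalized post-measurement state

print(p0)                              # ~0.7
print(np.trace(post))                  # ~1.0 (unit trace again)
```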

## Quantum typicality

The reader can find a good review of this topic in the article about the typical subspace.

## Gentle operator lemma

The following lemma is important for our proofs. It demonstrates that a measurement that succeeds with high probability on average does not disturb the state too much on average:

Lemma: [Winter] Given an ensemble ${\displaystyle \left\{p_{X}\left(x\right),\rho _{x}\right\}}$ with expected density operator ${\displaystyle \rho \equiv \sum _{x}p_{X}\left(x\right)\rho _{x}}$, suppose that an operator ${\displaystyle \Lambda }$ such that ${\displaystyle I\geq \Lambda \geq 0}$ succeeds with high probability on the state ${\displaystyle \rho }$:

${\displaystyle {\text{Tr}}\left\{\Lambda \rho \right\}\geq 1-\epsilon .}$

Then the subnormalized state ${\displaystyle {\sqrt {\Lambda }}\rho _{x}{\sqrt {\Lambda }}}$ is close in expected trace distance to the original state ${\displaystyle \rho _{x}}$:

${\displaystyle \mathbb {E} _{X}\left\{\left\Vert {\sqrt {\Lambda }}\rho _{X}{\sqrt {\Lambda }}-\rho _{X}\right\Vert _{1}\right\}\leq 2{\sqrt {\epsilon }}.}$

(Note that ${\displaystyle \left\Vert A\right\Vert _{1}}$ is the trace norm (nuclear norm) of the operator ${\displaystyle A}$, so that ${\displaystyle \left\Vert A\right\Vert _{1}\equiv {\text{Tr}}\left\{{\sqrt {A^{\dagger }A}}\right\}}$.)
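A quick numeric sanity check of the lemma, with an arbitrary two-state qubit ensemble and a near-identity measurement operator (both chosen only for illustration):

```python
# Gentle operator lemma: if Tr{Lam rho} >= 1 - eps, then the expected trace
# distance between sqrt(Lam) rho_x sqrt(Lam) and rho_x is at most 2 sqrt(eps).
import numpy as np

def trace_norm(A):
    return np.abs(np.linalg.eigvalsh(A)).sum()   # ||A||_1 for Hermitian A

p = np.array([0.5, 0.5])                         # p_X(x)
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
states = [np.outer(ket0, ket0), np.outer(ketp, ketp)]
rho = sum(px * s for px, s in zip(p, states))    # expected density operator

Lam = np.array([[0.99, 0.0],
                [0.0, 0.90]])                    # 0 <= Lam <= I, near identity
eps = 1 - np.trace(Lam @ rho).real               # so Tr{Lam rho} = 1 - eps

sq = np.sqrt(Lam)                                # operator sqrt (Lam is diagonal)
avg_dist = sum(px * trace_norm(sq @ s @ sq - s) for px, s in zip(p, states))
assert avg_dist <= 2 * np.sqrt(eps)              # the lemma's bound holds
```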

The following inequality is useful for us as well. It holds for any operators ${\displaystyle \rho }$, ${\displaystyle \sigma }$, ${\displaystyle \Lambda }$ such that ${\displaystyle 0\leq \rho ,\sigma ,\Lambda \leq I}$:

${\displaystyle {\text{Tr}}\left\{\Lambda \rho \right\}\leq {\text{Tr}}\left\{\Lambda \sigma \right\}+\left\Vert \rho -\sigma \right\Vert _{1}.}$

(1)

The quantum information-theoretic interpretation of the above inequality is that the probability of obtaining outcome ${\displaystyle \Lambda }$ from a quantum measurement acting on the state ${\displaystyle \rho }$ is upper bounded by the probability of obtaining outcome ${\displaystyle \Lambda }$ on the state ${\displaystyle \sigma }$ plus a term quantifying the distinguishability of the two states ${\displaystyle \rho }$ and ${\displaystyle \sigma }$.
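Inequality (1) is easy to test numerically; the following sketch checks it on randomly generated qubit states (the construction of the random states is an arbitrary choice):

```python
# Check Tr{Lam rho} <= Tr{Lam sigma} + ||rho - sigma||_1 for 0 <= Lam <= I.
import numpy as np

rng = np.random.default_rng(0)

def random_state(d=2):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T                       # positive semi-definite
    return rho / np.trace(rho).real            # unit trace

def trace_norm(A):
    return np.abs(np.linalg.eigvalsh(A)).sum() # ||A||_1 for Hermitian A

rho, sigma = random_state(), random_state()
Lam = np.diag([0.8, 0.3])                      # any operator with 0 <= Lam <= I

lhs = np.trace(Lam @ rho).real
rhs = np.trace(Lam @ sigma).real + trace_norm(rho - sigma)
assert lhs <= rhs + 1e-12                      # inequality (1)
```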

## Non-commutative union bound

Lemma: [Sen's bound] The following bound holds for a subnormalized state ${\displaystyle \sigma }$ such that ${\displaystyle \sigma \geq 0}$ and ${\displaystyle {\text{Tr}}\left\{\sigma \right\}\leq 1}$, with ${\displaystyle \Pi _{1}}$, ..., ${\displaystyle \Pi _{N}}$ being projectors:

${\displaystyle {\text{Tr}}\left\{\sigma \right\}-{\text{Tr}}\left\{\Pi _{N}\cdots \Pi _{1}\ \sigma \ \Pi _{1}\cdots \Pi _{N}\right\}\leq 2{\sqrt {\sum _{i=1}^{N}{\text{Tr}}\left\{\left(I-\Pi _{i}\right)\sigma \right\}}}.}$

We can think of Sen's bound as a "non-commutative union bound" because it is analogous to the following union bound from probability theory:

${\displaystyle \Pr \left\{\left(A_{1}\cap \cdots \cap A_{N}\right)^{c}\right\}=\Pr \left\{A_{1}^{c}\cup \cdots \cup A_{N}^{c}\right\}\leq \sum _{i=1}^{N}\Pr \left\{A_{i}^{c}\right\},}$

where ${\displaystyle A_{1}}$, ..., ${\displaystyle A_{N}}$ are events. The analogous bound for projector logic would be

${\displaystyle {\text{Tr}}\left\{\left(I-\Pi _{1}\cdots \Pi _{N}\cdots \Pi _{1}\right)\rho \right\}\leq \sum _{i=1}^{N}{\text{Tr}}\left\{\left(I-\Pi _{i}\right)\rho \right\},}$

if we think of ${\displaystyle \Pi _{1}\cdots \Pi _{N}}$ as a projector onto the intersection of subspaces. However, the above bound holds only if the projectors ${\displaystyle \Pi _{1}}$, ..., ${\displaystyle \Pi _{N}}$ commute (choosing ${\displaystyle \Pi _{1}=\left\vert +\right\rangle \left\langle +\right\vert }$, ${\displaystyle \Pi _{2}=\left\vert 0\right\rangle \left\langle 0\right\vert }$, and ${\displaystyle \rho =\left\vert 0\right\rangle \left\langle 0\right\vert }$ gives a counterexample). If the projectors do not commute, then Sen's bound is the next best thing and suffices for our purposes here.
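The counterexample above can be verified numerically, along with the fact that Sen's bound still holds for the same non-commuting projectors:

```python
# With Pi1 = |+><+|, Pi2 = |0><0|, rho = |0><0|, the "commuting" union bound
# fails while Sen's bound (N = 2, sigma = rho) still holds.
import numpy as np

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
Pi1 = np.outer(ketp, ketp)
Pi2 = np.outer(ket0, ket0)
rho = np.outer(ket0, ket0)
I = np.eye(2)

lhs = np.trace((I - Pi1 @ Pi2 @ Pi1) @ rho).real                         # ~3/4
union = np.trace((I - Pi1) @ rho).real + np.trace((I - Pi2) @ rho).real  # ~1/2
print(lhs, union)                      # lhs ~0.75 exceeds ~0.5: bound fails

sen = 2 * np.sqrt(np.trace((I - Pi1) @ rho).real
                  + np.trace((I - Pi2) @ rho).real)
assert np.trace(rho).real - np.trace(Pi2 @ Pi1 @ rho @ Pi1 @ Pi2).real <= sen
```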

## HSW theorem with the non-commutative union bound

We now prove the HSW theorem with Sen's non-commutative union bound. We divide up the proof into a few parts: codebook generation, POVM construction, and error analysis.

Codebook Generation. We first describe how Alice and Bob agree on a random choice of code. They have the channel ${\displaystyle x\mapsto \rho _{x}}$ and a distribution ${\displaystyle p_{X}\left(x\right)}$. They choose ${\displaystyle M}$ classical sequences ${\displaystyle x^{n}}$ independently according to the i.i.d. product distribution ${\displaystyle p_{X^{n}}\left(x^{n}\right)}$. After selecting them, they label them with indices as ${\displaystyle \left\{x^{n}\left(m\right)\right\}_{m\in \left[M\right]}}$. This leads to the following quantum codewords:

${\displaystyle \rho _{x^{n}\left(m\right)}=\rho _{x_{1}\left(m\right)}\otimes \cdots \otimes \rho _{x_{n}\left(m\right)}.}$

The quantum codebook is then ${\displaystyle \left\{\rho _{x^{n}\left(m\right)}\right\}}$. The average state of the codebook is then

${\displaystyle \mathbb {E} _{X^{n}}\left\{\rho _{X^{n}}\right\}=\sum _{x^{n}}p_{X^{n}}\left(x^{n}\right)\rho _{x^{n}}=\rho ^{\otimes n},}$

(2)

where ${\displaystyle \rho =\sum _{x}p_{X}\left(x\right)\rho _{x}}$.
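The codebook construction can be sketched numerically for a toy cq channel (the two-letter channel, block length, and number of messages below are all illustrative choices):

```python
# Random codebook for a toy cq channel x -> rho_x with two input letters.
import numpy as np

rng = np.random.default_rng(1)
n, M = 4, 3                            # block length and number of messages
p = np.array([0.5, 0.5])               # p_X(x)
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
channel = [np.outer(ket0, ket0), np.outer(ketp, ketp)]   # x -> rho_x

# M i.i.d. classical sequences x^n(m), drawn letterwise from p_X.
codebook = [rng.choice(2, size=n, p=p) for _ in range(M)]

def codeword_state(xn):
    """Tensor-product quantum codeword rho_{x^n(m)}."""
    out = np.array([[1.0]])
    for x in xn:
        out = np.kron(out, channel[x])
    return out

rho_m = codeword_state(codebook[0])    # a 2^n x 2^n density operator
assert abs(np.trace(rho_m) - 1.0) < 1e-12
```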

POVM Construction. Sen's bound from the above lemma suggests a method for Bob to decode a state that Alice transmits. Bob should first ask "Is the received state in the average typical subspace?" He can do this operationally by performing a typical subspace measurement corresponding to ${\displaystyle \left\{\Pi _{\rho ,\delta }^{n},I-\Pi _{\rho ,\delta }^{n}\right\}}$. Next, he asks in sequential order, "Is the received codeword in the ${\displaystyle m^{\text{th}}}$ conditionally typical subspace?" This is in some sense equivalent to the question, "Is the received codeword the ${\displaystyle m^{\text{th}}}$ transmitted codeword?" He can ask these questions operationally by performing the measurements corresponding to the conditionally typical projectors ${\displaystyle \left\{\Pi _{\rho _{x^{n}\left(m\right)},\delta },I-\Pi _{\rho _{x^{n}\left(m\right)},\delta }\right\}}$.

Why should this sequential decoding scheme work well? The reason is that the transmitted codeword lies in the typical subspace on average:

${\displaystyle \mathbb {E} _{X^{n}}\left\{{\text{Tr}}\left\{\Pi _{\rho ,\delta }^{n}\ \rho _{X^{n}}\right\}\right\}={\text{Tr}}\left\{\Pi _{\rho ,\delta }^{n}\ \mathbb {E} _{X^{n}}\left\{\rho _{X^{n}}\right\}\right\}}$
${\displaystyle ={\text{Tr}}\left\{\Pi _{\rho ,\delta }^{n}\ \rho ^{\otimes n}\right\}}$
${\displaystyle \geq 1-\epsilon ,}$

where the inequality follows from the unit probability property of the typical subspace (asymptotically, almost all of the probability of ${\displaystyle \rho ^{\otimes n}}$ lies in the typical subspace). Also, the projectors ${\displaystyle \Pi _{\rho _{x^{n}\left(m\right)},\delta }}$ are "good detectors" for the states ${\displaystyle \rho _{x^{n}\left(m\right)}}$ (on average) because the following condition holds from conditional quantum typicality:

${\displaystyle \mathbb {E} _{X^{n}}\left\{{\text{Tr}}\left\{\Pi _{\rho _{X^{n}},\delta }\ \rho _{X^{n}}\right\}\right\}\geq 1-\epsilon .}$

Error Analysis. The probability of detecting the ${\displaystyle m^{\text{th}}}$ codeword correctly under our sequential decoding scheme is equal to

${\displaystyle {\text{Tr}}\left\{\Pi _{\rho _{x^{n}\left(m\right)},\delta }{\hat {\Pi }}_{\rho _{x^{n}\left(m-1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{x^{n}\left(1\right)},\delta }\ \Pi _{\rho ,\delta }^{n}\ \rho _{x^{n}\left(m\right)}\ \Pi _{\rho ,\delta }^{n}\ {\hat {\Pi }}_{\rho _{x^{n}\left(1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{x^{n}\left(m-1\right)},\delta }\Pi _{\rho _{x^{n}\left(m\right)},\delta }\right\},}$

where we make the abbreviation ${\displaystyle {\hat {\Pi }}\equiv I-\Pi }$. (Observe that we project into the average typical subspace just once.) Thus, the probability of an incorrect detection for the ${\displaystyle m^{\text{th}}}$ codeword is given by

${\displaystyle 1-{\text{Tr}}\left\{\Pi _{\rho _{x^{n}\left(m\right)},\delta }{\hat {\Pi }}_{\rho _{x^{n}\left(m-1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{x^{n}\left(1\right)},\delta }\ \Pi _{\rho ,\delta }^{n}\ \rho _{x^{n}\left(m\right)}\ \Pi _{\rho ,\delta }^{n}\ {\hat {\Pi }}_{\rho _{x^{n}\left(1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{x^{n}\left(m-1\right)},\delta }\Pi _{\rho _{x^{n}\left(m\right)},\delta }\right\},}$

and the average error probability of this scheme is equal to

${\displaystyle 1-{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\Pi _{\rho _{x^{n}\left(m\right)},\delta }{\hat {\Pi }}_{\rho _{x^{n}\left(m-1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{x^{n}\left(1\right)},\delta }\ \Pi _{\rho ,\delta }^{n}\ \rho _{x^{n}\left(m\right)}\ \Pi _{\rho ,\delta }^{n}\ {\hat {\Pi }}_{\rho _{x^{n}\left(1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{x^{n}\left(m-1\right)},\delta }\Pi _{\rho _{x^{n}\left(m\right)},\delta }\right\}.}$
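The operator ordering in the success-probability expression can be evaluated directly in a toy setting. In the sketch below, the conditionally typical projectors are replaced by rank-one projectors onto nearly orthogonal codeword states, and the average typical projector is taken to be the identity; this is only a scaled-down illustration of the formula, not the actual decoder of the proof:

```python
# Evaluate Tr{ Pi_m hatPi_{m-1} ... hatPi_1 rho hatPi_1 ... hatPi_{m-1} Pi_m }
# for M = 2 toy "codeword" projectors, with hatPi = I - Pi.
import numpy as np

theta = 0.1                                    # small overlap between codewords
k1 = np.array([1.0, 0.0])
k2 = np.array([np.sin(theta), np.cos(theta)])  # almost orthogonal to k1
Pi = [np.outer(k1, k1), np.outer(k2, k2)]      # "is it codeword m?" projectors
I2 = np.eye(2)

def p_success(m, rho):
    """Success probability of the sequential decoder for message m (0-indexed)."""
    op = I2
    for i in range(m):                         # apply hatPi_1, ..., hatPi_{m-1}
        op = (I2 - Pi[i]) @ op
    op = Pi[m] @ op                            # final "yes" test Pi_m
    return np.trace(op @ rho @ op.conj().T).real

print(p_success(0, np.outer(k1, k1)))          # 1.0: the first test fires
print(p_success(1, np.outer(k2, k2)))          # ~0.98 for theta = 0.1
```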

Instead of analyzing the average error probability, we analyze the expectation of the average error probability, where the expectation is with respect to the random choice of code:

${\displaystyle 1-\mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\Pi _{\rho _{X^{n}\left(m\right)},\delta }{\hat {\Pi }}_{\rho _{X^{n}\left(m-1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{X^{n}\left(1\right)},\delta }\ \Pi _{\rho ,\delta }^{n}\ \rho _{X^{n}\left(m\right)}\ \Pi _{\rho ,\delta }^{n}\ {\hat {\Pi }}_{\rho _{X^{n}\left(1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{X^{n}\left(m-1\right)},\delta }\Pi _{\rho _{X^{n}\left(m\right)},\delta }\right\}\right\}.}$

(3)

Our first step is to apply Sen's bound to the above quantity. But before doing so, we should rewrite the above expression just slightly, by observing that

${\displaystyle 1=\mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\rho _{X^{n}\left(m\right)}\right\}\right\}}$
${\displaystyle =\mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\right\}+{\text{Tr}}\left\{{\hat {\Pi }}_{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\right\}\right\}}$
${\displaystyle =\mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}\right\}+{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{{\hat {\Pi }}_{\rho ,\delta }^{n}\mathbb {E} _{X^{n}}\left\{\rho _{X^{n}\left(m\right)}\right\}\right\}}$
${\displaystyle =\mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}\right\}+{\text{Tr}}\left\{{\hat {\Pi }}_{\rho ,\delta }^{n}\rho ^{\otimes n}\right\}}$
${\displaystyle \leq \mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}\right\}+\epsilon .}$

Substituting into (3) (and forgetting about the small ${\displaystyle \epsilon }$ term for now) gives an upper bound of

${\displaystyle \mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}\right\}}$
${\displaystyle -\mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\Pi _{\rho _{X^{n}\left(m\right)},\delta }{\hat {\Pi }}_{\rho _{X^{n}\left(m-1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{X^{n}\left(1\right)},\delta }\ \Pi _{\rho ,\delta }^{n}\ \rho _{X^{n}\left(m\right)}\ \Pi _{\rho ,\delta }^{n}\ {\hat {\Pi }}_{\rho _{X^{n}\left(1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{X^{n}\left(m-1\right)},\delta }\Pi _{\rho _{X^{n}\left(m\right)},\delta }\right\}\right\}.}$

We then apply Sen's bound to this expression with ${\displaystyle \sigma =\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}}$ and the sequential projectors as ${\displaystyle \Pi _{\rho _{X^{n}\left(m\right)},\delta }}$, ${\displaystyle {\hat {\Pi }}_{\rho _{X^{n}\left(m-1\right)},\delta }}$, ..., ${\displaystyle {\hat {\Pi }}_{\rho _{X^{n}\left(1\right)},\delta }}$. This gives the upper bound ${\displaystyle \mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}2\left[{\text{Tr}}\left\{\left(I-\Pi _{\rho _{X^{n}\left(m\right)},\delta }\right)\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}+\sum _{i=1}^{m-1}{\text{Tr}}\left\{\Pi _{\rho _{X^{n}\left(i\right)},\delta }\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}\right]^{1/2}\right\}.}$ Due to concavity of the square root, we can bound this expression from above by

${\displaystyle 2\left[\mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\left(I-\Pi _{\rho _{X^{n}\left(m\right)},\delta }\right)\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}+\sum _{i=1}^{m-1}{\text{Tr}}\left\{\Pi _{\rho _{X^{n}\left(i\right)},\delta }\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}\right\}\right]^{1/2}}$
${\displaystyle \leq 2\left[\mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\left(I-\Pi _{\rho _{X^{n}\left(m\right)},\delta }\right)\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}+\sum _{i\neq m}{\text{Tr}}\left\{\Pi _{\rho _{X^{n}\left(i\right)},\delta }\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}\right\}\right]^{1/2},}$

where the second bound follows by summing over all of the codewords not equal to the ${\displaystyle m^{\text{th}}}$ codeword (this sum can only be larger).

We now focus exclusively on showing that the term inside the square root can be made small. Consider the first term:

${\displaystyle \mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\left(I-\Pi _{\rho _{X^{n}\left(m\right)},\delta }\right)\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\}\right\}}$
${\displaystyle \leq \mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\left(I-\Pi _{\rho _{X^{n}\left(m\right)},\delta }\right)\rho _{X^{n}\left(m\right)}\right\}+\left\Vert \rho _{X^{n}\left(m\right)}-\Pi _{\rho ,\delta }^{n}\rho _{X^{n}\left(m\right)}\Pi _{\rho ,\delta }^{n}\right\Vert _{1}\right\}}$
${\displaystyle \leq \epsilon +2{\sqrt {\epsilon }},}$

where the first inequality follows from (1) and the second inequality follows from the gentle operator lemma and the properties of unconditional and conditional typicality. Consider now the second term and the following chain of inequalities:

${\displaystyle \sum _{i\neq m}\mathbb {E} _{X^{n}}\left\{{\text{Tr}}\left\{\Pi _{\rho _{X^{n}\left(i\right)},\delta }\ \Pi _{\rho ,\delta }^{n}\ \rho _{X^{n}\left(m\right)}\ \Pi _{\rho ,\delta }^{n}\right\}\right\}}$
${\displaystyle =\sum _{i\neq m}{\text{Tr}}\left\{\mathbb {E} _{X^{n}}\left\{\Pi _{\rho _{X^{n}\left(i\right)},\delta }\right\}\ \Pi _{\rho ,\delta }^{n}\ \mathbb {E} _{X^{n}}\left\{\rho _{X^{n}\left(m\right)}\right\}\ \Pi _{\rho ,\delta }^{n}\right\}}$
${\displaystyle =\sum _{i\neq m}{\text{Tr}}\left\{\mathbb {E} _{X^{n}}\left\{\Pi _{\rho _{X^{n}\left(i\right)},\delta }\right\}\ \Pi _{\rho ,\delta }^{n}\ \rho ^{\otimes n}\ \Pi _{\rho ,\delta }^{n}\right\}}$
${\displaystyle \leq \sum _{i\neq m}2^{-n\left[H\left(B\right)-\delta \right]}\ {\text{Tr}}\left\{\mathbb {E} _{X^{n}}\left\{\Pi _{\rho _{X^{n}\left(i\right)},\delta }\right\}\ \Pi _{\rho ,\delta }^{n}\right\}}$

The first equality follows because the codewords ${\displaystyle X^{n}\left(m\right)}$ and ${\displaystyle X^{n}\left(i\right)}$ are chosen independently for ${\displaystyle i\neq m}$. The second equality follows from (2). The first inequality follows from the equipartition property of the typical subspace: ${\displaystyle \Pi _{\rho ,\delta }^{n}\ \rho ^{\otimes n}\ \Pi _{\rho ,\delta }^{n}\leq 2^{-n\left[H\left(B\right)-\delta \right]}\Pi _{\rho ,\delta }^{n}}$. Continuing, we have

${\displaystyle \leq \sum _{i\neq m}2^{-n\left[H\left(B\right)-\delta \right]}\ \mathbb {E} _{X^{n}}\left\{{\text{Tr}}\left\{\Pi _{\rho _{X^{n}\left(i\right)},\delta }\right\}\right\}}$
${\displaystyle \leq \sum _{i\neq m}2^{-n\left[H\left(B\right)-\delta \right]}\ 2^{n\left[H\left(B|X\right)+\delta \right]}}$
${\displaystyle =\sum _{i\neq m}2^{-n\left[I\left(X;B\right)-2\delta \right]}}$
${\displaystyle \leq M\ 2^{-n\left[I\left(X;B\right)-2\delta \right]}.}$

The first inequality follows from ${\displaystyle \Pi _{\rho ,\delta }^{n}\leq I}$ and exchanging the trace with the expectation. The second inequality follows from the bound on the dimension of the conditionally typical subspace, ${\displaystyle {\text{Tr}}\left\{\Pi _{\rho _{x^{n}},\delta }\right\}\leq 2^{n\left[H\left(B|X\right)+\delta \right]}}$. The next two steps are straightforward.

Putting everything together, we get our final bound on the expectation of the average error probability:

${\displaystyle 1-\mathbb {E} _{X^{n}}\left\{{\frac {1}{M}}\sum _{m}{\text{Tr}}\left\{\Pi _{\rho _{X^{n}\left(m\right)},\delta }{\hat {\Pi }}_{\rho _{X^{n}\left(m-1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{X^{n}\left(1\right)},\delta }\ \Pi _{\rho ,\delta }^{n}\ \rho _{X^{n}\left(m\right)}\ \Pi _{\rho ,\delta }^{n}\ {\hat {\Pi }}_{\rho _{X^{n}\left(1\right)},\delta }\cdots {\hat {\Pi }}_{\rho _{X^{n}\left(m-1\right)},\delta }\Pi _{\rho _{X^{n}\left(m\right)},\delta }\right\}\right\}}$
${\displaystyle \leq \epsilon +2\left[\left(\epsilon +2{\sqrt {\epsilon }}\right)+M\ 2^{-n\left[I\left(X;B\right)-2\delta \right]}\right]^{1/2}.}$

Thus, as long as we choose ${\displaystyle M=2^{n\left[I\left(X;B\right)-3\delta \right]}}$, the expectation of the average error probability vanishes as ${\displaystyle n}$ becomes large, and so there exists a particular code with vanishing average error probability.
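For a concrete sense of the achievable rate, the following sketch evaluates the Holevo information ${\displaystyle I\left(X;B\right)=H\left(B\right)-H\left(B|X\right)}$ for the toy cq channel with outputs ${\displaystyle \left\vert 0\right\rangle }$ and ${\displaystyle \left\vert +\right\rangle }$ under a uniform prior (an illustrative ensemble, not necessarily the maximizing one):

```python
# I(X;B) = H(rho) - sum_x p(x) H(rho_x); the conditional term vanishes here
# because the channel outputs are pure states.
import numpy as np

def H(rho):
    """von Neumann entropy in bits."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
ensemble = [(0.5, np.outer(ket0, ket0)), (0.5, np.outer(ketp, ketp))]

rho = sum(px * s for px, s in ensemble)            # average output state
holevo = H(rho) - sum(px * H(s) for px, s in ensemble)
print(round(holevo, 3))                            # ~0.601 bits per channel use
```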