The quadratic knapsack problem (QKP), first introduced in 1975,[1] is an extension of the knapsack problem that allows for quadratic terms in the objective function: given a set of items, each with a weight, a value, and an extra profit earned if two particular items are both selected, determine which items to include in a collection so that the total weight does not exceed the capacity of the knapsack and the overall profit is maximized. Usually, quadratic knapsack problems restrict the number of copies of each kind of item to either 0 or 1. This special type of QKP is the 0-1 quadratic knapsack problem, first discussed by Gallo et al.[2] The 0-1 quadratic knapsack problem is a variation of the knapsack problem, combining features of the unbounded knapsack problem, the 0-1 knapsack problem, and the quadratic knapsack problem.

## Definition

Specifically, the 0–1 quadratic knapsack problem has the following form:

${\displaystyle {\text{maximize }}\left\{\sum _{i=1}^{n}p_{i}x_{i}+\sum _{i=1}^{n}\sum _{j=1,i\neq j}^{n}P_{ij}x_{i}x_{j}:x\in X,x{\text{ binary}}\right\}}$
${\displaystyle {\text{subject to }}X\equiv \left\{x\in \{0,1\}^{n}:\sum _{i=1}^{n}w_{i}x_{i}\leq W\right\}.}$

Here the binary variable ${\displaystyle x_{i}}$ indicates whether item i is included in the knapsack, ${\displaystyle p_{i}}$ is the profit earned by selecting item i, and ${\displaystyle P_{ij}}$ is the additional profit achieved if both items i and j are included.

Informally, the problem is to maximize the sum of the values of the items in the knapsack so that the sum of the weights is less than or equal to the knapsack's capacity.
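For concreteness, the objective and the capacity constraint can be evaluated directly; the following Python sketch uses a small invented instance (all numbers are illustrative):

```python
# Toy 0-1 QKP instance; all numbers are invented for illustration.
p = [4, 5, 3]        # linear profits p_i
P = [[0, 2, 0],      # pairwise profits P_ij (only upper entries nonzero here)
     [0, 0, 6],
     [0, 0, 0]]
w = [2, 3, 4]        # item weights w_i
W = 6                # knapsack capacity

def objective(x):
    """Total profit of a selection vector x with x[i] in {0, 1}."""
    n = len(x)
    value = sum(p[i] * x[i] for i in range(n))
    value += sum(P[i][j] * x[i] * x[j]
                 for i in range(n) for j in range(n) if i != j)
    return value

def feasible(x):
    """True when the selected items fit in the knapsack."""
    return sum(w[i] * x[i] for i in range(len(x))) <= W

x = [1, 1, 0]                       # select items 0 and 1
print(feasible(x), objective(x))    # True 11  (weight 5 <= 6, profit 4+5+2)
```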

## Application

As one might expect, the QKP has a wide range of applications, including telecommunications, transportation networks, computer science and economics. In fact, Witzgall first discussed the QKP in the context of selecting sites for satellite stations so as to maximize global traffic subject to a budget constraint. A similar model applies to problems such as locating airports, railway stations, or freight handling terminals.[3] Applications of the QKP in computer science became more common later on: the compiler design problem,[4] the clique problem,[5][6] and very large scale integration (VLSI) design.[7] Additionally, pricing problems appear to be an application of the QKP, as described by Johnson et al.[8]

## Computational complexity

In general, the decision version of the knapsack problem (Can a value of at least V be achieved without exceeding a given capacity W?) is NP-complete.[9] Thus, a given solution can be verified in polynomial time, while no known algorithm finds a solution in polynomial time.

The optimization knapsack problem is NP-hard and there is no known algorithm that can solve the problem in polynomial time.

As a particular variation of the knapsack problem, the 0-1 quadratic knapsack problem is also NP-hard.

While no efficient exact algorithm is available in the literature, there is a pseudo-polynomial time algorithm based on dynamic programming, as well as heuristic algorithms that reliably generate "good" solutions.

## Solving

While the knapsack problem is one of the most commonly solved operations research (OR) problems, efficient algorithms for the 0-1 quadratic knapsack problem remain limited. Available approaches include, but are not limited to, brute force, linearization,[10] and convex reformulation. As with other NP-hard problems, it is often enough to find a workable solution, even if it is not necessarily optimal. Heuristics based on greedy strategies and dynamic programming can give a relatively "good" solution to the 0-1 QKP efficiently.

### Brute force

The brute force algorithm to solve this problem is to identify all possible subsets of the items without exceeding the capacity and select the one with the optimal value. The pseudo-code is provided as follows:

    // Input:
    // Profits (stored in array p)
    // Quadratic profits (stored in matrix P)
    // Weights (stored in array w)
    // Number of items (n)
    // Knapsack capacity (W)
    
    int max = 0
    for all subsets S of the n items do
        int value = 0, weight = 0
        for i from 0 to S.size-1 do
            value = value + p[S[i]]
            weight = weight + w[S[i]]
            for j from i+1 to S.size-1 do
                value = value + P[S[i]][S[j]] + P[S[j]][S[i]]
        if weight <= W then
            if value > max then
                max = value


Given n items, there are at most ${\displaystyle 2^{n}}$ subsets, and for each candidate subset the value earned can be computed in ${\displaystyle O(n^{2})}$ time. Thus, the running time of the brute force algorithm is ${\displaystyle O(2^{n}n^{2})}$, which is exponential.
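The same exhaustive search can be written as a short, runnable Python sketch (the function name and the use of `itertools` are illustrative choices, not part of the pseudo-code above):

```python
from itertools import combinations

def brute_force_qkp(p, P, w, W):
    """Enumerate every subset of items and return the best feasible value."""
    n = len(p)
    best = 0
    for size in range(n + 1):
        for S in combinations(range(n), size):
            if sum(w[i] for i in S) > W:
                continue                      # infeasible subset
            value = sum(p[i] for i in S)
            # Each unordered pair {i, j} contributes P[i][j] + P[j][i].
            for a in range(len(S)):
                for b in range(a + 1, len(S)):
                    i, j = S[a], S[b]
                    value += P[i][j] + P[j][i]
            best = max(best, value)
    return best
```

On a tiny invented instance such as `p = [4, 5, 3]`, `w = [2, 3, 4]`, `W = 6`, with `P[0][1] = 2` and `P[1][2] = 6` and all other pairwise profits zero, the search returns 11, corresponding to selecting items 0 and 1.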

### Linearization

Problems of this form are difficult to solve directly using standard solvers, so they are commonly reformulated as linear programs using auxiliary variables and constraints, which can then be handled readily by commercial packages. Two well-known linearization approaches for the 0-1 QKP are the standard linearization and Glover's linearization.[11][12][13]

#### Standard linearization

The first one is the standard linearization strategy, as shown below:

LP1: maximize
${\displaystyle \sum _{i=1}^{n}p_{i}x_{i}+\sum _{i=1}^{n-1}\sum _{j=i+1}^{n}(P_{ij}+P_{ji})z_{ij}.}$
subject to
${\displaystyle z_{ij}\leq x_{i}}$ for all ${\displaystyle (i,j),i<j}$
${\displaystyle z_{ij}\leq x_{j}}$ for all ${\displaystyle (i,j),i<j}$
${\displaystyle x_{i}+x_{j}-1\leq z_{ij}}$ for all ${\displaystyle (i,j),i<j}$
${\displaystyle z_{ij}\geq 0}$ for all ${\displaystyle (i,j),i<j}$
${\displaystyle x\in X,x{\text{ binary}}}$

In the formulation LP1, each product ${\displaystyle x_{i}x_{j}}$ has been replaced by a continuous variable ${\displaystyle z_{ij}}$. This reformulates the QKP as a mixed 0-1 linear program, which can then be solved to optimality using standard solvers.
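The correctness of this substitution can be checked exhaustively on a small instance: for every binary x, setting each ${\displaystyle z_{ij}=x_{i}x_{j}}$ satisfies all constraints of LP1, and the linearized quadratic term agrees with the original one. The sketch below (invented data, illustrative names) performs this check:

```python
from itertools import product

def check_standard_linearization(P, x):
    """Verify that z[i][j] = x[i]*x[j] satisfies the LP1 constraints and that
    the linearized quadratic term equals the original one (pairs with i < j)."""
    n = len(x)
    quad = sum(P[i][j] * x[i] * x[j]
               for i in range(n) for j in range(n) if i != j)
    linear = 0
    for i in range(n):
        for j in range(i + 1, n):
            z = x[i] * x[j]
            assert z <= x[i] and z <= x[j]   # z_ij <= x_i, z_ij <= x_j
            assert x[i] + x[j] - 1 <= z      # x_i + x_j - 1 <= z_ij
            assert z >= 0                    # z_ij >= 0
            linear += (P[i][j] + P[j][i]) * z
    return quad == linear

P = [[0, 2, 0], [3, 0, 6], [0, 1, 0]]        # invented pairwise profits
print(all(check_standard_linearization(P, list(x))
          for x in product([0, 1], repeat=3)))   # True
```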

#### Glover's linearization

The second reformulation, which is more concise, is called Glover’s linearization.[14][15][16] The Glover formulation is shown below, where Li and Ui are lower and upper bounds on ${\displaystyle \sum _{j=1,i\neq j}^{n}P_{ij}x_{j}}$, respectively:

LP2: maximize
${\displaystyle \sum _{i=1}^{n}p_{i}x_{i}+\sum _{i=1}^{n}z_{i}}$
subject to
${\displaystyle L_{i}x_{i}\leq z_{i}\leq U_{i}x_{i}}$ for ${\displaystyle i=1,\ldots ,n}$
${\displaystyle \sum _{j=1,i\neq j}^{n}P_{ij}x_{j}-U_{i}(1-x_{i})\leq z_{i}\leq \sum _{j=1,i\neq j}^{n}P_{ij}x_{j}-L_{i}(1-x_{i})}$ for ${\displaystyle i=1,\ldots ,n}$
${\displaystyle x\in X,x}$ binary

In the formulation LP2, we have replaced the expression ${\displaystyle \sum _{j=1,i\neq j}^{n}P_{ij}x_{i}x_{j}}$ with a continuous variable zi. Similarly, we can use standard solvers to solve the linearization problem. Note that Glover’s linearization only includes ${\displaystyle n}$ auxiliary variables with ${\displaystyle 2n}$ constraints while standard linearization requires ${\displaystyle {n \choose 2}}$ auxiliary variables and ${\displaystyle 3{n \choose 2}}$ constraints to achieve linearity.
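A quick way to see why LP2 works is to check, on a small invented instance, that the intended value ${\displaystyle z_{i}=x_{i}\sum _{j\neq i}P_{ij}x_{j}}$ satisfies both constraint pairs. The sketch below takes ${\displaystyle L_{i}}$ and ${\displaystyle U_{i}}$ as the sums of the negative and positive coefficients, respectively, which is a valid if naive choice of bounds; the helper names are illustrative:

```python
from itertools import product

def glover_bounds(P, i):
    """Naive lower/upper bounds on sum_{j != i} P[i][j]*x[j] over binary x."""
    L = sum(min(P[i][j], 0) for j in range(len(P)) if j != i)
    U = sum(max(P[i][j], 0) for j in range(len(P)) if j != i)
    return L, U

def check_glover(P, x):
    """Verify both constraint pairs of LP2 at z_i = x_i * sum_j P_ij x_j."""
    n = len(x)
    for i in range(n):
        L, U = glover_bounds(P, i)
        s = sum(P[i][j] * x[j] for j in range(n) if j != i)
        z = x[i] * s                                  # intended value of z_i
        assert L * x[i] <= z <= U * x[i]              # first constraint pair
        assert s - U * (1 - x[i]) <= z <= s - L * (1 - x[i])  # second pair
    return True

P = [[0, 2, -1], [3, 0, 6], [-2, 1, 0]]               # invented profits
print(all(check_glover(P, list(x)) for x in product([0, 1], repeat=3)))  # True
```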

### Convex reformulation

Note that nonlinear programs are hard to solve because of the possibility of getting stuck at a local maximum. However, when the program is convex, any local maximum is the global maximum. A convex program is one that maximizes a concave function or minimizes a convex function over a convex set. A set S is convex if for all ${\displaystyle u,v\in S}$ and ${\displaystyle \lambda \in [0,1]}$, ${\displaystyle \lambda u+(1-\lambda )v\in S}$. That is, any point on a line segment between two points of the set must also be an element of the set. A function f is concave if ${\displaystyle f(\lambda u+(1-\lambda )v)\geq \lambda f(u)+(1-\lambda )f(v)}$, and convex if ${\displaystyle f(\lambda u+(1-\lambda )v)\leq \lambda f(u)+(1-\lambda )f(v)}$. Informally, a function is concave if the line segment connecting two points on the graph lies on or below the graph, and convex if it lies on or above the graph. Thus, by rewriting the objective function as an equivalent convex function, the program can be reformulated as a convex one, which can be solved using optimization packages.

The objective function can be written as ${\displaystyle p^{T}x+x^{T}Px}$ in linear algebra notation. To obtain a convex reformulation, the quadratic form must be made positive semi-definite. Since ${\displaystyle x_{i}^{2}-x_{i}=0}$ for binary ${\displaystyle x_{i}}$, the objective can be modified to ${\displaystyle p^{T}x+x^{T}Px+\sum _{i=1}^{n}\left(\sum _{j=1,j\neq i}^{n}|P_{ij}|\right)(x_{i}^{2}-x_{i})}$ without changing its value on feasible points; by standard results from linear algebra, the resulting quadratic matrix is diagonally dominant and thus positive semi-definite. This reformulation can be solved using a standard commercial mixed-integer quadratic package.[17]
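Both facts used here admit a short check in Python (a sketch; the matrix below is invented): the diagonal shift by the absolute off-diagonal row sums makes the quadratic matrix diagonally dominant with nonnegative diagonal, hence positive semi-definite by Gershgorin's circle theorem, and the added term vanishes on every binary vector.

```python
def convexified_diagonal(P):
    """Diagonal shift d_i = sum_{j != i} |P_ij| used in the reformulation."""
    n = len(P)
    return [sum(abs(P[i][j]) for j in range(n) if j != i) for i in range(n)]

P = [[0, 2, -1], [2, 0, 6], [-1, 6, 0]]   # symmetric, zero diagonal (invented)
d = convexified_diagonal(P)

# Gershgorin: every eigenvalue of the shifted matrix lies in some interval
# [d_i - r_i, d_i + r_i] with row radius r_i = sum_{j != i} |P_ij| = d_i,
# so all eigenvalues are nonnegative and the matrix is PSD.
assert all(d[i] >= sum(abs(P[i][j]) for j in range(len(P)) if j != i)
           for i in range(len(P)))

# On binary x the added term sum_i d_i (x_i^2 - x_i) is identically zero,
# so the reformulated objective agrees with the original on feasible points.
for x in ([0, 0, 0], [1, 0, 1], [1, 1, 1]):
    assert sum(d[i] * (x[i] ** 2 - x[i]) for i in range(len(x))) == 0
print("ok")
```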

### Greedy heuristic algorithm

George Dantzig[18] proposed a greedy approximation algorithm for the unbounded knapsack problem which can also be used to solve the 0-1 QKP. The algorithm consists of two phases: identify an initial solution, then improve it.

First compute, for each item, the total objective contribution realizable by selecting it, ${\displaystyle p_{i}+\sum _{j\neq i}P_{ij}}$, and sort the items in decreasing order of potential value per unit of weight, ${\displaystyle (p_{i}+\sum _{j\neq i}P_{ij})/w_{i}}$. Then select items with the maximal value-to-weight ratio into the knapsack until there is no space for more; this forms the initial solution. Starting from the initial solution, the improvement phase proceeds by pairwise exchange: for each item in the solution set, identify the items not in the set for which swapping improves the objective, then perform the swap with the maximal improvement. It may also happen that removing one item from the set, or adding one item to it, produces the greatest improvement. Repeat until no improving exchange remains. The complexity class of this algorithm is ${\displaystyle O(2^{n})}$, since in the worst case every possible combination of items will be examined.
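The constructive phase described above can be sketched in Python as follows (the pairwise-exchange improvement phase is omitted for brevity; names are illustrative):

```python
def greedy_qkp(p, P, w, W):
    """Greedy construction for the 0-1 QKP: sort items by potential
    value-to-weight ratio, then fill the knapsack in that order.
    (Constructive phase only; the exchange phase is omitted.)"""
    n = len(p)
    # Potential contribution of item i if everything else were also chosen.
    contrib = [p[i] + sum(P[i][j] for j in range(n) if j != i)
               for i in range(n)]
    order = sorted(range(n), key=lambda i: contrib[i] / w[i], reverse=True)
    chosen, weight = [], 0
    for i in order:
        if weight + w[i] <= W:
            chosen.append(i)
            weight += w[i]
    # Evaluate the actual objective: pairs count via both P[i][j] and P[j][i].
    value = sum(p[i] for i in chosen)
    value += sum(P[i][j] for i in chosen for j in chosen if i != j)
    return sorted(chosen), value
```

Note the ratio uses the optimistic contribution (as if every partner item were selected), so the initial solution can overestimate an item's usefulness; this is exactly what the exchange phase is meant to correct.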

### Quadknap

Quadknap is an exact branch-and-bound algorithm proposed by Caprara et al.,[19] where upper bounds are computed from a Lagrangian relaxation, which approximates a difficult problem by a simpler one and penalizes violations of constraints via Lagrange multipliers that impose a cost on violations. Quadknap drops the integrality requirement when computing the upper bounds. Suboptimal Lagrangian multipliers are derived from sub-gradient optimization and provide a convenient reformulation of the problem. This algorithm is quite efficient since the Lagrangian multipliers are stable, and suitable data structures are adopted to compute a tight upper bound in expected time linear in the number of variables. This algorithm was reported to generate exact solutions of instances with up to 400 binary variables, i.e., significantly larger than those solvable by other approaches. The code was written in C and is available online.[20]

### Dynamic programming heuristic

While dynamic programming can generate optimal solutions to knapsack problems, dynamic programming approaches for the QKP[21] can only yield a relatively good-quality solution, which serves as a lower bound on the optimal objective value. While it runs in pseudo-polynomial time, it has a large memory requirement.

#### Dynamic programming algorithm

For simplicity, assume all weights are non-negative. The objective is to maximize the total value subject to the constraint that the total weight is at most W. Then for each ${\displaystyle w\leq W}$, define ${\displaystyle f(m,w)}$ to be the value of the most profitable packing of the first m items found with total weight w. That is, let

${\displaystyle f(m,w)=\max \left\{\sum _{i=1}^{m}p_{i}x_{i}+\sum _{i=1}^{m}\sum _{j=1,i\neq j}^{m}P_{ij}x_{i}x_{j}:\sum _{i=1}^{m}w_{i}x_{i}\leq w,\;x_{i}\in \{0,1\}{\text{ for }}1\leq i\leq m\right\}.}$

Then ${\displaystyle f(n,W)}$ is the solution to the problem. Note that in dynamic programming, the solution to a problem arises from the solutions to its smaller sub-problems. In this particular case, start with the first item and try to find a better packing by considering adding items with an available capacity of ${\displaystyle w}$. If the weight of the item to be added exceeds ${\displaystyle w}$, then ${\displaystyle f(m,w)}$ is the same as ${\displaystyle f(m-1,w)}$. If the item has weight no larger than ${\displaystyle w}$, then ${\displaystyle f(m,w)}$ is either the same as ${\displaystyle f(m-1,w)}$, if adding the item makes no contribution, or the solution for a knapsack with capacity reduced by the weight of that item plus the value contributed by adding it, i.e. ${\displaystyle f(m-1,w-w_{m})+p_{m}+\sum _{i=1}^{m-1}P_{im}x_{i}}$. To conclude, we have that

${\displaystyle f(m,w)={\begin{cases}\max \left\{f(m-1,w),\,f(m-1,w-w_{m})+p_{m}+\sum _{i=1}^{m-1}P_{im}x_{i}\right\}&{\text{if }}w_{m}\leq w\\f(m-1,w)&{\text{otherwise}}\end{cases}}}$

Note on the efficiency class: clearly the running time of this algorithm is ${\displaystyle O(Wn^{2})}$, based on the nested loop and the computation of the profit of each new packing. This does not contradict the fact that the QKP is NP-hard, since W is not polynomial in the length of the input.
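The recursion can be implemented directly as a sketch in Python. Since the objective counts each pair through both ${\displaystyle P_{ij}}$ and ${\displaystyle P_{ji}}$, the code below adds both entries when an item joins a stored packing (names are illustrative):

```python
def dp_qkp(p, P, w, W):
    """DP heuristic for the 0-1 QKP (illustrative implementation).
    f[m][c] holds (value, packing) of the best packing found so far
    using the first m items with capacity c."""
    n = len(p)
    f = [[(0, [])] * (W + 1) for _ in range(n + 1)]
    for m in range(1, n + 1):
        i = m - 1                                # 0-based index of item m
        for c in range(W + 1):
            value, packing = f[m - 1][c]         # option 1: skip item m
            if w[i] <= c:                        # option 2: add item m
                prev_val, prev_pack = f[m - 1][c - w[i]]
                gain = p[i] + sum(P[k][i] + P[i][k] for k in prev_pack)
                if prev_val + gain > value:
                    value, packing = prev_val + gain, prev_pack + [i]
            f[m][c] = (value, packing)
    return f[n][W]                               # best value and its packing
```

Because the gain of adding an item depends on the packing stored at each state, the recursion is a heuristic: a different stored packing of the same weight could interact better with later items.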

#### Revised dynamic programming algorithm

Note that the previous algorithm requires ${\displaystyle O(Wn^{2})}$ space for storing the current packing of items for all m,w, which may not be able to handle large-size problems. In fact, this can be easily improved by dropping the index m from ${\displaystyle f(m,w)}$ since all the computations depend only on the results from the preceding stage.

Redefine ${\displaystyle f(w)}$ to be the current value of the most profitable packing found by the heuristic. That is,

${\displaystyle f(w)=\max \left\{\sum _{i=1}^{m}p_{i}x_{i}+\sum _{i=1}^{m}\sum _{j=1,i\neq j}^{m}P_{ij}x_{i}x_{j}:\sum _{i=1}^{m}w_{i}x_{i}\leq w,\;m\leq n\right\}.}$

Accordingly, by dynamic programming we have that

${\displaystyle f(w)={\begin{cases}\max \left\{f(w),\,f(w-w_{m})+p_{m}+\sum _{i=1}^{m-1}P_{im}x_{i}\right\}&{\text{if }}w_{m}\leq w,\\f(w)&{\text{otherwise.}}\end{cases}}}$

Note this revised algorithm still runs in ${\displaystyle O(Wn^{2})}$ while only taking up ${\displaystyle O(Wn)}$ memory compared to the previous ${\displaystyle O(Wn^{2})}$.
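A sketch of the revised scheme keeps a single array indexed by weight and iterates capacities in decreasing order, so that each update reads only entries left over from the previous stage (illustrative names, same pair-counting convention as before):

```python
def dp_qkp_1d(p, P, w, W):
    """Space-reduced DP heuristic: one (value, packing) entry per capacity,
    overwritten in place; capacities run downward so each update only sees
    values from the previous stage (illustrative implementation)."""
    f = [(0, []) for _ in range(W + 1)]
    for i in range(len(p)):
        for c in range(W, w[i] - 1, -1):     # descending capacities
            prev_val, prev_pack = f[c - w[i]]
            gain = p[i] + sum(P[k][i] + P[i][k] for k in prev_pack)
            if prev_val + gain > f[c][0]:
                f[c] = (prev_val + gain, prev_pack + [i])
    return f[W]
```

The descending loop over capacities is the standard trick for dropping the item index from a knapsack recurrence: writing `f[c]` before reading `f[c - w[i]]` would let an item be counted twice.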

## Related research

Researchers have studied the 0-1 quadratic knapsack problem for decades. One focus is to find effective algorithms or effective heuristics, especially those with outstanding performance on real-world problems. The relationship between the decision version and the optimization version of the 0-1 QKP should not be ignored when working with either one. On one hand, if the decision problem can be solved in polynomial time, then the optimal solution can be found by applying this algorithm iteratively. On the other hand, if there exists an algorithm that can solve the optimization problem efficiently, then it can be used to solve the decision problem by comparing the input with the optimal value.

Another theme in the literature is identifying the "hard" problems. Researchers who study the 0-1 QKP often perform computational studies[22] to demonstrate the superiority of their strategies. Such studies can also be conducted to assess the performance of different solution methods. For the 0-1 QKP, these computational studies often rely on randomly generated data, introduced by Gallo et al. Essentially every computational study of the 0-1 QKP uses data generated as follows: the weights are integers taken from a uniform distribution over the interval [1, 50], and the capacity is an integer taken from a uniform distribution between 50 and the sum of the item weights; the objective coefficients, i.e. the values, are randomly chosen from [1, 100]. It has been observed that generating instances of this form yields problems with highly variable and unpredictable difficulty, so the computational studies presented in the literature may be unsound. Thus, some research aims to develop a methodology for generating instances of the 0-1 QKP with a predictable and consistent level of difficulty.

## Notes

1. C., Witzgall (1975). "Mathematical methods of site selection for Electronic Message Systems (EMS)". NBS Internal Report. doi:10.6028/nbs.ir.75-737.
2. Gallo, G.; Hammer, P.L.; Simeone, B. (1980). Quadratic knapsack problems. Mathematical Programming Studies Combinatorial Optimization. Mathematical Programming Studies. 12. pp. 132–149. doi:10.1007/bfb0120892. ISBN 978-3-642-00801-6.
3. Rhys, J.M.W. (1970). "A Selection Problem of Shared Fixed Costs and Network Flows". Management Science. 17 (3): 200–207. doi:10.1287/mnsc.17.3.200.
4. Helmberg, C.; Rendl, F.; Weismantel, R. (1996). Quadratic knapsack relaxations using cutting planes and semidefinite programming. Integer Programming and Combinatorial Optimization Lecture Notes in Computer Science. Lecture Notes in Computer Science. 1084. pp. 175–189. doi:10.1007/3-540-61310-2_14. ISBN 978-3-540-61310-7.
5. Dijkhuizen, G.; Faigle, U. (1993). "A cutting-plane approach to the edge-weighted maximal clique problem". European Journal of Operational Research. 69 (1): 121–130. doi:10.1016/0377-2217(93)90097-7.
6. Park, Kyungchul; Lee, Kyungsik; Park, Sungsoo (1996). "An extended formulation approach to the edge-weighted maximal clique problem". European Journal of Operational Research. 95 (3): 671–682. doi:10.1016/0377-2217(95)00299-5.
7. Ferreira, C.E.; Martin, A.; Souza, C.C.De; Weismantel, R.; Wolsey, L.A. (1996). "Formulations and valid inequalities for the node capacitated graph partitioning problem". Mathematical Programming. 74 (3): 247–266. doi:10.1007/bf02592198.
8. Johnson, Ellis L.; Mehrotra, Anuj; Nemhauser, George L. (1993). "Min-cut clustering". Mathematical Programming. 62 (1–3): 133–151. doi:10.1007/bf01585164.
9. Garey, Michael R.; Johnson, David S. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. New York: Freeman and Co.
10. Adams, Warren P.; Sherali, Hanif D. (1986). "A Tight Linearization and an Algorithm for Zero-One Quadratic Programming Problems". Management Science. 32 (10): 1274–1290. doi:10.1287/mnsc.32.10.1274.
11. Adams, Warren P.; Forrester, Richard J.; Glover, Fred W. (2004). "Comparisons and enhancement strategies for linearizing mixed 0-1 quadratic programs". Discrete Optimization. 1 (2): 99–120. doi:10.1016/j.disopt.2004.03.006.
12. Adams, Warren P.; Forrester, Richard J. (2005). "A simple recipe for concise mixed 0-1 linearizations". Operations Research Letters. 33 (1): 55–61. doi:10.1016/j.orl.2004.05.001.
13. Adams, Warren P.; Forrester, Richard J. (2007). "Linear forms of nonlinear expressions: New insights on old ideas". Operations Research Letters. 35 (4): 510–518. doi:10.1016/j.orl.2006.08.008.
14. Glover, Fred; Woolsey, Eugene (1974). "Technical Note—Converting the 0-1 Polynomial Programming Problem to a 0-1 Linear Program". Operations Research. 22 (1): 180–182. doi:10.1287/opre.22.1.180.
15. Glover, Fred (1975). "Improved Linear Integer Programming Formulations of Nonlinear Integer Problems". Management Science. 22 (4): 455–460. doi:10.1287/mnsc.22.4.455.
16. Glover, Fred; Woolsey, Eugene (1973). "Further Reduction of Zero-One Polynomial Programming Problems to Zero-One linear Programming Problems". Operations Research. 21 (1): 156–161. doi:10.1287/opre.21.1.156.
17. Bliek, Christian; Bonami, Pierre; Lodi, Andrea (2014). "Solving Mixed-Integer Quadratic Programming problems with IBM-CPLEX: a progress report" (PDF).
18. Dantzig, George B. (1957). "Discrete-Variable Extremum Problems". Operations Research. 5 (2): 266–288. doi:10.1287/opre.5.2.266.
19. Caprara, Alberto; Pisinger, David; Toth, Paolo (1999). "Exact Solution of the Quadratic Knapsack Problem". INFORMS Journal on Computing. 11 (2): 125–137. CiteSeerX 10.1.1.22.2818. doi:10.1287/ijoc.11.2.125.