In economics and game theory, complete information is an economic situation or game in which knowledge about other market participants or players is available to all participants. The utility functions (including risk aversion), payoffs, strategies and "types" of players are thus common knowledge. Under complete information, each player is aware of the sequence of moves, the available strategies, and the payoffs throughout gameplay, and can plan accordingly to maximize their utility by the end of the game.
Conversely, in a game with incomplete information, players do not possess full information about their opponents. Some players possess private information, a fact the others must take into account when forming expectations about how those players will behave. A typical example is an auction: each player knows their own utility function (valuation for the item) but not the utility functions of the other players.
Games of incomplete information arise frequently in social science. For instance, John Harsanyi was motivated by consideration of arms control negotiations, where the players may be uncertain both of the capabilities of their opponents and of their desires and beliefs.
It is often assumed that the players have some statistical information about the other players, e.g. in an auction, each player knows that the valuations of the other players are drawn from some probability distribution. In this case, the game is called a Bayesian game.
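As a concrete sketch of such a Bayesian game, consider the textbook symmetric first-price sealed-bid auction with n risk-neutral bidders whose valuations are drawn independently from Uniform(0, 1); in that benchmark the symmetric equilibrium bid is b(v) = (n - 1)/n * v. The minimal Python script below (function names and the grid search are illustrative, not from any particular source) checks numerically that this bid maximizes a bidder's expected payoff when rivals follow the same rule:

```python
# First-price sealed-bid auction with n bidders, valuations iid Uniform(0, 1).
# If rivals bid b(v) = (n - 1)/n * v, each rival bid is Uniform(0, (n - 1)/n),
# so a bid b wins with probability (n * b / (n - 1)) ** (n - 1).

def expected_payoff(b, v, n):
    """Expected payoff from bidding b with valuation v against n-1 rivals."""
    win_prob = min(n * b / (n - 1), 1.0) ** (n - 1)
    return (v - b) * win_prob

def best_bid(v, n, steps=8001):
    """Grid search for the payoff-maximizing bid on [0, v]."""
    grid = [k * v / (steps - 1) for k in range(steps)]
    return max(grid, key=lambda b: expected_payoff(b, v, n))

if __name__ == "__main__":
    # With v = 0.8 and n = 3, the optimum is close to (n-1)/n * v = 8/15.
    print(best_bid(0.8, 3))
```

The grid search recovers the equilibrium shading factor (n - 1)/n: each bidder optimally bids below their true valuation, trading a lower price against a lower probability of winning.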
Depending on the type of game and the degree of information available, different solution methods are open to the players. In static games with complete information, Nash equilibrium is used to find viable strategies. In dynamic games with complete information, backward induction is the solution concept; it eliminates non-credible threats as potential strategies for players.
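For static games with complete information, pure-strategy Nash equilibria can be found by checking every strategy profile for mutual best responses. The sketch below uses the Prisoner's Dilemma as an example; the payoff numbers and function names are illustrative:

```python
# Pure-strategy Nash equilibria of a two-player bimatrix game.
# A[i][j] is the row player's payoff, B[i][j] the column player's,
# when row plays strategy i and column plays strategy j.

def pure_nash(A, B):
    """Return all strategy profiles (i, j) that are mutual best responses."""
    equilibria = []
    for i in range(len(A)):
        for j in range(len(A[0])):
            row_best = A[i][j] >= max(A[k][j] for k in range(len(A)))
            col_best = B[i][j] >= max(B[i][k] for k in range(len(B[0])))
            if row_best and col_best:
                equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma: strategy 0 = Cooperate, 1 = Defect.
A = [[-1, -3], [0, -2]]   # row player's payoffs
B = [[-1, 0], [-3, -2]]   # column player's payoffs

if __name__ == "__main__":
    print(pure_nash(A, B))  # mutual defection is the unique equilibrium
```

With these payoffs the only profile at which neither player can gain by deviating is (Defect, Defect), the familiar Prisoner's Dilemma outcome.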
A classic example of a dynamic game with complete information is Stackelberg's (1934) sequential-move version of Cournot duopoly. Other examples include Leontief's (1946) monopoly-union model and Rubinstein's bargaining model.
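The Stackelberg model is solved by backward induction: first derive the follower's best response to any leader quantity, then optimize the leader's choice given that response. A minimal numerical sketch, assuming linear inverse demand P = a - (q1 + q2) and a common constant marginal cost c (the parameter values below are illustrative):

```python
# Stackelberg duopoly solved by backward induction.
# Inverse demand P = a - (q1 + q2); both firms have marginal cost c.

def follower_best_response(q1, a, c):
    """Follower's profit-maximizing quantity given the leader's q1."""
    return max((a - c - q1) / 2, 0.0)

def solve_stackelberg(a, c, steps=120001):
    """Grid search over the leader's quantity, anticipating the follower."""
    best = None
    for k in range(steps):
        q1 = k * a / (steps - 1)
        q2 = follower_best_response(q1, a, c)
        price = a - q1 - q2
        profit1 = (price - c) * q1
        if best is None or profit1 > best[0]:
            best = (profit1, q1, q2)
    return best  # (leader profit, q1, q2)

if __name__ == "__main__":
    print(solve_stackelberg(a=12, c=0))
```

With a = 12 and c = 0 the grid search recovers the textbook solution q1 = (a - c)/2 = 6 and q2 = (a - c)/4 = 3: the leader exploits its first-mover advantage by producing more than the follower.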
Lastly, when complete information is unavailable (incomplete-information games), the relevant solution concept becomes Bayesian Nash equilibrium, since games with incomplete information are modeled as Bayesian games. In a game of complete information, the players' payoff functions are common knowledge, whereas in a game of incomplete information at least one player is uncertain about another player's payoff function.
The extensive form can be used to visualize the concept of complete information. By definition, players know where they are in the game, as depicted by the nodes, and the final outcomes, as illustrated by the utility payoffs. The players also understand each player's potential strategies and, as a result, their own best course of action to maximize their payoffs.
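Backward induction over a small extensive-form tree can be sketched directly in code. The hypothetical entry game below (the payoffs and labels are illustrative) also shows how the incumbent's non-credible threat to Fight is eliminated:

```python
# Backward induction on an extensive-form game tree.
# A node is either ("payoff", (u0, u1)) or ("decision", player, {action: child}).

def backward_induction(node):
    """Return (payoff profile, action path) chosen by rational players."""
    if node[0] == "payoff":
        return node[1], []
    _, player, actions = node
    best_payoffs, best_path, best_action = None, None, None
    for action, child in actions.items():
        payoffs, path = backward_induction(child)
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_payoffs, best_path, best_action = payoffs, path, action
    return best_payoffs, [best_action] + best_path

# Entry game: player 0 (entrant) moves first; player 1 (incumbent) responds.
game = ("decision", 0, {
    "Out": ("payoff", (0, 2)),
    "In": ("decision", 1, {
        "Fight": ("payoff", (-1, -1)),
        "Accommodate": ("payoff", (1, 1)),
    }),
})

if __name__ == "__main__":
    print(backward_induction(game))
```

Solving from the final nodes upward, the incumbent prefers Accommodate over Fight once entry has occurred, so the entrant, anticipating this, enters: the threat to Fight is not credible and drops out of the solution.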
Complete vs. Perfect Information
Complete information is importantly different from perfect information. In a game of complete information, the structure of the game and the payoff functions of the players are commonly known but players may not see all of the moves made by other players (for instance, the initial placement of ships in Battleship); there may also be a chance element (as in most card games). Conversely, in games of perfect information, every player observes other players' moves, but may lack some information on others' payoffs, or on the structure of the game. A game with complete information may or may not have perfect information, and vice versa.
- Examples of games with imperfect but complete information are card games, where each player's cards are hidden from other players but objectives are known, as in contract bridge and poker, if the outcomes are assumed to be binary (players can only win or lose in a zero-sum game). Games with complete information generally require one player to outwit the other by forcing them to make risky assumptions.
- Examples of games with incomplete but perfect information are conceptually more difficult to imagine, such as a Bayesian game. Chess is commonly cited to illustrate how the lack of certain information influences a game, even though chess itself is not a game of incomplete information: one can readily observe all of the opponent's moves and the strategies available to them, but never ascertain which strategy the opponent is following until it may prove disastrous. Games with perfect information generally require one player to outwit the other by making them misinterpret one's decisions.
- Levin, Jonathan (2002). "Games with Incomplete Information" (PDF). Retrieved 25 August 2016.
- Gibbons, Robert (1992). A Primer in Game Theory. Harvester-Wheatsheaf. p. 133.
- Osborne, M. J.; Rubinstein, A. (1994). "Chapter 6: Extensive Games with Perfect Information". A Course in Game Theory. Cambridge, MA: The MIT Press. ISBN 0-262-65040-1.
- Thomas, L. C. (2003). Games, Theory and Applications. Mineola, NY: Dover Publications. p. 19. ISBN 0-486-43237-8.
- Osborne, M. J.; Rubinstein, A. (1994). "Chapter 11: Extensive Games with Imperfect Information". A Course in Game Theory. Cambridge, MA: The MIT Press. ISBN 0-262-65040-1.
- Watson, J. (2015). Strategy: An Introduction to Game Theory. New York: W. W. Norton.
- Fudenberg, D.; Tirole, J. (1993). Game Theory. MIT Press. (See Chapter 6, Section 1.)
- Gibbons, R. (1992). A Primer in Game Theory. Harvester-Wheatsheaf. (See Chapter 3.)
- Frank, Ian; Basin, David (1998). "Search in games with incomplete information: a case study using Bridge card play". Artificial Intelligence. 100: 87–123.