Truth discovery problems can be divided into two sub-classes: single-truth and multi-truth. In the single-truth case, only one true value is allowed for a data item (e.g. the birthday of a person, or the capital city of a country), while in the multi-truth case multiple true values are allowed (e.g. the cast of a movie, or the authors of a book).
Typically, truth discovery is the last step of a data integration pipeline, when the schemas of different data sources have been unified and the records referring to the same data item have been detected.
The abundance of data available on the web makes it increasingly likely that different sources provide (partially or completely) different values for the same data item. This, together with our growing reliance on data to make important decisions, motivates the need for good truth discovery algorithms.
Many currently available methods rely on a voting strategy to determine the true value of a data item. Nevertheless, recent studies have shown that relying on majority voting alone can yield wrong results for up to 30% of the data items.
The solution to this problem is to assess the trustworthiness of the sources and give more importance to votes coming from trusted sources.
Ideally, supervised learning techniques could be exploited to assign a reliability score to sources after hand-labeling the values they provide; unfortunately, this is not feasible, since the number of labeled examples needed is proportional to the number of sources, which in many applications is prohibitively large.
Single-truth vs multi-truth discovery
Single-truth discovery is characterized by the following properties:
- only one true value is allowed for each data item;
- different values provided for a given data item oppose each other;
- values and sources can either be correct or erroneous.
In the multi-truth case, on the other hand, the following properties hold:
- the truth is composed of a set of values;
- different values can provide a partial truth;
- claiming one value for a given data item does not imply opposing all the other values;
- the number of true values for each data item is not known a priori.
The example below highlights the main differences between the two settings. Assuming that the truth is provided by source S1, under the single-truth interpretation sources S2 and S3 oppose the truth and therefore provide wrong values. Under the multi-truth interpretation, instead, S2 and S3 are neither correct nor erroneous: they provide a subset of the true values and at the same time do not oppose the truth, while S4 provides an erroneous value.
| Source | Data item | Provided values | Assessment |
|---|---|---|---|
| S1 | The nature of space and time | Stephen Hawking, Roger Penrose | Correct |
| S2 | The nature of space and time | Stephen Hawking | Partial truth |
| S3 | The nature of space and time | Roger Penrose | Partial truth |
| S4 | The nature of space and time | J. K. Rowling | Erroneous |
The vast majority of truth discovery methods are based on a voting approach: each source votes for a value of a certain data item and, at the end, the value with the highest vote is selected as the true one. In more sophisticated methods, votes do not carry the same weight for all data sources: more importance is given to votes coming from trusted sources.
Source trustworthiness is usually not known a priori but estimated with an iterative approach. At each step of the truth discovery algorithm, the trustworthiness score of each data source is refined, improving the assessment of the true values, which in turn leads to a better estimation of the trustworthiness of the sources. The process usually ends when all the values reach a convergence state.
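The alternation just described can be sketched as follows; the data format, uniform prior, and per-item normalisation are illustrative choices, not part of any specific published algorithm:

```python
def iterative_truth_discovery(claims, iterations=20):
    """Alternate between scoring values by the summed trustworthiness of
    their supporters and re-estimating each source's trustworthiness as
    the average score of the values it claims.

    `claims` maps source -> {data_item: value} (hypothetical format)."""
    trust = {s: 0.5 for s in claims}                     # uniform prior
    for _ in range(iterations):
        # Step 1: vote for each (item, value) pair, weighted by trust.
        votes = {}
        for s, c in claims.items():
            for item, value in c.items():
                votes[(item, value)] = votes.get((item, value), 0.0) + trust[s]
        # Normalise votes per data item so they behave like confidences.
        totals = {}
        for (item, _), v in votes.items():
            totals[item] = totals.get(item, 0.0) + v
        conf = {k: v / totals[k[0]] for k, v in votes.items()}
        # Step 2: a source is as trustworthy as the values it provides.
        trust = {s: sum(conf[(i, v)] for i, v in c.items()) / len(c)
                 for s, c in claims.items()}
    # Select the highest-confidence value for each data item.
    truth = {}
    for (item, value), c in conf.items():
        if c > conf.get((item, truth.get(item)), -1.0):
            truth[item] = value
    return truth, trust

claims = {"s1": {"capital": "Paris"},
          "s2": {"capital": "Paris"},
          "s3": {"capital": "Lyon"}}
truth, trust = iterative_truth_discovery(claims)
```

With two sources backing "Paris" and one backing "Lyon", the loop converges so that the "Paris" sources end up with higher trustworthiness, illustrating the mutual reinforcement between value confidence and source trust.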
Source trustworthiness can be based on different metrics, such as accuracy of provided values, copying values from other sources and domain coverage.
Detecting copying behaviors is very important: copying allows false values to spread easily, making truth discovery very hard, since many sources would then vote for the wrong values. Systems usually decrease the weight of votes associated with copied values, or do not count them at all.
Majority voting
Majority voting is the simplest method: the most popular value is selected as the true one. It is commonly used as a baseline when assessing the performance of more complex methods.
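A minimal baseline, assuming the claims for one data item are given as a plain list of values (an illustrative input format):

```python
from collections import Counter

def majority_vote(claims):
    """Return the most frequent value among the claims for one data item."""
    return Counter(claims).most_common(1)[0][0]

majority_vote(["Paris", "Paris", "Lyon", "Paris"])  # -> "Paris"
```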
Web-link based
These methods estimate source trustworthiness by exploiting a technique similar to the one used to measure the authority of web pages based on web links. The vote assigned to a value is computed as the sum of the trustworthiness scores of the sources that provide that particular value, while the trustworthiness of a source is computed as the sum of the votes assigned to the values it provides.
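The mutual recursion between votes and trustworthiness can be sketched as below; the L1 normalisation at each round is an added assumption to keep the scores bounded, in the spirit of hub/authority computations:

```python
def link_style_scores(claims, iterations=10):
    """`claims` maps source -> set of provided values (illustrative format).
    A value's vote is the sum of its providers' trustworthiness; a
    source's trustworthiness is the sum of its values' votes."""
    trust = {s: 1.0 for s in claims}
    for _ in range(iterations):
        # Vote for each value: sum of the trust of the sources providing it.
        votes = {}
        for s, values in claims.items():
            for v in values:
                votes[v] = votes.get(v, 0.0) + trust[s]
        z = sum(votes.values())
        votes = {v: x / z for v, x in votes.items()}
        # Trust of a source: sum of the votes of the values it provides.
        trust = {s: sum(votes[v] for v in values)
                 for s, values in claims.items()}
        z = sum(trust.values())
        trust = {s: x / z for s, x in trust.items()}
    return votes, trust

votes, trust = link_style_scores({"s1": {"a", "b"}, "s2": {"a"}, "s3": {"c"}})
```

In this toy run the value "a", backed by two sources, accumulates a higher vote than the singleton "c", and the source providing the most well-supported values ends up most trusted.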
Information-retrieval based
These methods estimate source trustworthiness using similarity measures typically employed in information retrieval. Source trustworthiness is computed as the cosine similarity (or another similarity measure) between the set of values provided by the source and the set of values considered true (either selected probabilistically or obtained from a ground truth).
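With binary set-membership vectors, the cosine similarity between the two sets reduces to |A ∩ B| / √(|A|·|B|); a small sketch of that reading:

```python
import math

def cosine_trust(provided, believed_true):
    """Cosine similarity between two sets of values, treating each set as
    a binary vector over the union of all values."""
    if not provided or not believed_true:
        return 0.0
    overlap = len(provided & believed_true)
    return overlap / math.sqrt(len(provided) * len(believed_true))

cosine_trust({"a", "b"}, {"a", "c"})  # -> 0.5
```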
Bayesian based
These methods use Bayesian inference to define the probability of a value being true, conditioned on the values provided by all the sources. The trustworthiness of a source is then computed based on the accuracy of the values it provides. Other, more complex methods exploit Bayesian inference to detect copying behaviors and use these insights to better assess source trustworthiness.
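For a single candidate value, the posterior can be sketched as follows; the assumptions that sources err independently and that an erroneous source lands on this value uniformly among `n_false` alternatives are illustrative simplifications, not a specific published model:

```python
def posterior_true(prior, votes, n_false=1):
    """Posterior probability that a candidate value is true.

    `votes` is a list of (accuracy, claimed) pairs, one per source that
    answered this data item; `claimed` is True if the source provided
    this value."""
    like_true = like_false = 1.0
    for acc, claimed in votes:
        p_claim_if_false = (1 - acc) / n_false   # error landing on this value
        if claimed:
            like_true *= acc
            like_false *= p_claim_if_false
        else:
            like_true *= 1 - acc
            like_false *= 1 - p_claim_if_false
    num = prior * like_true
    return num / (num + (1 - prior) * like_false)

# Two sources of accuracy 0.8 claim the value, one does not:
posterior_true(0.5, [(0.8, True), (0.8, True), (0.8, False)])  # ≈ 0.8
```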
Due to its complexity, less attention has been devoted to the study of multi-truth discovery. Two typologies of multi-truth methods and their characteristics are reported below.
Bayesian inference based
These methods use Bayesian inference to define the probability of a group of values being true, conditioned on the values provided by all the data sources. In this case, since there can be multiple true values for each data item, and sources can provide multiple values for a single data item, it is not possible to consider values individually. An alternative is to consider mappings and relations between sets of provided values and the sources providing them. The trustworthiness of a source is then computed based on the accuracy of the values it provides.
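A per-value sketch of this idea for one data item: each source that answered either includes a candidate value (a positive claim) or omits it (an implicit negative claim), and every value whose posterior exceeds 0.5 is accepted, so the number of truths is not fixed in advance. The symmetric treatment of errors and the example accuracies are illustrative simplifications, not a specific published model:

```python
def multi_truth(claims, accuracy, prior=0.5):
    """`claims` maps source -> set of values it provides for one data
    item; `accuracy` maps source -> estimated accuracy."""
    candidates = set().union(*claims.values())
    truths = set()
    for v in candidates:
        num, den = prior, 1 - prior
        for s, values in claims.items():
            a = accuracy[s]
            if v in values:          # positive claim
                num *= a
                den *= 1 - a
            else:                    # implicit negative claim
                num *= 1 - a
                den *= a
        if num / (num + den) > 0.5:  # accept any sufficiently likely value
            truths.add(v)
    return truths

claims = {"s1": {"Hawking", "Penrose"}, "s2": {"Hawking"},
          "s3": {"Penrose"}, "s4": {"Rowling"}}
accuracy = {"s1": 0.9, "s2": 0.6, "s3": 0.6, "s4": 0.3}
multi_truth(claims, accuracy)  # -> {"Hawking", "Penrose"}
```

On the book-authors example from the table above, both partial truths survive while the erroneous value is rejected, without fixing the number of true authors in advance.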
Probabilistic Graphical Models based
These methods use probabilistic graphical models to automatically define the set of true values of a given data item and, at the same time, assess source quality without the need for any supervision.
Many real-world applications can benefit from the use of truth discovery algorithms. Typical domains of application include: healthcare, crowd/social sensing, crowdsourcing aggregation, information extraction and knowledge base construction.
Truth discovery algorithms could also be used to revolutionize the way in which web pages are ranked in search engines, going from current methods based on link analysis, like PageRank, to procedures that rank web pages based on the accuracy of the information they provide.