# F1 score

In statistical analysis of binary classification, the **F₁ score** (also **F-score** or **F-measure**) is a measure of a test's accuracy. It considers both the precision *p* and the recall *r* of the test to compute the score: *p* is the number of correct positive results divided by the number of all positive results returned by the classifier, and *r* is the number of correct positive results divided by the number of all relevant samples (all samples that should have been identified as positive). The F₁ score is the harmonic mean of the precision and recall, where an F₁ score reaches its best value at 1 (perfect precision and recall) and worst at 0.
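As a concrete illustration of these definitions, here is a minimal Python sketch that computes the score from raw counts (the function name and the counts are illustrative, not taken from any particular library):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Compute the F1 score from raw counts of a binary classifier."""
    precision = tp / (tp + fp)  # correct positives / all returned positives
    recall = tp / (tp + fn)     # correct positives / all relevant samples
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Example: 8 true positives, 2 false positives, 4 false negatives
print(f1_score(8, 2, 4))  # precision = 0.8, recall ≈ 0.667, F1 ≈ 0.727
```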

## Etymology

The name F-measure is believed to be named after a different F function in Van Rijsbergen's book, when introduced to the Fourth Message Understanding Conference (MUC-4).[1]

## Definition

The traditional F-measure or balanced F-score (**F₁ score**) is the harmonic mean of precision and recall:

$$F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}.$$
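As a worked example (values chosen purely for illustration): a classifier with precision 0.5 and recall 0.8 gives

$$F_1 = 2 \cdot \frac{0.5 \cdot 0.8}{0.5 + 0.8} = \frac{0.8}{1.3} \approx 0.615,$$

which lies below the arithmetic mean 0.65, as the harmonic mean always does when precision and recall differ.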

The general formula for positive real β, where β is chosen such that recall is considered β times as important as precision, is:

$$F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{(\beta^2 \cdot \mathrm{precision}) + \mathrm{recall}}.$$
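The same computation for arbitrary β, as a small self-contained Python sketch (names and values are illustrative):

```python
def fbeta(precision: float, recall: float, beta: float) -> float:
    """F-beta score: recall is weighted beta times as heavily as precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# beta = 1 recovers the balanced F1 score
print(fbeta(0.5, 0.8, 1.0))   # ≈ 0.615
print(fbeta(0.5, 0.8, 2.0))   # F2 ≈ 0.714, pulled toward recall
print(fbeta(0.5, 0.8, 0.5))   # F0.5 ≈ 0.541, pulled toward precision
```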

The formula in terms of Type I and Type II errors:

$$F_\beta = \frac{(1 + \beta^2) \cdot \mathrm{TP}}{(1 + \beta^2) \cdot \mathrm{TP} + \beta^2 \cdot \mathrm{FN} + \mathrm{FP}}.$$
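The count form follows by substituting $\mathrm{precision} = \mathrm{TP}/(\mathrm{TP} + \mathrm{FP})$ and $\mathrm{recall} = \mathrm{TP}/(\mathrm{TP} + \mathrm{FN})$ into the general formula and simplifying:

$$F_\beta = \frac{(1+\beta^2)\,\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}} \cdot \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}}{\beta^2\,\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}} + \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}} = \frac{(1+\beta^2)\,\mathrm{TP}}{(1+\beta^2)\,\mathrm{TP} + \beta^2\,\mathrm{FN} + \mathrm{FP}}.$$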

Two commonly used values for β are 2, giving the $F_2$ measure, which weighs recall higher than precision (by placing more emphasis on false negatives), and 0.5, giving the $F_{0.5}$ measure, which weighs recall lower than precision (by attenuating the influence of false negatives).

The F-measure was derived so that $F_\beta$ "measures the effectiveness of retrieval with respect to a user who attaches β times as much importance to recall as precision".[2] It is based on Van Rijsbergen's effectiveness measure

$$E = 1 - \left(\frac{\alpha}{p} + \frac{1 - \alpha}{r}\right)^{-1}.$$

Their relationship is $F_\beta = 1 - E$ where $\alpha = \frac{1}{1 + \beta^2}$.
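To see the relationship, substitute $\alpha = 1/(1+\beta^2)$, so that $1 - \alpha = \beta^2/(1+\beta^2)$:

$$1 - E = \left(\frac{1}{(1+\beta^2)\,p} + \frac{\beta^2}{(1+\beta^2)\,r}\right)^{-1} = \frac{(1+\beta^2)\,p\,r}{\beta^2 p + r} = F_\beta.$$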

The F₁ score is also known as the Sørensen–Dice coefficient or Dice similarity coefficient (DSC).
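Written over sets, with $X$ the set of predicted positives and $Y$ the set of actual positives, the equivalence is direct: $|X \cap Y| = \mathrm{TP}$, $|X| = \mathrm{TP} + \mathrm{FP}$, and $|Y| = \mathrm{TP} + \mathrm{FN}$, so

$$\mathrm{DSC} = \frac{2\,|X \cap Y|}{|X| + |Y|} = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}} = F_1.$$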

## Diagnostic testing

This is related to the field of binary classification where recall is often termed "sensitivity". There are several reasons that the F₁ score can be criticized in particular circumstances.

| | True condition: positive | True condition: negative | | |
|---|---|---|---|---|
| **Total population** | Condition positive | Condition negative | Prevalence = Σ Condition positive / Σ Total population | Accuracy (ACC) = (Σ True positive + Σ True negative) / Σ Total population |
| **Predicted condition positive** | True positive | False positive (Type I error) | Positive predictive value (PPV), Precision = Σ True positive / Σ Predicted condition positive | False discovery rate (FDR) = Σ False positive / Σ Predicted condition positive |
| **Predicted condition negative** | False negative (Type II error) | True negative | False omission rate (FOR) = Σ False negative / Σ Predicted condition negative | Negative predictive value (NPV) = Σ True negative / Σ Predicted condition negative |
| | True positive rate (TPR), Recall, Sensitivity, probability of detection, Power = Σ True positive / Σ Condition positive | False positive rate (FPR), Fall-out, probability of false alarm = Σ False positive / Σ Condition negative | Positive likelihood ratio (LR+) = TPR / FPR; Diagnostic odds ratio (DOR) = LR+ / LR− | F₁ score = 2 · Precision · Recall / (Precision + Recall) |
| | False negative rate (FNR), Miss rate = Σ False negative / Σ Condition positive | Specificity (SPC), Selectivity, True negative rate (TNR) = Σ True negative / Σ Condition negative | Negative likelihood ratio (LR−) = FNR / TNR | |
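The table's main quantities can be derived mechanically from the four cells of the confusion matrix; the following Python sketch (with illustrative names and counts, not a library API) mirrors the layout above:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Derive the standard diagnostic measures from a 2x2 confusion matrix."""
    total = tp + fp + fn + tn
    tpr = tp / (tp + fn)           # recall / sensitivity
    fpr = fp / (fp + tn)           # fall-out
    ppv = tp / (tp + fp)           # precision
    return {
        "prevalence": (tp + fn) / total,
        "accuracy": (tp + tn) / total,
        "precision (PPV)": ppv,
        "recall (TPR)": tpr,
        "specificity (TNR)": tn / (fp + tn),
        "F1": 2 * ppv * tpr / (ppv + tpr),
        "LR+": tpr / fpr,
    }

# Example: a test with 90 TP, 10 FP, 30 FN, 870 TN
for name, value in diagnostic_metrics(90, 10, 30, 870).items():
    print(f"{name}: {value:.3f}")
```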

## Applications

The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.[3] Earlier works focused primarily on the F₁ score, but with the proliferation of large-scale search engines, performance goals changed to place more emphasis on either precision or recall,[4] and so the more general $F_\beta$ measure is seen in wide application.

The F-score is also used in machine learning.[5] Note, however, that the F-measures do not take the true negatives into account, and that measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferable to assess the performance of a binary classifier.
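To illustrate the point with made-up numbers: the two confusion matrices below differ only in their true negatives, so the F₁ score cannot distinguish them while the Matthews correlation coefficient can.

```python
from math import sqrt

def f1(tp, fp, fn, tn):
    return 2 * tp / (2 * tp + fp + fn)  # true negatives never enter

def mcc(tp, fp, fn, tn):
    num = tp * tn - fp * fn
    den = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

# Same TP/FP/FN, wildly different TN
for tn in (5, 500):
    print(f"TN={tn}: F1={f1(40, 10, 10, tn):.3f}, MCC={mcc(40, 10, 10, tn):.3f}")
# F1 stays at 0.800 in both cases; MCC rises from ≈0.133 to ≈0.780
```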

The F-score has been widely used in the natural language processing literature,[6] such as the evaluation of named entity recognition and word segmentation.

## Criticism

David Hand and others criticize the widespread use of the F₁ score since it gives equal importance to precision and recall. In practice, different types of misclassifications incur different costs. In other words, the relative importance of precision and recall is an aspect of the problem.[7]

## Difference from G-measure

While the F-measure is the harmonic mean of recall and precision, the G-measure is the geometric mean.
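A one-line comparison in Python (illustrative values): since the harmonic mean of positive numbers never exceeds their geometric mean, the F-measure is never larger than the G-measure for the same precision and recall.

```python
from math import sqrt

p, r = 0.5, 0.8
f_measure = 2 * p * r / (p + r)  # harmonic mean ≈ 0.615
g_measure = sqrt(p * r)          # geometric mean ≈ 0.632
print(f_measure, g_measure)      # the harmonic mean is never larger
```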

## References

1. Sasaki, Y. (2007). "The truth of the F-measure" (PDF).
2. Van Rijsbergen, C. J. (1979). *Information Retrieval* (2nd ed.). Butterworth-Heinemann.
3. Beitzel, Steven M. (2006). *On Understanding and Classifying Web Queries* (Ph.D. thesis). IIT. CiteSeerX 10.1.1.127.634.
4. Li, X.; Wang, Y.-Y.; Acero, A. (July 2008). "Learning query intent from regularized click graphs" (PDF). *Proceedings of the 31st SIGIR Conference*.
5. See, e.g., the evaluation of the .
6. Derczynski, L. (2016). "Complementarity, F-score, and NLP Evaluation". *Proceedings of the International Conference on Language Resources and Evaluation*.
7. Hand, David. "A note on using the F-measure for evaluating record linkage algorithms". *app.dimensions.ai*. Retrieved 2018-12-08.