Classification Report
What is a classification report?
A classification report is a performance evaluation tool used in machine learning to assess the quality of predictions from a classification algorithm.
The classification report typically includes the following metrics:
Precision: The ratio of true positive predictions to the total predicted positives. It answers the question, "Of all the instances predicted as positive, how many are actually positive?"
Recall (Sensitivity or True Positive Rate): The ratio of true positive predictions to the total actual positives. It answers the question, "Of all the actual positive instances, how many were correctly predicted?"
F1 Score: The harmonic mean of precision and recall, providing a single metric that balances the two. It is particularly useful when you need a balance between precision and recall.
Support: The number of actual occurrences of each class in the dataset. It helps in understanding the distribution of the dataset.
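Using the confusion-matrix terms defined in the accuracy section below (TP, FP, FN), the per-class metrics can be written as:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 × (Precision × Recall) / (Precision + Recall)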
Calculating Model Accuracy
Accuracy is calculated by dividing the number of correct predictions by the total number of predictions across all classes. In binary classification, it can be expressed as:
Accuracy (ACC) = (TP + TN) / (TP + TN + FP + FN)
Where:
TP: True Positives (correctly predicted positive instances)
TN: True Negatives (correctly predicted negative instances)
FP: False Positives (negative instances predicted as positive)
FN: False Negatives (positive instances predicted as negative)
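For example, if a model makes 100 predictions with TP = 50, TN = 30, FP = 10, and FN = 10, then ACC = (50 + 30) / (50 + 30 + 10 + 10) = 80 / 100 = 0.8.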
Classification report example
To compute accuracy and the rest of the metrics, we first need to import the corresponding functions.
Import classification report and accuracy:
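A minimal sketch, assuming the metrics come from scikit-learn's `sklearn.metrics` module (the library is not named in the original, but these function names match it):

```python
# Evaluation utilities from scikit-learn
from sklearn.metrics import classification_report, accuracy_score
```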
Define class labels:
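For illustration, the ground-truth and predicted labels below are hypothetical; in practice `y_pred` would come from your trained classifier:

```python
# Hypothetical ground-truth and predicted labels for a small binary task
y_true = [0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1, 0, 1, 1, 0]

# Human-readable names for the two classes, in label order (0, 1)
target_names = ["class 0", "class 1"]
```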
Print accuracy:
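With the hypothetical labels above, accuracy is the fraction of predictions that match the ground truth, as a sketch:

```python
from sklearn.metrics import accuracy_score

# Hypothetical labels (8 of the 10 predictions are correct)
y_true = [0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1, 0, 1, 1, 0]

print("Accuracy:", accuracy_score(y_true, y_pred))  # Accuracy: 0.8
```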
Print the report:
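A sketch of printing the report itself, again using the hypothetical labels; `classification_report` returns a formatted string with precision, recall, F1 score, and support per class:

```python
from sklearn.metrics import classification_report

# Hypothetical labels and class names
y_true = [0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1, 0, 1, 1, 0]
target_names = ["class 0", "class 1"]

# Prints per-class precision, recall, F1, and support,
# plus accuracy and macro/weighted averages
print(classification_report(y_true, y_pred, target_names=target_names))
```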
Full report example:
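Putting the steps together, a complete runnable sketch (labels and class names are hypothetical stand-ins for real model output):

```python
from sklearn.metrics import classification_report, accuracy_score

# Hypothetical ground-truth and predicted labels
y_true = [0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1, 0, 1, 1, 0]
target_names = ["class 0", "class 1"]

# Overall accuracy, then the full per-class report
print("Accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=target_names))
```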