Welcome to toppred’s documentation!
An extension to sklearn.metrics providing metrics for classifiers that output top-n predictions.
Some classifiers output a confidence level for each class.
Oftentimes, you want to evaluate the performance of such classifiers by counting a prediction as correct when the true class is among the n predictions with the highest confidence levels.
This library extends the functions provided by sklearn.metrics to evaluate classifiers that do not output a single prediction per sample, but rather a ranked set of top predictions per sample.
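To illustrate the idea, here is a minimal sketch of top-n accuracy computed directly from a classifier's confidence scores using plain NumPy. The function name `top_n_accuracy` is illustrative only and is not part of toppred's API:

```python
import numpy as np

def top_n_accuracy(y_true, y_proba, n=3):
    """Fraction of samples whose true class is among the n
    classes with the highest predicted confidence."""
    # Indices of the n highest-confidence classes for each sample
    top_n = np.argsort(y_proba, axis=1)[:, -n:]
    # A sample counts as correct if its true label appears in its top n
    return float(np.mean([y in row for y, row in zip(y_true, top_n)]))

# Confidence scores for 4 samples over 3 classes
y_proba = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.6, 0.3],
    [0.3, 0.3, 0.4],
    [0.2, 0.5, 0.3],
])
y_true = np.array([0, 2, 2, 0])

print(top_n_accuracy(y_true, y_proba, n=1))  # 0.5
print(top_n_accuracy(y_true, y_proba, n=2))  # 0.75
```

For the accuracy case specifically, scikit-learn (0.24 and later) ships `sklearn.metrics.top_k_accuracy_score`; libraries like toppred extend this top-n treatment to other metrics as well.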