explabox.examine
Calculate quantitative metrics on how the model performs, and examine where the model went wrong.
- class explabox.examine.Examiner(data=None, model=None, ingestibles=None, **kwargs)
Bases: Readable, ModelMixin, IngestiblesMixin
The Examiner calculates quantitative metrics on how the model performs.
The Examiner requires 'data' and 'model' to be defined. It is included in the Explabox under the .examine property.
Examples
Construct the examiner:
>>> from explabox.examine import Examiner
>>> examiner = Examiner(data=data, model=model)
Calculate model performance metrics on the validation set:
>>> examiner(split='validation')
See all wrongly classified examples in the test set:
>>> examiner.wrongly_classified(split='test')
- Parameters:
data (Optional[Environment], optional) – Data for ingestibles. Defaults to None.
model (Optional[AbstractClassifier], optional) – Model for ingestibles. Defaults to None.
ingestibles (Optional[Ingestible], optional) – Ingestible. Defaults to None.
- performance(split='test', **kwargs)
Determine performance metrics, the number of predictions for each label, and the confusion-matrix values for each label in the chosen split.
- Parameters:
split (str, optional) – Split to calculate metrics on. Defaults to ‘test’.
- Returns:
Performance metrics of your model on the split.
- Return type:
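To illustrate what performance() computes, here is a minimal self-contained sketch in plain Python (not the actual explabox implementation; the function name and return structure are illustrative only) that derives per-label prediction counts and one-vs-rest confusion-matrix cells from true and predicted labels:

```python
from collections import Counter


def performance_sketch(y_true, y_pred):
    """Per-label prediction counts and confusion-matrix cells (TP/FP/FN/TN)."""
    labels = sorted(set(y_true) | set(y_pred))
    pred_counts = Counter(y_pred)  # how often each label was predicted
    confusion = {}
    for label in labels:
        # One-vs-rest confusion cells for this label
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        tn = len(y_true) - tp - fp - fn
        confusion[label] = {"TP": tp, "FP": fp, "FN": fn, "TN": tn}
    return {"predictions_per_label": dict(pred_counts), "confusion": confusion}


metrics = performance_sketch(
    ["pos", "neg", "pos", "neg"],
    ["pos", "pos", "pos", "neg"],
)
# metrics["confusion"]["pos"] → {"TP": 2, "FP": 1, "FN": 0, "TN": 1}
```

From these cells, standard metrics such as precision (TP / (TP + FP)) and recall (TP / (TP + FN)) follow directly.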
- wrongly_classified(split='test', **kwargs)
Return all wrongly classified samples in the given split.
- Parameters:
split (str, optional) – Name of split. Defaults to ‘test’.
- Returns:
Wrongly classified examples in this split.
- Return type:
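Conceptually, wrongly_classified() collects every sample whose predicted label differs from its true label. A minimal sketch of that idea in plain Python (function name and tuple layout are illustrative, not the explabox return format):

```python
def wrongly_classified_sketch(samples, y_true, y_pred):
    """Return (sample, true_label, predicted_label) for every misclassification."""
    return [
        (sample, true, pred)
        for sample, true, pred in zip(samples, y_true, y_pred)
        if true != pred  # keep only mismatches
    ]


wrong = wrongly_classified_sketch(
    ["great movie", "boring plot", "loved it"],
    ["pos", "neg", "pos"],
    ["pos", "pos", "pos"],
)
# → [("boring plot", "neg", "pos")]
```

Inspecting these mismatches alongside the confusion matrix from performance() helps locate systematic errors, such as one label being consistently confused with another.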