All Presets
An overview of the evaluations you can do with Evidently.
Evidently has several pre-built reports and test suites. We call them Presets. Each Preset evaluates or tests a particular aspect of data or model quality.
This page links to the description of each Preset. To see the code and interactive examples, head to the example notebooks instead.
Metric presets are pre-built reports that help with visual exploration, debugging, and documentation of data and model performance. You can also use them to calculate and log metrics as JSON or as a Python dictionary.
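Here is a minimal sketch of running a metric preset. It assumes two pandas DataFrames with the same schema, `reference` and `current` (both names and the CSV paths are placeholders):

```python
import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# Placeholder data: substitute your own model inputs.
reference = pd.read_csv("reference.csv")  # baseline batch
current = pd.read_csv("current.csv")      # latest batch to evaluate

# Build a report from a single preset and run it on both datasets.
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)

report.save_html("data_drift_report.html")  # visual exploration
results = report.as_dict()                  # Python dictionary for logging
results_json = report.json()                # JSON string
```

In a notebook, calling `report.show()` renders the same report inline.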
Test presets are pre-built test suites that perform structured data and model checks as part of a pipeline.
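As a sketch of a pipeline-style check, reusing the placeholder `reference` and `current` DataFrames from above (the exact keys of the result dictionary may differ between Evidently versions):

```python
from evidently.test_suite import TestSuite
from evidently.test_preset import DataStabilityTestPreset

# Run the pre-built stability checks against the reference batch.
tests = TestSuite(tests=[DataStabilityTestPreset()])
tests.run(reference_data=reference, current_data=current)

# Use the structured output to gate a pipeline step.
summary = tests.as_dict()["summary"]
if not summary["all_passed"]:
    raise RuntimeError("Data stability checks failed")
```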
You can also create custom reports and test suites from individual metrics and tests, as shown in the sketch below. You can explore the 100+ available tests and metrics.
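A sketch of assembling custom objects; the specific metrics and tests, and the "prediction" column name, are illustrative picks:

```python
from evidently.report import Report
from evidently.metrics import ColumnDriftMetric, DatasetMissingValuesMetric
from evidently.test_suite import TestSuite
from evidently.tests import TestNumberOfDriftedColumns, TestNumberOfDuplicatedRows

# A custom report assembled from individual metrics.
custom_report = Report(metrics=[
    ColumnDriftMetric(column_name="prediction"),  # placeholder column name
    DatasetMissingValuesMetric(),
])
custom_report.run(reference_data=reference, current_data=current)

# A custom test suite assembled from individual tests.
custom_tests = TestSuite(tests=[
    TestNumberOfDuplicatedRows(),
    TestNumberOfDriftedColumns(),
])
custom_tests.run(reference_data=reference, current_data=current)
```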
Metric Presets

- DataQualityPreset: shows the dataset statistics and feature behavior. Requirements: model inputs.
- DataDriftPreset: explores the distribution shift in the model features. Requirements: model inputs, a reference dataset.
- TargetDriftPreset: explores the distribution shift in the model predictions or target. Requirements: model predictions and/or target, a reference dataset.
- ClassificationPreset: evaluates the classification model quality and errors. Requirements: model predictions and true labels.
- RegressionPreset: evaluates the regression model quality and errors. Requirements: model predictions and actuals.
- TextOverviewPreset: evaluates text data drift and descriptive statistics. Requirements: model inputs (raw text data).
Test Presets

- NoTargetPerformanceTestPreset: tests the model performance without ground truth or actuals. Requirements: model inputs, predictions, a reference dataset.
- DataDriftTestPreset: tests for distribution drift per column and overall dataset drift. Requirements: model inputs, a reference dataset.
- DataStabilityTestPreset: tests if a data batch is similar to the reference. Checks schema, data ranges, etc. Requirements: model inputs, a reference dataset.
- DataQualityTestPreset: tests if the data quality is suitable for (re)training. Checks nulls, duplicates, etc. Requirements: model inputs.
- RegressionTestPreset: tests the performance of a regression model against expectations. Requirements: model predictions and actuals.
- MulticlassClassificationTestPreset: tests the performance of a multi-class classification model against expectations. Requirements: model predictions, true labels.
- BinaryClassificationTestPreset: tests the performance of a binary classification model against expectations. Requirements: model predictions, true labels.
- BinaryClassificationTopKTestPreset: tests the performance of a binary classification model at top-K. Requirements: model predictions, true labels.