
All metrics

List of all the metrics and metric presets available in Evidently.
How to use this page
This is a reference page. It shows all the metrics and metric presets available in the library, and their parameters.
You can use the menu on the right to navigate the sections. We organize the metrics by logical groups. Note that these groups do not match the presets with a similar name. For example, there are more Data Quality metrics below than in the DataQualityPreset.
You can use this reference page to discover additional metrics to include in your custom report.

How to read the tables

  • Name: the name of the metric or the preset.
  • Description: plain text explanation of the metric, or the contents of the preset. For metrics, we also specify whether the metric applies to the whole dataset or individual columns.
  • Parameters: description of the required and optional parameters you can pass to the corresponding metric or preset. For metrics, we also specify the defaults that apply if you do not pass a custom parameter.
Metric visualizations. Each metric also includes a default render. If you want to see the visualization, navigate to the example notebooks and run the notebook with all metrics or with all metric presets.
We do our best to keep this page up to date. In case of discrepancies, consult the API reference or the current version of the "All metrics" example notebook in the Examples section. If you notice an error, please send us a pull request to update the documentation!

Metric Presets

Defaults: each Metric in a Preset uses the default parameters for this Metric. You can see them in the tables below.
Preset name and Description
Parameters
DataQualityPreset
Evaluates the data quality and provides descriptive stats. Input features are required. Prediction and target are optional.
Contents:
  • DatasetSummaryMetric()
  • ColumnSummaryMetric(column_name=column_name) for all columns, or for those passed in columns
  • DatasetMissingValuesMetric()
  • DatasetCorrelationsMetric()
Optional: columns
DataDriftPreset
Evaluates the data drift in the individual columns and the dataset. Input features are required.
Contents:
  • DataDriftTable(columns=columns), or all columns if not listed
  • DatasetDriftMetric(columns=columns), or all columns if not listed
Optional:
  • columns
  • stattest
  • cat_stattest
  • num_stattest
  • per_column_stattest
  • text_stattest
  • stattest_threshold
  • cat_stattest_threshold
  • num_stattest_threshold
  • per_column_stattest_threshold
  • text_stattest_threshold
  • embeddings
  • embeddings_drift_method
  • drift_share
TargetDriftPreset
Evaluates the prediction or target drift. Target or prediction is required. Input features are optional.
Contents:
  • ColumnDriftMetric(column_name=target, prediction)
  • ColumnCorrelationsMetric(column_name=target, prediction)
  • TargetByFeaturesTable(columns=columns), or all columns if not listed
  • If regression: ColumnValuePlot(column_name=target, prediction)
Optional:
  • columns
  • stattest
  • cat_stattest
  • num_stattest
  • per_column_stattest
  • stattest_threshold
  • cat_stattest_threshold
  • num_stattest_threshold
  • per_column_stattest_threshold
RegressionPreset
Evaluates the quality of a regression model. Prediction and target are required. Input features are optional.
Contents:
  • RegressionQualityMetric()
  • RegressionPredictedVsActualScatter()
  • RegressionPredictedVsActualPlot()
  • RegressionErrorPlot()
  • RegressionAbsPercentageErrorPlot()
  • RegressionErrorDistribution()
  • RegressionErrorNormality()
  • RegressionTopErrorMetric()
  • RegressionErrorBiasTable(columns=columns), or all columns if not listed
Optional: columns
ClassificationPreset
Evaluates the quality of a classification model. Prediction and target are required. Input features are optional.
Contents:
  • ClassificationQualityMetric()
  • ClassificationClassBalance()
  • ClassificationConfusionMatrix()
  • ClassificationQualityByClass()
  • If probabilistic classification, also: ClassificationClassSeparationPlot(), ClassificationProbDistribution(), ClassificationRocCurve(), ClassificationPRCurve(), ClassificationPRTable()
  • ClassificationQualityByFeatureTable(columns=columns), or all columns if not listed
Optional:
  • columns
  • probas_threshold
  • k
TextOverviewPreset(column_name="text")
Evaluates data drift and descriptive statistics for text data. Input features (text) are required.
Contents:
  • ColumnSummaryMetric()
  • TextDescriptorsDistribution()
  • TextDescriptorsCorrelation()
  • If reference data is provided, also: ColumnDriftMetric(), TextDescriptorsDriftMetric()
Required: column_name
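To apply a preset, add it to a Report and point the Report to your data. A minimal sketch, assuming ref_data and cur_data are placeholder pandas DataFrames with the same schema:

```python
from evidently.report import Report
from evidently.metric_preset import DataQualityPreset, DataDriftPreset

# ref_data and cur_data are placeholder pandas DataFrames with a shared schema
report = Report(metrics=[
    DataQualityPreset(),
    # Optional parameter: restrict drift evaluation to a subset of columns
    DataDriftPreset(columns=["age", "education"]),
])
report.run(reference_data=ref_data, current_data=cur_data)
report.save_html("report.html")  # or report.show() in a notebook
```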

Data Integrity

Defaults for Missing Values. The metrics that calculate the number or share of missing values detect four types of values by default: Pandas nulls (None, NaN, etc.), "" (empty string), Numpy "-inf" value, Numpy "inf" value. You can also pass a list of custom missing values as a parameter and specify whether it should replace or extend the default list. Example:
DatasetMissingValuesMetric(missing_values=["", 0, "n/a", -9999, None], replace=True)
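For instance, a minimal sketch of running this metric with the custom list inside a Report (ref_data and cur_data are placeholder pandas DataFrames):

```python
from evidently.report import Report
from evidently.metrics import DatasetMissingValuesMetric

report = Report(metrics=[
    # replace=True discards the default list of missing values;
    # replace=False would extend the default list with the custom values
    DatasetMissingValuesMetric(missing_values=["", 0, "n/a", -9999, None], replace=True),
])
report.run(reference_data=ref_data, current_data=cur_data)
```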
Metric name
Description
Parameters
DatasetSummaryMetric()
Dataset-level. Calculates various descriptive statistics for the dataset, incl. the number of columns, rows, cat/num features, missing values, empty values, and duplicate values.
Required: n/a Optional:
  • missing_values = [], replace = True/False (default = four types of missing values, see above)
  • almost_constant_threshold (default = 0.95)
  • almost_duplicated_threshold (default = 0.95)
DatasetMissingValuesMetric()
Dataset-level. Calculates the number and share of missing values in the dataset. Displays the number of missing values per column.
Required: n/a Optional:
  • missing_values = [], replace = True/False (default = four types of missing values, see above)
ColumnSummaryMetric(column_name="age")
Column-level. Calculates various descriptive statistics for the column, incl. the number of missing, empty, duplicate values, etc. The stats depend on the column type: numerical, categorical, text or DateTime.
Required: column_name Optional: n/a
ColumnMissingValuesMetric(column_name="education")
Column-level. Calculates the number and share of missing values in the column.
Required: column_name Optional:
  • missing_values = [], replace = True/False (default = four types of missing values, see above)
ColumnRegExpMetric(column_name="relationship", reg_exp=r".*child.*")
Column-level. Calculates the number and share of the values that do not match a defined regular expression.
Required:
  • column_name
  • reg_exp
Optional:
  • top (the number of the most common mismatched values to return, default = 10)
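As an illustration, a sketch combining two of the integrity metrics above in one Report and reading the results back as a dictionary (the regular expression and column names follow the examples in the table):

```python
from evidently.report import Report
from evidently.metrics import ColumnMissingValuesMetric, ColumnRegExpMetric

report = Report(metrics=[
    # Count the values in "relationship" that do not match the pattern
    ColumnRegExpMetric(column_name="relationship", reg_exp=r".*child.*", top=5),
    ColumnMissingValuesMetric(column_name="education"),
])
report.run(reference_data=ref_data, current_data=cur_data)
result = report.as_dict()  # metric values as a Python dictionary
```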

Data Quality

Metric name
Description
Parameters
ConflictPredictionMetric()
Dataset-level. Calculates the number of instances where the model returns a different output for an identical input. Can be a signal of a low-quality model or data errors.
Required: n/a Optional: n/a
ConflictTargetMetric()
Dataset-level. Calculates the number of instances where there is a different target value or label for an identical input. Can be a signal of a labeling or data error.
Required: n/a Optional: n/a
DatasetCorrelationsMetric()
Dataset-level. Calculates the correlations between the columns in the dataset. Visualizes the heatmap.
Required: n/a Optional: n/a
ColumnDistributionMetric(column_name="education")
Column-level. Plots the distribution histogram and returns bin positions and values for the given column.
Required: column_name Optional: n/a
ColumnQuantileMetric(column_name="education-num", quantile=0.75)
Column-level. Calculates the defined quantile value and plots the distribution for the given column.
Required:
  • column_name
  • quantile
Optional: n/a
ColumnCorrelationsMetric(column_name="education")
Column-level. Calculates the correlations between the defined column and all the other columns in the dataset.
Required: column_name Optional: n/a
ColumnValueListMetric(column_name="relationship", values=["Husband", "Unmarried"])
Column-level. Calculates the number of values in the list / out of the list / not found in a given column. The value list should be specified.
Required:
  • column_name
  • values
Optional: n/a
ColumnValueRangeMetric(column_name="age", left=10, right=20)
Column-level. Calculates the number and share of values in the specified range / out of range in a given column. Plots the distributions.
Required:
  • column_name
  • left
  • right
Optional: n/a
TextDescriptorsDistribution(column_name="text")
Column-level. Calculates and visualizes distributions for auto-generated text descriptors (text length, the share of out-of-vocabulary words, etc.)
Required:
  • column_name
TextDescriptorsCorrelationMetric(column_name="text")
Column-level. Calculates and visualizes correlations between auto-generated text descriptors and other columns in the dataset.
Required:
  • column_name
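For example, a sketch combining several column-level quality metrics in one Report (column names and values follow the examples in the table above):

```python
from evidently.report import Report
from evidently.metrics import (
    ColumnQuantileMetric,
    ColumnValueListMetric,
    ColumnValueRangeMetric,
)

report = Report(metrics=[
    ColumnQuantileMetric(column_name="education-num", quantile=0.75),
    ColumnValueListMetric(column_name="relationship", values=["Husband", "Unmarried"]),
    ColumnValueRangeMetric(column_name="age", left=10, right=20),
])
report.run(reference_data=ref_data, current_data=cur_data)
```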

Data Drift

Defaults for Data Drift. By default, all data drift metrics use the Evidently drift detection logic, which selects a statistical test or metric based on the feature type and volume. You always need a reference dataset.
To modify the logic or select a different test, set the data drift parameters or embeddings drift parameters.
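For example, a sketch overriding the default test selection; treat the specific tests and thresholds here as illustrative choices, not recommendations:

```python
from evidently.report import Report
from evidently.metrics import ColumnDriftMetric, DataDriftTable

report = Report(metrics=[
    # Use PSI with a custom threshold for all numerical columns
    DataDriftTable(num_stattest="psi", num_stattest_threshold=0.2),
    # Or pick the test for a single column
    ColumnDriftMetric(column_name="age", stattest="wasserstein", stattest_threshold=0.1),
])
report.run(reference_data=ref_data, current_data=cur_data)
```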
Metric name
Description
Parameters
DatasetDriftMetric()
Dataset-level. Calculates the number and share of drifted features. Returns true/false for the dataset drift at a given threshold (defined by the share of drifting features). Each feature is tested for drift individually using the default algorithm, unless a custom approach is specified.
Required: n/a Optional:
  • columns (default = all)
  • drift_share (default for dataset drift = 0.5)
  • stattest
  • cat_stattest
  • num_stattest
  • per_column_stattest
  • stattest_threshold
  • cat_stattest_threshold
  • num_stattest_threshold
  • per_column_stattest_threshold
DataDriftTable()
Dataset-level. Calculates data drift for all columns in the dataset, or for a defined list of columns. Returns drift detection results for each column and visualizes distributions in a table. Uses the default drift algorithm of test selection, unless a custom approach is specified.
Required: n/a Optional:
  • columns
  • stattest
  • cat_stattest
  • num_stattest
  • per_column_stattest
  • stattest_threshold
  • cat_stattest_threshold
  • num_stattest_threshold
  • per_column_stattest_threshold
ColumnDriftMetric('age')
Column-level. Calculates data drift for a defined column (tabular or text). Visualizes distributions. Uses the default test selection unless a custom test is specified.
Required:
  • column_name
Optional:
  • stattest
  • stattest_threshold
TextDescriptorsDriftMetric(column_name="text")
Column-level. Calculates data drift for auto-generated text descriptors and visualizes the distributions of text characteristics.
Required:
  • column_name
Optional:
  • stattest
  • stattest_threshold
EmbeddingsDriftMetric('small_subset')
Column-level. Calculates data drift for embeddings.
Required:
  • embeddings_name
Optional:
  • drift_method
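EmbeddingsDriftMetric expects the embedding columns to be mapped to a named subset first. A sketch, assuming the embedding components live in hypothetical columns emb_0 ... emb_9:

```python
from evidently import ColumnMapping
from evidently.report import Report
from evidently.metrics import EmbeddingsDriftMetric

# Map the raw columns to a named embedding subset (column names are hypothetical)
column_mapping = ColumnMapping(
    embeddings={"small_subset": [f"emb_{i}" for i in range(10)]}
)

report = Report(metrics=[EmbeddingsDriftMetric("small_subset")])
report.run(reference_data=ref_data, current_data=cur_data, column_mapping=column_mapping)
```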

Classification

The metrics work for both probabilistic and non-probabilistic classification. All metrics are dataset-level.
Metric name
Description
Parameters
ClassificationDummyMetric()
Calculates the quality of the dummy model built on the same data. This can serve as a baseline.
Required: n/a Optional: n/a
ClassificationQualityMetric()
Calculates various classification performance metrics, incl. precision, accuracy, recall, F1-score, TPR, TNR, FPR, and FNR. For probabilistic classification, also: ROC AUC score, LogLoss.
Required: n/a Optional:
  • probas_threshold (default for classification = None; default for probabilistic classification = 0.5)
  • k (default = None)
ClassificationClassBalance()
Calculates the number of objects for each label. Plots the histogram.
Required: n/a Optional: n/a
ClassificationConfusionMatrix()
Calculates the TPR, TNR, FPR, FNR, and plots the confusion matrix.
Required: n/a Optional:
  • probas_threshold (default for classification = None; default for probabilistic classification = 0.5)
  • k (default = None)
ClassificationQualityByClass()
Calculates the classification quality metrics for each class. Plots the matrix.
Required: n/a Optional:
  • probas_threshold (default for classification = None; default for probabilistic classification = 0.5)
  • k (default = None)
ClassificationClassSeparationPlot()
Visualization of the predicted probabilities by class. Applicable for probabilistic classification only.
Required: n/a Optional: n/a
ClassificationProbDistribution()
Visualization of the probability distribution by class. Applicable for probabilistic classification only.
Required: n/a Optional: n/a
ClassificationRocCurve()
Plots ROC Curve. Applicable for probabilistic classification only.
Required: n/a Optional: n/a
ClassificationPRCurve()
Plots Precision-Recall Curve. Applicable for probabilistic classification only.
Required: n/a Optional: n/a
ClassificationPRTable()
Calculates the Precision-Recall table that shows model quality at different decision thresholds.
Required: n/a Optional: n/a
ClassificationQualityByFeatureTable()
Plots the relationship between feature values and model quality.
Required: n/a Optional:
  • columns (default = all categorical and numerical columns)
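A sketch for probabilistic classification, assuming the dataset stores the label in a "target" column and the predicted probability in a "prediction" column (both names are illustrative):

```python
from evidently import ColumnMapping
from evidently.report import Report
from evidently.metrics import (
    ClassificationConfusionMatrix,
    ClassificationQualityMetric,
)

# Point Evidently to the target and predicted probability columns
column_mapping = ColumnMapping(target="target", prediction="prediction")

report = Report(metrics=[
    # Raise the decision threshold from the 0.5 default
    ClassificationQualityMetric(probas_threshold=0.7),
    ClassificationConfusionMatrix(probas_threshold=0.7),
])
report.run(reference_data=ref_data, current_data=cur_data, column_mapping=column_mapping)
```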

Regression

All metrics are dataset-level.
Metric name
Description
Parameters
RegressionDummyMetric()
Calculates the quality of the dummy model built on the same data. This can serve as a baseline.
Required: n/a Optional: n/a
RegressionQualityMetric()
Calculates various regression performance metrics, incl. Mean Error, MAE, MAPE, etc.
Required: n/a Optional: n/a
RegressionPredictedVsActualScatter()
Visualizes predicted vs actual values in a scatter plot.
Required: n/a Optional: n/a
RegressionPredictedVsActualPlot()
Visualizes predicted vs. actual values in a line plot.
Required: n/a Optional: n/a
RegressionErrorPlot()
Visualizes the model error (predicted - actual) in a line plot.
Required: n/a Optional: n/a
RegressionAbsPercentageErrorPlot()
Visualizes the absolute percentage error in a line plot.
Required: n/a Optional: n/a
RegressionErrorDistribution()
Visualizes the distribution of the model error in a histogram.
Required: n/a Optional: n/a
RegressionErrorNormality()
Visualizes the quantile-quantile plot (Q-Q plot) to estimate whether the error distribution is normal.
Required: n/a Optional: n/a
RegressionTopErrorMetric()
Calculates the regression performance metrics for different groups: top-X% of predictions with overestimation, top-X% of predictions with underestimation, and the rest. Visualizes the group division on a scatter plot with predicted vs. actual values.
Required: n/a Optional:
  • top_error (default = 0.05; the metrics are calculated for the top 5% of predictions with overestimation and underestimation)
RegressionErrorBiasTable()
Plots the relationship between feature values and model quality per group (for top-X% error groups, as above).
Required: n/a Optional:
  • columns (default = all categorical and numerical columns)
  • top_error (default = 0.05; the metrics are calculated for the top 5% of predictions with overestimation and underestimation)
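A sketch, assuming "target" and "prediction" columns hold the actual and predicted values (names are illustrative):

```python
from evidently import ColumnMapping
from evidently.report import Report
from evidently.metrics import RegressionQualityMetric, RegressionTopErrorMetric

column_mapping = ColumnMapping(target="target", prediction="prediction")

report = Report(metrics=[
    RegressionQualityMetric(),
    # Inspect the top-10% over- and under-estimated predictions instead of the 5% default
    RegressionTopErrorMetric(top_error=0.1),
])
report.run(reference_data=ref_data, current_data=cur_data, column_mapping=column_mapping)
```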