Options for Quality Metrics

Below you can find the available options and an example of setting them for the Data Drift and Probabilistic Classification Performance reports on the Wine Quality dataset.

Available Options

These options apply to different plots in the following Evidently reports: Data Drift, Categorical Target Drift, Numerical Target Drift, Classification Performance, and Probabilistic Classification Performance.

You can specify the following parameters:

  • conf_interval_n_sigmas: int. Default = 1.

    • Defines the width of the confidence interval shown on plots. The confidence level is set in sigmas (standard deviations).

    • Applies to the feature and target distribution plots in the Data Drift and Numerical Target Drift reports.

  • classification_threshold: float. Default = 0.5.

    • Defines the classification threshold for binary probabilistic classification.

    • Applies to the Probabilistic Classification Performance report.

  • cut_quantile: tuple[str, float] or dict[str, tuple[str, float]]. Default = None.

    • Cuts the data above the given quantile from the histogram plot if the side parameter is 'right'.

    • Cuts the data below the given quantile from the histogram plot if the side parameter is 'left'.

    • Cuts the data below the given quantile and above 1 minus the given quantile from the histogram plot if the side parameter is 'two-sided'.

    • The data used for metric calculation does not change.

    • Applies to all features (if passed as a tuple) or to specific features (if passed as a dictionary); see the sketch after this list.

    • Applies to the Categorical Target Drift, Probabilistic Classification Performance, and Classification Performance reports; affects the Target/Prediction Behavior by Feature and Classification Quality by Feature tables.
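
For reference, a minimal sketch of the two accepted cut_quantile forms, assuming the QualityMetricsOptions class from evidently.options (the feature names 'alcohol' and 'pH' are illustrative Wine Quality columns):

from evidently.options import QualityMetricsOptions

# Tuple form: cut everything above the 0.95 quantile (top 5%)
# from every feature's histogram plot.
all_features = QualityMetricsOptions(cut_quantile=('right', 0.95))

# Dict form: per-feature (side, quantile) settings.
per_feature = QualityMetricsOptions(
    cut_quantile={'alcohol': ('right', 0.95), 'pH': ('two-sided', 0.05)})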

How to define Quality Metrics Options

1. Define a QualityMetricsOptions object.

from evidently.options import QualityMetricsOptions

options = QualityMetricsOptions(
    conf_interval_n_sigmas=3,
    classification_threshold=0.8,
    # each cut_quantile value is a (side, quantile) pair
    cut_quantile={'feature_1': ('left', 0.01), 'feature_2': ('right', 0.95), 'feature_3': ('two-sided', 0.05)})

2. Pass it to the Dashboard class:

dashboard = Dashboard(tabs=[DataDriftTab(), ProbClassificationPerformanceTab()],
                      options=[options])
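
To render the report, calculate the dashboard on your reference and current data and save or display the result. A minimal end-to-end sketch, assuming the legacy Dashboard API (dashboard.calculate and dashboard.save) and hypothetical file names:

import pandas as pd

from evidently.dashboard import Dashboard
from evidently.dashboard.tabs import DataDriftTab, ProbClassificationPerformanceTab
from evidently.options import QualityMetricsOptions

# Hypothetical data: two splits of the Wine Quality dataset with the same columns.
reference = pd.read_csv('wine_quality_reference.csv')
current = pd.read_csv('wine_quality_current.csv')

options = QualityMetricsOptions(conf_interval_n_sigmas=3, classification_threshold=0.8)

dashboard = Dashboard(tabs=[DataDriftTab(), ProbClassificationPerformanceTab()],
                      options=[options])
# Pass a column_mapping as well if your target/prediction columns need to be specified.
dashboard.calculate(reference, current)        # compute metrics for both tabs
dashboard.save('quality_metrics_report.html')  # write the interactive HTML report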
