All metrics

List of Metrics, Descriptors and Metric Presets available in Evidently.

How to use this page

This is a reference page. It shows all the available Metrics, Descriptors and Presets.

You can use the menu on the right to navigate the sections. We organize the Metrics by logical groups. Note that these groups do not match the Presets with a similar name. For example, there are more Data Quality Metrics than included in the DataQualityPreset.

How to read the tables

  • Name: the name of the Metric.

  • Description: plain text explanation. For Metrics, we also specify whether it applies to the whole dataset or individual columns.

  • Parameters: required and optional parameters for the Metric or Preset. We also specify the defaults that apply if you do not pass a custom parameter.

Metric visualizations. Each Metric includes a default render. To see the visualization, navigate to the example notebooks and run the notebook with all Metrics or Metric Presets.

We do our best to keep this page up to date. In case of discrepancies, check the "All metrics" notebook in the examples. If you notice an error, please send us a pull request with an update!

Metric Presets

Defaults: Presets use the default parameters for each Metric. You can see them in the tables below.

Data Quality Preset

DataQualityPreset captures column and dataset summaries. Input columns are required. Prediction and target are optional.

Composition:

  • DatasetSummaryMetric()

  • ColumnSummaryMetric() for all or specified columns

  • DatasetMissingValuesMetric()

Optional parameters:

  • columns
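To illustrate, here is a minimal sketch of running this Preset in a Report. The toy data and the age and education column names are placeholders, not part of the Preset definition:

import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataQualityPreset

# Toy data for illustration only; reference and current must share the same schema
reference_df = pd.DataFrame({"age": [25, 32, 47], "education": ["BA", "MS", "BA"]})
current_df = pd.DataFrame({"age": [29, 51, 38], "education": ["MS", "PhD", "BA"]})

# Omit `columns` to summarize all columns
report = Report(metrics=[DataQualityPreset(columns=["age", "education"])])
report.run(reference_data=reference_df, current_data=current_df)
report.save_html("data_quality_report.html")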

Data Drift Preset

DataDriftPreset evaluates data distribution drift in all individual columns and the share of drifting columns in the dataset. Input columns are required.

Composition:

  • DataDriftTable() for all or specified columns

  • DatasetDriftMetric() for all or specified columns

Optional parameters:

  • columns

  • stattest

  • cat_stattest

  • num_stattest

  • per_column_stattest

  • text_stattest

  • stattest_threshold

  • cat_stattest_threshold

  • num_stattest_threshold

  • per_column_stattest_threshold

  • text_stattest_threshold

  • embeddings

  • embeddings_drift_method

  • drift_share

How to set data drift parameters, embeddings drift parameters.
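For example, a sketch of overriding the drift detection logic via Preset parameters. The column names are illustrative; the stattest values are standard Evidently method names:

from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

# reference_df and current_df are assumed pandas DataFrames, as in the sketch above
report = Report(metrics=[
    DataDriftPreset(
        columns=["age", "salary"],      # illustrative subset; defaults to all columns
        num_stattest="wasserstein",     # drift method for numerical columns
        cat_stattest="psi",             # drift method for categorical columns
        stattest_threshold=0.2,         # custom drift detection threshold
        drift_share=0.7,                # dataset drift if >70% of columns drift
    ),
])
report.run(reference_data=reference_df, current_data=current_df)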

Target Drift Preset

TargetDriftPreset evaluates the prediction or target drift. Target and/or prediction is required. Input features are optional.

Composition:

  • ColumnDriftMetric() for target and/or prediction columns

  • ColumnCorrelationsMetric() for target and/or prediction columns

  • TargetByFeaturesTable() for all or specified columns

  • ColumnValuePlot() for target and/or prediction columns - if the task is regression

Optional parameters:

  • columns

  • stattest

  • cat_stattest

  • num_stattest

  • per_column_stattest

  • stattest_threshold

  • cat_stattest_threshold

  • num_stattest_threshold

  • per_column_stattest_threshold

How to set data drift parameters.

Regression Preset

RegressionPreset evaluates the quality of a regression model. Prediction and target are required. Input features are optional.

Composition:

  • RegressionQualityMetric()

  • RegressionPredictedVsActualScatter()

  • RegressionPredictedVsActualPlot()

  • RegressionErrorPlot()

  • RegressionAbsPercentageErrorPlot()

  • RegressionErrorDistribution()

  • RegressionErrorNormality()

  • RegressionTopErrorMetric()

  • RegressionErrorBiasTable() for all or specified columns

Optional parameters:

  • columns

Classification Preset

ClassificationPreset evaluates the quality of a classification model. Prediction and target are required. Input features are optional.

Composition:

  • ClassificationQualityMetric()

  • ClassificationClassBalance()

  • ClassificationConfusionMatrix()

  • ClassificationQualityByClass()

  • ClassificationClassSeparationPlot() - if probabilistic classification

  • ClassificationProbDistribution()- if probabilistic classification

  • ClassificationRocCurve() - if probabilistic classification

  • ClassificationPRCurve() - if probabilistic classification

  • ClassificationPRTable() - if probabilistic classification

  • ClassificationQualityByFeatureTable() for all or specified columns

Optional parameters:

  • columns

  • probas_threshold
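A sketch of running this Preset for probabilistic classification. The churn and churn_proba column names are assumptions about your data, not required names:

from evidently import ColumnMapping
from evidently.report import Report
from evidently.metric_preset import ClassificationPreset

# Map the target and the predicted probability column (names are illustrative)
column_mapping = ColumnMapping(target="churn", prediction="churn_proba")

report = Report(metrics=[ClassificationPreset(probas_threshold=0.7)])
report.run(
    reference_data=reference_df,   # assumed prepared pandas DataFrames
    current_data=current_df,
    column_mapping=column_mapping,
)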

Text Overview Preset

TextOverviewPreset() provides a summary for one or multiple text columns. Text columns are required.

Composition:

  • ColumnSummaryMetric() for text descriptors for all columns. Descriptors included:

    • Sentiment()

    • SentenceCount()

    • OOV()

    • TextLength()

    • NonLetterCharacterPercentage()

  • SemanticSimilarity() between each pair of text columns, if there is more than one.

Required parameters:

  • column_name or columns list

Optional parameters:

  • descriptors list

Text Evals

TextEvals() provides a simplified interface to list Descriptors for a given text column. It returns a summary of the evaluation results.

Composition:

  • ColumnSummaryMetric() for text descriptors for the specified text column:

    • Sentiment()

    • SentenceCount()

    • OOV()

    • TextLength()

    • NonLetterCharacterPercentage()

Required parameters:

  • column_name
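A sketch of running TextEvals with a custom Descriptor list. The response column name is an assumption about your data; omit descriptors to get the default set listed above:

from evidently.report import Report
from evidently.metric_preset import TextEvals
from evidently.descriptors import Sentiment, TextLength, IncludesWords

# current_df is assumed to contain a text column named "response"
report = Report(metrics=[
    TextEvals(column_name="response", descriptors=[
        Sentiment(),
        TextLength(),
        IncludesWords(words_list=["refund", "cancel"]),
    ]),
])
report.run(reference_data=None, current_data=current_df)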

RecSys (Recommender System) Preset

RecsysPreset evaluates the quality of the recommender system. Recommendations and true relevance scores are required. For some metrics, training data and item features are required.

Composition:

  • PrecisionTopKMetric()

  • RecallTopKMetric()

  • FBetaTopKMetric()

  • MAPKMetric()

  • NDCGKMetric()

  • MRRKMetric()

  • HitRateKMetric()

  • PersonalizationMetric()

  • PopularityBias()

  • RecCasesTable()

  • ScoreDistribution()

  • DiversityMetric()

  • SerendipityMetric()

  • NoveltyMetric()

  • ItemBiasMetric() (pass column as a parameter)

  • UserBiasMetric() (pass column as a parameter)

Required parameter:

  • k

Optional parameters:

  • min_rel_score: Optional[int]

  • no_feedback_users: bool

  • normalize_arp: bool

  • user_ids: Optional[List[Union[int, str]]]

  • display_features: Optional[List[str]]

  • item_features: Optional[List[str]]

  • user_bias_columns: Optional[List[str]]

  • item_bias_columns: Optional[List[str]]
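A sketch of running this Preset, assuming you have already prepared a ColumnMapping that describes your recommendations data (user ids, item ids, predictions and target), as covered in the column mapping docs:

from evidently.report import Report
from evidently.metric_preset import RecsysPreset

report = Report(metrics=[
    RecsysPreset(k=10, min_rel_score=4, no_feedback_users=True),
])
report.run(
    reference_data=reference_df,           # assumed prepared pandas DataFrames
    current_data=current_df,
    column_mapping=recsys_column_mapping,  # assumed prepared recommendations mapping
)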

Data Quality

Metric | Parameters

DatasetSummaryMetric() Dataset-level. Calculates descriptive dataset statistics, including:

  • Number of columns by type

  • Number of rows

  • Missing values

  • Empty columns

  • Constant and almost constant columns

  • Duplicated and almost duplicated columns

Required: n/a Optional:

  • missing_values = [], replace = True/False (see default types below)

  • almost_constant_threshold (default = 0.95)

  • almost_duplicated_threshold (default = 0.95)

DatasetMissingValuesMetric() Dataset-level. Calculates the number and share of missing values in the dataset. Displays the number of missing values per column.

Required: n/a Optional:

  • missing_values = [], replace = True/False (default = four types of missing values, see the note below)

DatasetCorrelationsMetric() Dataset-level. Calculates the correlations between all columns in the dataset. Uses: Pearson, Spearman, Kendall, Cramer_V. Visualizes the heatmap.

Required: n/a Optional: n/a

ColumnSummaryMetric() Column-level. Calculates various descriptive statistics for numerical, categorical, text or DateTime columns, including:

  • Count

  • Min, max, mean (for numerical)

  • Standard deviation (for numerical)

  • Quantiles - 25%, 50%, 75% (for numerical)

  • Unique value share

  • Most common value share

  • Missing value share

  • New and missing categories (for categorical)

  • Last and first date (for DateTime)

  • Length, OOV% and Non-letter % (for text)

Plots the distribution histogram. If DateTime is provided, also plots the distribution over time. If Target is provided, also plots the relation with Target.

Required: column_name Optional: n/a

ColumnMissingValuesMetric() Column-level. Calculates the number and share of missing values in the column.

Required: column_name Optional:

  • missing_values = [], replace = True/False (default = four types of missing values, see below)

ColumnRegExpMetric() Column-level. Calculates the number and share of the values that do not match a defined regular expression. Example use: ColumnRegExpMetric(column_name="status", reg_exp=r".*child.*")

Required:

  • column_name

  • reg_exp

Optional:

  • top (the number of the most mismatched values to return, default = 10)

ColumnDistributionMetric() Column-level. Plots the distribution histogram and returns bin positions and values for the given column.

Required: column_name Optional: n/a

ColumnValuePlot() Column-level. Plots the values in time.

Required: column_name Optional: n/a

ColumnQuantileMetric() Column-level. Calculates the defined quantile value and plots the distribution for the given numerical column. Example use: ColumnQuantileMetric(column_name="name", quantile=0.75)

Required:

  • column_name

  • quantile

Optional: n/a

ColumnCorrelationsMetric() Column-level. Calculates the correlations between the defined column and all the other columns in the dataset.

Required: column_name Optional: n/a

ColumnValueListMetric() Column-level. Calculates the number of values in the list / out of the list / not found in a given column. The value list should be specified. Example use: ColumnValueListMetric(column_name="city", values=["London", "Paris"])

Required:

  • column_name

  • values

Optional: n/a

ColumnValueRangeMetric() Column-level. Calculates the number and share of values in the specified range / out of range in a given column. Plots the distributions. Example use: ColumnValueRangeMetric(column_name="age", left=10, right=20)

Required:

  • column_name

  • left

  • right

ConflictPredictionMetric() Dataset-level. Calculates the number of instances where the model returns a different output for an identical input. Can be a signal of a low-quality model or data errors.

Required: n/a Optional: n/a

ConflictTargetMetric() Dataset-level. Calculates the number of instances where there is a different target value or label for an identical input. Can be a signal of a labeling or data error.

Required: n/a Optional: n/a

Defaults for Missing Values. The metrics that calculate the number or share of missing values detect four types of missing values by default: Pandas nulls (None, NAN, etc.), "" (empty string), Numpy "-inf" value, Numpy "inf" value. You can also pass custom missing values as a parameter and specify if you want to replace the default list. Example:

DatasetMissingValuesMetric(missing_values=["", 0, "n/a", -9999, None], replace=True)
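Individual Metrics from this group can also be combined in a single Report; a minimal sketch with illustrative column names:

from evidently.report import Report
from evidently.metrics import (
    DatasetSummaryMetric,
    DatasetMissingValuesMetric,
    ColumnQuantileMetric,
    ColumnValueRangeMetric,
)

report = Report(metrics=[
    DatasetSummaryMetric(),
    DatasetMissingValuesMetric(missing_values=["", 0, "n/a", -9999, None], replace=True),
    ColumnQuantileMetric(column_name="age", quantile=0.75),        # "age" is illustrative
    ColumnValueRangeMetric(column_name="age", left=10, right=20),
])
report.run(reference_data=reference_df, current_data=current_df)  # assumed prepared DataFrames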

Text Evals

Text Evals only apply to text columns. To compute a Descriptor for a single text column, use a TextEvals Preset.

You can also explicitly specify the Evidently Metric (e.g., ColumnSummaryMetric) to visualize the descriptor, or pick a Test (e.g., TestColumnValueMin) to run validations.
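A sketch of both options, assuming a text column named response; the TextLength Descriptor is applied to it via .on():

from evidently.report import Report
from evidently.test_suite import TestSuite
from evidently.metrics import ColumnSummaryMetric
from evidently.tests import TestColumnValueMin
from evidently.descriptors import TextLength

# Visualize a Descriptor with a Metric
report = Report(metrics=[
    ColumnSummaryMetric(column_name=TextLength().on("response")),
])
report.run(reference_data=None, current_data=current_df)  # current_df is assumed prepared

# Validate a Descriptor with a Test
suite = TestSuite(tests=[
    TestColumnValueMin(column_name=TextLength().on("response"), gt=0),
])
suite.run(reference_data=None, current_data=current_df)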

Descriptors: Patterns

Descriptor | Parameters

RegExp()

  • Matches text against any specified regular expression.

  • Returns True/False for every input.

Example use: RegExp(reg_exp=r"^I")

Required: reg_exp Optional:

  • display_name

BeginsWith()

  • Checks if the text begins with a specified combination.

  • Returns True/False for every input.

Example use: BeginsWith(prefix="How")

Required: prefix Optional:

  • display_name

  • case_sensitive = True or False

EndsWith()

  • Checks if the text ends with a specified combination.

  • Returns True/False for every input.

Example use: EndsWith(suffix="Thank you.")

Required: suffix Optional:

  • display_name

  • case_sensitive = True or False

Contains()

  • Checks if the text contains any or all specified items (e.g. competitor names, etc.)

  • Returns True/False for every input.

Example use: Contains(items=["medical leave"])

Required: items: List[str] Optional:

  • display_name

  • mode = 'any' or 'all'

  • case_sensitive = True or False

DoesNotContain()

  • Checks if the text does not contain any or all specified items.

  • Returns True/False for every input.

Example use: DoesNotContain(items=["as a large language model"])

Required: items: List[str] Optional:

  • display_name

  • mode = 'all' or 'any'

  • case_sensitive = True or False

IncludesWords()

  • Checks if the text includes any (default) or all specified words.

  • Considers only vocabulary words (from NLTK vocabulary).

  • By default, considers inflected and variant forms of the same word.

  • Returns True/False for every input.

Example use: IncludesWords(words_list=['booking', 'hotel', 'flight'])

Required: words_list: List[str] Optional:

  • display_name

  • mode = 'any' or 'all'

  • lemmatize = True or False

ExcludesWords()

  • Checks if the text excludes all specified words.

  • Considers only vocabulary words (from NLTK vocabulary).

  • By default, considers inflected and variant forms of the same word.

  • Returns True/False for every input.

Example use: ExcludesWords(words_list=['buy', 'sell', 'bet'])

Required: words_list: List[str] Optional:

  • display_name

  • mode = 'all' or 'any'

  • lemmatize = True or False

Descriptors: Text stats

Descriptor | Parameters

TextLength()

  • Measures the length of the text.

  • Returns an absolute number.

Required: n/a Optional:

  • display_name

OOV()

  • Calculates the percentage of out-of-vocabulary words based on imported NLTK vocabulary.

  • Returns a score on a scale from 0 to 100.

Required: n/a Optional:

  • display_name

  • ignore_words: Tuple = ()

NonLetterCharacterPercentage()

  • Calculates the percentage of non-letter characters.

  • Returns a score on a scale from 0 to 100.

Required: n/a Optional:

  • display_name

SentenceCount()

  • Counts the number of sentences in the text.

  • Returns an absolute number.

Required: n/a Optional:

  • display_name

WordCount()

  • Counts the number of words in the text.

  • Returns an absolute number.

Required: n/a Optional:

  • display_name

Descriptors: Model-based

Descriptor | Parameters

Sentiment()

  • Analyzes the sentiment of the text.

  • Returns a score on a scale from -1 (negative) to 1 (positive).

Required: n/a Optional:

  • display_name

HuggingFaceToxicityModel()

  • Detects hate speech using a HuggingFace model.

  • Returns predicted probability for the “hate” label.

  • Scale: 0 to 1.

Optional:

  • toxic_label="hate" (default)

  • display_name

HuggingFaceModel() Scores the text using the selected HuggingFace model.

See the docs for example models (classification by topic, emotion, etc.).

OpenAIPrompting() Scores the text using the defined prompt and OpenAI model as LLM-as-a-judge.

See docs for examples.

SemanticSimilarity()

  • Calculates pairwise semantic similarity between columns.

  • Generates text embeddings using a transformer model.

  • Calculates Cosine Similarity between each pair of texts.

  • Returns a score on a scale from 0 to 1 (0: different, 0.5: unrelated, 1: identical).

Example use: ColumnSummaryMetric(column_name=SemanticSimilarity().on(["response", "new_response"])).

Required:

  • two column names

Optional:

  • display_name

Text-Specific Metrics

The following metrics only apply to text columns.

Metric | Parameters

TextDescriptorsDistribution()

  • Column-level.

  • Visualizes distributions for auto-generated text descriptors (TextLength(), OOV() etc.)

Required:

  • column_name

TextDescriptorsCorrelationMetric()

  • Column-level.

  • Calculates and visualizes correlations between auto-generated text descriptors and other columns in the dataset.

Required:

  • column_name

TextDescriptorsDriftMetric()

  • Column-level.

  • Calculates data drift for auto-generated text descriptors and visualizes the distributions of text characteristics.

Required:

  • column_name

Optional:

  • stattest

  • stattest_threshold

Data Drift

Defaults for Data Drift. By default, all data drift metrics use the Evidently drift detection logic that selects a drift detection method based on feature type and volume. You always need a reference dataset.

To modify the logic or select a different test, you should set data drift parameters or embeddings drift parameters. You can choose from 20+ drift detection methods and optionally pass feature importances.
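For example, a sketch of setting custom drift methods on individual Metrics. The salary column is illustrative; wasserstein and psi are standard Evidently stattest options:

from evidently.report import Report
from evidently.metrics import DataDriftTable, ColumnDriftMetric

# reference_df and current_df are assumed pandas DataFrames; a reference dataset is required
report = Report(metrics=[
    DataDriftTable(num_stattest="wasserstein", stattest_threshold=0.1),
    ColumnDriftMetric(column_name="salary", stattest="psi"),
])
report.run(reference_data=reference_df, current_data=current_df)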

Metric | Parameters

DatasetDriftMetric()

  • Dataset-level.

  • Calculates the number and share of drifted features in the dataset.

  • Each feature is tested for drift individually using the default algorithm, unless a custom approach is specified.

Required: n/a Optional:

  • columns (default = all)

  • drift_share (default for dataset drift = 0.5)

  • stattest

  • cat_stattest

  • num_stattest

  • per_column_stattest

  • stattest_threshold

  • cat_stattest_threshold

  • num_stattest_threshold

  • per_column_stattest_threshold

How to set data drift parameters.

DataDriftTable()

  • Dataset-level.

  • Calculates data drift for all or selected columns.

  • Returns drift detection results for each column.

  • Visualizes distributions for all columns in a table.

Required: n/a Optional:

  • columns

  • stattest

  • cat_stattest

  • num_stattest

  • per_column_stattest

  • stattest_threshold

  • cat_stattest_threshold

  • num_stattest_threshold

  • per_column_stattest_threshold

How to set data drift parameters, embeddings drift parameters.

ColumnDriftMetric()

  • Column-level.

  • Calculates data drift for a defined column (tabular or text).

  • Visualizes distributions.

Required:

  • column_name

Optional:

  • stattest

  • stattest_threshold

How to set data drift parameters.

EmbeddingsDriftMetric()

  • Column-level.

  • Calculates data drift for embeddings.

  • Requires embedding column mapping.

Required:

  • embeddings_name

Optional:

  • drift_method

How to set embeddings drift parameters.
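A sketch of mapping embedding columns and running the metric. The subset name reviews and the emb_* column names are assumptions about your data:

from evidently import ColumnMapping
from evidently.report import Report
from evidently.metrics import EmbeddingsDriftMetric

# Declare which columns hold the embedding vectors
column_mapping = ColumnMapping(embeddings={"reviews": ["emb_0", "emb_1", "emb_2"]})

report = Report(metrics=[EmbeddingsDriftMetric("reviews")])
report.run(
    reference_data=reference_df,   # assumed prepared pandas DataFrames
    current_data=current_df,
    column_mapping=column_mapping,
)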

Classification

The metrics work both for probabilistic and non-probabilistic classification. All metrics are dataset-level. All metrics require column mapping of target and prediction.

Metric | Parameters

ClassificationDummyMetric() Calculates the quality of the dummy model built on the same data. This can serve as a baseline.

Required: n/a Optional: n/a

ClassificationQualityMetric() Calculates various classification performance metrics, including:

  • Accuracy

  • Precision

  • Recall

  • F-1 score

  • TPR (True Positive Rate)

  • TNR (True Negative Rate)

  • FPR (False Positive Rate)

  • FNR (False Negative Rate)

  • ROC AUC Score (for probabilistic classification)

  • LogLoss (for probabilistic classification)

Required: n/a Optional:

  • probas_threshold (default for classification = None; default for probabilistic classification = 0.5)

  • k (default = None)

ClassificationClassBalance() Calculates the number of objects for each label. Plots the histogram.

Required: n/a Optional: n/a

ClassificationConfusionMatrix() Calculates the TPR, TNR, FPR, FNR, and plots the confusion matrix.

Required: n/a Optional:

  • probas_threshold (default for classification = None; default for probabilistic classification = 0.5)

  • k (default = None)

ClassificationQualityByClass() Calculates the classification quality metrics for each class. Plots the matrix.

Required: n/a Optional:

  • probas_threshold (default for classification = None; default for probabilistic classification = 0.5)

  • k (default = None)

ClassificationClassSeparationPlot() Visualization of the predicted probabilities by class. Applicable for probabilistic classification only.

Required: n/a Optional: n/a

ClassificationProbDistribution() Visualization of the probability distribution by class. Applicable for probabilistic classification only.

Required: n/a Optional: n/a

ClassificationRocCurve() Plots ROC Curve. Applicable for probabilistic classification only.

Required: n/a Optional: n/a

ClassificationPRCurve() Plots Precision-Recall Curve. Applicable for probabilistic classification only.

Required: n/a Optional: n/a

ClassificationPRTable() Calculates the Precision-Recall table that shows model quality at different decision thresholds.

Required: n/a Optional: n/a

ClassificationQualityByFeatureTable() Plots the relationship between feature values and model quality.

Required: n/a Optional:

  • columns (default = all categorical and numerical columns)

Regression

All metrics are dataset-level. All metrics require column mapping of target and prediction.

Metric | Parameters

RegressionDummyMetric() Calculates the quality of the dummy model built on the same data. This can serve as a baseline.

Required: n/a Optional: n/a

RegressionQualityMetric() Calculates various regression performance metrics, including:

  • RMSE

  • Mean error (+ standard deviation)

  • MAE (+ standard deviation)

  • MAPE (+ standard deviation)

  • Max absolute error

Required: n/a Optional: n/a

RegressionPredictedVsActualScatter() Visualizes predicted vs actual values in a scatter plot.

Required: n/a Optional: n/a

RegressionPredictedVsActualPlot() Visualizes predicted vs. actual values in a line plot.

Required: n/a Optional: n/a

RegressionErrorPlot() Visualizes the model error (predicted - actual) in a line plot.

Required: n/a Optional: n/a

RegressionAbsPercentageErrorPlot() Visualizes the absolute percentage error in a line plot.

Required: n/a Optional: n/a

RegressionErrorDistribution() Visualizes the distribution of the model error in a histogram.

Required: n/a Optional: n/a

RegressionErrorNormality() Visualizes the quantile-quantile plot (Q-Q plot) to estimate value normality.

Required: n/a Optional: n/a

RegressionTopErrorMetric() Calculates the regression performance metrics for different groups:

  • top-X% of predictions with overestimation

  • top-X% of predictions with underestimation

  • Majority (the rest)

Visualizes the group division on a scatter plot with predicted vs. actual values.

Required: n/a Optional:

  • top_error (default=0.05; the metrics are calculated for top-5% predictions with overestimation and underestimation).

RegressionErrorBiasTable() Plots the relationship between feature values and model quality per group (for top-X% error groups, as above).

Required: n/a Optional:

  • columns (default = all categorical and numerical columns)

  • top_error (default=0.05; the metrics are calculated for top-5% predictions with overestimation and underestimation).

Ranking and Recommendations

All metrics are dataset-level. Check individual metric descriptions here. All metrics require recommendations column mapping.

Optional shared parameters for multiple metrics:

  • no_feedback_users: bool = False. Specifies whether to include users who did not select any of the items when computing the quality metric. Default: False.

  • min_rel_score: Optional[int] = None. Specifies the minimum relevance score to consider relevant when calculating the quality metrics for non-binary targets (e.g., if a target is a rating or a custom score).
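For example, these shared parameters can be passed to individual top-K metrics; a sketch with illustrative values:

from evidently.report import Report
from evidently.metrics import NDCGKMetric, PrecisionTopKMetric, PersonalizationMetric

report = Report(metrics=[
    NDCGKMetric(k=10, min_rel_score=4, no_feedback_users=True),
    PrecisionTopKMetric(k=10, min_rel_score=4),
    PersonalizationMetric(k=10),
])
report.run(
    reference_data=reference_df,           # assumed prepared pandas DataFrames
    current_data=current_df,
    column_mapping=recsys_column_mapping,  # assumed prepared recommendations mapping
)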

Metric | Parameters

RecallTopKMetric() Calculates the recall at k.

Required:

  • k

Optional:

  • no_feedback_users

  • min_rel_score

PrecisionTopKMetric() Calculates the precision at k.

Required:

  • k

Optional:

  • no_feedback_users

  • min_rel_score

FBetaTopKMetric() Calculates the F-measure at k.

Required:

  • beta (default = 1)

  • k

Optional:

  • no_feedback_users

  • min_rel_score

MAPKMetric() Calculates the Mean Average Precision (MAP) at k.

Required:

  • k

Optional:

  • no_feedback_users

  • min_rel_score

MARKMetric() Calculates the Mean Average Recall (MAR) at k.

Required:

  • k

Optional:

  • no_feedback_users

  • min_rel_score

NDCGKMetric() Calculates the Normalized Discounted Cumulative Gain at k.

Required:

  • k

Optional:

  • no_feedback_users

  • min_rel_score

MRRKMetric() Calculates the Mean Reciprocal Rank (MRR) at k.

Required:

  • k

Optional:

  • min_rel_score

  • no_feedback_users

HitRateKMetric() Calculates the hit rate at k: the share of users for whom at least one relevant item is included in the top-K.

Required:

  • k

Optional:

  • min_rel_score

  • no_feedback_users

DiversityMetric() Calculates intra-list Diversity at k: the diversity of recommendations shown to each user in top-K recommendations, averaged across all users.

Required:

  • k

  • item_features: List

Optional:

  • -

NoveltyMetric() Calculates novelty at k: the novelty of recommendations shown to each user in top-K recommendations, averaged across all users. Requires a training dataset.

Required:

  • k

Optional:

  • -

SerendipityMetric() Calculates serendipity at k: how unusual the relevant recommendations are in top-K, averaged across all users. Requires a training dataset.

Required:

  • k

  • item_features: List

Optional:

  • min_rel_score

PersonalizationMetric() Measures the average uniqueness of each user's top-K recommendations.

Required:

  • k

Optional:

  • -

PopularityBias() Evaluates the popularity bias in recommendations by computing ARP (average recommendation popularity), Gini index, and coverage. Requires a training dataset.

Required:

  • k

  • normalize_arp (default: False) - whether to normalize ARP calculation by the most popular item in training

Optional:

  • -

ItemBiasMetric() Visualizes the distribution of recommendations by a chosen dimension (column), compared to its distribution in the training set. Requires a training dataset.

Required:

  • k

  • column_name

Optional:

  • -

UserBiasMetric() Visualizes the distribution of the chosen category (e.g., a user characteristic), compared to its distribution in the training dataset. Requires a training dataset.

Required:

  • k

  • column_name

Optional:

  • -

ScoreDistribution() Computes the predicted score entropy. Visualizes the distribution of the scores at k (and all scores, if available). Applies only when the recommendations_type is a score.

Required:

  • k

Optional:

  • -

RecCasesTable() Shows the list of recommendations for specific user IDs (or 5 random if not specified).

Required:

  • -

Optional:

  • display_features: List

  • user_ids: List

  • train_item_num: int
