All metrics
List of Metrics, Descriptors and Metric Presets available in Evidently.
We do our best to keep this page up to date. In case of discrepancies, check the "All metrics" notebook in examples. If you notice an error, please send us a pull request with an update!
Metric Presets
Defaults: Presets use the default parameters for each Metric. You can see them in the tables below.
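For instance, a minimal sketch of running a Preset inside a Report (the DataFrames and column names here are illustrative; imports follow the evidently 0.4.x conventions):

import pandas as pd

from evidently.report import Report
from evidently.metric_preset import DataQualityPreset

# Illustrative reference and current data; in practice, pass your own DataFrames.
ref = pd.DataFrame({"age": [23, 35, 41, None], "city": ["London", "Paris", "London", "Berlin"]})
cur = pd.DataFrame({"age": [25, 38, None, 52], "city": ["Paris", "Paris", "Madrid", "London"]})

report = Report(metrics=[DataQualityPreset()])  # the Preset expands into Metrics with default parameters
report.run(reference_data=ref, current_data=cur)
report.save_html("data_quality_report.html")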
Data Quality
DatasetSummaryMetric() Dataset-level. Calculates descriptive dataset statistics, including:
Number of columns by type
Number of rows
Missing values
Empty columns
Constant and almost constant columns
Duplicated and almost duplicated columns
Required: n/a Optional:
missing_values = [], replace = True/False (see default types below)
almost_constant_threshold (default = 0.95)
almost_duplicated_threshold (default = 0.95)
DatasetMissingValuesMetric() Dataset-level. Calculates the number and share of missing values in the dataset. Displays the number of missing values per column.
Required: n/a Optional:
missing_values = [], replace = True/False (default = four types of missing values, see below)
DatasetCorrelationsMetric() Dataset-level. Calculates the correlations between all columns in the dataset. Uses: Pearson, Spearman, Kendall, Cramer_V. Visualizes the heatmap.
Required: n/a Optional: n/a
ColumnSummaryMetric() Column-level. Calculates various descriptive statistics for numerical, categorical, text or DateTime columns, including:
Count
Min, max, mean (for numerical)
Standard deviation (for numerical)
Quantiles - 25%, 50%, 75% (for numerical)
Unique value share
Most common value share
Missing value share
New and missing categories (for categorical)
Last and first date (for DateTime)
Length, OOV% and Non-letter % (for text)
Plots the distribution histogram. If DateTime is provided, also plots the distribution over time. If Target is provided, also plots the relation with Target.
Required:
column_name
Optional:
n/a
ColumnMissingValuesMetric() Column-level. Calculates the number and share of missing values in the column.
Required: n/a Optional:
missing_values = [], replace = True/False (default = four types of missing values, see below)
ColumnRegExpMetric()
Column-level.
Calculates the number and share of the values that do not match a defined regular expression.
Example use: ColumnRegExpMetric(column_name="status", reg_exp=r".*child.*")
Required:
column_name
reg_exp
Optional:
top (the number of the most mismatched values to return, default = 10)
ColumnDistributionMetric() Column-level. Plots the distribution histogram and returns bin positions and values for the given column.
Required:
column_name
Optional:
n/a
ColumnValuePlot() Column-level. Plots the values over time.
Required:
column_name
Optional:
n/a
ColumnQuantileMetric()
Column-level.
Calculates the defined quantile value and plots the distribution for the given numerical column.
Example use: ColumnQuantileMetric(column_name="name", quantile=0.75)
Required:
column_name
quantile
Optional: n/a
ColumnCorrelationsMetric() Column-level. Calculates the correlations between the defined column and all the other columns in the dataset.
Required:
column_name
Optional:
n/a
ColumnValueListMetric()
Column-level.
Calculates the number of values in the list / out of the list / not found in a given column. The value list should be specified.
Example use: ColumnValueListMetric(column_name="city", values=["London", "Paris"])
Required:
column_name
values
Optional: n/a
ColumnValueRangeMetric()
Column-level.
Calculates the number and share of values in the specified range / out of range in a given column. Plots the distributions.
Example use: ColumnValueRangeMetric(column_name="age", left=10, right=20)
Required:
column_name
left
right
Optional: n/a
ConflictPredictionMetric() Dataset-level. Calculates the number of instances where the model returns a different output for an identical input. Can be a signal of a low-quality model or data errors.
Required: n/a Optional: n/a
ConflictTargetMetric() Dataset-level. Calculates the number of instances where there is a different target value or label for an identical input. Can be a signal of a labeling or data error.
Required: n/a Optional: n/a
Defaults for Missing Values. The metrics that calculate the number or share of missing values detect four types of missing values by default: Pandas nulls (None, NAN, etc.), "" (empty string), Numpy "-inf" value, Numpy "inf" value. You can also pass custom missing values as a parameter and specify whether to replace the default list, as in this sketch (the custom values are illustrative):
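from evidently.report import Report
from evidently.metrics import DatasetMissingValuesMetric

report = Report(metrics=[
    # replace=False extends the default missing values list with the custom ones;
    # replace=True would use only the values passed here.
    DatasetMissingValuesMetric(missing_values=["n/a", -9999], replace=False),
])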
Text Evals
Text Evals only apply to text columns. To compute a Descriptor for a single text column, use a TextEvals Preset. Read docs.
You can also explicitly specify the Evidently Metric (e.g., ColumnSummaryMetric) to visualize the descriptor, or pick a Test (e.g., TestColumnValueMin) to run validations. See the sketches below and after the Text Patterns list.
Descriptors: Text Patterns
Check for regular expression matches.
RegExp()
Matches text against any specified regular expression.
Returns True/False for every input.
Example use:
RegExp(reg_exp=r"^I")
Required:
reg_exp
Optional:
display_name
BeginsWith()
Checks if the text begins with a specified combination.
Returns True/False for every input.
Example use:
BeginsWith(prefix="How")
Required:
prefix
Optional:
display_name
case_sensitive = True or False
EndsWith()
Checks if the text ends with a specified combination.
Returns True/False for every input.
Example use:
EndsWith(suffix="Thank you.")
Required:
suffix
Optional:
display_name
case_sensitive = True or False
Contains()
Checks if the text contains any or all specified items (e.g., competitor names).
Returns True/False for every input.
Example use:
Contains(items=["medical leave"])
Required:
items: List[str]
Optional:
display_name
mode = 'any' or 'all'
case_sensitive = True or False
DoesNotContain()
Checks if the text does not contain any or all specified items.
Returns True/False for every input.
Example use:
DoesNotContain(items=["as a large language model"])
Required:
items: List[str]
Optional:
display_name
mode = 'all' or 'any'
case_sensitive = True or False
IncludesWords()
Checks if the text includes any (default) or all specified words.
Considers only vocabulary words (from NLTK vocabulary).
By default, considers inflected and variant forms of the same word.
Returns True/False for every input.
Example use:
IncludesWords(words_list=['booking', 'hotel', 'flight'])
Required:
words_list: List[str]
Optional:
display_name
mode = 'any' or 'all'
lemmatize = True or False
ExcludesWords()
Checks if the text excludes all specified words.
Considers only vocabulary words (from NLTK vocabulary).
By default, considers inflected and variant forms of the same word.
Returns True/False for every input.
Example use:
ExcludesWords(words_list=['buy', 'sell', 'bet'])
Required:
words_list: List[str]
Optional:
display_name
mode = 'all' or 'any'
lemmatize = True or False
ItemMatch()
Checks whether the text contains any (default) or all specified items that are specific to each row (represented as tuples).
Returns True/False for each row.
Example use:
ItemMatch(with_column="expected")
Required:
with_column: str
Optional:
display_name
mode = 'all' or 'any'
case_sensitive = True or False
ItemNoMatch()
Checks whether the text excludes any (default) or all specified items that are specific to each row (represented as tuples).
Returns True/False for each row.
Example use:
ItemNoMatch(with_column="forbidden")
Required:
with_column: str
Optional:
display_name
mode = 'all' or 'any'
case_sensitive = True or False
WordMatch()
Checks whether the text includes any (default) or all specified words for each row (represented as tuples).
Considers only vocabulary words (from NLTK vocabulary).
By default, considers inflected and variant forms of the same word.
Returns True/False for every input.
Example use:
WordMatch(with_column="expected")
Required:
with_column: str
Optional:
display_name
mode = 'any' or 'all'
lemmatize = True or False
WordNoMatch()
Checks whether the text excludes any (default) or all specified words for each row (represented as tuples).
Considers only vocabulary words (from NLTK vocabulary).
By default, considers inflected and variant forms of the same word.
Returns True/False for every input.
Example use:
WordNoMatch(with_column="forbidden")
Required:
with_column: str
Optional:
display_name
mode = 'any' or 'all'
lemmatize = True or False
ExactMatch()
Checks if the text matches between two columns.
Returns True/False for every input.
Example use:
ExactMatch(with_column='reference')
Required:
with_column: str
Optional:
display_name
IsValidJSON()
Checks if the text in a specified column is valid JSON.
Returns True/False for every input.
Required: n/a Optional:
display_name
JSONSchemaMatch()
Checks if the text contains a JSON object matching the expected_schema.
Supports exact (exact_match=True) or minimal (exact_match=False) matching, with optional strict type validation (validate_types=True).
Returns True/False for each row.
Example use:
JSONSchemaMatch(expected_schema={"name": str, "age": int}, exact_match=False, validate_types=True)
Required:
expected_schema: Dict[str, type]
Optional:
exact_match = True or False
validate_types = True or False
JSONMatch()
Compares two columns and checks whether the JSON objects in each row of the dataframe match.
Returns True/False for every input.
Example use:
JSONMatch(with_column="column_2")
Required:
with_column: str
Optional:
display_name
ContainsLink()
Checks if the text contains at least one valid URL.
Returns True/False for each row.
Required: n/a Optional:
display_name
IsValidPython()
Checks if the text is valid Python code without syntax errors.
Returns True/False for every input.
Required: n/a Optional:
display_name
IsValidSQL()
Checks if the text in a specified column is a valid SQL query without executing the query.
Returns True/False for every input.
Required: n/a Optional:
display_name
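As noted at the top of the Text Evals section, any of these pattern descriptors can also be routed through a regular Metric or validated with a Test via the .on("column") binding. A sketch (the "response" column and the chosen items are illustrative):

from evidently.report import Report
from evidently.test_suite import TestSuite
from evidently.tests import TestColumnValueMin
from evidently.metrics import ColumnSummaryMetric
from evidently.descriptors import Contains, TextLength

# Visualize how often the text contains a given phrase.
report = Report(metrics=[
    ColumnSummaryMetric(column_name=Contains(items=["sorry"]).on("response")),
])

# Validate a descriptor instead: fail if any response is shorter than 10 symbols.
tests = TestSuite(tests=[
    TestColumnValueMin(column_name=TextLength().on("response"), gte=10),
])
# Both still need .run(reference_data=..., current_data=...) on data with a "response" column.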
Descriptors: Text stats
Computes descriptive text statistics.
TextLength()
Measures the length of the text in symbols.
Returns an absolute number.
Required: n/a Optional:
display_name
OOV()
Calculates the percentage of out-of-vocabulary words based on imported NLTK vocabulary.
Returns a score on a scale of 0 to 100.
Required: n/a Optional:
display_name
ignore_words: Tuple = ()
NonLetterCharacterPercentage()
Calculates the percentage of non-letter characters.
Returns a score on a scale of 0 to 100.
Required: n/a Optional:
display_name
SentenceCount()
Counts the number of sentences in the text.
Returns an absolute number.
Required: n/a Optional:
display_name
WordCount()
Counts the number of words in the text.
Returns an absolute number.
Required: n/a Optional:
display_name
Descriptors: LLM-based
Use external LLMs with an evaluation prompt to score text data (also known as the LLM-as-a-judge method).
LLMEval() Scores the text using the user-defined criteria, automatically formatted in a templated evaluation prompt.
See docs for examples and parameters.
DeclineLLMEval() Detects texts containing a refusal or a rejection to do something. Returns a label (DECLINE or OK) or score.
See docs for parameters.
PIILLMEval() Detects texts containing PII (Personally Identifiable Information). Returns a label (PII or OK) or score.
See docs for parameters.
NegativityLLMEval() Detects negative texts (containing critical or pessimistic tone). Returns a label (NEGATIVE or POSITIVE) or score.
See docs for parameters.
BiasLLMEval() Detects biased texts (containing prejudice for or against a person or group). Returns a label (BIAS or OK) or score.
See docs for parameters.
ToxicityLLMEval() Detects toxic texts (containing harmful, offensive, or derogatory language). Returns a label (TOXICITY or OK) or score.
See docs for parameters.
ContextQualityLLMEval() Evaluates if CONTEXT is VALID (has sufficient information to answer the QUESTION) or INVALID (has missing or contradictory information). Returns a label (VALID or INVALID) or score.
Run the descriptor over the context column and pass the question column as a parameter. See docs for parameters.
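A sketch of running built-in LLM judges through TextEvals (these descriptors call an external LLM, so the provider API key must be configured; the "response" column is illustrative):

from evidently.report import Report
from evidently.metric_preset import TextEvals
from evidently.descriptors import NegativityLLMEval, DeclineLLMEval

report = Report(metrics=[
    TextEvals(column_name="response", descriptors=[
        NegativityLLMEval(),  # NEGATIVE or POSITIVE label by default
        DeclineLLMEval(),     # DECLINE or OK label by default
    ])
])
# Run with report.run(reference_data=None, current_data=...) on data with a "response" column.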
Descriptors: Model-based
Use pre-trained machine learning models for evaluation.
SemanticSimilarity()
Calculates pairwise semantic similarity between columns.
Generates text embeddings using a transformer model.
Calculates Cosine Similarity between each pair of texts.
Returns a score on a scale of 0 to 1 (0: different, 0.5: unrelated, 1: identical).
Example use:
SemanticSimilarity(with_column="response")
Required:
with_column
Optional:
display_name
Sentiment()
Analyzes the sentiment of the text using a word-based model.
Returns a score on a scale of -1 (negative) to 1 (positive).
Required: n/a Optional:
display_name
HuggingFaceModel() Scores the text using the user-selected HuggingFace model.
See docs for example models (classification by topic, emotion, etc.).
HuggingFaceToxicityModel()
Detects hate speech using a HuggingFace model.
Returns the predicted probability for the "hate" label.
Scale: 0 to 1.
Optional:
toxic_label = "hate" (default)
display_name
BERTScore()
Calculates similarity between two text columns based on token embeddings from a pre-trained BERT model.
Returns BERTScore (F1 Score) based on cosine similarity between token embeddings.
Required:
with_column
Optional:
model: name of the pre-trained BERT model to use (default: "bert-base-uncased")
tfidf_weighted: boolean indicating whether embeddings should be weighted with inverse document frequency (IDF) scores (default: False)
display_name
Data Drift
Defaults for Data Drift. By default, all data drift metrics use the Evidently drift detection logic that selects a drift detection method based on feature type and volume. You always need a reference dataset.
To modify the logic or select a different test, you should set data drift parameters or embeddings drift parameters. You can choose from 20+ drift detection methods and optionally pass feature importances.
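A sketch of overriding the defaults (the method names follow the drift parameter docs; the "age" column and thresholds are illustrative):

from evidently.report import Report
from evidently.metrics import DataDriftTable, ColumnDriftMetric

report = Report(metrics=[
    # One method for all categorical and one for all numerical columns.
    DataDriftTable(cat_stattest="psi", num_stattest="wasserstein", stattest_threshold=0.2),
    # Or override the method and threshold for a single column.
    ColumnDriftMetric(column_name="age", stattest="ks", stattest_threshold=0.05),
])
# Drift always needs a reference: report.run(reference_data=ref, current_data=cur)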
DatasetDriftMetric()
Dataset-level.
Calculates the number and share of drifted features in the dataset.
Each feature is tested for drift individually using the default algorithm, unless a custom approach is specified.
Required: n/a Optional:
columns (default = all)
drift_share (default for dataset drift = 0.5)
stattest
cat_stattest
num_stattest
per_column_stattest
stattest_threshold
cat_stattest_threshold
num_stattest_threshold
per_column_stattest_threshold
DataDriftTable()
Dataset-level.
Calculates data drift for all or selected columns.
Returns drift detection results for each column.
Visualizes distributions for all columns in a table.
Required: n/a Optional:
columns
stattest
cat_stattest
num_stattest
per_column_stattest
stattest_threshold
cat_stattest_threshold
num_stattest_threshold
per_column_stattest_threshold
See how to set data drift parameters and embeddings drift parameters.
ColumnDriftMetric()
Column-level.
Calculates data drift for a defined column (tabular or text).
Visualizes distributions.
EmbeddingsDriftMetric()
Column-level.
Calculates data drift for embeddings.
Requires embedding column mapping.
Classification
The metrics work both for probabilistic and non-probabilistic classification. All metrics are dataset-level. All metrics require column mapping of target and prediction.
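A sketch of the required column mapping (the column names are illustrative; for probabilistic classification, point prediction to the probability column(s) instead):

from evidently import ColumnMapping
from evidently.report import Report
from evidently.metrics import ClassificationQualityMetric, ClassificationConfusionMatrix

column_mapping = ColumnMapping(target="label", prediction="predicted_label")

report = Report(metrics=[
    ClassificationQualityMetric(),
    ClassificationConfusionMatrix(),
])
# report.run(reference_data=ref, current_data=cur, column_mapping=column_mapping)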
ClassificationDummyMetric() Calculates the quality of the dummy model built on the same data. This can serve as a baseline.
Required: n/a Optional: n/a
ClassificationQualityMetric() Calculates various classification performance metrics, including:
Accuracy
Precision
Recall
F1 score
TPR (True Positive Rate)
TNR (True Negative Rate)
FPR (False Positive Rate)
FNR (False Negative Rate)
ROC AUC Score (for probabilistic classification)
LogLoss (for probabilistic classification)
Required: n/a Optional:
probas_threshold (default for classification = None; default for probabilistic classification = 0.5)
k (default = None)
ClassificationClassBalance() Calculates the number of objects for each label. Plots the histogram.
Required: n/a Optional: n/a
ClassificationConfusionMatrix() Calculates the TPR, TNR, FPR, FNR, and plots the confusion matrix.
Required: n/a Optional:
probas_threshold (default for classification = None; default for probabilistic classification = 0.5)
k (default = None)
ClassificationQualityByClass() Calculates the classification quality metrics for each class. Plots the matrix.
Required: n/a Optional:
probas_threshold (default for classification = None; default for probabilistic classification = 0.5)
k (default = None)
ClassificationClassSeparationPlot() Visualization of the predicted probabilities by class. Applicable for probabilistic classification only.
Required: n/a Optional: n/a
ClassificationProbDistribution() Visualization of the probability distribution by class. Applicable for probabilistic classification only.
Required: n/a Optional: n/a
ClassificationRocCurve() Plots ROC Curve. Applicable for probabilistic classification only.
Required: n/a Optional: n/a
ClassificationPRCurve() Plots Precision-Recall Curve. Applicable for probabilistic classification only.
Required: n/a Optional: n/a
ClassificationPRTable() Calculates the Precision-Recall table that shows model quality at different decision thresholds.
Required: n/a Optional: n/a
ClassificationQualityByFeatureTable() Plots the relationship between feature values and model quality.
Required: n/a Optional:
columns (default = all categorical and numerical columns)
Regression
All metrics are dataset-level. All metrics require column mapping of target and prediction.
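A sketch of the required column mapping (the column names are illustrative):

from evidently import ColumnMapping
from evidently.report import Report
from evidently.metrics import RegressionQualityMetric, RegressionErrorPlot

column_mapping = ColumnMapping(target="price", prediction="predicted_price")

report = Report(metrics=[RegressionQualityMetric(), RegressionErrorPlot()])
# report.run(reference_data=ref, current_data=cur, column_mapping=column_mapping)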
RegressionDummyMetric() Calculates the quality of the dummy model built on the same data. This can serve as a baseline.
Required: n/a Optional: n/a
RegressionQualityMetric() Calculates various regression performance metrics, including:
RMSE
Mean error (+ standard deviation)
MAE (+ standard deviation)
MAPE (+ standard deviation)
Max absolute error
Required: n/a Optional: n/a
RegressionPredictedVsActualScatter() Visualizes predicted vs actual values in a scatter plot.
Required: n/a Optional: n/a
RegressionPredictedVsActualPlot() Visualizes predicted vs. actual values in a line plot.
Required: n/a Optional: n/a
RegressionErrorPlot() Visualizes the model error (predicted - actual) in a line plot.
Required: n/a Optional: n/a
RegressionAbsPercentageErrorPlot() Visualizes the absolute percentage error in a line plot.
Required: n/a Optional: n/a
RegressionErrorDistribution() Visualizes the distribution of the model error in a histogram.
Required: n/a Optional: n/a
RegressionErrorNormality() Visualizes the quantile-quantile plot (Q-Q plot) to estimate value normality.
Required: n/a Optional: n/a
RegressionTopErrorMetric() Calculates the regression performance metrics for different groups:
top-X% of predictions with overestimation
top-X% of predictions with underestimation
Majority (the rest)
Visualizes the group division on a scatter plot with predicted vs. actual values.
Required: n/a Optional:
top_error (default = 0.05; the metrics are calculated for the top 5% of predictions with overestimation and underestimation)
RegressionErrorBiasTable() Plots the relationship between feature values and model quality per group (for top-X% error groups, as above).
Required: n/a Optional:
columns (default = all categorical and numerical columns)
top_error (default = 0.05; the metrics are calculated for the top 5% of predictions with overestimation and underestimation)
Ranking and Recommendations
All metrics are dataset-level. Check individual metric descriptions here. All metrics require recommendations column mapping.
Optional shared parameters for multiple metrics:
no_feedback_users: bool = False. Specifies whether to include users who did not select any of the items when computing the quality metric. Default: False.
min_rel_score: Optional[int] = None. Specifies the minimum relevance score to consider relevant when calculating the quality metrics for non-binary targets (e.g., if a target is a rating or a custom score).
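A sketch of passing these shared parameters (k=10 and the relevance cut-off are illustrative; the recommendations column mapping is assumed to be set up separately):

from evidently.report import Report
from evidently.metrics import NDCGKMetric, RecallTopKMetric

report = Report(metrics=[
    # Treat ratings of 4 and above as relevant; skip users without feedback.
    NDCGKMetric(k=10, min_rel_score=4, no_feedback_users=False),
    RecallTopKMetric(k=10, min_rel_score=4),
])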
RecallTopKMetric()
Calculates the recall at k.
Required:
k
Optional:
no_feedback_users
min_rel_score
PrecisionTopKMetric()
Calculates the precision at k.
Required:
k
Optional:
no_feedback_users
min_rel_score
FBetaTopKMetric()
Calculates the F-measure at k.
Required:
beta (default = 1)
k
Optional:
no_feedback_users
min_rel_score
MAPKMetric()
Calculates the Mean Average Precision (MAP) at k.
Required:
k
Optional:
no_feedback_users
min_rel_score
MARKMetric()
Calculates the Mean Average Recall (MAR) at k.
Required:
k
Optional:
no_feedback_users
min_rel_score
NDCGKMetric()
Calculates the Normalized Discounted Cumulative Gain at k.
Required:
k
Optional:
no_feedback_users
min_rel_score
MRRKMetric()
Calculates the Mean Reciprocal Rank (MRR) at k.
Required:
k
Optional:
min_rel_score
no_feedback_users
HitRateKMetric()
Calculates the hit rate at k: the share of users for whom at least one relevant item is included in the top-K.
Required:
k
Optional:
min_rel_score
no_feedback_users
DiversityMetric()
Calculates intra-list Diversity at k: the diversity of recommendations shown to each user in the top-K recommendations, averaged over all users.
Required:
k
item_features: List
Optional:
-
NoveltyMetric()
Calculates novelty at k: the novelty of recommendations shown to each user in the top-K recommendations, averaged over all users.
Requires a training dataset.
Required:
k
Optional:
-
SerendipityMetric()
Calculates serendipity at k: how unusual the relevant recommendations in the top-K are, averaged over all users.
Requires a training dataset.
Required:
k
item_features: List
Optional:
min_rel_score
PersonalizationMetric() Measures the average uniqueness of each user's top-K recommendations.
Required:
k
Optional:
-
PopularityBias() Evaluates the popularity bias in recommendations by computing ARP (average recommendation popularity), Gini index, and coverage. Requires a training dataset.
Required:
k
normalize_arp (default: False) - whether to normalize the ARP calculation by the most popular item in training
Optional:
-
ItemBiasMetric() Visualizes the distribution of recommendations by a chosen dimension (column), compared to its distribution in the training set. Requires a training dataset.
Required:
k
column_name
Optional:
-
UserBiasMetric() Visualizes the distribution of the chosen category (e.g., a user characteristic), compared to its distribution in the training dataset. Requires a training dataset.
Required:
k
column_name
Optional:
-
ScoreDistribution()
Computes the predicted score entropy. Visualizes the distribution of the scores at k (and all scores, if available). Applies only when the recommendations_type is a score.
Required:
k
Optional:
-
RecCasesTable() Shows the list of recommendations for specific user IDs (or 5 random if not specified).
Required:
-
Optional:
display_features: List
user_ids: List
train_item_num: int