What is Evidently?
Evidently is an open-source Python library for data scientists and ML engineers. It helps evaluate, test and monitor the performance of ML models from validation to production.
You can think of it as an evaluation layer that fits into the existing ML stack.

Quick Start

Walk through a basic implementation to understand the key Evidently features in under 10 minutes. Then explore example Evidently reports on different datasets, along with the code tutorials.

How it works

Evidently has a modular approach with 3 interfaces on top of the shared Analyzer functionality:
  1. Interactive visual reports
  2. Data and model profiling
  3. Real-time ML monitoring
Evidently generates interactive HTML reports from pandas.DataFrame objects or CSV files. You can use them for visual model evaluation, debugging and documentation.
Each report covers a certain aspect of the model performance. You can display reports as Dashboard objects in Jupyter notebook or Colab, or export them as HTML files.
Evidently currently works with tabular data. Seven reports are available. You can combine and customize the reports, or contribute your own.

Data Drift and Quality

Data Drift: detects changes in feature distributions. Data Quality: provides a detailed feature overview.
Data Drift
Data Quality
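To make the idea behind the drift check concrete, here is a minimal, self-contained sketch of distribution comparison using a two-sample Kolmogorov-Smirnov statistic. This is an illustration of the concept only, implemented with the standard library; it is not Evidently's API, and the variable names and threshold choices here are hypothetical.

```python
import random

def ks_statistic(reference, current):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of the reference and current samples.
    Values near 0 mean similar distributions; larger values mean drift."""
    values = sorted(set(reference) | set(current))
    d = 0.0
    for v in values:
        cdf_ref = sum(x <= v for x in reference) / len(reference)
        cdf_cur = sum(x <= v for x in current) / len(current)
        d = max(d, abs(cdf_ref - cdf_cur))
    return d

random.seed(0)
# Reference sample vs. a sample from the same distribution,
# and vs. a sample whose mean has shifted (simulated drift).
reference = [random.gauss(0, 1) for _ in range(200)]
same = [random.gauss(0, 1) for _ in range(200)]
shifted = [random.gauss(1.5, 1) for _ in range(200)]

drift_on_same = ks_statistic(reference, same)
drift_on_shifted = ks_statistic(reference, shifted)
```

A drift report effectively runs a statistical test like this per feature and summarizes which features have drifted.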

Categorical and Numerical Target Drift

Detects changes in numerical or categorical target and feature behavior.
Categorical target drift
Numerical target drift
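For a categorical target, drift detection compares category frequencies between two periods. Below is a stdlib-only sketch using the Population Stability Index (PSI), one common way to score such a shift. This is a conceptual illustration, not Evidently's implementation; the example labels and the 0.2 rule of thumb are assumptions.

```python
import math
from collections import Counter

def psi(reference, current, eps=1e-4):
    """Population Stability Index between two categorical samples.
    PSI near 0 means the distribution is stable; values above roughly
    0.2 are commonly read as meaningful drift."""
    categories = set(reference) | set(current)
    ref_counts = Counter(reference)
    cur_counts = Counter(current)
    score = 0.0
    for c in categories:
        p = max(ref_counts[c] / len(reference), eps)  # eps guards log(0)
        q = max(cur_counts[c] / len(current), eps)
        score += (q - p) * math.log(q / p)
    return score

reference = ["cat"] * 70 + ["dog"] * 30   # target labels in training
stable = ["cat"] * 68 + ["dog"] * 32      # similar class balance
drifted = ["cat"] * 30 + ["dog"] * 70     # class balance flipped

psi_stable = psi(reference, stable)
psi_drifted = psi(reference, drifted)
```

The same frequency-comparison idea extends to each categorical feature, not just the target.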

Classification Performance

Analyzes the performance and errors of a Classification or Probabilistic Classification model. Works for both binary and multi-class problems.
Classification Performance
Probabilistic Classification Performance
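The classification report is built from standard confusion-matrix quantities. As a rough illustration of what such a report aggregates (not Evidently's code), here is a minimal per-class precision/recall computation with invented example labels:

```python
def classification_report(y_true, y_pred):
    """Per-class precision and recall derived from confusion-matrix
    counts, plus overall accuracy."""
    labels = sorted(set(y_true) | set(y_pred))
    report = {}
    for label in labels:
        tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
        fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
        fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
        report[label] = {
            "precision": tp / (tp + fp) if tp + fp else 0.0,
            "recall": tp / (tp + fn) if tp + fn else 0.0,
        }
    report["accuracy"] = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return report

y_true = ["spam", "ham", "spam", "ham", "spam", "ham"]
y_pred = ["spam", "ham", "ham", "ham", "spam", "spam"]
metrics = classification_report(y_true, y_pred)
```

A visual report adds plots (confusion matrix, class separation, and so on) on top of numbers like these.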

Regression Performance

Analyzes the performance and errors of a Regression model. Time series version coming soon.
Regression Performance
Time Series
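A regression report centers on the error distribution: not just average magnitude, but also bias (systematic under- or over-estimation). A stdlib sketch of the kind of summary involved, with made-up numbers for illustration (again, not Evidently's actual implementation):

```python
def regression_errors(y_true, y_pred):
    """Error summary for a regression model: signed mean error (bias),
    mean absolute error, and the largest under/over-estimations."""
    errors = [p - t for t, p in zip(y_true, y_pred)]
    return {
        "mean_error": sum(errors) / len(errors),       # negative => underestimates
        "mae": sum(abs(e) for e in errors) / len(errors),
        "max_underestimation": min(errors),
        "max_overestimation": max(errors),
    }

y_true = [10.0, 12.0, 15.0, 20.0]
y_pred = [11.0, 11.0, 16.0, 18.0]
errors = regression_errors(y_true, y_pred)
```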
Evidently also generates JSON Profiles. You can use them to integrate the data or model evaluation step into the ML pipeline.
For example, you can use it to perform scheduled batch checks of model health, or log JSON profiles for further analysis. You can also build a conditional workflow based on the result of a check, e.g. to trigger an alert, retraining, or the generation of a visual report.
Each Evidently dashboard has a corresponding JSON profile that returns the summary of metrics and statistical test results.
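The conditional-workflow pattern can be sketched as follows: serialize a check result to JSON, then branch on it in the pipeline. This is a minimal stdlib illustration of the idea; the field names, threshold, and actions here are hypothetical, not Evidently's profile schema.

```python
import json

def drift_profile(drift_score, threshold=0.2):
    """Build a JSON summary of a drift check: a machine-readable
    result a pipeline step can branch on."""
    return json.dumps({
        "metric": "data_drift",
        "drift_score": drift_score,
        "threshold": threshold,
        "drift_detected": drift_score > threshold,
    })

profile = drift_profile(0.34)        # e.g. produced by a scheduled batch check
payload = json.loads(profile)

# Conditional workflow: branch on the check result.
if payload["drift_detected"]:
    action = "trigger_retraining"    # or fire an alert, or render a visual report
else:
    action = "continue_pipeline"
```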
You can explore integrations with other tools:
Evidently also has Monitors that collect data and model metrics from a deployed ML service. You can use them to build live monitoring dashboards. Evidently helps configure monitoring on top of the streaming data and emits the metrics, which you can log and use elsewhere.
There is a lightweight integration with Prometheus and Grafana that comes with pre-built dashboards.
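For Prometheus to scrape model metrics, a service exposes them in the Prometheus text exposition format. The sketch below renders a couple of invented gauge metrics in that format, purely to show what "emitting metrics" looks like at the wire level; the metric names are examples, not ones Evidently defines.

```python
def render_prometheus_metrics(metrics):
    """Render gauge metrics in the Prometheus text exposition format,
    the shape a /metrics endpoint serves for scraping."""
    lines = []
    for name, value in metrics.items():
        lines.append(f"# TYPE {name} gauge")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

# Hypothetical model-quality metrics a monitoring service might emit.
exposition = render_prometheus_metrics({
    "model_data_drift_share": 0.25,
    "model_prediction_mae": 1.4,
})
```

Grafana then builds dashboards by querying whatever Prometheus scrapes from such an endpoint.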

Overview

Here is a quick visual summary of how Evidently works. You can track and explore different facets of ML model quality via reports, profiles, or the monitoring interface, and flexibly fit Evidently into your existing stack.

Community and support

Evidently is in active development, and we are happy to receive and incorporate feedback. If you have any questions, ideas or want to hang out and chat about doing ML in production, join our Discord community!