Evidently helps you run tests and evaluations for your production ML systems. This includes:
- prediction quality (e.g. classification or regression accuracy)
- input data quality (e.g. missing values, out-of-range features)
- data and prediction drift
Evaluating distribution shifts (data drift) in ML inputs and predictions is a typical use case: it helps you detect changes in the model environment and potential quality issues even without ground truth labels. In this Quickstart, you'll run a simple data drift report in Python and view the results in Evidently Cloud. If you want to stay fully local, you can do that too: just skip a couple of steps.
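A minimal setup sketch, assuming a recent `evidently` version (0.7+, where `Report` and presets live at the paths shown) and using the UCI Adult dataset from OpenML as demo data; check the import paths against your installed version:

```python
# pip install evidently

import pandas as pd
from sklearn import datasets

from evidently import Report
from evidently.presets import DataDriftPreset

# Load a demo dataset (UCI Adult from OpenML)
adult_data = datasets.fetch_openml(name="adult", version=2, as_frame="auto")
adult = adult_data.frame
```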
Let’s split the data into two and introduce some artificial drift for demo purposes. Prod data will include people with education levels unseen in the reference dataset:
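With the Adult demo data, the split could look like this (a sketch: `education` is a column in that dataset, and holding out these three category values is an arbitrary choice to simulate drift):

```python
# Reference data: rows whose education level is NOT in the held-out set
adult_ref = adult[~adult.education.isin(["Some-college", "HS-grad", "Bachelors"])]

# "Prod" data: only the education levels unseen in the reference dataset
adult_prod = adult[adult.education.isin(["Some-college", "HS-grad", "Bachelors"])]
```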
You can customize the drift detection parameters by choosing different methods and thresholds. In this case, we proceed as is, so the default Tests selected by Evidently will apply.
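Running the eval with defaults is then a short sketch (assuming the evidently 0.7+ API, where `include_tests=True` enables the auto-selected Tests and `Report.run` returns a result you can render or save):

```python
report = Report([DataDriftPreset()], include_tests=True)
my_eval = report.run(adult_prod, adult_ref)

my_eval  # renders the interactive Report inline in a notebook
# my_eval.save_html("drift_report.html")  # or save it locally
```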
Local Reports are great for one-off evaluations. To run continuous monitoring (e.g. track the share of drifting features over time), keep a history of results, and collaborate with others, upload the results to Evidently Platform. Upload the Report with summary results:
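A minimal upload sketch, assuming an Evidently Cloud account; the token and org ID below are placeholders, and the `CloudWorkspace` import path may differ in older versions:

```python
from evidently.ui.workspace import CloudWorkspace

ws = CloudWorkspace(
    token="YOUR_API_TOKEN",  # placeholder: your Evidently Cloud API token
    url="https://app.evidently.cloud",
)

# Create a Project to group related Reports
project = ws.create_project("Data drift quickstart", org_id="YOUR_ORG_ID")

# Upload the Report with summary results
# (set include_data=True to also store the underlying data)
ws.add_run(project.id, my_eval, include_data=False)
```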
View the Report. Go to Evidently Cloud, open your Project, navigate to "Reports" in the left menu, and open the Report. You will see the summary with scores and Test results.
As you run repeated evals, you may want to track the results over time by creating a Dashboard. Evidently lets you configure the Dashboard in the UI or using dashboards-as-code, as in the sketch below.
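A dashboards-as-code sketch, assuming the evidently 0.7+ SDK; the panel classes, the `DriftedColumnsCount` metric name, and the `metric_labels` selector are taken from the current API and may differ in your version:

```python
from evidently.sdk.models import PanelMetric
from evidently.sdk.panels import DashboardPanelPlot

# Add a line plot that tracks the share of drifted columns across Report runs
project.dashboard.add_panel(
    DashboardPanelPlot(
        title="Share of drifted columns",
        size="half",
        values=[
            PanelMetric(
                legend="share",
                metric="DriftedColumnsCount",
                metric_labels={"value_type": "share"},
            ),
        ],
        plot_params={"plot_type": "line"},
    ),
)
project.save()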
This will result in a Dashboard you can access in the Dashboard tab (left menu). For now, you will see only one data point, but as you add more Reports (e.g. daily or weekly), you'll be able to track the results over time.
Alternatively, try DataSummaryPreset, which generates a summary of all columns in the dataset and runs auto-generated Tests to check data quality and core descriptive stats.
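A sketch of the same flow with DataSummaryPreset, under the same evidently 0.7+ API assumptions as above:

```python
from evidently.presets import DataSummaryPreset

summary_report = Report([DataSummaryPreset()], include_tests=True)
summary_eval = summary_report.run(adult_prod, adult_ref)
summary_eval  # or summary_eval.save_html("summary_report.html")
```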