The primary use case for Evidently is comparing two datasets. These datasets can include model input features, predictions, and actuals (or true labels).
The Reference dataset is the first dataset and serves as the basis for comparison.
The Current (Production) dataset is the second dataset, which is compared against the Reference.
In practice, you can use it in different combinations:
To compare the performance of a model on a hold-out Test dataset to its performance in Training. In this case, pass the training data as "Reference", and the test data as "Current".
To compare the Production performance of a model to the Training period. In this case, pass the training data as "Reference", and the production data as "Current".
To compare the Current production performance to an Earlier period. For example, you can compare the last week to the previous week or month. In this case, pass the earlier data as "Reference", and the newer data as "Current".
To compare any two models or datasets. For example, to estimate the historical drift in your training data, or to compare the performance of two models on the test set. Pass the first dataset as "Reference", and the second as "Current".
If you have a single dataset, pass it as "Reference". This only works for the Performance reports. In other cases, the tool expects two datasets to perform the statistical tests.
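For instance, the third scenario above (comparing a recent period to an earlier one) amounts to splitting one production log by time. A minimal sketch with pandas, where the column names are illustrative:

```python
import pandas as pd

# Illustrative production log: one row per prediction, with a timestamp.
data = pd.DataFrame({
    "timestamp": pd.date_range("2021-01-01", periods=14, freq="D"),
    "feature_1": range(14),
    "prediction": [0, 1] * 7,
})

# The earlier period becomes "Reference"; the most recent week becomes "Current".
cutoff = data["timestamp"].max() - pd.Timedelta(days=7)
reference = data[data["timestamp"] <= cutoff]
current = data[data["timestamp"] > cutoff]
```

The two resulting frames are then what you pass to Evidently as the "Reference" and "Current" datasets.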
Right now, you cannot choose a custom name for your dataset.
Note: earlier, we referred to the second dataset as "Production". You might still see this name in some older examples.
Evidently includes a set of pre-built Reports. Each of them addresses a specific aspect of the data or model performance.
Currently, you can choose between 6 different Report types.
The calculation results are available in one of the following formats:
An interactive visual Dashboard displayed inside the Jupyter notebook.
An exportable HTML report. It is the same as the dashboard, but available as a standalone file.
A JSON profile. It includes a text summary of the metrics, the results of statistical tests, and simple histograms.
Right now, you cannot change the composition of a report, e.g. add or exclude metrics. Reports are pre-built to serve as good-enough defaults. We expect to add configuration options in the future.
To display the output inside the Jupyter notebook, you should create a visual Dashboard.
To specify which analysis you want to perform, you should select a Tab. You can combine several tabs together in a single Dashboard. Each tab will contain a combination of metrics, interactive plots, and tables that correspond to the chosen Report type.
You can also save the Dashboard as a standalone HTML file. You can group several Tabs in a single file.
You can generate HTML files from a Jupyter notebook or from the terminal.
To get the calculation results as a JSON file, you should create a Profile.
To specify which analysis you want to perform, you should select a Section. You can combine several sections together in a single Profile. Each section will contain a summary of metrics, results of statistical tests, and simple histograms that correspond to the chosen Report type.
You can generate profiles from a Jupyter notebook or from the terminal.