This relies on the core evaluation API of the Evidently Python library. Check the detailed guide.
Simple Example
To run a single eval with text evaluation results uploaded to a workspace:
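A minimal sketch of this flow, assuming a recent Evidently release and Evidently Cloud; the API token, org ID, project name, and data are placeholders:

```python
import pandas as pd

from evidently import Dataset, DataDefinition, Report
from evidently.descriptors import Sentiment, TextLength
from evidently.presets import TextEvals
from evidently.ui.workspace import CloudWorkspace

# Connect to the workspace and create a Project (credentials are placeholders)
ws = CloudWorkspace(token="YOUR_API_TOKEN", url="https://app.evidently.cloud")
project = ws.create_project("My eval project", org_id="YOUR_ORG_ID")

# Toy data to evaluate
eval_df = pd.DataFrame({
    "question": ["What is Evidently?"],
    "answer": ["An open-source library for evaluating and testing LLM outputs."],
})

# Wrap it as an Evidently Dataset and add row-level text Descriptors
eval_dataset = Dataset.from_pandas(
    eval_df,
    data_definition=DataDefinition(text_columns=["question", "answer"]),
    descriptors=[Sentiment("answer"), TextLength("answer")],
)

# Run the Report and upload the evaluation to the Project
report = Report([TextEvals()])
my_eval = report.run(eval_dataset, None)
ws.add_run(project.id, my_eval, include_data=True)
```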
Workflow

The complete workflow looks as follows:

1. Run a Report. Configure the evals and run the Evidently Report with optional Test conditions (see the first sketch after this list).
2. Upload to the platform. Upload the raw data or only the evaluation results.
3. Explore the results. Go to the Explore view inside your Project to debug the results and compare the outcomes between runs. Understand the Explore view.
4. (Optional) Set up a Dashboard. Set up a Dashboard to track results over time. This helps you monitor metric changes across experiments or results of ongoing safety Tests. Check the docs on Dashboards (see the second sketch after this list).
5. (Optional) Configure alerts. Configure alerts on failed Tests. Check the section on Alerts.
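For step 1, Test conditions attach expected values to individual Metrics, so each run returns pass/fail results. A sketch with illustrative column names and thresholds (the descriptor column names depend on your setup):

```python
from evidently import Report
from evidently.metrics import MaxValue, MeanValue
from evidently.tests import gte, lte

# Each Metric carries a Test condition; the run reports pass/fail per Test
report = Report([
    MeanValue(column="Sentiment", tests=[gte(0)]),      # average sentiment must stay non-negative
    MaxValue(column="TextLength", tests=[lte(2000)]),   # no answer longer than 2000 characters
])
```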
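For step 4, Dashboard panels can also be added in code. A sketch of a single line-plot panel using the panels SDK; the metric, legend, and panel parameters are illustrative, and the Dashboard docs describe the full set of panel types:

```python
from evidently.sdk.models import PanelMetric
from evidently.sdk.panels import DashboardPanelPlot

# Plot the mean answer sentiment across runs over time
project.dashboard.add_panel(
    DashboardPanelPlot(
        title="Answer sentiment",
        subtitle="Average sentiment of answers per run",
        size="half",
        values=[PanelMetric(legend="Sentiment", metric="MeanValue",
                            metric_labels={"column": "Sentiment"})],
        plot_params={"plot_type": "line"},
    )
)
```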
Uploading data
Raw data upload is available only for Evidently Cloud and Enterprise.
When you upload an evaluation, you can:
- include only the resulting Metrics and a summary Report (with distribution summaries, etc.), or
- also upload the raw Dataset you evaluated, together with added Descriptors if any. This helps with row-level debugging and analysis.
Use the `include_data` parameter (default: `False`) to specify whether to include the data.
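For example, reusing `ws`, `project`, and `my_eval` from the sketch above:

```python
# Upload only the computed Metrics and the summary Report (default)
ws.add_run(project.id, my_eval, include_data=False)

# Or also upload the evaluated Dataset with its Descriptors for row-level debugging
ws.add_run(project.id, my_eval, include_data=True)
```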