Data and ML checks
ML monitoring “hello world”
Need help? Ask on Discord.
1. Set up your environment
This quickstart shows both local open-source and cloud workflows.
You will run a simple evaluation in Python and explore results in Evidently Cloud.
1.1. Set up Evidently Cloud
- Sign up for a free Evidently Cloud account.
- Create an Organization when you log in for the first time, and note your organization ID.
- Get an API token: click the Key icon in the left menu, then generate and save the token.
1.2. Installation and imports
Install the Evidently Python library:
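For example, with pip:

```bash
pip install evidently
```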
Components to run the evals:
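A minimal set of imports for this quickstart (a sketch assuming a recent Evidently release; exact module paths may differ across versions):

```python
import pandas as pd
from sklearn import datasets  # used only to fetch the demo dataset

from evidently import Report
from evidently.presets import DataSummaryPreset
```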
Components to connect with Evidently Cloud:
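And, under the same version assumption, the Cloud connector:

```python
from evidently.ui.workspace import CloudWorkspace
```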
1.3. Create a Project
Connect to Evidently Cloud using your API token:
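A sketch with placeholder credentials:

```python
ws = CloudWorkspace(
    token="YOUR_API_TOKEN",  # the token you generated in step 1.1
    url="https://app.evidently.cloud",
)
```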
Create a Project within your Organization, or connect to an existing Project:
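For example (the org and project IDs are placeholders):

```python
# Create a new Project inside your Organization...
project = ws.create_project("Data checks hello world", org_id="YOUR_ORG_ID")
project.description = "Quickstart project"
project.save()

# ...or connect to an existing Project by its ID:
# project = ws.get_project("YOUR_PROJECT_ID")
```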
Alternatively, you can run a simple evaluation locally and preview the results in your Python environment.
Install the Evidently Python library and import the components to run the evals:
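For example (local-only imports; no Cloud components are needed here):

```python
# pip install evidently
import pandas as pd
from sklearn import datasets  # used only to fetch the demo dataset

from evidently import Report
from evidently.presets import DataSummaryPreset
```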
2. Prepare a toy dataset
Let’s import a toy dataset with tabular data:
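For example, the OpenML “adult” census dataset, which has the education column used in the next step:

```python
adult_data = datasets.fetch_openml(name="adult", version=2, as_frame="auto")
adult = adult_data.frame
```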
Let’s split the data into two and introduce some artificial drift for demo purposes. Prod
data will include people with education levels unseen in the reference dataset:
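One way to do this split:

```python
# Reference data: rows whose education level will NOT appear in prod
adult_ref = adult[~adult.education.isin(["Some-college", "HS-grad", "Bachelors"])]

# "Prod" data: only the education levels unseen in the reference set
adult_prod = adult[adult.education.isin(["Some-college", "HS-grad", "Bachelors"])]
```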
3. Get a Report
Let’s get a summary of all columns in the dataset and run auto-generated Tests to check data quality and core statistics across the two datasets:
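A sketch using the DataSummaryPreset with auto-generated Tests enabled (API per recent Evidently versions):

```python
report = Report([DataSummaryPreset()], include_tests=True)

# First argument is the current ("prod") data, second is the reference
my_eval = report.run(adult_prod, adult_ref)
```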
Note: in this simple example we work directly with pandas DataFrames, but it is recommended to create an Evidently Dataset object and add a data definition to specify column types.
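A minimal sketch of that approach; the column lists below are illustrative, not a required schema:

```python
from evidently import Dataset, DataDefinition

eval_data = Dataset.from_pandas(
    adult_prod,
    data_definition=DataDefinition(
        numerical_columns=["age", "hours-per-week"],
        categorical_columns=["education", "sex"],
    ),
)
```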
4. Explore the results
Upload the Report with summary results:
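For example, assuming the workspace and Project from step 1 (include_data=False uploads only the summary, not the raw data):

```python
ws.add_run(project.id, my_eval, include_data=False)
```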
View the Report. Go to Evidently Cloud, open your Project, navigate to “Reports” in the left menu, and open the Report. You will see the summary with scores and Test results.
Get a Dashboard. As you run repeated evals, you may want to track the results over time. Go to the “Dashboard” tab in the left menu and enter “Edit” mode. Add a new tab (using the plus sign on the left) and select the “Columns” template.
You’ll see a set of panels that show column stats. Each has a single data point. As you log ongoing evaluation results, you can track trends and set up alerts.
To view the Report in an interactive Python environment like Jupyter notebook or Colab, run:
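For example, simply return the result object at the end of a notebook cell:

```python
my_eval
```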
This will show the summary Report. In a separate tab, you’ll see the pass/fail results for all Tests.
You can also view the results as a JSON or Python dictionary:
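For example:

```python
my_eval.json()
# or as a Python dictionary:
my_eval.dict()
```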
Or save and open an HTML file externally:
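For example (the file name is arbitrary):

```python
my_eval.save_html("report.html")
```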