You can add monitoring Panels using the Python API or the Evidently Cloud user interface.
Here is the general flow:
1. Define the Panel type: Counter, Plot, Distribution, Test Counter, or Test Plot. (See Panel types).
2. Specify the Panel title and size.
3. Add optional Tags to filter data. Without Tags, the Panel will use data from all Project snapshots.
4. Select Panel parameters, e.g., aggregation level.
5. Define the Panel value(s) to show:
   - For Test Panels, specify test_id.
   - For Metric and Distribution Panels, specify metric_id and field_path.
   - If applicable, pass test_args or metric_args to identify the exact value when it repeats in a snapshot. For instance, to plot the mean value of a given column, pass the column name as an argument.
This page explains each step in detail.
Add a new Panel
You can add monitoring Panels using the Python API, or directly in the user interface (Evidently Cloud or Enterprise).
Enter "edit" mode on the Dashboard (top right corner) and click "add Panel." Follow the steps to create a Panel. You can Preview the Panel before publishing.
Some tips:
- Use the "Show description" toggle to get help on specific steps.
- You can identify the field_path in two ways. Use the "Manual mode" toggle to switch.
  - Default mode displays popular values from existing Project snapshots.
  - Manual mode mirrors the Python API. You can select any value, even if it's not yet in the Project. Note that the Panel may be empty until you add the snapshot.
1. Connect to a Project. Load the latest dashboard configuration into your Python environment.

   `project = ws.get_project("YOUR PROJECT ID HERE")`

2. Add a new Panel. Use the add_panel method and pass the parameters. You can add multiple Panels: they will appear in the listed order. Save the configuration with project.save().
3. Go back to the web app to see the Dashboard. Refresh the page if needed.
1. Connect to a Project. Load the latest dashboard configuration into your Python environment.

   `project = ws.get_project("YOUR PROJECT ID HERE")`

2. Add a new Test Panel. Use the add_panel method, set include_test_suites=True and pass the parameters. You can add multiple Panels: they will appear in the listed order. Save the configuration with project.save().
Filters define a subset of snapshots from which to display the values.
To select a group of snapshots as a data source, pass metadata_values or tag_values. You must add these Tags when logging Reports or Test Suites. (See docs).
To include Test Suites data, set include_test_suites as True (default: False).
The size parameter sets the Panel size to half-width (WidgetSize.HALF) or full-width (WidgetSize.FULL, the default).
See usage examples below together with panel-specific parameters.
Counter
DashboardPanelCounter shows a value count or works as a text-only Panel.
| Parameter | Description |
|---|---|
| `value: Optional[PanelValue] = None` | Specifies the value to display. If empty, you get a text-only Panel. Refer to the Panel Value section below for examples. |
| `text: Optional[str] = None` | Supporting text to display on the Counter. |
| `agg: CounterAgg`<br>Available: `SUM`, `LAST`, `NONE` | Data aggregation options:<br>`SUM`: calculates the value sum (from all snapshots or filtered by Tag).<br>`LAST`: displays the last available value.<br>`NONE`: reserved for text-only Panels. |
See examples:
Text Panel. To create a Panel with the Dashboard title only:
Plot
DashboardPanelPlot shows individual values over time.

| Parameter | Description |
|---|---|
| `values: List[PanelValue]` | Specifies the value(s) to display in the Plot. The field path must point to an individual MetricResult (e.g., not a dictionary or a histogram). If you pass multiple values, they will appear together, e.g., as separate lines on a Line plot, bars on a Bar Chart, or points on a Scatter Plot. Refer to the Panel Value section below for examples. |
| `plot_type: PlotType`<br>Available: `SCATTER`, `BAR`, `LINE`, `HISTOGRAM` | Specifies the plot type: scatter, bar, line, or histogram. |
See examples:
Single value on a Plot. To plot MAPE over time in a line plot:
Test Counter
DashboardPanelTestSuiteCounter shows a counter of Tests with the specified outcomes.

| Parameter | Description |
|---|---|
| `test_filters: List[TestFilter]` | Test filters select specific Test(s). Without a filter, the Panel considers the results of all Tests. You must reference a `test_id` even if you used a Preset. You can check the Tests included in each Preset here. |
| `statuses: List[TestStatus]` | Status filters select Tests with specific outcomes (e.g., choose the `FAIL` status to display a counter of failed Tests). Without a filter, the Panel considers Tests with any status. |
| `agg: CounterAgg`<br>Available: `SUM`, `LAST` | Data aggregation options:<br>`SUM`: calculates the sum of Test results from all snapshots (or filtered by Tags).<br>`LAST`: displays the last available Test result. |
See examples:
Last Test. To display the result of the latest Test in the Project:

    project.dashboard.add_panel(
        DashboardPanelTestSuiteCounter(
            title="Success of last",
            agg=CounterAgg.LAST,
        )
    )
Filter by Test ID and Status. To display the number of failed Tests and errors for a specific Test (number of unique values in the column "age"):

    project.dashboard.add_panel(
        DashboardPanelTestSuiteCounter(
            title="Success of 1",
            test_filters=[
                TestFilter(
                    test_id="TestNumberOfUniqueValues",
                    test_args={"column_name.name": "1"},
                )
            ],
            statuses=[TestStatus.ERROR, TestStatus.FAIL],
        )
    )
Test Plot
DashboardPanelTestSuite shows Test results over time.
Test filters select specific Test(s). Without a filter, the Panel shows the results of all Tests.
You must reference a test_id even if you used a Preset. Check the Preset composition.
Filtered by Test ID and Test Args. To show the results of individual column-level Tests with daily aggregation, you must use both test_id and test_args (column name):
Panel value
To define the value to show on a Metric Panel (Counter, Distribution, or Plot), you must pass the PanelValue. This includes the source metric_id, the field_path, and optional metric_args.

| Parameter | Description |
|---|---|
| `metric_id` | The ID that corresponds to the Evidently Metric in a snapshot. Note that if you used a Metric Preset, you must still reference a `metric_id`. Check the Metric Preset composition. If you used a Test Suite but want to plot individual values from it on a Metric Panel, you must also reference the `metric_id` that the Test relies on. |
| `field_path` | The path to the computed Result inside the Metric. You can provide a complete field path or a field name. For Counter and Plot, the `field_path` must point to a single value. For the Distribution Panel, the `field_path` must point to a histogram. |
| `metric_args` (optional) | Additional arguments (e.g., column name, text descriptor, drift detection method) that identify the exact value when it repeats inside the same snapshot. |
| `legend` (optional) | Value legend to show on the Plot. |
See examples to specify the field_path:
Exact field name. To include the share_of_drifted_columns available inside the DatasetDriftMetric():
Metric parameters as arguments. To specify the euclidean drift detection method (when results from multiple methods are logged inside a snapshot) using metric_args:
Let's take an example of DataDriftPreset(). It contains two Metrics: DatasetDriftMetric() and DataDriftTable(). (Check the Preset composition.)
You can point to any of them as a metric_id, depending on what you’d like to plot.
Most Metrics contain multiple measurements inside (MetricResults) and some render data. To point to the specific value, use the field path.
To find available fields in the chosen Metric, you can explore the contents of the individual snapshot or use automated suggestions in UI or Python.
Each snapshot is a JSON file. You can download or open it in Python to see the available fields.
Alternatively, you can generate a Report with the selected Metrics on any test data. Get the output as a Python dictionary using as_dict() and explore the keys with field names.
Here is a partial example of the contents of DatasetDriftMetric() (field names are real; values are illustrative):

    "metrics": [
        {
            "metric": "DatasetDriftMetric",
            "result": {
                "drift_share": 0.5,
                "number_of_columns": 15,
                "number_of_drifted_columns": 5,
                "share_of_drifted_columns": 0.3333,
                "dataset_drift": false
            }
        },
        ...
Once you identify the value you’d like to plot (e.g., number_of_drifted_columns), pass it as the field_path to the PanelValue parameter. Include the DatasetDriftMetric as the metric_id.
Other Metrics and Tests follow the same logic.
You can use autocomplete in interactive Python environments (like Jupyter notebook or Colab) to see available fields inside a specific Metric. They appear as you start typing the .fields. path for a specific Metric.
Note: some types of values (e.g. mean, sum, max, min) will not be visible using this method. This is because they match the names of the standard Python fields.
When working in the Evidently Cloud, you can see available fields in the drop-down menu as you add a new Panel.
Note that some data inside the snapshots cannot currently be plotted on a monitoring Dashboard (for example, render data or dictionaries). You can only plot values that exist as individual data points or histograms.