Add monitoring Panels

Design your own Dashboard with custom Panels.

We recommend starting with pre-built Tabs for a quick start.

Code example

To see end-to-end examples with custom Panels, check the code snippets throughout this page. You can also explore the source code for the open-source live demo dashboards.

Adding Panels

You can add monitoring Panels using the Python API or the Evidently Cloud user interface.

Here is the general flow:

  • Define the Panel type: Counter, Plot, Distribution, Test Counter, or Test Plot. (See Panel types).

  • Specify panel title and size.

  • Add optional Tags to filter data. Without Tags, the Panel will use data from all Project snapshots.

  • Select Panel parameters, e.g., aggregation level.

  • Define the Panel value(s) to show:

    • For Test Panels, specify test_id.

    • For Metric and Distribution Panels, specify metric_id and field_path.

  • Pass test_args or metric_args to identify the exact value when they repeat in a snapshot. For instance, to plot the mean value of a given column, pass the column name as an argument.

This page explains each step in detail.

Add a new Panel

You can add monitoring Panels using the Python API, or directly in the user interface (Evidently Cloud only).

Enter "edit" mode on the Dashboard (top right corner) and click "add Panel." Follow the steps to create a Panel. You can Preview the Panel before publishing.

Some tips:

  • Use the "Show description" toggle to get help on specific steps.

  • You can identify the field_path in two ways. Use the "Manual mode" toggle to switch.

    • Default mode displays popular values from existing Project snapshots.

    • Manual mode mirrors the Python API. You can select any value, even if it's not yet in the Project. Note that the Panel may be empty until you add the snapshot.

Add a new Tab

Multiple Tabs are a Pro feature available in Evidently Cloud.

By default, all Panels you add appear on a single monitoring Dashboard. You can create Tabs to organize them.

Enter the "edit" mode on the Dashboard (top right corner) and click "add Tab". To create a custom Tab, choose an “empty” tab and give it a name.

Proceed with adding Panels to this Tab as usual.

Delete Tabs or Panels

To delete all the existing monitoring Panels using the Python API:

project.dashboard.panels = []
project.save()  # persist the change to the Project

Note: This does not delete the snapshots; it only deletes the Panel configuration.

To delete the Tabs or Panels in the UI, use the “Edit” mode and click the “Delete” sign on the corresponding Panel or Tab.

Panel parameters

Panel types. To preview all Panel types, check the previous docs section. This page details the parameters and API.

DashboardPanel is the base class. Its parameters apply to all Panel types.


title: str

Panel name visible at the header.

filter: ReportFilter (metadata_values: Dict[str, str], tag_values: List[str], include_test_suites: bool = False)

Filters define a subset of snapshots from which to display the values.

  • To select a group of snapshots as a data source, pass metadata_values or tag_values. You must add these Tags when logging Reports or Test Suites. (See docs).

  • To include Test Suites data, set include_test_suites as True (default: False).

size: WidgetSize = WidgetSize.FULL
Available: WidgetSize.FULL, WidgetSize.HALF

Sets the Panel size to half-width or full-sized (Default).

See usage examples below together with panel-specific parameters.


Counter

DashboardPanelCounter shows a value count or works as a text-only Panel.


value: Optional[PanelValue] = None

Specifies the value to display. If empty, you get a text-only panel. Refer to the Panel Value section below for examples.

text: Optional[str] = None

Supporting text to display on the Counter.

agg: CounterAgg
Available: SUM, LAST, NONE

Data aggregation options:

  • SUM: calculates the value sum (from all snapshots, or filtered by Tag).

  • LAST: displays the last available value.

  • NONE: reserved for text-only Panels.

See examples:

Text Panel. To create a Panel with the Dashboard title only:

        filter=ReportFilter(metadata_values={}, tag_values=[]),
        title="Bike Rental Demand Forecast",


Plot

DashboardPanelPlot shows individual values over time.


values: List[PanelValue]

Specifies the value(s) to display in the Plot. The field path must point to the individual MetricResult (e.g., not a dictionary or a histogram). If you pass multiple values, they will appear together, e.g., as separate lines on a Line plot, bars on a Bar Chart, or points on a Scatter Plot. Refer to the Panel Value section below for examples.

plot_type: PlotType
Available: SCATTER, BAR, LINE, HISTOGRAM

Specifies the plot type: scatter, bar, line, or histogram.

See examples:

Single value on a Plot. To plot MAPE over time in a line plot:

        filter=ReportFilter(metadata_values={}, tag_values=[]),


Distribution

DashboardPanelDistribution shows changes in the distribution over time.


value: PanelValue

Specifies the distribution to display on the Panel. The field_path must point to a histogram. Refer to the Panel Value section below for examples.

barmode: HistBarMode
Available: STACK, GROUP, OVERLAY, RELATIVE

Specifies the distribution plot type: stacked, grouped, overlaid, or relative.

Example. To plot the distribution of the "education" column over time using STACK plot:

            title="Column Distribution: current",
            filter=ReportFilter(metadata_values={}, tag_values=[]),
                metric_args={"column_name.name": "education"},
            barmode=HistBarMode.STACK

Test Counter

DashboardPanelTestSuiteCounter shows a counter with Test results.


test_filters: List[TestFilter] = [] (each TestFilter takes test_id and test_args: List[str])

Test filters select specific Test(s). Without a filter, the Panel considers the results of all Tests. You must reference a test_id even if you used a Preset. You can check the Tests included in each Preset here.

statuses: List[TestStatus]
Available: TestStatus.ERROR, TestStatus.FAIL, TestStatus.SUCCESS, TestStatus.WARNING, TestStatus.SKIPPED

Status filters select Tests with specific outcomes. (E.g., choose the FAIL status to display a counter of failed Tests.) Without a filter, the Panel considers Tests with any status.

agg: CounterAgg
Available: SUM, LAST

Data aggregation options:

  • SUM: calculates the sum of Test results from all snapshots (or filtered by Tags).

  • LAST: displays the last available Test result.

See examples.

Last Test. To display the result of the latest Test run in the Project:

        title="Success of last",

Test Plot

DashboardPanelTestSuite shows Test results over time.


test_filters: List[TestFilter] = [] (each TestFilter takes test_id and test_args: List[str])

Test filters select specific Test(s). Without a filter, the Panel shows the results of all Tests. You must reference a test_id even if you used a Preset. Check the Preset composition.

statuses: List[TestStatus]
Available: TestStatus.ERROR, TestStatus.FAIL, TestStatus.SUCCESS, TestStatus.WARNING, TestStatus.SKIPPED

Status filters select Tests with specific outcomes. Without a filter, the Panel shows all Test statuses.

panel_type: TestSuitePanelType
Available: TestSuitePanelType.DETAILED, TestSuitePanelType.AGGREGATE

Defines the Panel type. Detailed shows individual Test results. Aggregate (default) shows the total number of Tests by status.

time_agg: Optional[str] = None
Available: 1H, 1D, 1W, 1M (see period aliases)

Groups all Test results within a set period (e.g., 1D for daily).

Detailed Tests. To show the results of all individual Tests, with daily-level aggregation:

        title="All tests: detailed",
        filter=ReportFilter(metadata_values={}, tag_values=[], include_test_suites=True),

Panel Value

To define the value to show on a Metric Panel (Counter, Distribution, or Plot), you must pass the PanelValue. This includes source metric_id, field_path and metric_args.



metric_id

The ID corresponds to the Evidently Metric in a snapshot. Note that if you used a Metric Preset, you must still reference a metric_id. Check the Metric Preset composition. If you used a Test Suite but want to plot individual values from it on a Metric Panel, you must also reference the metric_id that the Test relies on.


field_path

The path to the computed Result inside the Metric. You can provide a complete field path or a field_name. For Counter and Plot, the field_path must point to a single value. For the Distribution Panel, the field_path must point to a histogram.

metric_args (optional)

Use additional arguments (e.g., column name, text descriptor, drift detection method) to identify the exact value when it repeats inside the same snapshot.

legend (optional)

Value legend to show on the Plot.

See examples of how to specify the field_path:

Exact field name. To include the share_of_drifted_columns available inside the DatasetDriftMetric():


In this example, you pass the exact name of the field.

See examples using different metric_args:

Column names as arguments. To show the mean values of target and prediction on a line plot:

        metric_args={"column_name.name": "cnt"},
        legend="Target (daily mean)",
        metric_args={"column_name.name": "prediction"},
        legend="Prediction (daily mean)",

How to find the field path?

Let's take an example of DataDriftPreset(). It contains two Metrics: DatasetDriftMetric() and DataDriftTable(). (Check the Preset composition.)

You can point to any of them as a metric_id, depending on what you’d like to plot.

Most Metrics contain multiple measurements inside (MetricResults) and some render data. To point to the specific value, use the field path.

To find available fields in the chosen Metric, you can explore the contents of the individual snapshot or use automated suggestions in the UI or Python.

Each snapshot is a JSON file. You can download or open it in Python to see the available fields.

Alternatively, you can generate a Report with the selected Metrics on any test data. Get the output as a Python dictionary using as_dict() and explore the keys with field names.

Here is a partial example of the contents of DatasetDriftMetric():

'number_of_columns': 15,
'number_of_drifted_columns': 5,
'share_of_drifted_columns': 0.3333333333333333,
'dataset_drift': False,

Once you identify the value you’d like to plot (e.g., number_of_drifted_columns), pass it as the field_path to the PanelValue parameter. Include the DatasetDriftMetric as the metric_id.

Other Metrics and Tests follow the same logic.

Note that there is some data inside the snapshots that you cannot currently plot on a monitoring Dashboard (for example, render data or dictionaries). You can only plot values that exist as individual data points or histograms.
