Add dashboard panels
How to add and configure monitoring panels.
New dashboards are empty by default: you define the dashboard composition in code, choosing which values or test results to display and selecting from several monitoring panel types.
Code example
Refer to the QuickStart Tutorial for a complete Python script with multiple monitoring panels.
You can also explore live demo dashboards and the corresponding source code.
How it works
Evidently snapshots contain multiple measurements. For example, when you log the DataDriftTable() Metric in a snapshot, the snapshot will contain the dataset drift summary as well as drift details for each individual column. The same logic applies to other Metrics and Tests.
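For illustration, here is a simplified sketch of the kind of measurements a DataDriftTable() snapshot may hold: a dataset-level summary plus per-column drift details. The exact snapshot schema has more fields and nesting; the values below are made up.

```python
# A simplified, illustrative sketch of DataDriftTable() measurements;
# the real snapshot schema is richer than this.
snapshot_fragment = {
    "metric": "DataDriftTable",
    "result": {
        # Dataset-level drift summary
        "number_of_columns": 15,
        "number_of_drifted_columns": 5,
        "share_of_drifted_columns": 0.33,
        "dataset_drift": True,
        # Per-column drift details (partial)
        "drift_by_columns": {
            "age": {
                "column_type": "num",
                "stattest_name": "Wasserstein distance (normed)",
                "drift_score": 0.12,
                "drift_detected": False,
            },
        },
    },
}

# Any of these stored values can later be plotted on a dashboard panel
print(snapshot_fragment["result"]["share_of_drifted_columns"])
```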
You can visualize any measurement captured in the snapshots over time. To do that, you must add a panel to the monitoring dashboard of a specific project and specify the value you'd like to plot. For example, if you logged the DataDriftTable() Metric, you may later choose to plot measurements like:

- share_of_drifted_columns
- number_of_drifted_columns
- drift_score for a specific column

All these measurements are available as MetricResults inside the snapshot.
To create a monitoring panel, you will also need to specify other parameters, such as panel type, width, title and legend. This docs section explains how.
Add panel
To add a new panel to an existing dashboard, use the add_panel() method.
Example. To add a new Counter panel showing the share of drifting columns:
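A sketch of what this can look like, based on the Evidently Python API (exact import paths and class signatures may differ across versions; the workspace path and project ID are placeholders):

```python
from evidently.ui.workspace import Workspace
from evidently.ui.dashboards import (
    CounterAgg,
    DashboardPanelCounter,
    PanelValue,
    ReportFilter,
)
from evidently.renderers.html_widgets import WidgetSize

# Assumes a local workspace and project already exist
ws = Workspace("workspace")
project = ws.get_project("YOUR_PROJECT_ID")

project.dashboard.add_panel(
    DashboardPanelCounter(
        title="Share of drifted features",
        filter=ReportFilter(metadata_values={}, tag_values=[]),
        value=PanelValue(
            metric_id="DatasetDriftMetric",
            field_path="share_of_drifted_columns",
            legend="share",
        ),
        text="share",
        agg=CounterAgg.LAST,
        size=WidgetSize.HALF,
    )
)
project.save()  # persist the updated dashboard configuration
```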
Note: project.dashboard is an instance of the DashboardConfig class.
You can add multiple panels to a project dashboard. They will appear in the order listed in the project.
Panel types
You can choose between the following panel types.
Panel Type | Class or option
---|---
Metric counter | DashboardPanelCounter
Metric plot | DashboardPanelPlot (see the plot types below)
Line plot | PlotType.LINE
Scatter plot | PlotType.SCATTER
Bar plot | PlotType.BAR
Histogram | PlotType.HISTOGRAM
Test counter | DashboardPanelTestSuiteCounter
Test plot | DashboardPanelTestSuite (see the panel types below)
Detailed plot | TestSuitePanelType.DETAILED
Aggregate plot | TestSuitePanelType.AGGREGATE
Panel parameters
Class DashboardPanel
This is a base class. The parameters below apply to all panel types. There are also panel-specific parameters explained in the following sections.
Parameter | Description
---|---
title | The name of the panel. It will be visible in the header of the panel on the dashboard.
filter | Filters specify a subset of snapshots from which to display the values. You can filter by snapshot metadata or tags.
size | Sets the size of the panel: half-width (WidgetSize.HALF) or full-width (WidgetSize.FULL, the default).
DashboardPanelCounter
DashboardPanelCounter helps add metric counters or text panels. You can pull metric values from both Reports and Test Suites.
Example 1. To create a panel with the dashboard title only:
Example 2. To create a panel that sums up measurements (number of rows) over time.
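Sketches of both examples above, assuming `project` refers to an existing workspace project. The use of DatasetSummaryMetric and its field path for the row count is an assumption; verify the names against your installed Evidently version.

```python
from evidently.ui.dashboards import (
    CounterAgg,
    DashboardPanelCounter,
    PanelValue,
    ReportFilter,
)

# Example 1: a text-only panel used as a dashboard title (no value passed)
project.dashboard.add_panel(
    DashboardPanelCounter(
        title="Data quality dashboard",
        filter=ReportFilter(metadata_values={}, tag_values=[]),
        agg=CounterAgg.NONE,
    )
)

# Example 2: a counter that sums a measurement (number of rows) over time
project.dashboard.add_panel(
    DashboardPanelCounter(
        title="Rows processed",
        filter=ReportFilter(metadata_values={}, tag_values=[]),
        value=PanelValue(
            metric_id="DatasetSummaryMetric",   # assumed metric
            field_path="current.number_of_rows",
            legend="count",
        ),
        text="rows",
        agg=CounterAgg.SUM,
    )
)
project.save()
```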
Parameter | Description
---|---
value | The value (MetricResult) to show in the Counter. If you do not pass a value, you create a simple text panel. See the section below on Panel Values for more examples.
text | Supporting text to be shown on the Counter.
agg | Data aggregation options: CounterAgg.SUM (sum the values over time), CounterAgg.LAST (show the last value), CounterAgg.NONE (to create a text panel).
DashboardPanelPlot
DashboardPanelPlot allows creating scatter, bar, line, and histogram plots with metric values. You can pull metric values from both Reports and Test Suites.
Example. To plot MAPE over time in a line plot.
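A sketch of the MAPE line plot, assuming `project` is an existing workspace project and that RegressionQualityMetric with the mean_abs_perc_error field was logged in your snapshots (verify the field path for your version):

```python
from evidently.ui.dashboards import (
    DashboardPanelPlot,
    PanelValue,
    PlotType,
    ReportFilter,
)

project.dashboard.add_panel(
    DashboardPanelPlot(
        title="MAPE",
        filter=ReportFilter(metadata_values={}, tag_values=[]),
        values=[
            PanelValue(
                metric_id="RegressionQualityMetric",
                field_path="current.mean_abs_perc_error",  # assumed path
                legend="MAPE",
            ),
        ],
        plot_type=PlotType.LINE,
    )
)
project.save()
```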
Parameter | Description
---|---
values | You must pass at least one value (MetricResult). You can also pass multiple values as a list. They will appear together: for example, as separate lines on a Line plot, bars on a Bar chart, or points on a Scatter plot. If you use a Histogram, the values will be aggregated. See the section below on Panel Values for more examples.
plot_type | Specifies the plot type: PlotType.LINE, PlotType.SCATTER, PlotType.BAR, or PlotType.HISTOGRAM.
DashboardPanelTestSuiteCounter
DashboardPanelTestSuiteCounter displays a counter of failed and passed tests. It applies to Test Suites only.
Example 1. To display the result of the last test.
Example 2. To display the number of failed tests and errors in the test results for a specific column.
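Hedged sketches of both examples, assuming `project` is an existing workspace project. The TestStatus import path, the test_filters key format, and the specific test ID are assumptions; check them against your Evidently version.

```python
from evidently.ui.dashboards import (
    CounterAgg,
    DashboardPanelTestSuiteCounter,
    ReportFilter,
    TestFilter,
)
from evidently.tests.base_test import TestStatus  # assumed import path

# Example 1: the pass/fail summary of the most recent Test Suite run
project.dashboard.add_panel(
    DashboardPanelTestSuiteCounter(
        title="Latest test run",
        filter=ReportFilter(
            metadata_values={}, tag_values=[], include_test_suites=True
        ),
        agg=CounterAgg.LAST,
    )
)

# Example 2: failed tests and errors for a specific column
project.dashboard.add_panel(
    DashboardPanelTestSuiteCounter(
        title="Failed tests: age",
        filter=ReportFilter(
            metadata_values={}, tag_values=[], include_test_suites=True
        ),
        test_filters=[
            # test ID and argument key are illustrative
            TestFilter(
                test_id="TestColumnValueMin",
                test_args={"column_name.name": "age"},
            ),
        ],
        statuses=[TestStatus.ERROR, TestStatus.FAIL],
    )
)
project.save()
```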
See applicable parameters in the following section.
DashboardPanelTestSuite
DashboardPanelTestSuite displays the results of failed and passed tests over time. It applies to Test Suites only.
Example 1. To show the results of all individual tests over time, with daily level aggregation.
Example 2. To show the results of individual tests for specific columns, with daily aggregation.
Example 3. To show the number of passed and failed tests, with daily level aggregation.
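Sketches of the three examples, assuming `project` is an existing workspace project. The test ID and the test_args key format are illustrative; confirm them for your Evidently version.

```python
from evidently.ui.dashboards import (
    DashboardPanelTestSuite,
    ReportFilter,
    TestFilter,
    TestSuitePanelType,
)

# Example 1: all individual test results over time, aggregated daily
project.dashboard.add_panel(
    DashboardPanelTestSuite(
        title="All tests: detailed",
        filter=ReportFilter(
            metadata_values={}, tag_values=[], include_test_suites=True
        ),
        panel_type=TestSuitePanelType.DETAILED,
        time_agg="1D",
    )
)

# Example 2: individual test results for specific columns, aggregated daily
project.dashboard.add_panel(
    DashboardPanelTestSuite(
        title="Column drift tests",
        filter=ReportFilter(
            metadata_values={}, tag_values=[], include_test_suites=True
        ),
        test_filters=[
            # test ID and argument key are illustrative
            TestFilter(test_id="TestColumnDrift", test_args={"column_name.name": "age"}),
            TestFilter(test_id="TestColumnDrift", test_args={"column_name.name": "salary"}),
        ],
        panel_type=TestSuitePanelType.DETAILED,
        time_agg="1D",
    )
)

# Example 3: only the number of passed and failed tests, aggregated daily
project.dashboard.add_panel(
    DashboardPanelTestSuite(
        title="All tests: aggregated",
        filter=ReportFilter(
            metadata_values={}, tag_values=[], include_test_suites=True
        ),
        panel_type=TestSuitePanelType.AGGREGATE,
        time_agg="1D",
    )
)
project.save()
```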
Parameter | Description
---|---
test_filters | Filters that help include the results only for specific Tests and/or columns. If not specified, all logged tests will be considered.
statuses | Filters that help include only the test results with a specific status (e.g., passed, failed, error). If not specified, tests with any status will be considered.
panel_type=TestSuitePanelType.DETAILED | Applies to the detailed plot type: shows the result of each individual Test over time.
panel_type=TestSuitePanelType.AGGREGATE | Applies to the aggregate plot type: shows only the number of Tests by status over time.
time_agg | Applies to the time axis: an optional aggregation period, for example "1D" (daily), "1W" (weekly), or "1M" (monthly).
Panel value
To add a numerical measurement to the plot, you must pass the PanelValue. For example, you can display the number of drifting features, the share of empty columns, the mean error, etc.
Parameters. To define which values to show on a specific panel, you must specify:
Parameter | Description
---|---
metric_id | A metric ID that corresponds to the Evidently Metric logged inside the snapshots. You must specify the metric_id even if you used a Preset.
field_path | The path that corresponds to the specific MetricResult computed as part of this Metric or Test. You can pass either a complete field path or a "field_name".
metric_args | Additional arguments that specify the metric parameters. This is applicable when multiple instances of the same metric are logged in a snapshot: for example, for different column names, text descriptors, or drift detection methods.
legend | The legend that will be visible on the plot.
Example 1. To include the share_of_drifted_columns MetricResult, available inside the DatasetDriftMetric():
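A sketch of the corresponding PanelValue, assuming DatasetDriftMetric() was logged in your snapshots:

```python
from evidently.ui.dashboards import PanelValue

value = PanelValue(
    metric_id="DatasetDriftMetric",
    field_path="share_of_drifted_columns",  # a simple field name
    legend="Share of drifted columns",
)
```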
In this example, you pass the exact name of the field.
Example 2. To include the current.share_of_missing_values available inside the DatasetMissingValuesMetric():
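A sketch using the complete field path, built with the .fields. helper described later in this section (names assume the DatasetMissingValuesMetric class of recent Evidently versions):

```python
from evidently.metrics import DatasetMissingValuesMetric
from evidently.ui.dashboards import PanelValue

value = PanelValue(
    metric_id="DatasetMissingValuesMetric",
    # a complete field path inside the Metric
    field_path=DatasetMissingValuesMetric.fields.current.share_of_missing_values,
    legend="Share of missing values",
)
```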
In this example, you pass the complete field path inside the Metric.
Note. You must always reference a metric_id, even if you used a Preset. For example, if you used a DataDriftPreset(), you can reference either of the Metrics it contains (DataDriftTable() or DatasetDriftMetric()). You can verify the Metrics included in each Preset in the reference table.
Example 3. To display the mean values of target and prediction over time in a line plot.
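A sketch for this example, assuming `project` is an existing workspace project, the columns are named "target" and "prediction", and ColumnSummaryMetric was logged for them (the metric_args key format is an assumption):

```python
from evidently.ui.dashboards import (
    DashboardPanelPlot,
    PanelValue,
    PlotType,
    ReportFilter,
)

project.dashboard.add_panel(
    DashboardPanelPlot(
        title="Target vs prediction: mean",
        filter=ReportFilter(metadata_values={}, tag_values=[]),
        values=[
            PanelValue(
                metric_id="ColumnSummaryMetric",
                metric_args={"column_name.name": "target"},      # assumed key
                field_path="current_characteristics.mean",
                legend="target (mean)",
            ),
            PanelValue(
                metric_id="ColumnSummaryMetric",
                metric_args={"column_name.name": "prediction"},  # assumed key
                field_path="current_characteristics.mean",
                legend="prediction (mean)",
            ),
        ],
        plot_type=PlotType.LINE,
    )
)
project.save()
```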
In this example, you pass additional metric arguments to specify the column names.
Example 4. To specify the drift detection method (when results for multiple methods are logged inside a snapshot) using metric_args:
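A hedged sketch only: the exact metric_args key depends on how the drift method is parameterized in your snapshot. The metric, key, and method names below are illustrative, not authoritative; inspect your snapshot contents for the real keys.

```python
from evidently.ui.dashboards import PanelValue

# Illustrative: assumes drift results for two methods were logged
values = [
    PanelValue(
        metric_id="EmbeddingsDriftMetric",            # assumed metric
        metric_args={"drift_method.dist": "euclidean"},  # assumed key
        field_path="drift_score",
        legend="Euclidean distance",
    ),
    PanelValue(
        metric_id="EmbeddingsDriftMetric",
        metric_args={"drift_method.dist": "cosine"},     # assumed key
        field_path="drift_score",
        legend="Cosine distance",
    ),
]
```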
Example 5. To specify the text descriptor using metric_args:
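A sketch assuming an OOV (out-of-vocabulary) text descriptor was logged for a column named "Review_Text" via ColumnSummaryMetric; the descriptor import path and field path are assumptions to verify against your version:

```python
from evidently.descriptors import OOV
from evidently.ui.dashboards import PanelValue

value = PanelValue(
    metric_id="ColumnSummaryMetric",
    # reference the same descriptor and column used when logging
    metric_args={"column_name": OOV(display_name="OOV").for_column("Review_Text")},
    field_path="current_characteristics.mean",
    legend="OOV % (mean)",
)
```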
How to find the field path?
Option 1. Use autocomplete.
You can use autocomplete in interactive Python environments (like Jupyter notebook or Colab) to see the available fields inside a specific Metric. They appear as you start typing the .fields. path for a specific Metric.
Note: some types of values (e.g., mean, sum, max, min) will not be visible with this method, because they match the names of standard Python fields.
Option 2. Explore the contents of the snapshot, Metric or Test and find the relevant keys.
To look at all available measurements, you can also:

- Open an existing snapshot file and explore its contents.
- Generate a Report or a Test Suite with the selected Metric or Test, and get the output as a Python dictionary. You can then explore the keys that contain the metric field names.
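One way to explore the keys programmatically is to flatten the nested result dictionary into dot-separated paths. This is a generic helper, not part of the Evidently API, shown on a hypothetical result fragment:

```python
def list_field_paths(node, prefix=""):
    """Recursively collect dot-separated paths to every leaf value."""
    paths = []
    if isinstance(node, dict):
        for key, value in node.items():
            child = f"{prefix}.{key}" if prefix else str(key)
            paths.extend(list_field_paths(value, child))
    else:
        paths.append(prefix)
    return paths

# A hypothetical metric result fragment, e.g. taken from report.as_dict()
result = {
    "current": {"number_of_rows": 100, "share_of_missing_values": 0.02},
    "reference": {"number_of_rows": 120, "share_of_missing_values": 0.01},
}

for path in list_field_paths(result):
    print(path)
# e.g. "current.share_of_missing_values" can then be used as a field_path
```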
Once you identify the specific name of the field you would like to add to a panel, pass it as the field_path parameter of the PanelValue.