evidently.metric_preset
class ClassificationPreset(columns: Optional[List[str]] = None, probas_threshold: Optional[float] = None, k: Optional[int] = None)
Bases: MetricPreset
Metric preset for classification performance.
Contains metrics:
ClassificationQualityMetric
ClassificationClassBalance
ClassificationConfusionMatrix
ClassificationQualityByClass
Attributes:
columns : Optional[List[str]]
k : Optional[int]
probas_threshold : Optional[float]
Methods:
generate_metrics(data: InputData, columns: DatasetColumns)
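Below is a minimal usage sketch, assuming the standard Report workflow (evidently.report.Report with evidently.ColumnMapping), a binary target, and a single column of predicted probabilities; the dataframes and column names are illustrative placeholders, not part of this reference.

```python
import pandas as pd

from evidently import ColumnMapping
from evidently.metric_preset import ClassificationPreset
from evidently.report import Report

# Placeholder data: binary target plus predicted probability of the positive class.
reference = pd.DataFrame(
    {"target": [0, 1, 1, 0, 1, 0], "prediction": [0.1, 0.8, 0.7, 0.3, 0.9, 0.2]}
)
current = pd.DataFrame(
    {"target": [1, 0, 1, 0, 0, 1], "prediction": [0.6, 0.4, 0.9, 0.2, 0.5, 0.7]}
)

column_mapping = ColumnMapping(target="target", prediction="prediction")

# probas_threshold converts the predicted probabilities into class labels.
report = Report(metrics=[ClassificationPreset(probas_threshold=0.5)])
report.run(reference_data=reference, current_data=current, column_mapping=column_mapping)
report.save_html("classification_report.html")
```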
class DataDriftPreset(columns: Optional[List[str]] = None, drift_share: float = 0.5, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)
Bases: MetricPreset
Metric preset for Data Drift analysis.
Contains metrics:
DatasetDriftMetric
DataDriftTable
Attributes:
cat_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
cat_stattest_threshold : Optional[float]
columns : Optional[List[str]]
drift_share : float
num_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
num_stattest_threshold : Optional[float]
per_column_stattest : Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]]
per_column_stattest_threshold : Optional[Dict[str, float]]
stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
stattest_threshold : Optional[float]
Methods:
generate_metrics(data: InputData, columns: DatasetColumns)
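A hedged sketch of combining the drift parameters, assuming the same Report workflow; the synthetic dataframes are placeholders, and "psi" refers to Evidently's registered population stability index test (the set of available stattest names can vary by version).

```python
import numpy as np
import pandas as pd

from evidently.metric_preset import DataDriftPreset
from evidently.report import Report

# Synthetic placeholder data: one numerical and one categorical column.
rng = np.random.default_rng(0)
reference = pd.DataFrame(
    {"age": rng.normal(40, 10, 500), "country": rng.choice(["DE", "US", "IN"], 500)}
)
current = pd.DataFrame(
    {"age": rng.normal(45, 12, 500), "country": rng.choice(["DE", "US", "IN"], 500)}
)

# Use PSI for numerical columns, set a per-column threshold for "age",
# and flag dataset drift when at least half of the columns drift.
report = Report(
    metrics=[
        DataDriftPreset(
            num_stattest="psi",
            per_column_stattest_threshold={"age": 0.1},
            drift_share=0.5,
        )
    ]
)
report.run(reference_data=reference, current_data=current)
report.save_html("data_drift_report.html")
```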
class DataQualityPreset(columns: Optional[List[str]] = None)
Bases: MetricPreset
Metric preset for Data Quality analysis.
Contains metrics:
DatasetSummaryMetric
ColumnSummaryMetric for each column
DatasetMissingValuesMetric
DatasetCorrelationsMetric
Parameters:
columns – list of columns for analysis.
Attributes:
columns : Optional[List[str]]
Methods:
generate_metrics(data: InputData, columns: DatasetColumns)
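A minimal sketch of restricting the preset to a column subset, assuming the Report workflow; the small dataframes with missing values are placeholders.

```python
import pandas as pd

from evidently.metric_preset import DataQualityPreset
from evidently.report import Report

# Placeholder data with some missing values.
reference = pd.DataFrame(
    {"age": [34, 41, None, 29], "salary": [52.0, 61.5, 48.0, None]}
)
current = pd.DataFrame(
    {"age": [31, None, 45, 38], "salary": [50.0, 58.0, None, 63.5]}
)

# Per the columns parameter, restrict the per-column analysis to a subset.
report = Report(metrics=[DataQualityPreset(columns=["age", "salary"])])
report.run(reference_data=reference, current_data=current)
report.save_html("data_quality_report.html")
```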
class RegressionPreset(columns: Optional[List[str]] = None)
Bases: MetricPreset
Metric preset for Regression performance analysis.
Contains metrics:
RegressionQualityMetric
RegressionPredictedVsActualScatter
RegressionPredictedVsActualPlot
RegressionErrorPlot
RegressionAbsPercentageErrorPlot
RegressionErrorDistribution
RegressionErrorNormality
RegressionTopErrorMetric
RegressionErrorBiasTable
Attributes:
columns : Optional[List[str]]
Methods:
generate_metrics(data: InputData, columns: DatasetColumns)
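A minimal sketch, assuming the Report workflow and a ColumnMapping pointing at numeric target and prediction columns; the synthetic data is an illustrative placeholder.

```python
import numpy as np
import pandas as pd

from evidently import ColumnMapping
from evidently.metric_preset import RegressionPreset
from evidently.report import Report

# Synthetic placeholder data: actual values plus noisy predictions.
rng = np.random.default_rng(1)
y_ref = rng.normal(100, 20, 300)
y_cur = rng.normal(105, 25, 300)
reference = pd.DataFrame({"target": y_ref, "prediction": y_ref + rng.normal(0, 5, 300)})
current = pd.DataFrame({"target": y_cur, "prediction": y_cur + rng.normal(0, 8, 300)})

column_mapping = ColumnMapping(target="target", prediction="prediction")

report = Report(metrics=[RegressionPreset()])
report.run(reference_data=reference, current_data=current, column_mapping=column_mapping)
report.save_html("regression_report.html")
```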
class TargetDriftPreset(columns: Optional[List[str]] = None, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)
Bases: MetricPreset
Metric preset for Target Drift analysis.
Contains metrics:
ColumnDriftMetric - for the target and the prediction column, if present in the dataset.
ColumnValuePlot - if the task is regression.
ColumnCorrelationsMetric - for the target and the prediction column, if present in the dataset.
TargetByFeaturesTable
Attributes:
cat_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
cat_stattest_threshold : Optional[float]
columns : Optional[List[str]]
num_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
num_stattest_threshold : Optional[float]
per_column_stattest : Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]]
per_column_stattest_threshold : Optional[Dict[str, float]]
stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
stattest_threshold : Optional[float]
Methods:
generate_metrics(data: InputData, columns: DatasetColumns)
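A hedged sketch that overrides the numerical drift test, assuming the Report workflow and a regression task declared through ColumnMapping; "wasserstein" is used as an example of a registered test name, and the data is a synthetic placeholder.

```python
import numpy as np
import pandas as pd

from evidently import ColumnMapping
from evidently.metric_preset import TargetDriftPreset
from evidently.report import Report

# Synthetic placeholder data with a shifted target distribution.
rng = np.random.default_rng(2)
reference = pd.DataFrame(
    {"target": rng.normal(0.0, 1.0, 400), "prediction": rng.normal(0.0, 1.0, 400)}
)
current = pd.DataFrame(
    {"target": rng.normal(0.5, 1.0, 400), "prediction": rng.normal(0.4, 1.0, 400)}
)

# Declaring task="regression" also enables ColumnValuePlot (see the metric list above).
column_mapping = ColumnMapping(target="target", prediction="prediction", task="regression")

report = Report(
    metrics=[TargetDriftPreset(num_stattest="wasserstein", num_stattest_threshold=0.1)]
)
report.run(reference_data=reference, current_data=current, column_mapping=column_mapping)
report.save_html("target_drift_report.html")
```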
Submodules
classification_performance module
class ClassificationPreset(columns: Optional[List[str]] = None, probas_threshold: Optional[float] = None, k: Optional[int] = None)
Bases: MetricPreset
Metric preset for classification performance.
Contains metrics:
ClassificationQualityMetric
ClassificationClassBalance
ClassificationConfusionMatrix
ClassificationQualityByClass
Attributes:
columns : Optional[List[str]]
k : Optional[int]
probas_threshold : Optional[float]
Methods:
generate_metrics(data: InputData, columns: DatasetColumns)
data_drift module
class DataDriftPreset(columns: Optional[List[str]] = None, drift_share: float = 0.5, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)
Bases: MetricPreset
Metric preset for Data Drift analysis.
Contains metrics:
DatasetDriftMetric
DataDriftTable
Attributes:
cat_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
cat_stattest_threshold : Optional[float]
columns : Optional[List[str]]
drift_share : float
num_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
num_stattest_threshold : Optional[float]
per_column_stattest : Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]]
per_column_stattest_threshold : Optional[Dict[str, float]]
stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
stattest_threshold : Optional[float]
Methods:
generate_metrics(data: InputData, columns: DatasetColumns)
data_quality module
class DataQualityPreset(columns: Optional[List[str]] = None)
Bases: MetricPreset
Metric preset for Data Quality analysis.
Contains metrics:
DatasetSummaryMetric
ColumnSummaryMetric for each column
DatasetMissingValuesMetric
DatasetCorrelationsMetric
Parameters:
columns – list of columns for analysis.
Attributes:
columns : Optional[List[str]]
Methods:
generate_metrics(data: InputData, columns: DatasetColumns)
metric_preset module
class MetricPreset()
Bases: object
Base class for metric presets.
Methods:
abstract generate_metrics(data: InputData, columns: DatasetColumns)
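A hedged sketch of a custom preset built on this base class, assuming generate_metrics should return the list of metric instances for a Report to compute; SelectedColumnsSummaryPreset is a hypothetical name, and the import paths may differ between Evidently versions.

```python
from typing import List

from evidently.metric_preset.metric_preset import MetricPreset
from evidently.metrics import ColumnSummaryMetric


class SelectedColumnsSummaryPreset(MetricPreset):
    """Hypothetical preset: a ColumnSummaryMetric for a fixed set of columns."""

    def __init__(self, columns: List[str]):
        super().__init__()
        self.columns = columns

    def generate_metrics(self, data, columns):
        # data: InputData, columns: DatasetColumns (see the abstract signature above).
        # Return the metric instances the Report should compute.
        return [ColumnSummaryMetric(column_name=name) for name in self.columns]
```

Once defined, such a preset can be passed to Report(metrics=[SelectedColumnsSummaryPreset(columns=["age"])]) in the same way as the built-in presets.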
regression_performance module
class RegressionPreset(columns: Optional[List[str]] = None)
Bases: MetricPreset
Metric preset for Regression performance analysis.
Contains metrics:
RegressionQualityMetric
RegressionPredictedVsActualScatter
RegressionPredictedVsActualPlot
RegressionErrorPlot
RegressionAbsPercentageErrorPlot
RegressionErrorDistribution
RegressionErrorNormality
RegressionTopErrorMetric
RegressionErrorBiasTable
Attributes:
columns : Optional[List[str]]
Methods:
generate_metrics(data: InputData, columns: DatasetColumns)
target_drift module
class TargetDriftPreset(columns: Optional[List[str]] = None, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)
Bases: MetricPreset
Metric preset for Target Drift analysis.
Contains metrics:
ColumnDriftMetric - for the target and the prediction column, if present in the dataset.
ColumnValuePlot - if the task is regression.
ColumnCorrelationsMetric - for the target and the prediction column, if present in the dataset.
TargetByFeaturesTable
Attributes:
cat_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
cat_stattest_threshold : Optional[float]
columns : Optional[List[str]]
num_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
num_stattest_threshold : Optional[float]
per_column_stattest : Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]]
per_column_stattest_threshold : Optional[Dict[str, float]]
stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
stattest_threshold : Optional[float]
Methods:
generate_metrics(data: InputData, columns: DatasetColumns)