
evidently.tests

Available tests for TestSuite reports. The tests are grouped into modules; see the documentation of each module for details.
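For context, a minimal usage sketch, assuming the public TestSuite API and evidently's default column mapping; the DataFrames and threshold values are illustrative:

```python
import pandas as pd

from evidently.test_suite import TestSuite
from evidently.tests import TestAccuracyScore, TestShareOfDriftedColumns

# Hypothetical reference/current data using evidently's default
# "target" and "prediction" column names.
reference_df = pd.DataFrame(
    {"target": [0, 1, 1, 0], "prediction": [0, 1, 0, 0], "feature": [1.0, 2.0, 3.0, 4.0]}
)
current_df = pd.DataFrame(
    {"target": [1, 1, 0, 0], "prediction": [1, 0, 0, 0], "feature": [1.5, 2.5, 3.5, 4.5]}
)

suite = TestSuite(
    tests=[
        TestAccuracyScore(gte=0.5),         # condition kwargs come from BaseCheckValueTest
        TestShareOfDriftedColumns(lt=0.5),
    ]
)
suite.run(reference_data=reference_df, current_data=current_df)
result = suite.as_dict()  # or suite.show() / suite.save_html("tests.html")
```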

Submodules

base_test module

class BaseCheckValueTest(eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: BaseConditionsTest
Base class for all tests that check a value against a condition (see the sketch after the method list below)

Attributes:

value : Union[float, int]

Methods:

abstract calculate_value_for_test()
Method for computing the value to check. Define it in a child class
check()
get_condition()
abstract get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
groups()
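A structural sketch of a child class, assuming the internal base_test module path; the test name, group, and hard-coded value are hypothetical:

```python
from typing import Union

from evidently.tests.base_test import BaseCheckValueTest


class TestConstantValue(BaseCheckValueTest):
    """Hypothetical test that checks a fixed value against the user-supplied condition."""

    name = "Constant Value"
    group = "data_quality"

    def calculate_value_for_test(self) -> Union[float, int]:
        # A real test would derive this value from a Metric result via self.context.
        return 0.95

    def get_description(self, value: Union[float, int]) -> str:
        return f"The value is {value}. The test condition is {self.get_condition()}."
```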

class BaseConditionsTest(eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: Test, ABC
Base class for all tests with a condition

Attributes:

condition : TestValueCondition

class GroupData(id: str, title: str, description: str, sort_index: int = 0, severity: Optional[str] = None)

Bases: object

Attributes:

description : str
id : str
severity : Optional[str] = None
sort_index : int = 0
title : str

class GroupTypeData(id: str, title: str, values: List[evidently.tests.base_test.GroupData] = <factory>)

Bases: object

Attributes:

id : str
title : str
values : List[GroupData]

Methods:

add_value(data: GroupData)

class GroupingTypes()

Bases: object

Attributes:

ByClass = GroupTypeData(id='by_class', title='By class', values=[])
ByFeature = GroupTypeData(id='by_feature', title='By feature', values=[GroupData(id='no group', title='Dataset-level tests', description='Some tests cannot be grouped by feature', sort_index=0, severity=None)])
TestGroup = GroupTypeData(id='test_group', title='By test group', values=[GroupData(id='no group', title='Ungrouped', description='Some tests don’t belong to any group under the selected condition', sort_index=0, severity=None), GroupData(id='classification', title='Classification', description='', sort_index=0, severity=None), GroupData(id='data_drift', title='Data Drift', description='', sort_index=0, severity=None), GroupData(id='data_integrity', title='Data Integrity', description='', sort_index=0, severity=None), GroupData(id='data_quality', title='Data Quality', description='', sort_index=0, severity=None), GroupData(id='regression', title='Regression', description='', sort_index=0, severity=None)])
TestType = GroupTypeData(id='test_type', title='By test type', values=[])

class Test()

Bases: object
All fields in a test class whose type is a subclass of Metric are used as dependencies of the test (see the sketch after the method list below).

Attributes:

context = None
group : str
name : str

Methods:

abstract check()
get_result()
set_context(context)
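A minimal sketch of the dependency mechanism described above; the test class is hypothetical, and it assumes ColumnDriftMetric's result exposes drift_detected and drift_score fields:

```python
from evidently.metrics import ColumnDriftMetric
from evidently.tests.base_test import Test, TestResult


class TestMyColumnDrift(Test):
    """Hypothetical test: fails when the column is flagged as drifted."""

    name = "My Column Drift"
    group = "data_drift"

    def __init__(self, column_name: str):
        super().__init__()
        # Because this field's type is a Metric subclass, the TestSuite
        # computes it before calling check().
        self.metric = ColumnDriftMetric(column_name=column_name)

    def check(self) -> TestResult:
        drift = self.metric.get_result()
        status = TestResult.FAIL if drift.drift_detected else TestResult.SUCCESS
        return TestResult(
            name=self.name,
            description=f"Drift score: {drift.drift_score}",
            status=status,
        )
```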

class TestResult(name: str, description: str, status: str, groups: Dict[str, str] = <factory>)

Bases: object

Attributes:

ERROR = 'ERROR'
FAIL = 'FAIL'
SKIPPED = 'SKIPPED'
SUCCESS = 'SUCCESS'
WARNING = 'WARNING'
description : str
groups : Dict[str, str]
name : str
status : str

Methods:

is_passed()
mark_as_error(description: Optional[str] = None)
mark_as_fail(description: Optional[str] = None)
mark_as_success(description: Optional[str] = None)
mark_as_warning(description: Optional[str] = None)
set_status(status: str, description: Optional[str] = None)
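A small sketch of the status flow, using the constants and helpers listed above (the names and messages are illustrative):

```python
from evidently.tests.base_test import TestResult

result = TestResult(name="My check", description="", status=TestResult.SKIPPED)

result.mark_as_fail("Value 0.4 is below the threshold 0.5")
assert result.status == TestResult.FAIL
assert not result.is_passed()

result.mark_as_success("Recomputed value 0.6 passes")
assert result.is_passed()
```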

class TestValueCondition(eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: object
Class for processing value conditions: whether a value should be less than, greater than, or equal to a threshold, and so on.
An object of the class stores the specified conditions and can be used to check a value against them (see the sketch after the method list below).

Attributes:

eq : Optional[Union[float, int]] = None
gt : Optional[Union[float, int]] = None
gte : Optional[Union[float, int]] = None
is_in : Optional[List[Union[float, int, str, bool]]] = None
lt : Optional[Union[float, int]] = None
lte : Optional[Union[float, int]] = None
not_eq : Optional[Union[float, int]] = None
not_in : Optional[List[Union[float, int, str, bool]]] = None

Methods:

as_dict()
check_value(value: Union[float, int])
has_condition()
Returns True if at least one condition is set in the object, and False otherwise
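A quick sketch of the condition API (the exact as_dict() output shape is an assumption):

```python
from evidently.tests.base_test import TestValueCondition

cond = TestValueCondition(gte=0.8)

cond.has_condition()    # True: at least one bound is set
cond.check_value(0.85)  # True
cond.check_value(0.5)   # False
cond.as_dict()          # presumably {"gte": 0.8}: only the bounds that were set
```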

generate_column_tests(test_class: Type[Test], columns: Optional[Union[str, list]] = None, parameters: Optional[Dict] = None)

Function for generating one test per column (see the usage sketch below)
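For example, to apply the same test with shared parameters to every column, pass the generator to a TestSuite; TestColumnShareOfMissingValues and the threshold are illustrative choices:

```python
from evidently.test_suite import TestSuite
from evidently.tests import TestColumnShareOfMissingValues
from evidently.tests.base_test import generate_column_tests

suite = TestSuite(
    tests=[
        # Expands into one TestColumnShareOfMissingValues(lt=0.25) per column.
        generate_column_tests(
            TestColumnShareOfMissingValues, columns="all", parameters={"lt": 0.25}
        ),
    ]
)
```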

classification_performance_tests module

class ByClassClassificationTest(label: str, probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: BaseCheckValueTest, ABC

Attributes:

by_class_metric : ClassificationQualityByClass
dummy_metric : ClassificationDummyMetric
group : str = 'classification'

Methods:

calculate_value_for_test()
Method for computing the value to check. Define it in a child class
get_condition()
abstract get_value(result: dict)

class SimpleClassificationTest(eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: BaseCheckValueTest

Attributes:

dummy_metric : ClassificationDummyMetric
group : str = 'classification'
name : str

Methods:

calculate_value_for_test()
Method for computing the value to check. Define it in a child class
get_condition()
abstract get_value(result: DatasetClassificationQuality)

class SimpleClassificationTestTopK(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTest, ABC

Attributes:

dummy_metric : ClassificationDummyMetric
k : Optional[Union[float, int]]
probas_threshold : Optional[float]

Methods:

calculate_value_for_test()
Method for computing the value to check. Define it in a child class
get_condition()

class TestAccuracyScore(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTestTopK

Attributes:

condition : TestValueCondition
dummy_metric : ClassificationDummyMetric
k : Optional[Union[float, int]]
name : str = 'Accuracy Score'
probas_threshold : Optional[float]
value : Union[float, int]

Methods:

get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: DatasetClassificationQuality)
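Usage sketch; given the dummy_metric dependency above, when no condition kwargs are passed the test is expected to compare against a dummy-model baseline (the threshold values are illustrative):

```python
from evidently.test_suite import TestSuite
from evidently.tests import TestAccuracyScore

suite = TestSuite(
    tests=[
        TestAccuracyScore(),                      # no condition: dummy-model baseline
        TestAccuracyScore(gte=0.8),               # explicit absolute threshold
        TestAccuracyScore(probas_threshold=0.7),  # apply a probability cutoff first
    ]
)
```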

class TestAccuracyScoreRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestAccuracyScore)
render_json(obj: TestAccuracyScore)

class TestF1ByClass(label: str, probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: ByClassClassificationTest

Attributes:

name : str = 'F1 Score by Class'

Methods:

get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: dict)

class TestF1ByClassRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestF1ByClass)
render_json(obj: TestF1ByClass)

class TestF1Score(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTestTopK

Attributes:

condition : TestValueCondition
dummy_metric : ClassificationDummyMetric
k : Optional[Union[float, int]]
name : str = 'F1 Score'
probas_threshold : Optional[float]
value : Union[float, int]

Methods:

get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: DatasetClassificationQuality)

class TestF1ScoreRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestF1Score)
render_json(obj: TestF1Score)

class TestFNR(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTestTopK

Attributes:

condition : TestValueCondition
dummy_metric : ClassificationDummyMetric
k : Optional[Union[float, int]]
name : str = 'False Negative Rate'
probas_threshold : Optional[float]
value : Union[float, int]

Methods:

get_condition()
get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: DatasetClassificationQuality)

class TestFNRRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestFNR)
render_json(obj: TestFNR)

class TestFPR(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTestTopK

Attributes:

condition : TestValueCondition
dummy_metric : ClassificationDummyMetric
k : Optional[Union[float, int]]
name : str = 'False Positive Rate'
probas_threshold : Optional[float]
value : Union[float, int]

Methods:

get_condition()
get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: DatasetClassificationQuality)

class TestFPRRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestFPR)
render_json(obj: TestFPR)

class TestLogLoss(eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTest

Attributes:

condition : TestValueCondition
dummy_metric : ClassificationDummyMetric
name : str = 'Logarithmic Loss'
value : Union[float, int]

Methods:

get_condition()
get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: DatasetClassificationQuality)

class TestLogLossRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestLogLoss)
render_json(obj: TestLogLoss)

class TestPrecisionByClass(label: str, probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: ByClassClassificationTest

Attributes:

name : str = 'Precision Score by Class'

Methods:

get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: dict)

class TestPrecisionByClassRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestPrecisionByClass)
render_json(obj: TestPrecisionByClass)

class TestPrecisionScore(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTestTopK

Attributes:

condition : TestValueCondition
dummy_metric : ClassificationDummyMetric
k : Optional[Union[float, int]]
name : str = 'Precision Score'
probas_threshold : Optional[float]
value : Union[float, int]

Methods:

get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: DatasetClassificationQuality)

class TestPrecisionScoreRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestPrecisionScore)
render_json(obj: TestPrecisionScore)

class TestRecallByClass(label: str, probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: ByClassClassificationTest

Attributes:

name : str = 'Recall Score by Class'

Methods:

get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: dict)

class TestRecallByClassRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestRecallByClass)
render_json(obj: TestRecallByClass)

class TestRecallScore(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTestTopK

Attributes:

condition : TestValueCondition
dummy_metric : ClassificationDummyMetric
k : Optional[Union[float, int]]
name : str = 'Recall Score'
probas_threshold : Optional[float]
value : Union[float, int]

Methods:

get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: DatasetClassificationQuality)

class TestRecallScoreRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestRecallScore)
render_json(obj: TestRecallScore)

class TestRocAuc(eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTest

Attributes:

name : str = 'ROC AUC Score'
roc_curve : ClassificationRocCurve

Methods:

get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: DatasetClassificationQuality)

class TestRocAucRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestRocAuc)
render_json(obj: TestRocAuc)

class TestTNR(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTestTopK

Attributes:

condition : TestValueCondition
dummy_metric : ClassificationDummyMetric
k : Optional[Union[float, int]]
name : str = 'True Negative Rate'
probas_threshold : Optional[float]
value : Union[float, int]

Methods:

get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: DatasetClassificationQuality)

class TestTNRRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestTNR)
render_json(obj: TestTNR)

class TestTPR(probas_threshold: Optional[float] = None, k: Optional[Union[float, int]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: SimpleClassificationTestTopK

Attributes:

condition : TestValueCondition
dummy_metric : ClassificationDummyMetric
k : Optional[Union[float, int]]
name : str = 'True Positive Rate'
probas_threshold : Optional[float]
value : Union[float, int]

Methods:

get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
get_value(result: DatasetClassificationQuality)

class TestTPRRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestTPR)
render_json(obj: TestTPR)

data_drift_tests module

class BaseDataDriftMetricsTest(columns: Optional[List[str]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)

Bases: BaseCheckValueTest, ABC

Attributes:

group : str = 'data_drift'
metric : DataDriftTable

Methods:

check()

class TestAllFeaturesValueDrift(columns: Optional[List[str]] = None, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)

Bases: BaseGenerator
Create value drift tests for numerical and categorical features (see the usage sketch after the method list below)

Attributes:

cat_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
cat_stattest_threshold : Optional[float]
columns : Optional[List[str]]
num_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
num_stattest_threshold : Optional[float]
per_column_stattest : Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]]
per_column_stattest_threshold : Optional[Dict[str, float]]
stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]
stattest_threshold : Optional[float]

Methods:

generate(columns_info: DatasetColumns)
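Generators like this one are passed to a TestSuite directly and expanded into one per-column test; this sketch assumes the class is exported from evidently.tests, and the stattest choice is illustrative:

```python
from evidently.test_suite import TestSuite
from evidently.tests import TestAllFeaturesValueDrift

# Expands into one drift test per numerical/categorical column;
# "wasserstein" is one of the built-in stattest names.
suite = TestSuite(tests=[TestAllFeaturesValueDrift(stattest="wasserstein")])
```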

class TestColumnDrift(column_name: str, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, stattest_threshold: Optional[float] = None)

Bases: Test

Attributes:

column_name : str
group : str = 'data_drift'
metric : ColumnDriftMetric
name : str = 'Drift per Column'

Methods:

check()
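Usage sketch (the column name, stattest, and threshold are illustrative):

```python
from evidently.test_suite import TestSuite
from evidently.tests import TestColumnDrift

suite = TestSuite(
    tests=[
        # "psi" is a built-in stattest name; drift is flagged when the
        # PSI score exceeds the threshold.
        TestColumnDrift(column_name="age", stattest="psi", stattest_threshold=0.2),
    ]
)
```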

class TestColumnDriftRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestColumnDrift)
render_json(obj: TestColumnDrift)

class TestCustomFeaturesValueDrift(features: List[str], stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)

Bases: BaseGenerator
Create value drift tests for the specified features

Attributes:

cat_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None
cat_stattest_threshold : Optional[float] = None
features : List[str]
num_stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None
num_stattest_threshold : Optional[float] = None
per_column_stattest : Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None
per_column_stattest_threshold : Optional[Dict[str, float]] = None
stattest : Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None
stattest_threshold : Optional[float] = None

Methods:

generate(columns_info: DatasetColumns)

class TestDataDriftResult(name: str, description: str, status: str, groups: Dict[str, str] = <factory>, features: Dict[str, Tuple[str, float, float]] = <factory>)

Bases: TestResult

Attributes:

features : Dict[str, Tuple[str, float, float]]

class TestNumberOfDriftedColumns(columns: Optional[List[str]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)

Bases: BaseDataDriftMetricsTest

Attributes:

condition : TestValueCondition
metric : DataDriftTable
name : str = 'Number of Drifted Features'
value : Union[float, int]

Methods:

calculate_value_for_test()
Method for computing the value to check. Define it in a child class
get_condition()
get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class

class TestNumberOfDriftedColumnsRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestNumberOfDriftedColumns)
render_json(obj: TestNumberOfDriftedColumns)

class TestShareOfDriftedColumns(columns: Optional[List[str]] = None, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None, stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, cat_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, num_stattest: Optional[Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]] = None, per_column_stattest: Optional[Dict[str, Union[str, Callable[[Series, Series, str, float], Tuple[float, bool]], StatTest]]] = None, stattest_threshold: Optional[float] = None, cat_stattest_threshold: Optional[float] = None, num_stattest_threshold: Optional[float] = None, per_column_stattest_threshold: Optional[Dict[str, float]] = None)

Bases: BaseDataDriftMetricsTest

Attributes:

condition : TestValueCondition
metric : DataDriftTable
name : str = 'Share of Drifted Columns'
value : Union[float, int]

Methods:

calculate_value_for_test()
Method for computing the value to check. Define it in a child class
get_condition()
get_description(value: Union[float, int])
Method for building the test description; it may include the checked value. Define it in a child class
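A sketch combining both dataset-level drift conditions (the thresholds are illustrative):

```python
from evidently.test_suite import TestSuite
from evidently.tests import TestNumberOfDriftedColumns, TestShareOfDriftedColumns

suite = TestSuite(
    tests=[
        TestNumberOfDriftedColumns(lt=3),    # fail when 3 or more columns drift
        TestShareOfDriftedColumns(lt=0.3),   # fail when 30% or more of columns drift
    ]
)
```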

class TestShareOfDriftedColumnsRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer

Attributes:

color_options : ColorOptions

Methods:

render_html(obj: TestShareOfDriftedColumns)
render_json(obj: TestShareOfDriftedColumns)

data_integrity_tests module

class BaseIntegrityByColumnsConditionTest(column_name: str, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: BaseCheckValueTest, ABC

Attributes:

column_name : str
data_integrity_metric : ColumnSummaryMetric
group : str = 'data_integrity'

Methods:

groups()

class BaseIntegrityColumnMissingValuesTest(column_name: str, missing_values: Optional[list] = None, replace: bool = True, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: BaseCheckValueTest, ABC

Attributes:

column_name : str
group : str = 'data_integrity'

class BaseIntegrityMissingValuesValuesTest(missing_values: Optional[list] = None, replace: bool = True, eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: BaseCheckValueTest, ABC

Attributes:

group : str = 'data_integrity'

class BaseIntegrityOneColumnTest(column_name: str)

Bases: Test, ABC

Attributes:

column_name : str
group : str = 'data_integrity'
metric : ColumnSummaryMetric

Methods:

groups()

class BaseIntegrityValueTest(eq: Optional[Union[float, int]] = None, gt: Optional[Union[float, int]] = None, gte: Optional[Union[float, int]] = None, is_in: Optional[List[Union[float, int, str, bool]]] = None, lt: Optional[Union[float, int]] = None, lte: Optional[Union[float, int]] = None, not_eq: Optional[Union[float, int]] = None, not_in: Optional[List[Union[float, int, str, bool]]] = None)

Bases: BaseCheckValueTest, ABC

Attributes:

group : str = 'data_integrity'

class BaseTestMissingValuesRenderer(color_options: Optional[ColorOptions] = None)

Bases: TestRenderer
Common base class for missing-values tests. Some of the tests share the same details visualizations.

Attributes:

MISSING_VALUES_NAMING_MAPPING = {None: 'Pandas nulls (None, NAN, etc.)', '': '"" (empty string)', inf: 'Numpy "inf" value', -inf: 'Numpy "-inf" value'}
color_options : ColorOptions

Methods:

get_table_with_missing_values_and_percents_by_column(info: TestHtmlInfo, metric_result: DatasetMissingValuesMetricResult, name: str)
Get a table with the number and percentage of missing values for each column
get_table_with_number_of_missing_values_by_one_missing_value(info: TestHtmlInfo, current_missing_values: dict, reference_missing_values: Optional[dict], name: str)

class TestAllColumnsShareOfMissingValues(columns: Optional[List[str]] = None)

Bases: BaseGenerator

Attributes:

columns : Optional[List[str]]

Methods: