TFMA Evaluators

tensorflow_model_analysis.evaluators

Init module for TensorFlow Model Analysis evaluators.
Functions
AnalysisTableEvaluator

AnalysisTableEvaluator(
    key: str = ANALYSIS_KEY,
    run_after: str = LAST_EXTRACTOR_STAGE_NAME,
    include: Optional[Union[Iterable[str], Dict[str, Any]]] = None,
    exclude: Optional[Union[Iterable[str], Dict[str, Any]]] = None,
) -> Evaluator
Creates an Evaluator for returning Extracts data for analysis.
If both include and exclude are None, then tfma.INPUT_KEY extracts will be excluded by default.

Parameters:

key: Name to use for the key in the Evaluation output.
run_after: Extractor to run after (None means before any extractors).
include: List or map of keys to include in the output. Keys starting with '_' are automatically filtered out at write time. If a map of keys is passed, then the keys and sub-keys that exist in the map will be included in the output. An empty dict behaves as a wildcard, matching all keys or the value itself. Since matching on feature values is not currently supported, an empty dict must be used to represent the leaf nodes. For example: {'key1': {'key1-subkey': {}}, 'key2': {}}.
exclude: List or map of keys to exclude from the output. If a map of keys is passed, then the keys and sub-keys that exist in the map will be excluded from the output. An empty dict behaves as a wildcard, matching all keys or the value itself. Since matching on feature values is not currently supported, an empty dict must be used to represent the leaf nodes. For example: {'key1': {'key1-subkey': {}}, 'key2': {}}.
Returns:

Evaluator for collecting analysis data. The output is stored under the key 'analysis'.

Raises:

ValueError: If both include and exclude are used.
Source code in tensorflow_model_analysis/evaluators/analysis_table_evaluator.py
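A minimal usage sketch (not part of the API reference; the 'labels' and 'features' extract keys and the 'age' feature are illustrative) showing the nested include map described above:

import tensorflow_model_analysis as tfma

# Keep only the labels extract plus one sub-key of the features extract.
# Empty dicts mark the leaf nodes, since matching on feature values is
# not currently supported.
analysis_evaluator = tfma.evaluators.AnalysisTableEvaluator(
    include={'labels': {}, 'features': {'age': {}}})

# The evaluator is then passed to the pipeline together with extractors,
# e.g. extracts | tfma.ExtractAndEvaluate(
#          extractors=extractors, evaluators=[analysis_evaluator])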
MetricsPlotsAndValidationsEvaluator

MetricsPlotsAndValidationsEvaluator(
    eval_config: EvalConfig,
    eval_shared_model: Optional[MaybeMultipleEvalSharedModels] = None,
    metrics_key: str = METRICS_KEY,
    plots_key: str = PLOTS_KEY,
    attributions_key: str = ATTRIBUTIONS_KEY,
    run_after: str = SLICE_KEY_EXTRACTOR_STAGE_NAME,
    schema: Optional[Schema] = None,
    random_seed_for_testing: Optional[int] = None,
) -> Evaluator
Creates an Evaluator for evaluating metrics and plots.
Parameters:

eval_config: Eval config.
eval_shared_model: Optional shared model (single-model evaluation) or list of shared models (multi-model evaluation). Only required if there are metrics to be computed in-graph using the model.
metrics_key: Name to use for the metrics key in the Evaluation output.
plots_key: Name to use for the plots key in the Evaluation output.
attributions_key: Name to use for the attributions key in the Evaluation output.
run_after: Extractor to run after (None means before any extractors).
schema: A schema to use for customizing metrics and plots.
random_seed_for_testing: Seed to use for unit testing.

Returns:

Evaluator for evaluating metrics and plots. The output will be stored under the 'metrics' and 'plots' keys.
Source code in tensorflow_model_analysis/evaluators/metrics_plots_and_validations_evaluator.py
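A hedged construction sketch (the model path, label key, and use of default_extractors are assumptions for illustration, not prescribed by this API):

import tensorflow_model_analysis as tfma

# Hypothetical model export path and label key.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],
    slicing_specs=[tfma.SlicingSpec()])
eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/saved_model',
    eval_config=eval_config)

metrics_evaluator = tfma.evaluators.MetricsPlotsAndValidationsEvaluator(
    eval_config=eval_config, eval_shared_model=eval_shared_model)

# Typically wired into a Beam pipeline as:
#   extracts | tfma.ExtractAndEvaluate(
#       extractors=tfma.default_extractors(
#           eval_config=eval_config, eval_shared_model=eval_shared_model),
#       evaluators=[metrics_evaluator])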
verify_evaluator

verify_evaluator(
    evaluator: Evaluator,
    extractors: List[Extractor],
)

Verifies that the evaluator is matched with an extractor.
Parameters:

evaluator: Evaluator to verify.
extractors: Extractors to use in verification.

Raises:

ValueError: If an Extractor cannot be found for the Evaluator.
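A minimal sketch of the check this performs (the config and evaluator construction are illustrative, and the extractors produced by default_extractors may vary by TFMA version):

import tensorflow_model_analysis as tfma

eval_config = tfma.EvalConfig(model_specs=[tfma.ModelSpec()])
extractors = tfma.default_extractors(eval_config=eval_config)
analysis_evaluator = tfma.evaluators.AnalysisTableEvaluator()

# Raises ValueError if analysis_evaluator.run_after does not correspond to
# the stage name of one of the supplied extractors; otherwise returns silently.
tfma.evaluators.verify_evaluator(analysis_evaluator, extractors)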