TFMA Utils¶
tensorflow_model_analysis.utils
¶
Init module for TensorFlow Model Analysis utils.
Classes¶
CombineFnWithModels
¶
CombineFnWithModels(model_loaders: Dict[str, ModelLoader])
Bases: CombineFn
Abstract class for CombineFns that need the shared models.
Initializes CombineFn using dict of loaders keyed by model location.
Source code in tensorflow_model_analysis/utils/model_util.py
Functions¶
setup
¶
Source code in tensorflow_model_analysis/utils/model_util.py
DoFnWithModels
¶
DoFnWithModels(model_loaders: Dict[str, ModelLoader])
Bases: DoFn
Abstract class for DoFns that need the shared models.
Initializes DoFn using dict of model loaders keyed by model location.
Source code in tensorflow_model_analysis/utils/model_util.py
Functions¶
calculate_confidence_interval
¶
calculate_confidence_interval(
t_distribution_value: ValueWithTDistribution,
)
Calculates a confidence interval based on a 95% confidence level.
Source code in tensorflow_model_analysis/utils/math_util.py
compound_key
¶
Returns a compound key based on a list of keys.
Parameters:

- keys: Keys used to make up the compound key.
- separator: Separator between keys. To ensure the keys can be parsed out of any compound key created, any use of a separator within a key will be replaced by two separators.
Source code in tensorflow_model_analysis/utils/util.py
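The separator-doubling rule can be sketched in plain Python. This is a hypothetical reimplementation for illustration only; the function name matches, but the default separator value is an assumption, not TFMA's actual default:

```python
from typing import List

def compound_key(keys: List[str], separator: str = "__") -> str:
    """Joins keys, doubling any in-key separator so parsing stays unambiguous."""
    return separator.join(k.replace(separator, separator * 2) for k in keys)

print(compound_key(["head1", "probabilities"]))  # head1__probabilities
print(compound_key(["weighted__loss", "mean"]))  # weighted____loss__mean
```

Because an embedded separator is always doubled, a parser can split on single (non-doubled) separators to recover the original keys.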
create_keys_key
¶
create_values_key
¶
get_baseline_model_spec
¶
get_baseline_model_spec(
eval_config: EvalConfig,
) -> Optional[ModelSpec]
Returns baseline model spec.
Source code in tensorflow_model_analysis/utils/model_util.py
get_by_keys
¶
get_by_keys(
data: Mapping[str, Any],
keys: Sequence[Any],
default_value=None,
optional: bool = False,
) -> Any
Returns value with given key(s) in (possibly multi-level) dict.
The keys represent multiple levels of indirection into the data. For example, if 3 keys are passed then the data is expected to be a dict of dict of dict. For compatibility with data that uses prefixing to separate the keys in a single dict, lookups will also be searched for under the keys joined by '/'. For example, the keys 'head1' and 'probabilities' could be stored in a single dict as 'head1/probabilities'.
Parameters:

- data: Dict to get value from.
- keys: Sequence of keys to look up in data. None keys will be ignored.
- default_value: Default value if not found.
- optional: Whether the key is optional or not. If default_value is None and optional is False, a ValueError will be raised if the key is not found.

Raises:

- ValueError: If a (non-optional) key is not found.
Source code in tensorflow_model_analysis/utils/util.py
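The two lookup strategies described above (level-by-level indirection, then a flat '/'-joined fallback) can be sketched as follows. This is an illustrative reimplementation of the documented behavior, not the TFMA source:

```python
from collections.abc import Mapping
from typing import Any, Sequence

def get_by_keys(data: Mapping, keys: Sequence[Any],
                default_value=None, optional: bool = False) -> Any:
    """Nested lookup with a flat 'a/b' fallback (illustrative sketch)."""
    keys = [k for k in keys if k is not None]
    # First try level-by-level indirection through nested dicts.
    value: Any = data
    for key in keys:
        if isinstance(value, Mapping) and key in value:
            value = value[key]
        else:
            break
    else:
        return value
    # Fall back to a flat '/'-separated key in a single dict.
    flat = '/'.join(str(k) for k in keys)
    if flat in data:
        return data[flat]
    if optional or default_value is not None:
        return default_value
    raise ValueError(f'keys {keys} not found in data')

nested = {'head1': {'probabilities': [0.1, 0.9]}}
flat = {'head1/probabilities': [0.2, 0.8]}
print(get_by_keys(nested, ['head1', 'probabilities']))  # [0.1, 0.9]
print(get_by_keys(flat, ['head1', 'probabilities']))    # [0.2, 0.8]
```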
get_model_spec
¶
Returns model spec with given model name.
Source code in tensorflow_model_analysis/utils/model_util.py
get_model_type
¶
get_model_type(
model_spec: Optional[ModelSpec],
model_path: Optional[str] = "",
tags: Optional[List[str]] = None,
) -> str
Returns model type for given model spec taking into account defaults.
The defaults are chosen such that if a model_path is provided and the model can be loaded as a keras model then TF_KERAS is assumed. Next, if tags are provided and the tags contains 'eval' then TF_ESTIMATOR is assumed. Lastly, if the model spec contains an 'eval' signature TF_ESTIMATOR is assumed otherwise TF_GENERIC is assumed.
Parameters:

- model_spec: Model spec.
- model_path: Optional model path used to check whether the model loads as a keras model.
- tags: Optional tags used to check whether 'eval' is used.
Source code in tensorflow_model_analysis/utils/model_util.py
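The precedence described above can be expressed without loading a real model. The constant values below are placeholders (TFMA defines its own model-type constants), and the boolean inputs stand in for the actual checks against the model on disk:

```python
# Hypothetical constant values; TFMA defines its own model-type constants.
TF_KERAS, TF_ESTIMATOR, TF_GENERIC = 'tf_keras', 'tf_estimator', 'tf_generic'

def infer_model_type(loads_as_keras: bool,
                     tags=None,
                     signature_names=None) -> str:
    """Applies the documented default precedence (illustrative sketch)."""
    if loads_as_keras:                  # model_path loads as a keras model
        return TF_KERAS
    if tags and 'eval' in tags:         # tags contain 'eval'
        return TF_ESTIMATOR
    if signature_names and 'eval' in signature_names:  # 'eval' signature
        return TF_ESTIMATOR
    return TF_GENERIC

print(infer_model_type(False, tags=['eval']))  # tf_estimator
print(infer_model_type(False))                 # tf_generic
```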
get_non_baseline_model_specs
¶
get_non_baseline_model_specs(
eval_config: EvalConfig,
) -> Iterable[ModelSpec]
Returns non-baseline model specs.
Source code in tensorflow_model_analysis/utils/model_util.py
has_change_threshold
¶
has_change_threshold(eval_config: EvalConfig) -> bool
Checks whether the eval_config has any change thresholds.
Parameters:

- eval_config: The TFMA eval_config.

Returns:

- True when there are change thresholds, otherwise False.
Source code in tensorflow_model_analysis/utils/config_util.py
merge_extracts
¶
Merges list of extracts into a single extract with multidimensional data.
Running split_extracts followed by merge_extracts with default options will not reproduce the exact shape of the original extracts: arrays of shape (x, 1) will be flattened to (x,). To maintain the original shape of extract values with array shape (x, 1), run with these options: split_extracts(extracts, expand_zero_dims=False) and merge_extracts(extracts, squeeze_two_dim_vector=False).
Parameters:

- extracts: Batched TFMA Extracts.
- squeeze_two_dim_vector: Determines how the function handles arrays of shape (x, 1). If True, the array will be squeezed to shape (x,).

Returns:

- A single Extracts whose values have been grouped into batches.
Source code in tensorflow_model_analysis/utils/util.py
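The (x, 1) squeezing behavior can be demonstrated with a toy stand-in. This is not the TFMA implementation, just a minimal sketch of the documented shape semantics using dicts of numpy arrays:

```python
import numpy as np

def merge_extracts(extracts, squeeze_two_dim_vector=True):
    """Toy stand-in: batch each key's values, squeezing (x, 1) to (x,)."""
    merged = {}
    for key in extracts[0]:
        arr = np.stack([e[key] for e in extracts])
        if squeeze_two_dim_vector and arr.ndim == 2 and arr.shape[1] == 1:
            arr = np.squeeze(arr, axis=1)
        merged[key] = arr
    return merged

extracts = [{'label': np.array([1])}, {'label': np.array([0])}]
print(merge_extracts(extracts)['label'].shape)                                # (2,)
print(merge_extracts(extracts, squeeze_two_dim_vector=False)['label'].shape)  # (2, 1)
```

With the default option the stacked (2, 1) array is squeezed to (2,), which is why a split/merge round trip with defaults does not preserve (x, 1) shapes.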
model_construct_fn
¶
model_construct_fn(
eval_saved_model_path: Optional[str] = None,
add_metrics_callbacks: Optional[
List[AddMetricsCallbackType]
] = None,
include_default_metrics: Optional[bool] = None,
additional_fetches: Optional[List[str]] = None,
blacklist_feature_fetches: Optional[List[str]] = None,
tags: Optional[List[str]] = None,
model_type: Optional[str] = TFMA_EVAL,
) -> Callable[[], Any]
Returns function for constructing shared models.
Source code in tensorflow_model_analysis/utils/model_util.py
unique_key
¶
Returns a unique key given a list of current keys.
If the key exists in current_keys then a new key with _1, _2, ..., etc appended will be returned, otherwise the key will be returned as passed.
Parameters:

- key: Desired key name.
- current_keys: List of current key names.
- update_keys: True to append the new key to current_keys.
Source code in tensorflow_model_analysis/utils/util.py
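The suffixing behavior is simple enough to sketch directly. This is an illustrative reimplementation of the documented contract, not the TFMA source:

```python
from typing import List

def unique_key(key: str, current_keys: List[str],
               update_keys: bool = False) -> str:
    """Appends _1, _2, ... until the key no longer collides."""
    candidate, index = key, 0
    while candidate in current_keys:
        index += 1
        candidate = f'{key}_{index}'
    if update_keys:
        current_keys.append(candidate)
    return candidate

keys = ['loss']
print(unique_key('loss', keys, update_keys=True))  # loss_1
print(keys)                                        # ['loss', 'loss_1']
print(unique_key('loss', keys))                    # loss_2
```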
update_eval_config_with_defaults
¶
update_eval_config_with_defaults(
eval_config: EvalConfig,
maybe_add_baseline: Optional[bool] = None,
maybe_remove_baseline: Optional[bool] = None,
has_baseline: Optional[bool] = False,
rubber_stamp: Optional[bool] = False,
) -> EvalConfig
Returns a new config with default settings applied.
a) Add or remove a model_spec according to "has_baseline".
b) Fix the model names (model_spec.name) to tfma.CANDIDATE_KEY and tfma.BASELINE_KEY.
c) Update the metrics_specs with the fixed model name.
Parameters:

- eval_config: Original eval config.
- maybe_add_baseline: DEPRECATED. True to add a baseline ModelSpec to the config as a copy of the candidate ModelSpec that should already be present. This is only applied if a single ModelSpec already exists in the config and that spec doesn't have a name associated with it. When applied, the model specs will use the names tfma.CANDIDATE_KEY and tfma.BASELINE_KEY. Only one of maybe_add_baseline or maybe_remove_baseline should be used.
- maybe_remove_baseline: DEPRECATED. True to remove a baseline ModelSpec from the config if it already exists. Removal of the baseline also removes any change thresholds. Only one of maybe_add_baseline or maybe_remove_baseline should be used.
- has_baseline: True to add a baseline ModelSpec to the config as a copy of the candidate ModelSpec that should already be present. This is only applied if a single ModelSpec already exists in the config and that spec doesn't have a name associated with it. When applied, the model specs will use the names tfma.CANDIDATE_KEY and tfma.BASELINE_KEY. False to remove a baseline ModelSpec from the config if it already exists. Removal of the baseline also removes any change thresholds. Only one of has_baseline or maybe_remove_baseline should be used.
- rubber_stamp: True if this model is being rubber stamped. When a model is rubber stamped, diff thresholds will be ignored if an associated baseline model is not passed.

Raises:

- RuntimeError: On missing baseline model for non-rubber-stamp cases.
Source code in tensorflow_model_analysis/utils/config_util.py
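The add/remove-baseline behavior can be sketched on plain dicts standing in for ModelSpec protos. The constant values below are assumptions about tfma.CANDIDATE_KEY and tfma.BASELINE_KEY, and the function is a simplified illustration, not the real config update logic:

```python
# Assumed values; TFMA defines tfma.CANDIDATE_KEY and tfma.BASELINE_KEY.
CANDIDATE_KEY, BASELINE_KEY = 'candidate', 'baseline'

def apply_baseline_defaults(model_specs, has_baseline):
    """Toy sketch of steps (a) and (b) on dict-based model specs."""
    if has_baseline:
        if len(model_specs) == 1 and not model_specs[0].get('name'):
            # Add the baseline as a copy of the unnamed candidate spec.
            candidate = dict(model_specs[0], name=CANDIDATE_KEY)
            baseline = dict(model_specs[0], name=BASELINE_KEY,
                            is_baseline=True)
            return [candidate, baseline]
        return model_specs
    # No baseline: drop any baseline spec (in real TFMA this also
    # removes the associated change thresholds).
    return [s for s in model_specs if not s.get('is_baseline')]

specs = apply_baseline_defaults([{'signature_name': 'serving'}], True)
print([s['name'] for s in specs])  # ['candidate', 'baseline']
```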
verify_and_update_eval_shared_models
¶
verify_and_update_eval_shared_models(
eval_shared_model: Optional[
MaybeMultipleEvalSharedModels
],
) -> Optional[List[EvalSharedModel]]
Verifies eval shared models and normalizes them to produce a single list.
The output is normalized such that if a list or dict contains a single entry, the model name will always be empty.
Parameters:

- eval_shared_model: None, a single model, a list of models, or a dict of models keyed by model name.

Returns:

- A list of models, or None.

Raises:

- ValueError: If a dict is passed and the keys don't match the model names, or a multi-item list is passed without model names.
Source code in tensorflow_model_analysis/utils/model_util.py
verify_eval_config
¶
Verifies eval config.