Improving Model Quality With TensorFlow Model Analysis

Introduction

As you tweak your model during development, you need to check whether your changes are improving your model. Just checking accuracy may not be enough. For example, if you have a classifier for a problem in which 95% of your instances are positive, you may be able to improve accuracy by simply always predicting positive, but you won't have a very robust classifier.
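To make this concrete, here is a minimal, self-contained sketch in plain Python (not TFMA; the 95/5 label split is synthetic) showing how an always-positive classifier earns high accuracy while failing completely on the negative class:

```python
# Synthetic, imbalanced labels: 95% positive (1), 5% negative (0).
labels = [1] * 95 + [0] * 5

# A degenerate "classifier" that always predicts the positive class.
predictions = [1] * len(labels)

# Accuracy: fraction of predictions that match the label.
correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)

# Recall on the negative class: how many negatives were caught? None.
true_negatives = sum(p == 0 and y == 0 for p, y in zip(predictions, labels))
negative_recall = true_negatives / labels.count(0)

print(f"accuracy: {accuracy:.2f}")                # 0.95 -- looks great
print(f"negative recall: {negative_recall:.2f}")  # 0.00 -- useless
```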

Overview

The goal of TensorFlow Model Analysis (TFMA) is to provide a mechanism for model evaluation in TFX. TFMA lets you perform model evaluations in the TFX pipeline and view the resulting metrics and plots in a Jupyter notebook. Specifically, it can provide the following (see the sketch after this list):

  • Metrics computed on the entire training and holdout datasets, as well as on next-day evaluations
  • Tracking metrics over time
  • Model performance on different feature slices
  • Model validation to ensure that models maintain consistent performance
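
As a rough illustration of how these capabilities fit together in standalone TFMA, the sketch below configures metrics, slicing on a hypothetical trip_start_hour feature, and a validation threshold, then runs the analysis. The paths, the label key, and the feature name are placeholders, not values from this guide:

```python
import tensorflow_model_analysis as tfma

# Configure what to evaluate: the label key, which metrics to compute,
# and which feature slices to break the metrics down by.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],
    metrics_specs=[
        tfma.MetricsSpec(metrics=[
            tfma.MetricConfig(class_name='BinaryAccuracy'),
            # AUC with a validation threshold: the model fails
            # validation if it falls below the lower bound.
            tfma.MetricConfig(
                class_name='AUC',
                threshold=tfma.MetricThreshold(
                    value_threshold=tfma.GenericValueThreshold(
                        lower_bound={'value': 0.7}))),
        ])
    ],
    slicing_specs=[
        tfma.SlicingSpec(),                                   # overall
        tfma.SlicingSpec(feature_keys=['trip_start_hour']),   # per slice
    ],
)

# Point TFMA at a previously exported SavedModel and evaluation data
# (placeholder paths).
eval_shared_model = tfma.default_eval_shared_model(
    eval_saved_model_path='/path/to/saved_model',
    eval_config=eval_config)

result = tfma.run_model_analysis(
    eval_shared_model=eval_shared_model,
    eval_config=eval_config,
    data_location='/path/to/eval_data.tfrecord',
    output_path='/path/to/tfma_output')

# In a Jupyter notebook, render the metrics broken down by slice.
tfma.view.render_slicing_metrics(
    result, slicing_spec=tfma.SlicingSpec(feature_keys=['trip_start_hour']))
```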

Next Steps

Try our TFMA tutorial.

Check out our GitHub page for details on the supported metrics and plots and the associated notebook visualizations.

See the installation and getting started guides for information and examples on how to get set up in a standalone pipeline. Recall that TFMA is also used within the Evaluator component in TFX, so these resources will be useful for getting started in TFX as well.
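
For example, a sketch of wiring an EvalConfig into a TFX pipeline through the Evaluator component might look like the following; example_gen and trainer stand for assumed upstream components and are not defined here:

```python
import tensorflow_model_analysis as tfma
from tfx.components import Evaluator

# Reuse the kind of EvalConfig shown above; 'label' is a placeholder key.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='label')],
    metrics_specs=[tfma.MetricsSpec(metrics=[
        tfma.MetricConfig(class_name='BinaryAccuracy')])],
    slicing_specs=[tfma.SlicingSpec()],
)

# example_gen and trainer are assumed upstream pipeline components.
evaluator = Evaluator(
    examples=example_gen.outputs['examples'],
    model=trainer.outputs['model'],
    eval_config=eval_config)
```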