Class RMSProp

java.lang.Object
org.tensorflow.framework.optimizers.Optimizer
org.tensorflow.framework.optimizers.RMSProp

public class RMSProp extends Optimizer
Optimizer that implements the RMSProp algorithm.

The gist of RMSprop is to:

  • Maintain a moving (discounted) average of the square of gradients
  • Divide the gradient by the root of this average

This implementation of RMSprop uses plain momentum, not Nesterov momentum.

The centered version additionally maintains a moving average of the gradients, and uses that average to estimate the variance.
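The two steps above can be sketched in plain Java. This is a minimal, framework-free illustration of one uncentered update for a single scalar parameter; the class and method names are assumptions for illustration, not part of the org.tensorflow API:

```java
// Minimal sketch of one uncentered RMSProp update step for a single scalar
// parameter (momentum omitted). Illustrative only; not the library's code.
public class RmsPropSketch {
    private double meanSquare = 0.0; // moving (discounted) average of grad^2

    public double update(double param, double grad,
                         double learningRate, double decay, double epsilon) {
        // Maintain a moving (discounted) average of the square of gradients...
        meanSquare = decay * meanSquare + (1.0 - decay) * grad * grad;
        // ...then divide the gradient by the root of this average.
        return param - learningRate * grad / (Math.sqrt(meanSquare) + epsilon);
    }
}
```

Because the step is scaled by the root of the running average, parameters with consistently large gradients take proportionally smaller steps.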

  • Field Details

  • Constructor Details

    • RMSProp

      public RMSProp(Graph graph)
      Creates an RMSProp Optimizer
      Parameters:
      graph - the TensorFlow Graph
    • RMSProp

      public RMSProp(Graph graph, float learningRate)
      Creates an RMSProp Optimizer
      Parameters:
      graph - the TensorFlow Graph
      learningRate - the learning rate
    • RMSProp

      public RMSProp(Graph graph, float learningRate, float decay, float momentum, float epsilon, boolean centered)
      Creates an RMSProp Optimizer
      Parameters:
      graph - the TensorFlow Graph
      learningRate - the learning rate
      decay - the discounting factor for the moving average of squared gradients. Defaults to 0.9.
      momentum - the momentum acceleration factor. Defaults to 0.
      epsilon - a small constant for numerical stability
      centered - If true, gradients are normalized by the estimated variance of the gradient; if false, by the uncentered second moment. Setting this to true may help with training, but is slightly more expensive in terms of computation and memory. Defaults to false.
    • RMSProp

      public RMSProp(Graph graph, String name, float learningRate)
      Creates an RMSProp Optimizer
      Parameters:
      graph - the TensorFlow Graph
      name - the name of this Optimizer. Defaults to "RMSProp".
      learningRate - the learning rate
    • RMSProp

      public RMSProp(Graph graph, String name, float learningRate, float decay, float momentum, float epsilon, boolean centered)
      Creates an RMSProp Optimizer
      Parameters:
      graph - the TensorFlow Graph
      name - the name of this Optimizer. Defaults to "RMSProp".
      learningRate - the learning rate
      decay - the discounting factor for the moving average of squared gradients. Defaults to 0.9.
      momentum - the momentum acceleration factor. Defaults to 0.
      epsilon - a small constant for numerical stability
      centered - If true, gradients are normalized by the estimated variance of the gradient; if false, by the uncentered second moment. Setting this to true may help with training, but is slightly more expensive in terms of computation and memory. Defaults to false.
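What the centered flag changes can be sketched as a comparison of the two denominators. This framework-free Java sketch is illustrative only (the method name and signature are assumptions, not the library's implementation):

```java
// Sketch of the denominator used by centered vs. uncentered RMSProp.
// The centered variant subtracts the square of the gradient moving average,
// turning the raw second moment E[g^2] into an estimate of the variance.
// Illustrative only; not the library's implementation.
public class CenteredRmsPropSketch {
    public static double denominator(double meanSquare, double meanGrad,
                                     boolean centered, double epsilon) {
        double secondMoment = centered
                ? meanSquare - meanGrad * meanGrad // estimated variance
                : meanSquare;                      // uncentered second moment
        return Math.sqrt(secondMoment) + epsilon;
    }
}
```

Since the estimated variance is never larger than the raw second moment, the centered denominator is smaller whenever the gradient average is nonzero, which yields slightly larger steps at the cost of tracking one extra moving average per variable.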
  • Method Details

    • createSlots

      protected void createSlots(List<Output<? extends TType>> variables)
      A no-op implementation of slot creation.
      Overrides:
      createSlots in class Optimizer
      Parameters:
      variables - The variables to create slots for.
    • applyDense

      protected <T extends TType> Op applyDense(Ops deps, Output<T> gradient, Output<T> variable)
      Generates the gradient update operations for the specific variable and gradient.
      Specified by:
      applyDense in class Optimizer
      Type Parameters:
      T - The type of the variable.
      Parameters:
      deps - the Ops instance used to build the update operations
      gradient - The gradient to use.
      variable - The variable to update.
      Returns:
      An operand which applies the desired optimizer update to the variable.
    • toString

      public String toString()
      Overrides:
      toString in class Object
    • getOptimizerName

      public String getOptimizerName()
      Gets the name of the optimizer.
      Specified by:
      getOptimizerName in class Optimizer
      Returns:
      The optimizer name.