Class Nadam

java.lang.Object
org.tensorflow.framework.optimizers.Optimizer
org.tensorflow.framework.optimizers.Nadam

public class Nadam extends Optimizer
Optimizer that implements the NAdam algorithm.

Much like Adam is essentially RMSprop with momentum, Nadam is Adam with Nesterov momentum.
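
For context, a minimal end-to-end sketch of driving this optimizer through tensorflow-java's Graph/Session API (the surrounding calls, such as tf.variable and tf.init, belong to the wider library, not to this class, and the training loop is illustrative only):

  import org.tensorflow.Graph;
  import org.tensorflow.Operand;
  import org.tensorflow.Session;
  import org.tensorflow.framework.optimizers.Nadam;
  import org.tensorflow.op.Op;
  import org.tensorflow.op.Ops;
  import org.tensorflow.op.core.Variable;
  import org.tensorflow.types.TFloat32;

  try (Graph graph = new Graph()) {
    Ops tf = Ops.create(graph);

    // A trainable scalar initialized to 5.0 and the quadratic loss (x - 3)^2.
    Variable<TFloat32> x = tf.variable(tf.constant(5.0f));
    Operand<TFloat32> loss = tf.math.square(tf.math.sub(x, tf.constant(3.0f)));

    // Build the Nadam update ops with the default hyperparameters.
    Nadam optimizer = new Nadam(graph);
    Op minimize = optimizer.minimize(loss);

    try (Session session = new Session(graph)) {
      session.run(tf.init());   // run the registered variable initializers
      for (int step = 0; step < 100; step++) {
        session.run(minimize);  // one Nadam update per call
      }
    }
  }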

  • Constructor Details

    • Nadam

      public Nadam(Graph graph)
      Creates a Nadam Optimizer
      Parameters:
      graph - the TensorFlow graph
    • Nadam

      public Nadam(Graph graph, float learningRate)
      Creates a Nadam Optimizer
      Parameters:
      graph - the TensorFlow graph
      learningRate - the learning rate, defaults to 0.001
    • Nadam

      public Nadam(Graph graph, float learningRate, float betaOne, float betaTwo, float epsilon)
      Creates a Nadam Optimizer
      Parameters:
      graph - the TensorFlow graph
      learningRate - the learning rate, defaults to 0.001
      betaOne - The exponential decay rate for the 1st moment estimates. Default is 0.9.
      betaTwo - The exponential decay rate for the 2nd moment estimates. Default is 0.999.
      epsilon - A small constant for numerical stability. Default is 1e-8.
    • Nadam

      public Nadam(Graph graph, String name, float learningRate)
      Creates a Nadam Optimizer
      Parameters:
      graph - the TensorFlow graph
      name - the name for this Optimizer, defaults to "Nadam"
      learningRate - the learning rate, defaults to 0.001
    • Nadam

      public Nadam(Graph graph, String name, float learningRate, float betaOne, float betaTwo, float epsilon)
      Creates a Nadam Optimizer (see the usage sketch following this constructor list)
      Parameters:
      graph - the TensorFlow graph
      name - the name for this Optimizer, defaults to "Nadam"
      learningRate - the learning rate, defaults to 0.001
      betaOne - The exponential decay rate for the 1st moment estimates. Default is 0.9.
      betaTwo - The exponential decay rate for the 2nd moment estimates. Default is 0.999.
      epsilon - A small constant for numerical stability. Default is 1e-8.
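
    As referenced above, a short sketch of the fully specified constructor; the values shown are simply the documented defaults, passed explicitly (graph is assumed to be an existing org.tensorflow.Graph):

      Nadam optimizer = new Nadam(
          graph,     // the TensorFlow graph
          "Nadam",   // the optimizer name (its default)
          0.001f,    // learningRate
          0.9f,      // betaOne: decay rate for the 1st moment estimates
          0.999f,    // betaTwo: decay rate for the 2nd moment estimates
          1e-8f);    // epsilon: small constant for numerical stability
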
  • Method Details

    • createSlots

      protected void createSlots(List<Output<? extends TType>> variables)
      No-op slot creation method.
      Overrides:
      createSlots in class Optimizer
      Parameters:
      variables - The variables to create slots for.
    • prepare

      protected Optional<Op> prepare(String scopeName)
      Returns a No-op prepare.
      Overrides:
      prepare in class Optimizer
      Parameters:
      scopeName - The scope name to use for any variable creations.
      Returns:
      a No-op to prepare this optimizer, or empty if none.
    • applyDense

      protected <T extends TType> Op applyDense(Ops deps, Output<T> gradient, Output<T> variable)
      Generates the gradient update operations for the specific variable and gradient.
      Specified by:
      applyDense in class Optimizer
      Type Parameters:
      T - The type of the variable.
      Parameters:
      deps - The Ops used to build the update operations.
      gradient - The gradient to use.
      variable - The variable to update.
      Returns:
      An operand which applies the desired optimizer update to the variable.
    • finish

      protected Op finish(List<Op> updateOperations, String name)
      Gathers the update operations into a single op that can be used as a run target; a sketch of this pattern appears at the end of this page.

      Adds the betaOne, betaTwo, and mu updates to the end of the updates list.

      Overrides:
      finish in class Optimizer
      Parameters:
      updateOperations - The update operations.
      name - The name of the run target.
      Returns:
      A NoOp with a control dependency on each update operation.
    • getOptimizerName

      public String getOptimizerName()
      Gets the name of the optimizer.
      Specified by:
      getOptimizerName in class Optimizer
      Returns:
      The optimizer name.
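
The run-target pattern that finish builds can be illustrated as follows (a conceptual sketch of the base Optimizer behavior using tensorflow-java's Ops.withControlDependencies and noOp, not the exact source of this class):

  // A NoOp whose control dependencies are all of the update operations:
  // running this single target executes every pending update.
  Op target = tf.withName(name)
                .withControlDependencies(updateOperations)
                .noOp();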