Class Ftrl

java.lang.Object
org.tensorflow.framework.optimizers.Optimizer
org.tensorflow.framework.optimizers.Ftrl

public class Ftrl extends Optimizer
Optimizer that implements the FTRL algorithm.

This version supports both online L2 (the L2 penalty given in the original FTRL paper) and shrinkage-type L2 (the addition of an L2 penalty to the loss function).
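The update this optimizer performs can be illustrated with a plain-Java scalar sketch. This is a hypothetical helper, not the class's actual code (the real optimizer emits graph operations that apply the same formulas element-wise); it follows the standard FTRL-Proximal update with shrinkage, and shows where each constructor parameter enters:

```java
// Hypothetical scalar sketch of the FTRL-Proximal update for a single weight.
// Not part of the library; illustrates the formulas the optimizer applies.
final class FtrlScalar {
    float accum;        // running sum of squared gradients
    float linear = 0f;  // FTRL linear term
    float weight = 0f;  // the variable being optimized
    final float lr, lrPower, l1, l2, l2Shrinkage;

    FtrlScalar(float initialAccumulatorValue, float lr, float lrPower,
               float l1, float l2, float l2Shrinkage) {
        this.accum = initialAccumulatorValue;
        this.lr = lr;
        this.lrPower = lrPower;
        this.l1 = l1;
        this.l2 = l2;
        this.l2Shrinkage = l2Shrinkage;
    }

    void apply(float grad) {
        // Shrinkage-type L2 is a magnitude penalty: it enters through the gradient.
        float gradWithShrinkage = grad + 2f * l2Shrinkage * weight;
        float accumNew = accum + grad * grad;
        // With lrPower = -0.5 this yields the usual 1/sqrt(accum) schedule.
        float sigma = (float) ((Math.pow(accumNew, -lrPower) - Math.pow(accum, -lrPower)) / lr);
        linear += gradWithShrinkage - sigma * weight;
        // Online L2 is a stabilization penalty: it enters the quadratic term.
        float quadratic = (float) (1.0 / (Math.pow(accumNew, lrPower) * lr)) + 2f * l2;
        // L1 induces sparsity: small linear terms snap the weight to exactly zero.
        weight = Math.abs(linear) > l1
                ? (Math.signum(linear) * l1 - linear) / quadratic
                : 0f;
        accum = accumNew;
    }
}
```

Note how a sufficiently large l1Strength keeps the weight at exactly zero, which is the sparsity property FTRL is typically chosen for.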

  • Field Details

  • Constructor Details

    • Ftrl

      public Ftrl(Graph graph)
Creates an Ftrl Optimizer
      Parameters:
      graph - the TensorFlow Graph
    • Ftrl

      public Ftrl(Graph graph, String name)
Creates an Ftrl Optimizer
      Parameters:
      graph - the TensorFlow Graph
      name - the name of this Optimizer
    • Ftrl

      public Ftrl(Graph graph, float learningRate)
Creates an Ftrl Optimizer
      Parameters:
      graph - the TensorFlow Graph
      learningRate - the learning rate
    • Ftrl

      public Ftrl(Graph graph, String name, float learningRate)
Creates an Ftrl Optimizer
      Parameters:
      graph - the TensorFlow Graph
      name - the name of this Optimizer
      learningRate - the learning rate
    • Ftrl

      public Ftrl(Graph graph, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)
Creates an Ftrl Optimizer
      Parameters:
      graph - the TensorFlow Graph
      learningRate - the learning rate
      learningRatePower - Controls how the learning rate decreases during training. Use zero for a fixed learning rate.
      initialAccumulatorValue - The starting value for accumulators. Only zero or positive values are allowed.
      l1Strength - the L1 Regularization strength, must be greater than or equal to zero.
      l2Strength - the L2 Regularization strength, must be greater than or equal to zero.
l2ShrinkageRegularizationStrength - This differs from the L2 above in that the L2 above is a stabilization penalty, whereas this L2 shrinkage is a magnitude penalty; must be greater than or equal to zero.
      Throws:
IllegalArgumentException - if initialAccumulatorValue, l1Strength, l2Strength, or l2ShrinkageRegularizationStrength is less than 0.0, or if learningRatePower is greater than 0.0.
    • Ftrl

      public Ftrl(Graph graph, String name, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)
Creates an Ftrl Optimizer
      Parameters:
      graph - the TensorFlow Graph
      name - the name of this Optimizer
      learningRate - the learning rate
      learningRatePower - Controls how the learning rate decreases during training. Use zero for a fixed learning rate.
      initialAccumulatorValue - The starting value for accumulators. Only zero or positive values are allowed.
      l1Strength - the L1 Regularization strength, must be greater than or equal to zero.
      l2Strength - the L2 Regularization strength, must be greater than or equal to zero.
l2ShrinkageRegularizationStrength - This differs from the L2 above in that the L2 above is a stabilization penalty, whereas this L2 shrinkage is a magnitude penalty; must be greater than or equal to zero.
      Throws:
IllegalArgumentException - if initialAccumulatorValue, l1Strength, l2Strength, or l2ShrinkageRegularizationStrength is less than 0.0, or if learningRatePower is greater than 0.0.
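The preconditions in the Throws clauses above can be sketched as plain Java. This is a hypothetical helper mirroring the documented checks, not the class's actual validation code:

```java
// Hypothetical validation mirroring the documented constructor preconditions.
final class FtrlArgs {
    static void check(float learningRatePower, float initialAccumulatorValue,
                      float l1Strength, float l2Strength,
                      float l2ShrinkageRegularizationStrength) {
        if (initialAccumulatorValue < 0f)
            throw new IllegalArgumentException("initialAccumulatorValue must be >= 0");
        if (l1Strength < 0f)
            throw new IllegalArgumentException("l1Strength must be >= 0");
        if (l2Strength < 0f)
            throw new IllegalArgumentException("l2Strength must be >= 0");
        if (l2ShrinkageRegularizationStrength < 0f)
            throw new IllegalArgumentException(
                    "l2ShrinkageRegularizationStrength must be >= 0");
        // Note the asymmetry: the regularization strengths must be non-negative,
        // while the learning rate power must be non-positive.
        if (learningRatePower > 0f)
            throw new IllegalArgumentException("learningRatePower must be <= 0");
    }
}
```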
  • Method Details

    • createSlots

      protected void createSlots(List<Output<? extends TType>> variables)
A no-op slot-creation method.
      Overrides:
      createSlots in class Optimizer
      Parameters:
      variables - The variables to create slots for.
    • applyDense

      protected <T extends TType> Op applyDense(Ops deps, Output<T> gradient, Output<T> variable)
      Generates the gradient update operations for the specific variable and gradient.
      Specified by:
      applyDense in class Optimizer
      Type Parameters:
      T - The type of the variable.
      Parameters:
deps - the Ops used to build the update operations
      gradient - The gradient to use.
      variable - The variable to update.
      Returns:
      An operand which applies the desired optimizer update to the variable.
    • getOptimizerName

      public String getOptimizerName()
Gets the name of the optimizer.
      Specified by:
      getOptimizerName in class Optimizer
      Returns:
      The optimizer name.