Class Ftrl
java.lang.Object
org.tensorflow.framework.optimizers.Optimizer
org.tensorflow.framework.optimizers.Ftrl
Optimizer that implements the FTRL algorithm.
This version supports both online L2 (the L2 penalty described in the original FTRL paper) and shrinkage-type L2 (the addition of an L2 penalty to the loss function).
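To illustrate what the optimizer's two slots (ACCUMULATOR and LINEAR_ACCUMULATOR) track, here is a plain-Java sketch of the per-coordinate FTRL-Proximal update. The FtrlSketch class and its update method are hypothetical and not part of this library; the formula follows the usual FTRL-Proximal form (gradient-squared accumulator n, linear accumulator z), not a quotation of this implementation.

```java
// Illustrative only: a hypothetical per-coordinate FTRL-Proximal update.
// Field names mirror this class's constructor parameters for readability.
public class FtrlSketch {
    private double z;            // linear accumulator (the LINEAR_ACCUMULATOR slot)
    private double n;            // gradient-squared accumulator (the ACCUMULATOR slot)
    private final double alpha;  // learningRate
    private final double power;  // learningRatePower; -0.5 gives lr / sqrt(n)
    private final double l1;     // l1Strength
    private final double l2;     // l2Strength

    public FtrlSketch(double learningRate, double learningRatePower,
                      double initialAccumulatorValue, double l1Strength, double l2Strength) {
        this.alpha = learningRate;
        this.power = learningRatePower;
        this.n = initialAccumulatorValue;
        this.l1 = l1Strength;
        this.l2 = l2Strength;
    }

    /** Applies one gradient g to weight w and returns the updated weight. */
    public double update(double g, double w) {
        double nNew = n + g * g;
        // sigma: per-step change in the effective learning-rate denominator
        double sigma = (Math.pow(nNew, -power) - Math.pow(n, -power)) / alpha;
        z += g - sigma * w;
        n = nNew;
        if (Math.abs(z) <= l1) {
            return 0.0;  // L1 holds weakly-supported coordinates at exactly zero
        }
        double quadratic = Math.pow(n, -power) / alpha + 2.0 * l2;
        return -(z - Math.signum(z) * l1) / quadratic;
    }
}
```

With l1Strength greater than zero, any coordinate whose accumulated signal stays at or below the L1 threshold is held at exactly zero, which is the sparsity property FTRL is typically chosen for.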
Nested Class Summary
Nested classes/interfaces inherited from class Optimizer:
Optimizer.GradAndVar<T>, Optimizer.Options
Field Summary
Fields:
static final String ACCUMULATOR
static final float INITIAL_ACCUMULATOR_VALUE_DEFAULT
static final float L1STRENGTH_DEFAULT
static final float L2_SHRINKAGE_REGULARIZATION_STRENGTH_DEFAULT
static final float L2STRENGTH_DEFAULT
static final float LEARNING_RATE_DEFAULT
static final float LEARNING_RATE_POWER_DEFAULT
static final String LINEAR_ACCUMULATOR
Fields inherited from class Optimizer:
globals, graph, tf, VARIABLE_V2
Constructor Summary
Constructors:
Ftrl(Graph graph)
    Creates a Ftrl Optimizer
Ftrl(Graph graph, float learningRate)
    Creates a Ftrl Optimizer
Ftrl(Graph graph, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)
    Creates a Ftrl Optimizer
Ftrl(Graph graph, String name)
    Creates a Ftrl Optimizer
Ftrl(Graph graph, String name, float learningRate)
    Creates a Ftrl Optimizer
Ftrl(Graph graph, String name, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)
    Creates a Ftrl Optimizer
Method Summary
applyDense(Ops deps, Output<T> gradient, Output<T> variable)
    Generates the gradient update operations for the specific variable and gradient.
protected void createSlots(List<Output<? extends TType>> variables)
    Performs a no-op slot creation method.
getOptimizerName()
    Gets the name of the optimizer.
Methods inherited from class Optimizer:
applyGradients, computeGradients, createName, createSlot, finish, getSlot, getTF, minimize, minimize, prepare
Field Details

ACCUMULATOR
public static final String ACCUMULATOR

LINEAR_ACCUMULATOR
public static final String LINEAR_ACCUMULATOR

LEARNING_RATE_DEFAULT
public static final float LEARNING_RATE_DEFAULT

LEARNING_RATE_POWER_DEFAULT
public static final float LEARNING_RATE_POWER_DEFAULT

INITIAL_ACCUMULATOR_VALUE_DEFAULT
public static final float INITIAL_ACCUMULATOR_VALUE_DEFAULT

L1STRENGTH_DEFAULT
public static final float L1STRENGTH_DEFAULT

L2STRENGTH_DEFAULT
public static final float L2STRENGTH_DEFAULT

L2_SHRINKAGE_REGULARIZATION_STRENGTH_DEFAULT
public static final float L2_SHRINKAGE_REGULARIZATION_STRENGTH_DEFAULT
Constructor Details
Ftrl
public Ftrl(Graph graph)
Creates a Ftrl Optimizer
Parameters:
  graph - the TensorFlow Graph

Ftrl
public Ftrl(Graph graph, String name)
Creates a Ftrl Optimizer
Parameters:
  graph - the TensorFlow Graph
  name - the name of this Optimizer

Ftrl
public Ftrl(Graph graph, float learningRate)
Creates a Ftrl Optimizer
Parameters:
  graph - the TensorFlow Graph
  learningRate - the learning rate

Ftrl
public Ftrl(Graph graph, String name, float learningRate)
Creates a Ftrl Optimizer
Parameters:
  graph - the TensorFlow Graph
  name - the name of this Optimizer
  learningRate - the learning rate

Ftrl
public Ftrl(Graph graph, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)
Creates a Ftrl Optimizer
Parameters:
  graph - the TensorFlow Graph
  learningRate - the learning rate
  learningRatePower - Controls how the learning rate decreases during training. Use zero for a fixed learning rate.
  initialAccumulatorValue - The starting value for accumulators. Only zero or positive values are allowed.
  l1Strength - the L1 regularization strength; must be greater than or equal to zero.
  l2Strength - the L2 regularization strength; must be greater than or equal to zero.
  l2ShrinkageRegularizationStrength - Differs from l2Strength above in that the L2 above is a stabilization penalty, whereas this L2 shrinkage is a magnitude penalty; must be greater than or equal to zero.
Throws:
  IllegalArgumentException - if initialAccumulatorValue, l1Strength, l2Strength, or l2ShrinkageRegularizationStrength is less than 0.0, or learningRatePower is greater than 0.0.
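The constraints in the Throws clause above can be sketched as a standalone precondition check. The FtrlArgs helper below is hypothetical and not part of this library (the real constructors perform their own validation); it only restates the documented conditions:

```java
// Hypothetical helper mirroring the documented constructor checks; the real
// Ftrl constructors throw IllegalArgumentException under the same conditions.
public class FtrlArgs {
    public static void validate(float learningRatePower, float initialAccumulatorValue,
                                float l1Strength, float l2Strength,
                                float l2ShrinkageRegularizationStrength) {
        if (initialAccumulatorValue < 0.0f) {
            throw new IllegalArgumentException("initialAccumulatorValue must be >= 0");
        }
        if (l1Strength < 0.0f) {
            throw new IllegalArgumentException("l1Strength must be >= 0");
        }
        if (l2Strength < 0.0f) {
            throw new IllegalArgumentException("l2Strength must be >= 0");
        }
        if (l2ShrinkageRegularizationStrength < 0.0f) {
            throw new IllegalArgumentException("l2ShrinkageRegularizationStrength must be >= 0");
        }
        if (learningRatePower > 0.0f) {
            // positive powers would make the learning rate grow during training
            throw new IllegalArgumentException("learningRatePower must be <= 0");
        }
    }
}
```

Note that learningRatePower is the one parameter bounded above rather than below: zero gives a fixed learning rate, and negative values give a decaying one.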
Ftrl
public Ftrl(Graph graph, String name, float learningRate, float learningRatePower, float initialAccumulatorValue, float l1Strength, float l2Strength, float l2ShrinkageRegularizationStrength)
Creates a Ftrl Optimizer
Parameters:
  graph - the TensorFlow Graph
  name - the name of this Optimizer
  learningRate - the learning rate
  learningRatePower - Controls how the learning rate decreases during training. Use zero for a fixed learning rate.
  initialAccumulatorValue - The starting value for accumulators. Only zero or positive values are allowed.
  l1Strength - the L1 regularization strength; must be greater than or equal to zero.
  l2Strength - the L2 regularization strength; must be greater than or equal to zero.
  l2ShrinkageRegularizationStrength - Differs from l2Strength above in that the L2 above is a stabilization penalty, whereas this L2 shrinkage is a magnitude penalty; must be greater than or equal to zero.
Throws:
  IllegalArgumentException - if initialAccumulatorValue, l1Strength, l2Strength, or l2ShrinkageRegularizationStrength is less than 0.0, or learningRatePower is greater than 0.0.

Method Details
createSlots
protected void createSlots(List<Output<? extends TType>> variables)
Performs a no-op slot creation method.
Overrides: createSlots in class Optimizer
Parameters:
  variables - The variables to create slots for.

applyDense
applyDense(Ops deps, Output<T> gradient, Output<T> variable)
Generates the gradient update operations for the specific variable and gradient.
Specified by: applyDense in class Optimizer
Type Parameters:
  T - The type of the variable.
Parameters:
  gradient - The gradient to use.
  variable - The variable to update.
Returns:
  An operand which applies the desired optimizer update to the variable.

getOptimizerName
public String getOptimizerName()
Gets the name of the optimizer.
Specified by: getOptimizerName in class Optimizer
Returns:
  The optimizer name.