Class RMSProp
java.lang.Object
org.tensorflow.framework.optimizers.Optimizer
org.tensorflow.framework.optimizers.RMSProp
Optimizer that implements the RMSProp algorithm.
The gist of RMSprop is to:
- Maintain a moving (discounted) average of the square of gradients
- Divide the gradient by the root of this average
This implementation of RMSprop uses plain momentum, not Nesterov momentum.
The centered version additionally maintains a moving average of the gradients, and uses that average to estimate the variance.
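The two steps above, combined with plain momentum, can be sketched in standalone Java. This is an illustrative sketch only, not the TensorFlow Java implementation; the parameter names (learningRate, decay, momentum, epsilon) mirror this class's constructors, and the slot layout is an assumption made for the example.

```java
// Sketch of one RMSProp step with plain (non-Nesterov) momentum.
// Illustrative only; not the TensorFlow Java implementation.
public final class RmsPropStepSketch {
    /** Returns the updated variable after one step; slots[0] = mean square, slots[1] = momentum. */
    static float step(float var, float grad, float[] slots,
                      float learningRate, float decay, float momentum, float epsilon) {
        // 1. Maintain a moving (discounted) average of the square of the gradient.
        slots[0] = decay * slots[0] + (1 - decay) * grad * grad;
        // 2. Divide the gradient by the root of this average, folding in plain momentum.
        slots[1] = momentum * slots[1] + learningRate * grad / (float) Math.sqrt(slots[0] + epsilon);
        return var - slots[1];
    }

    public static void main(String[] args) {
        float[] slots = new float[2];          // fresh slots, initialized to zero
        float var = 1.0f;
        var = step(var, 0.5f, slots, 0.001f, 0.9f, 0.0f, 1e-7f);
        System.out.println(var);               // slightly below 1.0
    }
}
```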
Nested Class Summary
Nested classes/interfaces inherited from class Optimizer
Optimizer.GradAndVar<T>, Optimizer.Options
Field Summary
Fields:
- static final boolean CENTERED_DEFAULT
- static final float DECAY_DEFAULT
- static final float EPSILON_DEFAULT
- static final float LEARNING_RATE_DEFAULT
- static final String MG
- static final String MOMENTUM
- static final float MOMENTUM_DEFAULT
- static final String RMS

Fields inherited from class Optimizer:
globals, graph, tf, VARIABLE_V2
Constructor Summary
Constructors:
- RMSProp(Graph graph): Creates an RMSProp Optimizer
- RMSProp(Graph graph, float learningRate): Creates an RMSProp Optimizer
- RMSProp(Graph graph, float learningRate, float decay, float momentum, float epsilon, boolean centered): Creates an RMSProp Optimizer
- RMSProp(Graph graph, String name, float learningRate): Creates an RMSProp Optimizer
- RMSProp(Graph graph, String name, float learningRate, float decay, float momentum, float epsilon, boolean centered): Creates an RMSProp Optimizer
Method Summary
Methods:
- applyDense(Ops deps, Output<T> gradient, Output<T> variable): Generates the gradient update operations for the specific variable and gradient.
- protected void createSlots(List<Output<? extends TType>> variables): Performs a no-op slot creation method.
- getOptimizerName(): Get the name of the optimizer.
- toString()

Methods inherited from class Optimizer:
applyGradients, computeGradients, createName, createSlot, finish, getSlot, getTF, minimize, minimize, prepare
-
Field Details
-
- LEARNING_RATE_DEFAULT
  public static final float LEARNING_RATE_DEFAULT
- DECAY_DEFAULT
  public static final float DECAY_DEFAULT
- MOMENTUM_DEFAULT
  public static final float MOMENTUM_DEFAULT
- EPSILON_DEFAULT
  public static final float EPSILON_DEFAULT
- CENTERED_DEFAULT
  public static final boolean CENTERED_DEFAULT
- RMS
  public static final String RMS
- MG
  public static final String MG
- MOMENTUM
  public static final String MOMENTUM
Constructor Details
-
RMSProp
public RMSProp(Graph graph)
Creates an RMSProp Optimizer
Parameters:
- graph - the TensorFlow Graph
-
RMSProp
public RMSProp(Graph graph, float learningRate)
Creates an RMSProp Optimizer
Parameters:
- graph - the TensorFlow Graph
- learningRate - the learning rate
-
RMSProp
public RMSProp(Graph graph, float learningRate, float decay, float momentum, float epsilon, boolean centered) Creates an RMSPRrop Optimizer- Parameters:
graph- the TensorFlow GraphlearningRate- the learning ratedecay- Discounting factor for the history/coming gradient. Defaults to 0.9.momentum- the acceleration factor, default is 0.epsilon- A small constant for numerical stabilitycentered- Iftrue, gradients are normalized by the estimated variance of the gradient; iffalse, by the uncentered second moment. Setting this totruemay help with training, but is slightly more expensive in terms of computation and memory. Defaults tofalse.
-
RMSProp
public RMSProp(Graph graph, String name, float learningRate)
Creates an RMSProp Optimizer
Parameters:
- graph - the TensorFlow Graph
- name - the name of this Optimizer
- learningRate - the learning rate
-
RMSProp
public RMSProp(Graph graph, String name, float learningRate, float decay, float momentum, float epsilon, boolean centered)
Creates an RMSProp Optimizer
Parameters:
- graph - the TensorFlow Graph
- name - the name of this Optimizer. Defaults to "RMSProp".
- learningRate - the learning rate
- decay - discounting factor for the history/coming gradient. Defaults to 0.9.
- momentum - the acceleration factor. Defaults to 0.
- epsilon - a small constant for numerical stability
- centered - if true, gradients are normalized by the estimated variance of the gradient; if false, by the uncentered second moment. Setting this to true may help with training, but is slightly more expensive in terms of computation and memory. Defaults to false.
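The effect of the centered flag described above can be sketched in standalone Java: the centered form subtracts the squared moving average of the gradients from the mean square, yielding a variance estimate, and so produces a smaller denominator. This is an illustrative sketch under assumed names (ms for the mean-square average, mg for the gradient average), not the TensorFlow Java implementation.

```java
// Sketch of the centered vs. uncentered denominator in the RMSProp update.
// Illustrative only; names ms and mg are assumptions for this example.
public final class CenteredDenominatorSketch {
    static float denominator(float ms, float mg, float epsilon, boolean centered) {
        // ms: moving average of squared gradients; mg: moving average of gradients
        float secondMoment = centered ? ms - mg * mg : ms;  // centered: variance estimate
        return (float) Math.sqrt(secondMoment + epsilon);
    }

    public static void main(String[] args) {
        float decay = 0.9f, grad = 0.5f;
        float ms = (1 - decay) * grad * grad;  // 0.025 after one step from zero
        float mg = (1 - decay) * grad;         // 0.05 after one step from zero
        System.out.println(denominator(ms, mg, 1e-7f, false));  // uncentered
        System.out.println(denominator(ms, mg, 1e-7f, true));   // centered, smaller
    }
}
```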
-
-
Method Details
-
createSlots
Performs a no-op slot creation method.
Overrides:
createSlots in class Optimizer
Parameters:
- variables - the variables to create slots for.
-
applyDense
Generates the gradient update operations for the specific variable and gradient.
Specified by:
applyDense in class Optimizer
Type Parameters:
- T - the type of the variable.
Parameters:
- gradient - the gradient to use.
- variable - the variable to update.
Returns:
- an operand which applies the desired optimizer update to the variable.
-
toString
-
getOptimizerName
Get the name of the optimizer.
Specified by:
getOptimizerName in class Optimizer
Returns:
- the optimizer name.
-