Class Losses
-
Field Summary
Fields
Modifier and Type      Field            Description
static final int       CHANNELS_FIRST
static final int       CHANNELS_LAST
static final float     EPSILON          Default Fuzz factor.
Constructor Summary
Constructors
Losses()
Method Summary
All methods are static <T extends TNumber> and return Operand<T>.

binaryCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, float labelSmoothing)
    Computes the binary crossentropy loss between labels and predictions.
categoricalCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, float labelSmoothing, int axis)
    Computes the categorical crossentropy loss between labels and predictions.
categoricalHinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
    Computes the categorical hinge loss between labels and predictions.
cosineSimilarity(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, int[] axis)
    Computes the cosine similarity loss between labels and predictions.
hinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
    Computes the hinge loss between labels and predictions.
huber(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, float delta)
    Computes the Huber loss between labels and predictions.
kullbackLeiblerDivergence(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
    Computes the Kullback-Leibler divergence loss between labels and predictions.
l2Normalize(Ops tf, Operand<T> x, int[] axis)
    Normalizes along dimension axis using an L2 norm.
logCosh(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
    Computes the hyperbolic cosine loss between labels and predictions.
meanAbsoluteError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
    Calculates the mean absolute error between labels and predictions.
meanAbsolutePercentageError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
    Calculates the mean absolute percentage error between labels and predictions.
meanSquaredError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
    Computes the mean squared error between labels and predictions.
meanSquaredLogarithmicError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
    Calculates the mean squared logarithmic error between labels and predictions.
poisson(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
    Computes the Poisson loss between labels and predictions.
sparseCategoricalCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, int axis)
    Computes the sparse categorical crossentropy loss between labels and predictions.
squaredHinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
    Computes the squared hinge loss between labels and predictions.
-
Field Details
-
EPSILON
public static final float EPSILON
Default Fuzz factor.
-
CHANNELS_LAST
public static final int CHANNELS_LAST
-
CHANNELS_FIRST
public static final int CHANNELS_FIRST
-
Constructor Details
-
Losses
public Losses()
-
Method Details
-
meanAbsoluteError
public static <T extends TNumber> Operand<T> meanAbsoluteError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Calculates the mean absolute error between labels and predictions.
loss = reduceMean(abs(labels - predictions))
Type Parameters:
T - the data type of the predictions and result
Parameters:
tf - the TensorFlow Ops
labels - the labels
predictions - the predictions
Returns:
the mean absolute error
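The formula above can be sketched in plain Java, using double arrays in place of the TensorFlow Ops/Operand API (the class name is illustrative, not part of the library):

```java
// Illustrative sketch of loss = reduceMean(abs(labels - predictions)).
class MaeSketch {
    static double meanAbsoluteError(double[] labels, double[] predictions) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            sum += Math.abs(labels[i] - predictions[i]);
        }
        return sum / labels.length; // reduceMean over all elements
    }
}
```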
-
meanSquaredError
public static <T extends TNumber> Operand<T> meanSquaredError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the mean squared error between labels and predictions.
loss = reduceMean(square(labels - predictions))
Type Parameters:
T - the data type of the predictions and result
Parameters:
tf - the TensorFlow Ops
labels - the labels
predictions - the predictions
Returns:
the mean squared error
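A plain-Java sketch of this formula, with double arrays standing in for the TensorFlow Operand types (the class name is hypothetical):

```java
// Illustrative sketch of loss = reduceMean(square(labels - predictions)).
class MseSketch {
    static double meanSquaredError(double[] labels, double[] predictions) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            double diff = labels[i] - predictions[i];
            sum += diff * diff;
        }
        return sum / labels.length; // reduceMean over all elements
    }
}
```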
-
meanAbsolutePercentageError
public static <T extends TNumber> Operand<T> meanAbsolutePercentageError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Calculates the mean absolute percentage error between labels and predictions.
loss = 100 * reduceMean(abs((labels - predictions) / labels))
Type Parameters:
T - the data type of the predictions and result
Parameters:
tf - the TensorFlow Ops
labels - the labels
predictions - the predictions
Returns:
the mean absolute percentage error
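The same formula as a plain-Java sketch on double arrays (illustrative only; the TensorFlow implementation also guards against division by zero via the fuzz factor, which this sketch omits):

```java
// Illustrative sketch of loss = 100 * reduceMean(abs((labels - predictions) / labels)).
class MapeSketch {
    static double meanAbsolutePercentageError(double[] labels, double[] predictions) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            sum += Math.abs((labels[i] - predictions[i]) / labels[i]);
        }
        return 100.0 * sum / labels.length;
    }
}
```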
-
meanSquaredLogarithmicError
public static <T extends TNumber> Operand<T> meanSquaredLogarithmicError(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Calculates the mean squared logarithmic error between labels and predictions.
loss = reduceMean(square(log(labels + 1) - log(predictions + 1)))
Type Parameters:
T - the data type of the predictions and result
Parameters:
tf - the TensorFlow Ops
labels - the labels
predictions - the predictions
Returns:
the mean squared logarithmic error
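A plain-Java sketch of the formula (double arrays instead of Operand; illustrative class name):

```java
// Illustrative sketch of loss = reduceMean(square(log(labels + 1) - log(predictions + 1))).
class MsleSketch {
    static double meanSquaredLogarithmicError(double[] labels, double[] predictions) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            double diff = Math.log(labels[i] + 1) - Math.log(predictions[i] + 1);
            sum += diff * diff;
        }
        return sum / labels.length;
    }
}
```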
-
binaryCrossentropy
public static <T extends TNumber> Operand<T> binaryCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, float labelSmoothing)
Computes the binary crossentropy loss between labels and predictions.
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets
predictions - the predictions
fromLogits - whether to interpret predictions as a tensor of logit values
labelSmoothing - a number in the range [0, 1]. When 0, no smoothing occurs. When > 0, the loss is computed between the predicted labels and a smoothed version of the true labels, where the smoothing squeezes the labels toward 0.5. Larger values of labelSmoothing correspond to heavier smoothing.
Returns:
the binary crossentropy loss
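A minimal sketch of the probability-input case (fromLogits = false) in plain Java, assuming the standard smoothing rule labels * (1 - s) + 0.5 * s, which squeezes labels toward 0.5 as described above. The class and helper names are hypothetical:

```java
// Illustrative sketch of binary crossentropy with label smoothing on double arrays.
class BceSketch {
    // Squeeze a 0/1 label toward 0.5 (assumed smoothing rule).
    static double smooth(double label, float labelSmoothing) {
        return label * (1 - labelSmoothing) + 0.5 * labelSmoothing;
    }

    // Mean of -(y * log(p) + (1 - y) * log(1 - p)) over all elements.
    static double binaryCrossentropy(double[] labels, double[] predictions, float labelSmoothing) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            double y = smooth(labels[i], labelSmoothing);
            double p = predictions[i];
            sum += -(y * Math.log(p) + (1 - y) * Math.log(1 - p));
        }
        return sum / labels.length;
    }
}
```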
-
categoricalCrossentropy
public static <T extends TNumber> Operand<T> categoricalCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, float labelSmoothing, int axis)
Computes the categorical crossentropy loss between labels and predictions.
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets
predictions - the predictions
fromLogits - whether to interpret predictions as a tensor of logit values
labelSmoothing - a float in [0, 1]. When > 0, label values are smoothed, meaning the confidence on label values is relaxed, e.g. labelSmoothing=0.2 means that we will use a value of 0.1 for label 0 and 0.9 for label 1
axis - the dimension along which the crossentropy is computed
Returns:
the categorical crossentropy loss
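The labelSmoothing example above can be reproduced with a plain-Java sketch, assuming the usual categorical smoothing rule labels * (1 - s) + s / numClasses (for two classes this yields exactly the 0.1/0.9 values mentioned). Class and method names are illustrative:

```java
// Illustrative sketch of categorical crossentropy with label smoothing.
class CceSketch {
    // Smooth a one-hot label vector: labels * (1 - s) + s / numClasses (assumed rule).
    static double[] smooth(double[] oneHot, float s) {
        double[] out = new double[oneHot.length];
        for (int i = 0; i < oneHot.length; i++) {
            out[i] = oneHot[i] * (1 - s) + s / oneHot.length;
        }
        return out;
    }

    // Crossentropy -sum(labels * log(probs)) between a label vector and a probability vector.
    static double crossentropy(double[] labels, double[] probs) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            sum += -labels[i] * Math.log(probs[i]);
        }
        return sum;
    }
}
```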
-
categoricalHinge
public static <T extends TNumber> Operand<T> categoricalHinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the categorical hinge loss between labels and predictions.
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets, values are expected to be 0 or 1
predictions - the predictions
Returns:
the categorical hinge loss
-
cosineSimilarity
public static <T extends TNumber> Operand<T> cosineSimilarity(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, int[] axis)
Computes the cosine similarity loss between labels and predictions.
Note that the result is a number between -1 and 1, with the sign inverted relative to the mathematical definition of cosine similarity, where 1 represents similar vectors and 0 represents dissimilar vectors. Here, 0 indicates orthogonality, values closer to -1 indicate greater similarity, and values closer to 1 indicate greater dissimilarity. This makes the result usable as a loss function in a setting where you try to maximize the proximity between predictions and targets. If either labels or predictions is a zero vector, the cosine similarity is 0 regardless of the proximity between predictions and targets.
loss = -sum(l2Norm(labels) * l2Norm(predictions))
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets
predictions - the predictions
axis - axis along which to determine similarity
Returns:
the cosine similarity loss
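The sign convention and the zero-vector behavior can be sketched in plain Java over single vectors (double arrays instead of the Operand API; names are illustrative):

```java
// Illustrative sketch of loss = -sum(l2Norm(labels) * l2Norm(predictions)).
class CosineSketch {
    static double[] l2Normalize(double[] x) {
        double norm = 0.0;
        for (double v : x) norm += v * v;
        norm = Math.sqrt(norm);
        double[] out = new double[x.length];
        if (norm == 0.0) return out; // a zero vector stays zero, making the loss 0
        for (int i = 0; i < x.length; i++) out[i] = x[i] / norm;
        return out;
    }

    // Negative dot product of the two L2-normalized vectors.
    static double cosineSimilarityLoss(double[] labels, double[] predictions) {
        double[] l = l2Normalize(labels);
        double[] p = l2Normalize(predictions);
        double dot = 0.0;
        for (int i = 0; i < l.length; i++) dot += l[i] * p[i];
        return -dot;
    }
}
```

Identical directions yield -1 (maximal similarity), orthogonal vectors yield 0, and opposite directions yield 1.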
-
hinge
public static <T extends TNumber> Operand<T> hinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the hinge loss between labels and predictions.
loss = reduceMean(maximum(1 - labels * predictions, 0))
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets, values are expected to be -1 or 1. If binary (0 or 1) labels are provided, they will be converted to -1 or 1.
predictions - the predictions
Returns:
the hinge loss
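A plain-Java sketch of the formula for labels already in {-1, 1} (the 0/1-to-{-1, 1} conversion is omitted; the class name is illustrative):

```java
// Illustrative sketch of loss = reduceMean(maximum(1 - labels * predictions, 0)).
class HingeSketch {
    static double hinge(double[] labels, double[] predictions) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            sum += Math.max(1 - labels[i] * predictions[i], 0);
        }
        return sum / labels.length;
    }
}
```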
-
huber
public static <T extends TNumber> Operand<T> huber(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, float delta)
Computes the Huber loss between labels and predictions.
For each value x in error = labels - predictions:
loss = 0.5 * x^2                    if |x| <= d
loss = 0.5 * d^2 + d * (|x| - d)    if |x| > d
where d is delta.
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets
predictions - the predictions
delta - the point where the Huber loss function changes from quadratic to linear
Returns:
the Huber loss
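The piecewise definition above can be sketched in plain Java, averaging the per-element loss (double arrays instead of Operand; illustrative class name):

```java
// Illustrative sketch of the Huber loss: quadratic for |x| <= delta, linear beyond.
class HuberSketch {
    static double huber(double[] labels, double[] predictions, double delta) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            double x = labels[i] - predictions[i];
            double ax = Math.abs(x);
            sum += ax <= delta
                    ? 0.5 * x * x                                  // quadratic region
                    : 0.5 * delta * delta + delta * (ax - delta);  // linear region
        }
        return sum / labels.length;
    }
}
```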
-
kullbackLeiblerDivergence
public static <T extends TNumber> Operand<T> kullbackLeiblerDivergence(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the Kullback-Leibler divergence loss between labels and predictions.
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets
predictions - the predictions
Returns:
the Kullback-Leibler divergence loss
-
logCosh
public static <T extends TNumber> Operand<T> logCosh(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the hyperbolic cosine loss between labels and predictions.
log(cosh(x)) is approximately equal to (x ** 2) / 2 for small x and to abs(x) - log(2) for large x. This means that logCosh works mostly like the mean squared error, but will not be so strongly affected by the occasional wildly incorrect prediction.
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets
predictions - the predictions
Returns:
the hyperbolic cosine loss
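A plain-Java sketch, assuming the error term x = predictions - labels (the loss is symmetric in the sign of x, so the direction of the subtraction does not change the value). Illustrative class name:

```java
// Illustrative sketch of loss = reduceMean(log(cosh(predictions - labels))).
class LogCoshSketch {
    static double logCosh(double[] labels, double[] predictions) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            double x = predictions[i] - labels[i];
            sum += Math.log(Math.cosh(x));
        }
        return sum / labels.length;
    }
}
```

For large errors the loss grows roughly like abs(x) - log(2), i.e. linearly rather than quadratically, which is the robustness property noted above.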
-
poisson
public static <T extends TNumber> Operand<T> poisson(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the Poisson loss between labels and predictions.
The Poisson loss is the mean of the elements of the Tensor predictions - labels * log(predictions).
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets
predictions - the predictions
Returns:
the Poisson loss
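A plain-Java sketch of the formula (double arrays instead of Tensors; illustrative class name):

```java
// Illustrative sketch of loss = reduceMean(predictions - labels * log(predictions)).
class PoissonSketch {
    static double poisson(double[] labels, double[] predictions) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            sum += predictions[i] - labels[i] * Math.log(predictions[i]);
        }
        return sum / labels.length;
    }
}
```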
-
sparseCategoricalCrossentropy
public static <T extends TNumber> Operand<T> sparseCategoricalCrossentropy(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions, boolean fromLogits, int axis)
Computes the sparse categorical crossentropy loss between labels and predictions.
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets
predictions - the predictions
fromLogits - whether predictions is expected to be logits. By default, it is assumed that predictions encodes a probability distribution.
axis - the dimension along which the entropy is computed
Returns:
the sparse categorical crossentropy loss
-
squaredHinge
public static <T extends TNumber> Operand<T> squaredHinge(Ops tf, Operand<? extends TNumber> labels, Operand<T> predictions)
Computes the squared hinge loss between labels and predictions.
loss = reduceMean(square(maximum(1 - labels * predictions, 0)))
Type Parameters:
T - the data type of the predictions and labels
Parameters:
tf - the TensorFlow Ops
labels - true targets, values are expected to be -1 or 1. If binary (0 or 1) labels are provided, they will be converted to -1 or 1.
predictions - the predictions
Returns:
the squared hinge loss
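The same plain-Java treatment as for hinge, with the per-element term squared (labels assumed already in {-1, 1}; illustrative class name):

```java
// Illustrative sketch of loss = reduceMean(square(maximum(1 - labels * predictions, 0))).
class SquaredHingeSketch {
    static double squaredHinge(double[] labels, double[] predictions) {
        double sum = 0.0;
        for (int i = 0; i < labels.length; i++) {
            double h = Math.max(1 - labels[i] * predictions[i], 0);
            sum += h * h;
        }
        return sum / labels.length;
    }
}
```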
-
l2Normalize
l2Normalize(Ops tf, Operand<T> x, int[] axis)
Normalizes along dimension axis using an L2 norm.
Type Parameters:
T - the data type for the input and the result
Parameters:
tf - the TensorFlow Ops
x - the input
axis - dimension along which to normalize
Returns:
the normalized values based on L2 norm
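For a single vector (one axis), L2 normalization is simply x / sqrt(sum(x^2)), which this plain-Java sketch illustrates (the multi-axis tensor case is left to the library; class name is hypothetical):

```java
// Illustrative single-vector sketch of L2 normalization: x / sqrt(sum(x^2)).
class L2NormalizeSketch {
    static double[] l2Normalize(double[] x) {
        double sumSq = 0.0;
        for (double v : x) sumSq += v * v;
        double norm = Math.sqrt(sumSq);
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) out[i] = x[i] / norm;
        return out;
    }
}
```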
-