pub fn apply_adagrad_da<'a, T0: ToTensorHandle<'a>, T1: ToTensorHandle<'a>, T2: ToTensorHandle<'a>, T3: ToTensorHandle<'a>, T4: ToTensorHandle<'a>, T5: ToTensorHandle<'a>, T6: ToTensorHandle<'a>, T7: ToTensorHandle<'a>>(
    ctx: &'a Context,
    var: &T0,
    gradient_accumulator: &T1,
    gradient_squared_accumulator: &T2,
    grad: &T3,
    lr: &T4,
    l1: &T5,
    l2: &T6,
    global_step: &T7
) -> Result<TensorHandle<'a>>
Shorthand for `ApplyAdagradDA::new().call(&ctx, &var, &gradient_accumulator, &gradient_squared_accumulator, &grad, &lr, &l1, &l2, &global_step)`.

See: https://www.tensorflow.org/api_docs/python/tf/raw_ops/ApplyAdagradDA
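To illustrate what this op computes, the sketch below implements the AdagradDA (dual-averaging) update rule element-wise on plain slices, following the update described in the linked TensorFlow docs. This is a standalone illustration, not the crate's implementation: the function name `adagrad_da_step` and the use of `f32` slices are assumptions made for the example; the real op updates `var`, `gradient_accumulator`, and `gradient_squared_accumulator` tensors in place on the TensorFlow runtime.

```rust
// Hypothetical standalone sketch of the per-element AdagradDA update
// that ApplyAdagradDA performs, per the TensorFlow documentation.
fn adagrad_da_step(
    var: &mut [f32],
    grad_accum: &mut [f32],
    grad_sq_accum: &mut [f32],
    grad: &[f32],
    lr: f32,
    l1: f32,
    l2: f32,
    global_step: i64,
) {
    let step = global_step as f32;
    for i in 0..var.len() {
        // Accumulate the gradient and its square across steps.
        grad_accum[i] += grad[i];
        grad_sq_accum[i] += grad[i] * grad[i];
        // L1 regularization acts as a soft threshold on the
        // accumulated gradient (skipped when l1 == 0).
        let tmp = if l1 > 0.0 {
            grad_accum[i].signum() * (grad_accum[i].abs() - l1 * step).max(0.0)
        } else {
            grad_accum[i]
        };
        // Dual-averaging update: the variable is recomputed from the
        // accumulators rather than incrementally adjusted.
        let x = -lr * tmp;
        let y = l2 * lr * step + grad_sq_accum[i].sqrt();
        var[i] = x / y;
    }
}

fn main() {
    let mut var = vec![0.0f32];
    let mut grad_accum = vec![0.0f32];
    let mut grad_sq_accum = vec![0.0f32];
    adagrad_da_step(&mut var, &mut grad_accum, &mut grad_sq_accum, &[1.0], 1.0, 0.0, 0.0, 1);
    println!("var = {:?}", var); // with these inputs: x = -1, y = 1, so var = [-1.0]
}
```

Note that unlike plain Adagrad, the variable here is a pure function of the accumulators and the step count, which is what allows the L1 term to produce exact zeros.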