pub fn resource_sparse_apply_adagrad_da<'a, T0: ToTensorHandle<'a>, T1: ToTensorHandle<'a>, T2: ToTensorHandle<'a>, T3: ToTensorHandle<'a>, T4: ToTensorHandle<'a>, T5: ToTensorHandle<'a>, T6: ToTensorHandle<'a>, T7: ToTensorHandle<'a>, T8: ToTensorHandle<'a>>(
    ctx: &'a Context,
    var: &T0,
    gradient_accumulator: &T1,
    gradient_squared_accumulator: &T2,
    grad: &T3,
    indices: &T4,
    lr: &T5,
    l1: &T6,
    l2: &T7,
    global_step: &T8
) -> Result<()>

Shorthand for `ResourceSparseApplyAdagradDA::new().call(&ctx, &var, &gradient_accumulator, &gradient_squared_accumulator, &grad, &indices, &lr, &l1, &l2, &global_step)`, which updates entries in `var` and the accumulators according to the proximal AdaGrad Dual Averaging scheme.

See: https://www.tensorflow.org/api_docs/python/tf/raw_ops/ResourceSparseApplyAdagradDA
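A minimal usage sketch is shown below. It assumes the `tensorflow` crate's eager API with the `eager` feature enabled; the variable setup (creating resource variable handles for `var` and the two accumulators) is elided and hypothetical, since how those handles are obtained depends on the surrounding program.

```rust
// Sketch only: assumes the `tensorflow` crate with the `eager` feature,
// and that `var`, `grad_accum`, and `grad_sq_accum` are resource-variable
// tensor handles created elsewhere (hypothetical setup, not shown).
use tensorflow::eager::{raw_ops, Context, ContextOptions};
use tensorflow::Tensor;

fn apply_update(
    ctx: &Context,
    var: &tensorflow::eager::TensorHandle,
    grad_accum: &tensorflow::eager::TensorHandle,
    grad_sq_accum: &tensorflow::eager::TensorHandle,
) -> tensorflow::Result<()> {
    // Sparse gradient: rows of `grad` correspond to the rows of `var`
    // selected by `indices`.
    let grad = Tensor::new(&[2, 3]).with_values(&[0.1f32; 6])?;
    let indices = Tensor::new(&[2]).with_values(&[0i32, 2])?;

    // Scalar hyperparameters: learning rate, L1/L2 regularization,
    // and the current global training step.
    let lr = Tensor::from(0.01f32);
    let l1 = Tensor::from(0.0f32);
    let l2 = Tensor::from(0.0f32);
    let global_step = Tensor::from(1i64);

    // Applies the update in place via the resource handles; no value
    // is returned on success.
    raw_ops::resource_sparse_apply_adagrad_da(
        ctx,
        var,
        grad_accum,
        grad_sq_accum,
        &grad,
        &indices,
        &lr,
        &l1,
        &l2,
        &global_step,
    )
}
```

Because the op mutates state through resource handles, it returns `Result<()>` rather than an updated tensor; read back `var` afterwards to observe the new values.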