1. We have seen that neural network learning is based on the following heuristic: we expect that taking a small step in the direction \(-\nabla f\) for a function \(f : \mathbb{R}^n \to \mathbb{R}\) will decrease \(f\), where \(\nabla f\) is the gradient of \(f\). What can be expected to happen if we take a small step in the direction \(\nabla f\)?

  2. A.:

    We expect \(f\) to increase: to first order, a step \(\Delta v = \eta \nabla f\) with small \(\eta > 0\) changes \(f\) by \(\Delta f \approx \nabla f \cdot \Delta v = \eta \, \|\nabla f\|^2 \geq 0\), so moving along \(+\nabla f\) pushes \(f\) up just as moving along \(-\nabla f\) pushes it down.
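
    As a quick numerical check (not part of the original exercise), the minimal sketch below steps a toy quadratic \(f(v) = \|v\|^2\) a small amount along both \(+\nabla f\) and \(-\nabla f\); the particular function, starting point, and step size \(\eta\) are illustrative assumptions.

    ```python
    import numpy as np

    def f(v):
        """Toy quadratic: f(v) = ||v||^2."""
        return np.dot(v, v)

    def grad_f(v):
        """Gradient of the toy quadratic: grad f(v) = 2v."""
        return 2 * v

    # Arbitrary starting point and small step size (illustrative choices).
    v = np.array([1.0, -2.0, 0.5])
    eta = 0.01

    # Step along +grad f: to first order, f should increase.
    v_up = v + eta * grad_f(v)
    # Step along -grad f: to first order, f should decrease.
    v_down = v - eta * grad_f(v)

    print(f"f(v)              = {f(v):.6f}")
    print(f"f(v + eta*grad f) = {f(v_up):.6f}   (larger)")
    print(f"f(v - eta*grad f) = {f(v_down):.6f}   (smaller)")
    ```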