When we train a neural network, we’re trying to find the weight and bias values that give us the lowest possible cost across all inputs. Cost, in this case, measures how “wrong” the network’s output is for a given input, as computed by a cost function — the lower the cost, the closer the output is to what we want.
Method 1 is fine if you’re working with a small number of weights, but it very quickly runs into problems when dealing with a realistically sized neural network. Even the simple neural network we’ll be making in Part 3 of this series has nearly 24,000 weights to train! With 10, 50, or 100 weights in your network, you could get lucky with random guesses, but in a search space with thousands upon thousands of weights, luck isn’t going to get you far.
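To make the scale problem concrete, here’s a minimal sketch of Method 1 — pure random guessing — on a toy “network” with just two weights. (The toy problem, the cost function, and all the names here are my own illustration, not code from this series.)

```python
import random

# Toy problem (for illustration only): find weights w so that the
# one-"neuron" output w[0]*x + w[1] matches the target y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(-5, 6)]

def cost(w):
    # Mean squared error over the dataset: lower cost = "more right".
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

random.seed(0)
best_w, best_cost = None, float("inf")
for _ in range(1000):
    # Method 1: guess completely random weights, keep the best so far.
    w = [random.uniform(-10, 10), random.uniform(-10, 10)]
    c = cost(w)
    if c < best_cost:
        best_w, best_cost = w, c

print(best_w, best_cost)
```

With only two weights, a thousand guesses can land reasonably close. But each added weight multiplies the volume of the search space, so at 24,000 weights no feasible number of random guesses will stumble onto a good configuration.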
Method 2, where you start with a random network and work to improve it, is a much better way to go. It’s also what we’ll be talking about from here on.
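As a taste of Method 2, here’s a crude sketch of “start random, then improve”: stochastic hill climbing on the same toy two-weight problem as before. (This is not the training algorithm the series will use — it’s just the simplest possible illustration of iterative improvement; every name here is my own.)

```python
import random

# Same toy problem as before: fit w[0]*x + w[1] to y = 3x + 1.
data = [(x, 3 * x + 1) for x in range(-5, 6)]

def cost(w):
    # Mean squared error over the dataset.
    return sum((w[0] * x + w[1] - y) ** 2 for x, y in data) / len(data)

random.seed(0)
w = [random.uniform(-10, 10), random.uniform(-10, 10)]  # random start

for _ in range(5000):
    # Nudge one weight a little; keep the change only if cost drops.
    i = random.randrange(len(w))
    trial = list(w)
    trial[i] += random.uniform(-0.1, 0.1)
    if cost(trial) < cost(w):
        w = trial

print(w, cost(w))  # w ends up near [3, 1], the true weights
```

The key idea — make a small change, keep it if the cost goes down — is the seed of real training methods, which replace blind nudges with a principled way of computing which direction to move each weight.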