
Neural Networks – A Backpropagation Example

Overview: initial weights, biases, and training inputs/outputs

FIGURE 24B.01a. Overview: initial weights, biases, and training inputs/outputs.

FIGURE 24B.01b. Overview: initial weights, biases, and training inputs/outputs.

Forward Computation: total net input, activation function

FIGURE 24B.02a. Forward Computation: total net input, activation function.

FIGURE 24B.02b. Forward Computation: total net input, activation function.

FIGURE 24B.02c. Forward Computation: total net input, activation function.
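As a sketch, the two forward steps (total net input, then the logistic activation) can be written out in Python. The inputs, initial weights, and biases below are assumptions taken to match the figures for this worked example; the code reproduces the 0.75136507 output quoted later in the text.

```python
import math

def sigmoid(x):
    """Logistic activation: squashes the net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Assumed initial values for this worked example (from the figures).
i1, i2 = 0.05, 0.10                                   # training inputs
w1, w2, w3, w4, b1 = 0.15, 0.20, 0.25, 0.30, 0.35     # input -> hidden
w5, w6, w7, w8, b2 = 0.40, 0.45, 0.50, 0.55, 0.60     # hidden -> output

# Hidden layer: total net input, then activation.
net_h1 = w1 * i1 + w2 * i2 + b1      # 0.3775
out_h1 = sigmoid(net_h1)             # 0.593269992
net_h2 = w3 * i1 + w4 * i2 + b1
out_h2 = sigmoid(net_h2)             # 0.596884378

# Output layer repeats the same two steps on the hidden outputs.
net_o1 = w5 * out_h1 + w6 * out_h2 + b2
out_o1 = sigmoid(net_o1)             # 0.75136507
net_o2 = w7 * out_h1 + w8 * out_h2 + b2
out_o2 = sigmoid(net_o2)             # 0.772928465
```

Each neuron does exactly the same two operations; only the incoming weights differ.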

Calculating the Total Error: squared error function

FIGURE 24B.03a. Calculating the Total Error: squared error function.

For example, the target output for O1 is 0.01 but the neural network outputs 0.75136507, so its error is:

FIGURE 24B.03b. Calculating the Total Error: squared error function.
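The error calculation can be checked in a few lines. The second output, out_o2 = 0.772928465, is an assumption taken from the figures; the total matches the 0.298371109 quoted at the end of the walkthrough.

```python
# Squared error per output neuron: E = 1/2 * (target - out)^2
out_o1, out_o2 = 0.75136507, 0.772928465   # out_o2 assumed from the figures
target_o1, target_o2 = 0.01, 0.99

E_o1 = 0.5 * (target_o1 - out_o1) ** 2     # 0.274811083
E_o2 = 0.5 * (target_o2 - out_o2) ** 2     # 0.023560026
E_total = E_o1 + E_o2                      # 0.298371109
```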

Backpropagation: Output Layer

Our goal with backpropagation is to update each of the weights in the network so that they cause the actual output to be closer the target output, thereby minimizing the error for each output neuron and the network as a whole.

Consider w5. We want to know how much a change in w5 affects the total error, aka ∂E_total/∂w5 (FIGURE 24B.04a).

∂E_total/∂w5 (FIGURE 24B.04b) is read as "the partial derivative of E_total with respect to w5". You can also say "the gradient with respect to w5".

By applying the chain rule we know that:

FIGURE 24B.04c. Backpropagation: Output Layer.
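A minimal sketch of the three chain-rule factors for w5. The hidden output out_h1 = 0.593269992 is an assumption taken from the figures; out_o1 and the target come from the text above.

```python
out_o1 = 0.75136507      # network output from the forward pass
out_h1 = 0.593269992     # hidden output feeding w5, assumed from the figures
target_o1 = 0.01

# Chain rule: dE/dw5 = dE/dout_o1 * dout_o1/dnet_o1 * dnet_o1/dw5
dE_dout   = out_o1 - target_o1        # 0.74136507
dout_dnet = out_o1 * (1 - out_o1)     # sigmoid derivative: 0.186815602
dnet_dw5  = out_h1                    # the input w5 multiplies: 0.593269992

dE_dw5 = dE_dout * dout_dnet * dnet_dw5   # 0.082167041
```

The three factors correspond one-to-one to the three links in the chain: error to output, output to net input, net input to weight.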

FIGURE 24B.05a. Backpropagation: Output Layer.

FIGURE 24B.05b. Backpropagation: Output Layer.

FIGURE 24B.05c. Backpropagation: Output Layer.

FIGURE 24B.05d. Backpropagation: Output Layer.

FIGURE 24B.06a. Backpropagation: Output Layer.

FIGURE 24B.06b. Backpropagation: Output Layer.

FIGURE 24B.06c. Backpropagation: Output Layer.

To decrease the error, we then subtract this value from the current weight (optionally multiplied by some learning rate, eta, which we’ll set to 0.5):

FIGURE 24B.06d. Backpropagation: Output Layer.
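The update step itself, assuming the initial w5 = 0.40 from the figures and the gradient 0.082167041 that the chain-rule product yields for this example:

```python
eta = 0.5                    # learning rate, as set in the text
w5 = 0.40                    # initial weight, assumed from the figures
dE_dw5 = 0.082167041         # chain-rule gradient for this example

# Gradient descent: step against the gradient, scaled by eta.
w5_new = w5 - eta * dE_dw5   # 0.35891648
```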

We perform the actual updates in the neural network after we have the new weights leading into the hidden layer neurons (i.e., we use the original weights, not the updated weights, when we continue the backpropagation algorithm below).

Backpropagation: Hidden Layer

FIGURE 24B.07a. Backpropagation: Hidden Layer.

Starting with ∂E_total/∂out_h1 (FIGURE 24B.07b):

FIGURE 24B.07c. Backpropagation: Hidden Layer.

FIGURE 24B.07d. Backpropagation: Hidden Layer.

FIGURE 24B.08a. Backpropagation: Hidden Layer.

Therefore:

FIGURE 24B.08b. Backpropagation: Hidden Layer.

FIGURE 24B.08c. Backpropagation: Hidden Layer.

FIGURE 24B.08d. Backpropagation: Hidden Layer.

FIGURE 24B.08e. Backpropagation: Hidden Layer.

Putting it all together:

FIGURE 24B.09a. Backpropagation: Hidden Layer.

FIGURE 24B.09b. Backpropagation: Hidden Layer.

FIGURE 24B.09c. Backpropagation: Hidden Layer.

FIGURE 24B.09d. Backpropagation: Hidden Layer.
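The hidden-layer gradient for w1 can be sketched the same way. The difference from the output layer is that h1 feeds both output neurons, so both errors flow back through it; every concrete value below is an assumption taken from the figures for this example.

```python
# dE_total/dw1 = dE_total/dout_h1 * dout_h1/dnet_h1 * dnet_h1/dw1
i1 = 0.05                               # input that w1 multiplies
w5, w7 = 0.40, 0.50                     # original hidden->output weights
out_h1 = 0.593269992
out_o1, out_o2 = 0.75136507, 0.772928465
target_o1, target_o2 = 0.01, 0.99

# Output-layer deltas: dE_o/dnet_o = (out - target) * out * (1 - out)
delta_o1 = (out_o1 - target_o1) * out_o1 * (1 - out_o1)   #  0.138498562
delta_o2 = (out_o2 - target_o2) * out_o2 * (1 - out_o2)   # -0.038098236

# Each output's error reaches h1 through the weight connecting them.
dE_dout_h1 = delta_o1 * w5 + delta_o2 * w7                # 0.036350306

dE_dw1 = dE_dout_h1 * out_h1 * (1 - out_h1) * i1          # 0.000438568
```

Note that the original w5 and w7 are used here, per the remark above about not using the updated weights mid-pass.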

Finally, we’ve updated all of our weights! When we fed forward the 0.05 and 0.1 inputs originally, the error on the network was 0.298371109. After this first round of backpropagation, the total error is now down to 0.291027924. It might not seem like much, but after repeating this process 10,000 times, for example, the error plummets to 0.0000351085. At this point, when we feed forward 0.05 and 0.1, the two output neurons generate 0.015912196 (vs 0.01 target) and 0.984065734 (vs 0.99 target).
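For reference, the whole procedure can be collected into one short script. The initial weights and biases are assumptions matching the figures; following the walkthrough, the biases are not updated, and the hidden-layer deltas use the original (not yet updated) hidden-to-output weights.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Assumed initial values for this worked example (from the figures).
inputs = [0.05, 0.10]
targets = [0.01, 0.99]
w_ih = [[0.15, 0.20], [0.25, 0.30]]   # w_ih[h][j]: input j -> hidden h
w_ho = [[0.40, 0.45], [0.50, 0.55]]   # w_ho[o][h]: hidden h -> output o
b1, b2 = 0.35, 0.60
eta = 0.5

def forward():
    out_h = [sigmoid(sum(w_ih[h][j] * inputs[j] for j in range(2)) + b1)
             for h in range(2)]
    out_o = [sigmoid(sum(w_ho[o][h] * out_h[h] for h in range(2)) + b2)
             for o in range(2)]
    return out_h, out_o

def total_error(out_o):
    return sum(0.5 * (t - o) ** 2 for t, o in zip(targets, out_o))

def train_step():
    global w_ih, w_ho
    out_h, out_o = forward()
    # Output-layer deltas: dE/dnet_o.
    delta_o = [(out_o[o] - targets[o]) * out_o[o] * (1 - out_o[o])
               for o in range(2)]
    # Hidden-layer deltas, computed with the ORIGINAL hidden->output weights.
    delta_h = [sum(delta_o[o] * w_ho[o][h] for o in range(2))
               * out_h[h] * (1 - out_h[h]) for h in range(2)]
    # Update all weights simultaneously; biases stay fixed as in the text.
    w_ho = [[w_ho[o][h] - eta * delta_o[o] * out_h[h] for h in range(2)]
            for o in range(2)]
    w_ih = [[w_ih[h][j] - eta * delta_h[h] * inputs[j] for j in range(2)]
            for h in range(2)]

e0 = total_error(forward()[1])        # 0.298371109 before any training
train_step()
e1 = total_error(forward()[1])        # 0.291027924 after the first round
for _ in range(9999):
    train_step()
e10000 = total_error(forward()[1])    # ~0.0000351085 after 10,000 rounds
```

Running it reproduces the three error figures quoted in the paragraph above.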
