Hi

I'm working on implementing a neural network, but I'm having trouble calculating the error gradient. The problem is that I don't know much calculus, so I can't work out what exactly I need to do.

I found this web page that explains it quite well, but I still can't quite get it.

http://www.willamette.edu/~gorr/classes/cs449/linear2.html

Basically the part I'm trying to implement is the last function in that table.

delta_w = u * (t_o - y_o) * y_i

I know that u is the learning rate, t_o is the target, and y_o is the actual output. What I don't understand is why it's multiplied by y_i (which I presume is the input). Is that the input received by the node in question, or is it something else?
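For reference, here's roughly what I have in mind so far. This is just a sketch, assuming a single linear output node where y_i is the input carried along weight w_i (which is exactly the part I'm unsure about) -- the names deltaWeight and updateWeights are just my own placeholders:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// The update rule from the page: delta_w_i = u * (t_o - y_o) * y_i
// u   : learning rate
// t_o : target output
// y_o : actual output
// y_i : the input arriving at the node along weight w_i (my assumption)
double deltaWeight(double u, double target, double output, double input)
{
    return u * (target - output) * input;
}

// One update step for all weights of the node, pairing each weight
// with the input it carries.
void updateWeights(std::vector<double>& w,
                   const std::vector<double>& inputs,
                   double u, double target, double output)
{
    for (std::size_t i = 0; i < w.size(); ++i)
        w[i] += deltaWeight(u, target, output, inputs[i]);
}
```

So with u = 0.1, target 1.0, output 0.0, and inputs {1.0, 2.0}, the second weight would get twice the adjustment of the first -- which is the part I don't get the reasoning behind.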

Any help is greatly appreciated.

P.S. I posted this in the C++ forum because my implementation is in C++.