Do any of you (reading this) know of a good learning resource for implementing a neural network, instead of papers describing the theory?

Or perhaps someone can explain how one can apply the information on perceptrons given here.

I'm just looking into this for fun. It seems difficult to acquire the knowledge to make something useful. I could write my own implementation of a perceptron/neural network, but I don't really understand what it would actually be doing. Basically, I'm looking for a learning resource that "dumbs it down" for the masses.

It's not a big deal, I'll be studying this sometime in the next couple years anyway, but it does interest me.

Robert Schalkoff's "Artificial Neural Networks" (http://www.amazon.com/Artificial-Neural-Networks-Robert-Schalkoff/dp/007057118X/ref=ntt_at_ep_dpt_5) is a great book. It's out of print, but a lot of libraries probably still have a copy. It starts you at the early stages with perceptrons and goes through Hopfield nets and associative memories. I haven't formally looked into these things in a long time, but I still google around the topic from time to time.

Look into something like backpropagation (for a nice, well-explained example see http://www.codeproject.com/KB/recipes/BP.aspx) and I think the reasoning behind the perceptron will probably click for you. A perceptron is just a weighted sum: if it had two inputs and each of those connections had a weight of 1/2, it would be computing their average. If that sum exceeds a threshold, the perceptron fires an output (what that output looks like depends on the thresholding function you use).

In this case, the threshold is at 0, so any sum above zero would cause an output of 1, otherwise 0.
Poached from http://www.mathworks.com/help/toolbox/nnet/hardlim.html#499620
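To make that concrete, here's a minimal sketch in Python of the weighted-sum-plus-hard-limit idea described above (the function name and the sample numbers are just made up for illustration):

```python
def perceptron(inputs, weights, threshold=0.0):
    """Weighted sum of inputs, passed through a hard-limit threshold:
    output 1 if the sum is above the threshold, otherwise 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Two inputs, each weighted 1/2, so the sum is just their average.
print(perceptron([4.0, 6.0], [0.5, 0.5]))   # average is 5.0, above 0, so output is 1
print(perceptron([-3.0, 1.0], [0.5, 0.5]))  # average is -1.0, not above 0, so output is 0
```

Swapping the hard limit for a different thresholding function (a sigmoid, say) is what changes the character of the output, as mentioned above.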
