3 Contributors · 3 Replies · 24 Views · 3-Year Discussion Span · Last Post by somyms

When you figure them out, apply for that Nobel Prize in Medicine... This is an area of serious ongoing academic research, and there have been a number of recent articles about mapping human neural networks to machine neural networks. Remember, Google is your friend.

If you are referring simply to computer neural networks, then there is a large body of work available to learn from. One of the related topics in computer learning is called simulated annealing.

FWIW, just to get started in that field I had to take an entire course at MIT, and it just covered the basics. I think that was about 15 years ago.


I think the most interesting ongoing development in this area is the IBM neurosynaptic chip design. These could be the next big DARPA-backed game-changer.

> When you figure them out, apply for that Nobel Prize in Medicine...

Well... there are really two parts to this: the logic and the bio-chemistry. I think the logic (how neurons, synapses, and learning work) is largely understood, and has been for quite some time, though that's not to say there isn't much more to be discovered. The bio-chemistry is much harder to understand overall, i.e., the influence of chemistry on the functions of the brain.
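
The "logic" part can be sketched in a few lines. Below is a minimal artificial neuron (a perceptron) with the classic perceptron learning rule: it computes a weighted sum of its inputs, fires if the sum crosses a threshold, and learns by nudging its weights toward correct answers. The AND-gate example, learning rate, and epoch count are just illustrative choices.

```python
def fire(weights, bias, inputs):
    """Step-activation neuron: fires (returns 1) if the weighted sum exceeds 0."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if s > 0 else 0

def train(samples, rate=0.1, epochs=50):
    """Perceptron learning rule: adjust each weight by (target - output) * input."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - fire(weights, bias, inputs)
            bias += rate * error
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights, bias

# Teach the neuron the AND function from examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(and_samples)
print([fire(w, b, x) for x, _ in and_samples])  # [0, 0, 0, 1]
```

A single neuron like this can only learn linearly separable functions (AND works, XOR does not), which is exactly why real networks stack many of them in layers.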

But when it comes to artificial brains or neural networks, the bio-chemistry is largely irrelevant (maybe A.I. will just be less moody than humans). And that's another awesome prospect: the chemical reactions that carry signals across the brain are extremely slow and inefficient compared to electrical signals through silicon substrates. In other words, an artificial brain made from a silicon chip (like the IBM chip) would be several orders of magnitude faster at "thinking" than a human / mammalian brain.

> One of the related topics in computer learning is called simulated annealing.

Well, simulated annealing is a very general method that is applicable to many areas far beyond artificial neural networks (ANNs). In fact, I am using it in a completely different kind of method (probabilistic motion planning), and it has also been used a lot in traditional optimization methods. And by the way, the field of "computer learning" is now called "machine learning" and is pretty much the main branch of A.I. these days; to many, "machine learning" and "artificial intelligence" are synonymous terms (A.I. is just more vague and fuzzy).
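
To show how general the idea is, here is a bare-bones simulated annealing sketch that minimizes a bumpy 1-D function: propose a random move, always accept improvements, and accept worse moves with a probability that shrinks as the "temperature" cools. The cost function, cooling schedule, and step sizes are purely illustrative choices.

```python
import math
import random

def cost(x):
    # A bumpy test function; it is non-negative with its global minimum at x = 0.
    return x * x + 10 * math.sin(x) ** 2

def anneal(start, temp=10.0, cooling=0.995, steps=5000, seed=42):
    random.seed(seed)  # fixed seed for reproducibility
    x, best = start, start
    for _ in range(steps):
        candidate = x + random.uniform(-1, 1)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept worse moves with prob e^(-delta/T),
        # which lets the search escape local minima early on.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if cost(x) < cost(best):
            best = x
        temp *= cooling  # cool down: worse moves become ever less likely
    return best

print(round(cost(anneal(start=8.0)), 3))
```

The "cooling down" is the whole trick: at high temperature the search wanders freely over the landscape, and as the temperature drops it settles into (hopefully) the deepest valley it has found.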

> FWIW, just to get started in that field I had to take an entire course at MIT, and it just covered the basics. I think that was about 15 years ago.

I also took a few graduate courses on topics around A.I. (mainly machine learning). I think the field has developed significantly in the past 15 years. One thing that happened is a process, not unlike simulated annealing, by which people cooled down from the craze around artificial neural networks in the 70s and 80s. Today, ANNs are considered just one particular family of basis functions that can be used in machine learning methods, and not a particularly "special" one either; there is no magic involved. So, I would caution the OP against focusing too narrowly on that particular family of functions, because they could miss the bigger picture: the crucial concepts of probability theory (e.g., Bayesian methods), information theory (e.g., entropy, expectation-maximization), and Markov models (e.g., HMM, MDP, POMDP, etc.). In other words, machine learning has generalized far beyond the confines of ANNs and a few early algorithms; it is now a much more comprehensive field.
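
As a taste of that broader probabilistic toolbox, here is a minimal Bayesian update: inferring a coin's bias from a few flips by applying Bayes' rule over a small discrete set of hypotheses. The three candidate biases and the flip sequence are just a toy illustration.

```python
def bayes_update(prior, likelihoods):
    """Posterior is proportional to prior times likelihood, renormalized to sum to 1."""
    unnorm = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Three hypotheses for P(heads): 0.25, 0.5, 0.75, initially equally likely.
thetas = [0.25, 0.5, 0.75]
posterior = [1 / 3, 1 / 3, 1 / 3]

# Observe four flips: heads, heads, tails, heads, updating after each one.
for flip in "HHTH":
    likelihoods = [t if flip == "H" else 1 - t for t in thetas]
    posterior = bayes_update(posterior, likelihoods)

print([round(p, 3) for p in posterior])  # mass shifts toward the 0.75 hypothesis
```

The same prior-times-likelihood machinery, scaled up, underlies naive Bayes classifiers, hidden Markov models, and much of modern machine learning.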


Thank you rubberman and mike. Actually, when I read about brain networks I feel an interest towards them.
