
Hi there,
I need some help understanding this code. My lecturer has asked me to take existing code, find a problem from a journal related to neural networks, and demonstrate it using the code. Before selecting any journal, I first need to understand the Python code.

import neurolab as nl
import numpy as np

# Create train samples

x = np.linspace(-7, 7, 20)
y = np.sin(x) * 0.5

size = len(x)

inp = x.reshape(size,1)
tar = y.reshape(size,1)

# Create a network with 2 layers and random initial weights

net = nl.net.newff([[-7, 7]],[5, 1])

# Train network

error = net.train(inp, tar, epochs=500, show=100, goal=0.02)

# Simulate network

out = net.sim(inp)

# Plot result

import pylab as pl
pl.subplot(211)
pl.plot(error)
pl.xlabel('Epoch number')
pl.ylabel('error (default SSE)')

x2 = np.linspace(-6.0,6.0,150)
y2 = net.sim(x2.reshape(x2.size,1)).reshape(x2.size)

y3 = out.reshape(size)

pl.subplot(212)
pl.plot(x2, y2, '-', x, y, '.', x, y3, 'p')
pl.legend(['net output (dense grid)', 'train target', 'net output at train points'])
pl.show()

Above is the code. As far as I know, neurolab is used as a neural network tool. I am confused by the plot-result part that uses (-6.0, 6.0, 150). My questions are: why is this part used, and what is the difference between the train target and the net output? If you have any other information, please post it here. Hopefully I can learn something from you guys.

Thanks guys.


It may help everybody to know that this code is an example from the neurolab module's source code, illustrating the use of a feedforward multilayer perceptron.
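To make that a bit more concrete, here is a minimal sketch that reuses the net, inp, tar and x variables from the posted code (it assumes that code has already been run). It prints the train target next to the net output for the same inputs, and evaluates the network on a denser grid, which is what the np.linspace(-6.0, 6.0, 150) line is for: plotting a smooth curve of the network's response between the 20 training points.

import numpy as np

# "train target" = the desired outputs the network was trained towards (tar = 0.5*sin(x))
# "net output"   = what the trained network actually returns for the same inputs
out = net.sim(inp)
for xi, ti, oi in zip(x, tar.ravel(), out.ravel()):
    print('x = %6.2f  target = %6.3f  net output = %6.3f' % (xi, ti, oi))

# The dense grid exists only for plotting: 150 points give a smooth curve of the
# network's response between the 20 training points (it runs from -6 to 6,
# inside the training range of -7 to 7).
x_dense = np.linspace(-6.0, 6.0, 150)
y_dense = net.sim(x_dense.reshape(x_dense.size, 1)).reshape(x_dense.size)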


Yes, I know that. But I just want to know the difference between the train target and the net output. Also, what the graph is showing is unclear to me, and what is the error telling us? Is it decreasing after 50 iterations?
