python - SciPy optimization algorithms (for minimizing a neural network cost function)


I wrote a neural network object in Python that has a cost function and determines its gradients via back-propagation. I see a bunch of optimization functions here, but I have no idea how to implement them, and I'm having a hard time finding example code to learn from.

Clearly, I need to somehow tell it the parameters I'm trying to change, the cost function I'm trying to minimize, and the gradient calculated by backprop. How do I tell, say, fmin_cg what's what?
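For reference, the fmin_cg calling convention boils down to passing a scalar cost function, a flat 1-D starting vector, and the gradient function as fprime. A minimal sketch on a toy quadratic (the names cost and grad here are illustrative stand-ins, not the network's methods):

    import numpy as np
    from scipy.optimize import fmin_cg

    def cost(v):
        # must take the flat parameter vector v and return a scalar
        return 0.5 * np.dot(v, v)

    def grad(v):
        # must return the gradient as a 1-D array with the same shape as v
        return v

    v0 = np.array([3.0, -2.0, 1.0])          # initial flat parameter vector
    v_best = fmin_cg(cost, v0, fprime=grad)  # returns the optimized vector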

Bonus question: where can I learn about the differences in the uses of the various algorithms?

===== OK, update =====

This is what I have at the moment:

    def train(self, x, y_vals, iters=400):
        t0 = concatenate((self.hid_t.reshape(-1), self.out_t.reshape(-1)), 1)
        self.forward_prop(x, t0)
        c = lambda v: self.cost(x, y_vals, v)
        g = lambda v: self.back_prop(y_vals, v)
        t_best = fmin_cg(c, t0, g, disp=True, maxiter=iters)
        self.hid_t = reshape(t_best[:, :(hid_n * (in_n+1))], (hid_n, in_n+1))
        self.out_t = reshape(t_best[:, (hid_n * (in_n+1)):], (out_n, hid_n+1))

And the error it's throwing:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "netset.py", line 27, in <module>
        net.train(x,y)
      File "neuralnet.py", line 60, in train
        t_best = fmin_cg(c, t0, g, disp=True, maxiter=iters)
      File "/usr/local/lib/python2.7/dist-packages/scipy/optimize/optimize.py", line 952, in fmin_cg
        res = _minimize_cg(f, x0, args, fprime, callback=callback, **opts)
      File "/usr/local/lib/python2.7/dist-packages/scipy/optimize/optimize.py", line 1017, in _minimize_cg
        deltak = numpy.dot(gfk, gfk)
    ValueError: matrices are not aligned

...halp!

I have never used fmin_cg, but I guess v is the weight vector. I did not find the error in your code when I read the documentation, but I searched for the error and found this: matrices are not aligned error: python scipy fmin_bfgs
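Based on that linked question, my guess (an assumption, since I can't run your code) is that the gradient or t0 is coming back as a 2-D matrix rather than a flat 1-D array, so the numpy.dot(gfk, gfk) inside _minimize_cg misaligns. A small demonstration:

    import numpy as np

    g_bad = np.matrix([[1.0, 2.0, 3.0]])  # 2-D, shape (1, 3)
    # np.dot(g_bad, g_bad) raises "ValueError: matrices are not aligned",
    # just like the deltak = numpy.dot(gfk, gfk) line in the traceback

    g_ok = np.asarray(g_bad).ravel()      # flat 1-D array, shape (3,)
    print(np.dot(g_ok, g_ok))             # 14.0 -- what _minimize_cg expects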

In addition, I think it is not guaranteed that g(v) is calculated after c(v). Thus, the backpropagation function should forward-propagate x once again:

    c = lambda v: self.cost(x, y_vals, v)

    def g(v):
        self.cost(x, y_vals, v)           # forward propagate first
        return self.back_prop(y_vals, v)  # then compute the gradient

Or you can pass one function that returns the cost and the gradient as a tuple, to avoid two forward propagations, as Issam Laradji mentioned; a sketch of that variant follows.
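Here is a minimal sketch of that variant, assuming a hypothetical cost_and_grad that does one forward pass and returns (cost, gradient). scipy.optimize.minimize with jac=True accepts exactly this shape, and method='CG' is the same conjugate-gradient algorithm as fmin_cg (a toy quadratic stands in for the network):

    import numpy as np
    from scipy.optimize import minimize

    def cost_and_grad(v):
        # the single forward pass would go here; both values reuse its result
        cost = 0.5 * np.dot(v, v)
        grad = v
        return cost, grad

    # jac=True tells minimize that cost_and_grad returns (cost, gradient)
    res = minimize(cost_and_grad, np.array([3.0, -2.0]), method='CG', jac=True)
    print(res.x)  # the optimized parameter vector, here ~[0, 0]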

Good articles on optimization algorithms for artificial neural networks are:

I can recommend Levenberg-Marquardt. This algorithm works very well. Unfortunately, every iteration step has cubic complexity, O(n^3), in the number of parameters.
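For completeness, SciPy exposes Levenberg-Marquardt through its least-squares routines (it needs a residual vector, not a scalar cost), so using it for network training means framing the problem as least squares. A toy curve-fit sketch with made-up data:

    import numpy as np
    from scipy.optimize import leastsq

    t = np.linspace(0, 1, 20)
    y = 2.0 * np.exp(-1.5 * t)           # synthetic data for illustration

    def residuals(p):
        a, b = p
        return a * np.exp(b * t) - y     # vector of residuals, not a scalar

    # leastsq wraps MINPACK's Levenberg-Marquardt implementation
    p_best, ier = leastsq(residuals, x0=[1.0, -1.0])
    print(p_best)                        # ~[2.0, -1.5]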

