Trying to Simulate Reverse Neuron Activation
In our SEE ML class, in the 5th exercise on Neural Networks, Prof. Andrew Ng taught us a wonderful algorithm, the Neural Network, and its detailed implementation was required for the exercise. The given problem was to recognize hand-written numerals (0-9) from a dataset of 5000 samples collected from post offices in the US. As expected, by the end of the exercise our implemented NN algo did a wonderful job, with an accuracy of almost 98%.
As we know, the concept of neural networks originated from the simulation of the human brain. And when we learn and start recognizing things by referring to them with a name, our brain develops the capability of imagining the same stuff even without seeing it. For example, after recognizing the digit zero '0', whenever I hear the word zero I can draw a picture of zero in my brain (well, in a very plain way; otherwise the brain has the capability to imagine linked stuff too, and I'm sure that would be somewhere in advanced machine learning). So I planned to do similar stuff with our data, twisting the problem: draw a 20x20 matrix of pixels, i.e. a gray-scale image, of a given digit.
With the given dataset, I thought of two approaches; I'm not sure if either will work:
- Train on the data to classify each pixel (i, j) as either one or zero, which might give me a black-and-white image; or, in almost the same way, instead of classifying with the sigmoid, just keep the raw prediction value and treat it as the pixel value of a gray-scale image.
- Do a Reverse Neural Network (I just coined the term), i.e. make the Y data the features and the X data the output layer, classifying into 400 different categories (a sketch of the layer sizes follows this list).
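To make the 2nd approach concrete, here is a minimal sketch in Octave of how the reversed layer sizes might be set up, reusing the random-initialization recipe from the exercise (the hidden size of 25 and the variable names are my own assumptions, not the course code):

% Reversed network: one-hot label in, 400 pixel values out
input_layer_size  = 10;    % one-hot encoded digit label
hidden_layer_size = 25;    % same hidden size as the forward net (assumption)
output_layer_size = 400;   % one unit per pixel of the 20x20 image

% Random initialization, same recipe as in the exercise
epsilon_init = 0.12;
Theta1 = rand(hidden_layer_size, input_layer_size + 1) * 2 * epsilon_init - epsilon_init;
Theta2 = rand(output_layer_size, hidden_layer_size + 1) * 2 * epsilon_init - epsilon_init;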
So far I'm working with the 2nd method because it seems easy and very much in sync with HW-4, so just some small modifications in the code are needed to go with this hypothesis.
I converted the Y data (which was a 5000x1 vector) into a 5000x10 matrix, each row representing a 1x10 vector with the entry for the corresponding y set to 1 and the others kept zero (e.g. y = 5 becomes [0 0 0 0 1 0 0 0 0 0]). The output layer became a 400x1 vector, I changed the input and output sizes, and my home-made Rev-Neural Net was ready to go.
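For reference, the one-hot conversion itself is only a few lines in Octave (assuming labels 1-10, with 10 standing in for the digit '0' as in the exercise):

% Expand the 5000x1 label vector y into a 5000x10 one-hot matrix Y
Y = zeros(size(y, 1), 10);
for i = 1:size(y, 1)
  Y(i, y(i)) = 1;
end
% or, vectorized (Octave lets you index the temporary directly):
Y = eye(10)(y, :);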
I trained it over 50 iterations and, not to much amazement, it didn't just fail, it failed BRUTALLY.
After training, when I fed it y = 1, i.e. ([1 0 0 ...]):
h1 = sigmoid([1 y] * Theta1');   % hidden layer activations
h2 = sigmoid([1 h1] * Theta2');  % output layer: 1x400 vector of pixel values
displayData(h2)                  % displayData reshapes the 400 values into a 20x20 image
Aah... what it gave was just not presentable. I still didn't lose heart; I just increased my iterations to 400, and its training is running in the background.
I'll update with the result.
UPDATE:
That too failed badly. After that I tried a few other steps, like making the bias term 0, though I have yet to check it with a learning curve; I'll do that sometime. Meanwhile, if someone does it first, please post an update...
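For anyone who wants to run that learning-curve check, here is a rough sketch of plotting the training cost per iteration. It assumes nnCostFunction has already been modified for the reversed targets as described above, that fmincg from the exercise is on the path, and that Yonehot, X, lambda and initial_nn_params are the hypothetical variables set up earlier:

% fmincg returns the cost at every iteration as its 2nd output,
% so the training curve comes almost for free
options = optimset('MaxIter', 400);
costFn  = @(p) nnCostFunction(p, 10, 25, 400, Yonehot, X, lambda);
[nn_params, J_history] = fmincg(costFn, initial_nn_params, options);
plot(1:numel(J_history), J_history);
xlabel('iteration'); ylabel('training cost J');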
Also, please discuss what is wrong in this approach, or why this approach is wrong at all?
