Since it's all 'thinking' over here at the moment, I thought I might quote something interesting that I read recently in Fuzzy Thinking by Bart Kosko. It's a pretty interesting book, with lots of stuff in there about fuzzy logic, neural nets, etc.
Anyway, here's a quote. It's in the context of a discussion about neural nets, deja-vu, and human near-death experiences (i.e. 'flash of light' and 'life flashes before eyes').
Your mind knows when you are in a tough spot, when the Mack truck comes right at you or you fall from the rail and see the street racing up at you or you hear the guillotine blade drop. Fight-flight-fright adrenals take over. Your genes and the mind that sits with them and obeys them do not want you to die. The wire is tripped and the adrenals flash in one all-out associative search for something that will save you. Massive associative search. Red alert. Open the emergency doors. Search all the stored data nets and search them in parallel and search them fast. Check all the energy wells at once. The massive search would fill your mind's eye. You would not see your life play in front of you as a film plays in a theatre. It would not be serial but parallel. Your near-death fix would trigger thousands of old stored events that somehow resembled it. The threat event would act as a ball or whole set of balls that roll down hundreds of energy sheets in hundreds of neural nets. Related past events and events related to them would all flash in your mind's eye at once. From this you might find the right thing to do to get out of the fix. It might just “pop” into your mind. Or the fix might pass on its own and after that you would just remember the massive associative flash and tell stories about it. If this view is right, you might recall it the next time you get the big-fix flash or right after you get the flash. It's just something to think about.
The big point is that learning changes an information medium. In neural nets the medium is an energy sheet. In us it is a web of synapses or webs of synapses or webs of webs of synapses and maybe some muscle-contraction rates. The information medium can be anything. A computer learns when you change its software or memory circuits. Warm wax learns your palm print when you push your hand into it. The canvas learns the painting when the artist smears paint on it. Even your lawn of green grass can learn what you mow in it. This odd case gives a good example of learning in a parallel information medium.
The grass blades learn where they are cut. They grow in parallel as neurons fire in parallel and they act as a plastic information medium. You can throw out hundreds of grass blades and still read your name that you mowed in the lawn. The lawn forgets what you teach it if you do not re-mow the lawn to the same shape. In the same way a neural net loses its energy wells if you do not retrain the net with the same or similar patterns. Without practise it all goes to seed.
I wonder if the 'flash of light' could be the result of the massive associative search dumping multiple results into the same output buffer...?
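For anyone curious what those "energy wells" look like in practice: Kosko's picture of a threat-cue "rolling down energy sheets" to trigger stored memories matches a Hopfield-style associative memory pretty closely. Here's a minimal sketch of that idea (my own toy example, not from the book): a couple of patterns are stored via Hebbian learning, and a corrupted cue rolls downhill in energy until it settles into the nearest stored memory.

```python
# Minimal Hopfield-style associative memory: stored patterns sit at the
# bottoms of "energy wells", and a noisy cue rolls downhill to the nearest
# one. A sketch of the general idea only; names and patterns are mine.
import numpy as np

def train(patterns):
    """Hebbian outer-product learning; the weight matrix is the 'energy sheet'."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / len(patterns)

def energy(W, s):
    """Hopfield energy; each recall step can only move downhill."""
    return -0.5 * s @ W @ s

def recall(W, cue, steps=20):
    """Asynchronous updates: flip each unit toward the sign of its local field."""
    s = cue.copy()
    rng = np.random.default_rng(0)
    for _ in range(steps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Two stored "memories" as +1/-1 patterns
patterns = np.array([[ 1,  1, 1, -1, -1, -1],
                     [-1, -1, 1,  1,  1, -1]])
W = train(patterns)

# A corrupted cue (first memory with one bit flipped) falls into its well
cue = np.array([1, -1, 1, -1, -1, -1])
out = recall(W, cue)
```

The search really is "parallel" in the sense Kosko means: every stored pattern is superimposed in the same weight matrix, and the cue is matched against all of them at once rather than checked one by one. And the lawn analogy holds too: if you keep overwriting `W` with new, unrelated patterns and never re-train the old ones, the old wells flatten out and recall fails.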