Before I jump into today’s entry, I realized that I didn’t actually describe what the goal of TFNN is. It’s been a dream of mine for a little over ten years now to create true Artificial Intelligence – not a system of algorithms and heuristics that fools a spectator into thinking the computer is intelligent, but actual intelligence, bringing ‘whatever’ it is that nature has granted the animal kingdom for so long into the electronic universe. It is the holy grail of Computer Science, the AI subfields, and Cognitive Science – attempted by many brilliant people, and accomplished by no one as of yet. Since the work of Von Neumann and Turing and the dawn of the programmable machine in the 1950s, there have been many theories on how to accomplish this task. They range from expert systems, which draw on large databases and extensive rules of logic to yield a result, to artificial neural networks that model their biological counterparts found in nature. The TFNN project takes the latter approach, this author believing that Mother Nature already has the design down pretty pat; there’s no need to reinvent the paradigmatic wheel, so to speak.
The most important piece, the software that creates a virtual neural network within a computer, took a little over half a decade for me to write, but it’s been done for a while now. The problem is that although I can test the functionality of specific neurons and small circuits of neurons (which work beautifully, after A LOT of frustration), I can’t test Hebbian Learning (Hebb’s ‘fire together, wire together’ postulate, a large basis of both intrinsic and extrinsic memory) across a large neural matrix (the TFNN term for a neural nucleus, a ganglion, any local cluster of neurons). Without external stimulus and the ability to somehow manipulate the environment (no matter how small that manipulation may be), it is impossible to study any meaningful organization across the neural matrices. A neural network’s ability to mold itself to the universe around it depends on meaningful and continuous sensory input. In other words, I have the brain, but as of yet no body to test it in.
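For anyone unfamiliar with Hebb’s postulate, here’s a rough sketch of the idea in C++ – note that the names and structure are made up for illustration and are not the actual TFNN code:

```cpp
// Minimal sketch of Hebb's postulate: strengthen a synapse only when the
// presynaptic and postsynaptic neurons fire within the same time frame.
// All names here (Synapse, hebbianUpdate) are hypothetical, not TFNN source.
#include <vector>

struct Synapse {
    double weight;     // synaptic strength
    bool   preFired;   // did the presynaptic neuron fire this frame?
    bool   postFired;  // did the postsynaptic neuron fire this frame?
};

// "Fire together, wire together": reinforce coincident activity, decay otherwise.
void hebbianUpdate(std::vector<Synapse>& synapses, double reinforce, double decay) {
    for (auto& s : synapses) {
        if (s.preFired && s.postFired)
            s.weight += reinforce;  // coincident firing strengthens the link
        else
            s.weight -= decay;      // idle or one-sided activity weakens it
        if (s.weight < 0.0)
            s.weight = 0.0;         // a dead synapse is a pruning candidate
    }
}
```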
Right now we’re working with a brilliant guy in Minnesota who received a grant through his university to build the prototype robot that will house a large-scale implementation of TFNN. That is still a ways away, though, and I’m itching to get some testing and experimentation done on a smaller scale – scaling from 10,000 to 100,000 neurons and from 10,000 to a million synapses. At the school district where I work we do a lot with Lego robotics, and I realized this was the perfect opportunity – Lego robots are a snap to build, have access to simple sensory inputs, and are easily interfaced from a C++ environment.
So I introduce Milo! (I seem to name everything this, I’m not quite sure why. ;))
Milo is pretty quick and dirty – no gearing or manipulative appendages. BUT that’s not very important, since the test neural networks he will run will start out very rudimentary – more on that in a second.
What Milo does have is independent control of the motor in each front wheel – both power (speed) and direction (forward or backward). He also has two light sensors, one pointing forward to detect ambient light and one pointing down to scan the ground, plus two large antennae in front that are connected to touch sensors.
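To give an idea of what the neural network has to work with, here’s a hypothetical sketch of Milo’s I/O surface from the C++ side – the actual Lego interface calls are omitted, and all names here are illustrative assumptions, not real API:

```cpp
// Hypothetical sketch of Milo's inputs and outputs as plain C++ types.
// The power-level range depends on the Lego firmware, so it's left abstract.
struct MotorCommand {
    int  power;    // motor power level (scale depends on the Lego firmware)
    bool forward;  // direction: true = forward, false = backward
};

struct MiloSensors {
    int  ambientLight;  // forward-pointing light sensor
    int  groundLight;   // downward-pointing light sensor
    bool leftAntenna;   // touch sensor on the left antenna
    bool rightAntenna;  // touch sensor on the right antenna
};

struct MiloActuators {
    MotorCommand leftWheel;   // each front wheel is driven independently
    MotorCommand rightWheel;
};
```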
This entry is a little longer than I wanted, but a few things –
I need to rewrite some code involved in the emulation of neurotrophin release (for axon terminal growth and neuron pruning) – currently each presynaptic neuron assumes it received virtual neurotrophins from the postsynaptic neuron, regardless of whether the postsynaptic neuron fired simultaneously or not. While this doesn’t affect the functionality of individual neurons, on a large scale it defeats the point of a neural network. Unless a neuron receives no input at all, it never dies (so no pruning, no selectivity), and any looping within the network (which is necessary) leads to a cascade overload: since every synaptic pathway strengthens whenever it receives activity, after a few minutes every neuron is firing constantly and out of control – something akin to the computer going through the most massive seizure in history. A sketch of the intended fix is below.
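Roughly, the fix looks like this – a minimal sketch assuming a per-frame coincidence check, with illustrative names rather than the real TFNN source:

```cpp
// Sketch of the planned fix: release virtual neurotrophins back to the
// presynaptic terminal only when the postsynaptic neuron actually fired
// in the same temporal frame. Names are hypothetical, not the TFNN source.
#include <algorithm>

struct Terminal {
    double neurotrophin = 0.0;  // accumulated virtual neurotrophin
};

// Returns true when the terminal has starved and should be pruned.
bool updateTerminal(Terminal& t, bool preFired, bool postFired,
                    double reward, double decay) {
    if (preFired && postFired)
        t.neurotrophin += reward;  // postsynaptic confirmation: reinforce
    else if (preFired)
        t.neurotrophin -= decay;   // fired into silence: starve the terminal
    t.neurotrophin = std::max(t.neurotrophin, 0.0);
    return t.neurotrophin == 0.0;  // no trophic support left: prune candidate
}
```

The key difference from the current code is the `postFired` check – an unconfirmed pathway now weakens instead of strengthening, which should give both pruning and protection against the runaway feedback loops.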
Besides that code rewrite, the rest comes down to testing. The first incarnations of Milo’s brain will be single- and dual-matrix constructions. I will continue to add matrices as time goes on; in the final test case I plan on a thalamus for each of the sensory inputs, connected via a neural bridge to both a rudimentary respective cortex and a proto-amygdala. It is my hope to see neural configurations and activity akin to those observed by Joseph LeDoux et al. in their mapping of neural connectivity from danger input to motor-control output.
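As a rough illustration of that final topology (the matrix names, sizes, and helper types here are arbitrary placeholders, not the real configuration):

```cpp
// Hypothetical sketch of the final test-case topology: one thalamic matrix
// per sensory input, bridged to a rudimentary cortex and a proto-amygdala.
#include <string>
#include <utility>
#include <vector>

struct NeuralMatrix {
    std::string name;
    int         neuronCount;  // arbitrary example sizes below
};

// A neural bridge is just a bundle of synapses between two matrices.
using Bridge = std::pair<NeuralMatrix*, NeuralMatrix*>;

int main() {
    std::vector<NeuralMatrix> matrices = {
        {"light_thalamus", 10000}, {"touch_thalamus", 10000},
        {"light_cortex",   50000}, {"touch_cortex",   50000},
        {"proto_amygdala", 20000},
    };
    std::vector<Bridge> bridges;
    // Each thalamus feeds both its cortex and the shared proto-amygdala,
    // echoing LeDoux's connectivity from danger input toward motor output.
    for (int i = 0; i < 2; ++i) {
        bridges.push_back({&matrices[i], &matrices[i + 2]});  // thalamus -> cortex
        bridges.push_back({&matrices[i], &matrices[4]});      // thalamus -> amygdala
    }
    return 0;
}
```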
Things I will be experimenting with in every matrix configuration: different threshold values and time periods associated with neuron firing. This is akin to adjusting how much glutamate and other neurotransmitters/modulators a neuron secretes; in the virtual neuron world, it is a threshold number of pulses arriving within a given window of time.
I’ll also be adjusting how much synaptic weight is increased or degraded (the application of virtual neurotrophins) – see the sketch below for the knobs involved.
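Put together, the knobs look something like this – a sketch under the assumptions above, with illustrative names and no claim to being the actual TFNN parameter set:

```cpp
// Hypothetical tunable parameters for a virtual neuron that fires when it
// collects a threshold number of pulses within a window of temporal frames.
struct NeuronParams {
    int    firingThreshold;  // pulses required to fire (virtual "glutamate" level)
    int    windowFrames;     // temporal frames in which those pulses must arrive
    double weightReinforce;  // synaptic weight gain per confirmed firing
    double weightDecay;      // synaptic weight loss when unconfirmed
};

struct Neuron {
    int pulses = 0;  // pulses collected in the current window
    int age    = 0;  // frames elapsed in the current window
};

// One temporal frame: accumulate input, fire if over threshold within the window.
bool step(Neuron& n, const NeuronParams& p, int incomingPulses) {
    n.pulses += incomingPulses;
    n.age += 1;
    if (n.pulses >= p.firingThreshold) {  // enough coincident input: fire
        n.pulses = 0;
        n.age = 0;
        return true;
    }
    if (n.age >= p.windowFrames) {        // window expired without firing: reset
        n.pulses = 0;
        n.age = 0;
    }
    return false;
}
```

Sweeping `firingThreshold` and `windowFrames` together is the virtual analogue of tuning neurotransmitter secretion, while `weightReinforce` and `weightDecay` govern the neurotrophin side.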
Eventually I hope to also incorporate tests that speak to Roger Penrose’s theory of quantum mechanics taking place within synaptic or neural activity. This isn’t as important right now, though – especially since neural activity appears to be deterministic, and I don’t believe in indeterminism in quantum mechanics anyway – so I think it’s a needless test. But being a good scientist, I have to rule out all possibilities and not be biased by my own personal views. :)
There’s also going to be an efficiency issue with the Temporal Frame engine once I write the new code to fix the Hebbian plasticity problem I outlined above. The TFNN is already a little slow as it is, and this is going to slow it down a little (maybe a lot) more. Good thing all these new smokin’ 64-bit processors are coming out. :)
Anyway, I had a lot to cover in this first entry to outline what’s going on – I promise not all the entries will be this boring, long, or confusing. I also hope to include more pictures and diagrams! :) Bye for now!
I still say T100 or Skynet is a much better name.