Update for today – I spent a great deal of time today simply porting the code over to work in Visual Studio .NET. (You’d think C++ code is C++ code, but today’s IDEs are so complex that it takes half a day just to get a project set up properly.)
I also downloaded the header and source files for the Lego RCX brick interface and played around with them for a while. There are pros and cons to executing your program on a computer and communicating with the RCX brick directly for input/output. The pro is that you can execute a program of any size, free of the memory and processor constraints of the RCX brick, and you can write it in C++. The con is that the infrared link between the Lego tower and the brick runs at 2400 bps, with an average instruction size of 16 bits, so the latency involved in getting input from the brick to the computer and output from the computer back to the brick is something like 150-180 ms all said and done.
In our scheme, this 150 ms transmission delay is equivalent (in idea, not in actual numbers) to the propagation delay of sensory input from the fingertips up the spinal cord to the brain. 150 ms is a very long time, relatively speaking, which isn’t so good. To drive the point home a bit more – a 150 ms central nervous system is about equivalent to 7 Hz. The computer currently running the brain operates at 2.4 GHz, which is 2,400,000,000 Hz. Just ballparking figures, assuming we have a 100,000 neuron brain and each neuron takes 50 cycles to compute, the brain operates at around 480 Hz, roughly 70 times faster than the nervous system. I’m not worried too much about it, as these are just simple tests in a simple body, not the primary prototype robot. 2400 bps is slow, but I think it will be okay for studying neural pathway organization.
I modified code tonight as well. First I cleaned up some deprecated code and took it out – it was just cluttering my source and I know I’ll never need it again. I fixed the portion of the code that sets the post-synaptic partner of a neuron, letting you pick either another neuron or an output (an integer); this allows you to send output back out of the brain, e.g. to motors or speakers or whatever. I also added some functionality to the matrix code that lets you refer to a specific neuron within the matrix, so the programmer can set a specific threshold value or synaptic weights for a neuron through the matrix that contains it. This already helped me a lot in the experiment I ran tonight –
I created a single-neuron brain for Milo, connected his antenna touch sensors as the input to the neuron, and connected the output to the sound processor within the RCX, specifically triggering the beep code. I set the threshold of the single neuron to 3 pulses, equivalent to 3 “doses” of glutamate (or however you want to look at it – potential charge, etc.). The result? Success – if I tapped Milo’s antenna fewer than 3 times within a specified amount of time, he did nothing, but if I went past that threshold, Milo beeped – his first communication! It may not sound like much, and it’s still very algorithmic, but it shows the neuron itself does indeed function properly when connected to inputs and outputs. A single-neuron brain isn’t much of a brain at all, but it’s definitely a start!
Originally I was planning on connecting two neurons together tomorrow and fixing the Hebbian learning routine, but I think I’m going to wait on that until I get the code going for a graphical depiction of the net. It’s very difficult to work with just the integer values of synaptic weights and messages printed to the screen; it would be much easier to see a graphical, color-coded representation of the ANN.
So, tomorrow I’ll be working on introducing some code that assigns a spatial configuration to the neural network, along with code to show the net graphically.
Still to do: fix the matrix generation code to work with the new spatial coordinates – it’s pretty random right now, loops back on itself in a strange manner, and generally needs to be redone – and replace the static, linear increase in synaptic weight on every neural activation with a proper Hebbian scheme.
Milo is already infinitely smarter than he was yesterday (when he didn’t have a brain at all), so we’ll keep on working upwards!