TFNN – MRI Heaven!

I’ve reached that point now where so many things are coming together that I’m going out of my mind with excitement. In truth I haven’t accomplished MUCH more in the last few days than I have during the rest of the project, but with the graphical view of the neural net, I’m really seeing it come alive like I never have before, and it’s just amazing. More pictures today.

First off, I played around a bit with generating a larger matrix and adjusting threshold and weight values. Here is a 3000 neuron matrix I generated for Milo:

Really beautiful to watch something of that size spinning around in front of you. I added code that allows you to adjust the distance from which you view the neural net – not a big deal, but nice for ease of viewing.

I also wanted to test my theory about the current synaptic modification functionality leading to cascade overload in the net. I generated a 1000 neuron brain for Milo, connected Milo’s touch sensors to 4 input neurons in the corner region, increased the default synaptic weight across the net to speed up the process, turned it on, touched Milo’s antenna, and BOOM – one massive (and, if biologically experienced, deadly) seizure:

It was amazing. At one point just the local region around the input neurons was active. Then 25% of the net, then, as neural pathways looped back and synaptic strength increased, 50% of the net was active, then 75%. After 30 seconds or so there were maybe 1 or 2 orphan neurons that weren’t active, but that was it. This definitely proved that the current synaptic modification code is wrong and needs to follow Hebbian learning.
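The failure mode is easy to reproduce in miniature. Here’s a toy C++ sketch of my own – the ring wiring, cumulative activity, and the exact numbers are all assumptions for illustration, not the actual TFNN code – showing that if every synapse with an active pre-synaptic neuron strengthens unconditionally, saturation is inevitable:

```cpp
#include <vector>

// Toy model of the cascade: activity is cumulative, and every synapse whose
// pre-synaptic neuron is active strengthens regardless of what the
// post-synaptic neuron does. Neuron i synapses onto i+1, i+2, i+3 (ring
// wiring is an assumption for the sketch, not the real TFNN topology).
int activeAfter(int n, int steps) {
    const float threshold = 1.0f, bump = 0.1f;
    // w[i][k-1] is the weight of the synapse from neuron i to neuron i+k
    std::vector<std::vector<float>> w(n, std::vector<float>(3, 0.4f));
    std::vector<bool> active(n, false);
    active[0] = active[1] = active[2] = true;    // seed: the "input" corner
    for (int s = 0; s < steps; ++s) {
        std::vector<bool> next = active;         // once active, stays active
        for (int i = 0; i < n; ++i) {
            float input = 0.0f;
            for (int k = 1; k <= 3; ++k) {
                int pre = (i - k + n) % n;
                if (active[pre]) input += w[pre][k - 1];
            }
            if (input >= threshold) next[i] = true;
        }
        active = next;
        for (int i = 0; i < n; ++i)              // the bug: strengthen every
            if (active[i])                       // synapse whose pre-synaptic
                for (float& x : w[i]) x += bump; // neuron is active
    }
    int count = 0;
    for (bool a : active) if (a) ++count;
    return count;
}
```

Run long enough, the entire net ends up active – exactly the 25%, 50%, 75%, everything progression above.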

Tonight I finished the code that displays synaptic links between the neurons. From the visuals you can’t tell which is the pre-synaptic neuron and which is the post, but that isn’t too important when viewing at this scale anyway. What the visualization does tell you is the strength of each synapse, from its color: a greener synapse means a weak link, while a bluer synapse means a strong link. Here is a picture of a 100 neuron matrix at rest with random synapse assignments:
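The green-to-blue scale can be as simple as a linear interpolation on weight. A sketch – the bounds, names, and the exact ramp are placeholders, not the actual TFNN values:

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Map a synaptic weight onto the green-to-blue scale described above.
// wMin/wMax are whatever weight bounds the net uses; names are mine.
Color synapseColor(float w, float wMin, float wMax) {
    float t = (w - wMin) / (wMax - wMin);
    t = std::max(0.0f, std::min(1.0f, t));
    return {0.0f, 1.0f - t, t};   // weak = pure green, strong = pure blue
}
```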

The great thing is, I was finally VISUALLY able to test out the global synapse degradation scheme. I jacked up the time frame so the synapses degraded much more quickly than normal, turned the brain on and let it just sit, with no input being fed into it. I slowly watched as strong synaptic links weakened over time into green links due to lack of exposure to “virtual neurotrophins”.
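The degradation itself can be captured by an exponential half-life decay – that curve shape is my assumption for the sketch; the actual TFNN degradation function may differ:

```cpp
#include <cmath>

// "Use it or lose it": with no virtual neurotrophins arriving, a synaptic
// weight halves every halfLife seconds. Half-life form is an assumption.
float degrade(float w, float dtSeconds, float halfLife) {
    return w * std::pow(0.5f, dtSeconds / halfLife);
}
```

Jacking up the time frame, as in the experiment above, just means shrinking the half-life.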

Then I regenerated the same net, but this time touched Milo’s antenna and watched in amazement as neurological activity burst to life. It’s hard to tell from this picture, but if you watch the net while it’s spinning around you can tell that activity happens primarily along strong links and seldom along weak links, which is expected under the current scheme:

The one problem with the synapse visualization shows up with anything but small matrices. After a certain point there are so many synapses that they flood the screen, making the visualization not only useless but also extremely slow.
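One possible mitigation – not something implemented yet, just a sketch – is to compute a weight cutoff and draw only the strongest fraction of the synapses:

```cpp
#include <algorithm>
#include <vector>

// Pick a weight cutoff so only the strongest keepFrac of synapses get drawn.
// Takes the vector by value because nth_element reorders it.
float drawCutoff(std::vector<float> w, float keepFrac) {
    std::size_t k = std::size_t(w.size() * keepFrac);
    if (k == 0) k = 1;
    std::nth_element(w.begin(), w.end() - k, w.end());
    return w[w.size() - k];
}
```

The render loop would then skip any synapse below the cutoff, which also cuts the draw time proportionally.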

Pretty neat to look at, though. For a moment I felt like I was looking at real neurons; it kind of sent a shiver up my back.

Anyway, thanks to the visualization code I can safely say that everything so far is working the way it should. Now onto fixing the synapse modification – very excited!!

TFNN – Functional MRI Scans

Today’s a very exciting day – I spent quite a bit of time working on the code for the graphical representation of a specific neural matrix, which will let me study activity much more easily once experimentation starts on a larger scale.

No matter how much work you do, it’s always kind of breathtaking to see a graphical representation of your work – to actually see a result as opposed to just code or theory in your head. This OpenGL (GLUT) graphics code visually represents the network like a virtual kind of MRI: inactive neurons are drawn white, and active neurons red. Currently the synapses are not drawn or represented in any way – though this is planned for the next time I sit down with the code. They will be represented in different shades depending on the strength of the link.
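The two-state coloring boils down to a mapping from activity to an RGB triple. In this sketch I add a fade from red back to white based on time since the last spike – that fade is my own embellishment, since only the two states exist so far:

```cpp
// White at rest, red when firing. The gradual fade from red back to white
// over fadeSeconds is an embellishment, not part of the current code.
struct Color { float r, g, b; };

Color neuronColor(float sinceSpike, float fadeSeconds) {
    float t = sinceSpike / fadeSeconds;
    if (t > 1.0f) t = 1.0f;
    if (t < 0.0f) t = 0.0f;
    return {1.0f, t, t};   // t = 0: red (just fired), t = 1: white (at rest)
}
```

The GLUT display callback would just call something like `glColor3f(c.r, c.g, c.b)` with this color before drawing each neuron.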

Anyway though – PICTURES! I can’t say how excited I was. I generated a 100 neuron neural matrix with a synaptic density of 3 – meaning that each axon had (on average) 3 post-synaptic neurons connected to it. I connected a portion of the neural matrix to the touch sensors on Milo.
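One way to realize a synaptic density like that is a wiring probability per ordered pair of neurons, so each axon ends up with about `density` post-synaptic partners on average. This is a sketch of the idea only – the actual TFNN matrix generator works differently (and, as noted elsewhere, needs a rewrite):

```cpp
#include <random>
#include <vector>

// Random wiring with a target mean out-degree ("synaptic density"): each
// ordered (pre, post) pair gets a synapse with probability density/(n-1),
// so every axon averages ~density post-synaptic partners.
std::vector<std::vector<int>> wireMatrix(int n, double density, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> coin(0.0, 1.0);
    double p = density / (n - 1);
    std::vector<std::vector<int>> targets(n);
    for (int pre = 0; pre < n; ++pre)
        for (int post = 0; post < n; ++post)
            if (pre != post && coin(rng) < p)
                targets[pre].push_back(post);
    return targets;
}
```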

Here is a “virtual functional MRI scan” of Milo’s 100 neuron brain at rest:

The next three pictures are various shots of activity while I’m touching Milo’s antennae:



No synaptic alteration code was in effect during these phases, so no learning – but there are synapses established in this brain, and we see activity! I’m not sure I would call it thought, since there is no plasticity amongst the synaptic connections, but it’s definitely a beautiful thing to see.

I want to finish off the visual representation of synapses next time I sit down with it, then move on to adding the synaptic alteration routines.

TFNN – Another step

Update for today – I spent a great deal of time simply porting the code over to work in Visual Studio .NET. (You’d think C++ code is C++ code, but today’s IDEs are so complex that it takes half a day just to get the project set up properly.)

I also downloaded the header and source files for the Lego RCX brick interface and played around with them for a while. There are pros and cons to executing your program on a computer and communicating directly with the RCX brick for input/output. The pro is that you can execute any size program, free of the memory and processor constraints of the RCX brick, and can write the program in C++. The con is that the infrared transfer rate between the Lego tower and the brick is 2400 bps, with an average instruction size of 16 bits, so the latency involved with input from the brick to the computer and output from the computer to the brick is something like 150-180ms all said and done.

In our scheme, this 150ms transmission delay is equivalent (in idea, not in actual numbers) to the propagation delay of sensory input from the fingertips up the spinal cord to the brain. 150ms is a very long time, relatively speaking, which isn’t so good. To drive the point home a bit more: a 150ms central nervous system runs at about 7 Hz. The computer currently running the brain operates at 2.4 GHz, which is 2576980377.6 Hz. Just ballparking figures, assuming we have a 100,000 neuron brain and each neuron takes 50 cycles to compute, the brain operates at around 515 Hz – about 73 times faster than the nervous system. I’m not too worried about it, as these are just simple tests in a simple body, not the primary prototype robot. 2400 bps is slow, but I think it will be okay for studying neural pathway organization.
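For reference, the arithmetic above as code. Note that I read the 2.4 GHz figure as 2.4 × 2^30 Hz to match the 2576980377.6 number, and that the raw IR transfer time for a 16-bit instruction comes out under 7ms one-way – so most of the observed 150-180ms must be protocol and turnaround overhead rather than bit transfer:

```cpp
// One-way transfer time for one message over the IR link, transfer only
// (no protocol overhead, which dominates in practice).
double rawTransferMs(double bps, int bitsPerMessage) {
    return bitsPerMessage / bps * 1000.0;
}

// Effective "brain frequency": full-network updates per second, assuming a
// fixed cycle cost per neuron.
double brainHz(double cpuHz, int neurons, int cyclesPerNeuron) {
    return cpuHz / (double(neurons) * cyclesPerNeuron);
}
```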

I modified code tonight as well. First I cleaned up some deprecated code and took it out – it was just messing up my source, and I know I’ll never need it again. I fixed the portion of the code that sets the post-synaptic partner of a neuron, allowing you to pick either another neuron or an output (an integer), which lets you send output back out of the brain, e.g. to motors or speakers or whatever. I also added functionality to the matrix code that lets you refer to a specific neuron within the matrix, so the programmer can set specific threshold values or synaptic weights for a neuron through the matrix that contains it. This already helped me a lot in the experiment I ran tonight –

I created a single neuron brain for Milo, connected his antenna touch sensors as the input to the neuron, and connected the output to the sound processor within the RCX, specifically triggering the beep code. I set the threshold of the single neuron to 3 pulses, equivalent to 3 “doses” of glutamate (or however you want to look at it – potential charge, etc). The result? Success – if I tapped Milo’s antenna fewer than 3 times in a specified amount of time, he did nothing, but if I went past that threshold, Milo beeped – his first communication! It may not sound like much, and it’s still very algorithmic, but it shows the neuron itself does indeed function properly when connected to inputs and outputs. A single neuron brain isn’t much of a brain at all, but it’s definitely a start!
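The single-neuron behavior – fire only when enough pulses arrive within a time window – can be sketched like this. The deque of timestamps and time-as-double are my implementation choices for the sketch, not necessarily how TFNN stores pulses:

```cpp
#include <deque>

// One-neuron model matching tonight's test: fires only when `threshold`
// pulses land within `window` time units of each other.
class PulseNeuron {
public:
    PulseNeuron(int threshold, double window)
        : threshold_(threshold), window_(window) {}

    // Register a pulse at time t; returns true if the neuron fires.
    bool pulse(double t) {
        times_.push_back(t);
        while (!times_.empty() && t - times_.front() > window_)
            times_.pop_front();                 // forget pulses outside the window
        return static_cast<int>(times_.size()) >= threshold_;
    }

private:
    int threshold_;
    double window_;
    std::deque<double> times_;
};
```

With `PulseNeuron n(3, 1.0)`, three taps inside one time unit fire the neuron (the beep); taps spread further apart do nothing.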

Originally I was planning on connecting two neurons together tomorrow and fixing the Hebbian learning routine, but I think I’m going to wait on that until I get the code going for a graphical depiction of the net. It’s very difficult to work with just the integer values of synaptic weights and messages printed to the screen; it would be much easier to see a graphical, color-coordinated representation of the ANN.

So, tomorrow I’ll be working on introducing some code that assigns a spatial configuration to the neural network, along with code to show the net graphically.
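A minimal spatial assignment is just a cubic lattice – an assumption for illustration, since any layout scheme that gives each neuron 3D coordinates would do:

```cpp
// Lay neurons out on a cubic lattice so the net has 3D coordinates to
// render. `side` is the lattice edge length in neurons, `spacing` the
// distance between adjacent lattice points.
struct Vec3 { float x, y, z; };

Vec3 gridPosition(int index, int side, float spacing) {
    return { float(index % side) * spacing,
             float((index / side) % side) * spacing,
             float(index / (side * side)) * spacing };
}
```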

Still to do: fix the matrix generation code (to work with the new spatial coordinates – it’s pretty random and crap right now, and it loops back on itself in a strange manner; it’s generally bad and needs to be redone), and replace the static/linear increase in synaptic weight on every neural activation with code for a Hebbian scheme.
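The Hebbian replacement for that static/linear increase might look something like this – the rates, the weakening rule for uncorrelated firing, and the zero clamp are all placeholders, not TFNN’s actual scheme:

```cpp
// Fire together, wire together: strengthen only when pre- and post-synaptic
// neurons fire in the same window; uncorrelated activity weakens the link.
float hebbianWeight(float w, bool preFired, bool postFired,
                    float learnRate, float decayRate) {
    if (preFired && postFired)      w += learnRate;  // coincident: strengthen
    else if (preFired || postFired) w -= decayRate;  // uncorrelated: weaken
    if (w < 0.0f) w = 0.0f;                          // never negative
    return w;
}
```

Unlike the current code, a synapse here can never strengthen from pre-synaptic activity alone, which is exactly what prevents the runaway cascade.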

Milo is already infinitely smarter than yesterday (when he didn’t have a brain at all), so we’ll keep working upwards!

TFNN – Milo

Before I jump into today’s entry, I realized that I didn’t actually describe what the goal of TFNN is. It’s been a dream of mine for a little over 10 years now to create true Artificial Intelligence. Not a system of algorithms and heuristics that fools a spectator into thinking the computer is intelligent, but actual intelligence – bringing ‘whatever’ it is to the electronic universe that nature has granted the animal kingdom for so long. It is the holy grail of Computer Science, AI subfields, and Cognitive Science – attempted by many brilliant people, and ultimately accomplished by no one as of yet. Since the work of Von Neumann, Turing, and the dawn of the programmable machine in the 1950s, there have been many theories on how to accomplish this task. They range from expert systems, which draw on large databases and extensive rules of logic to yield a result, to artificial neural networks that model their biological counterparts found in nature. The TFNN project chooses the latter method, this author believing that mother nature already has the design down pretty pat – there’s no need to reinvent the paradigmatic wheel, so to speak.

The most important piece, the software that creates a virtual neural network within a computer, took me a little over half a decade to write, but it’s been done for a while now. The problem is, though I can test the functionality of specific neurons and small circuits of neurons (which work beautifully, after A LOT of frustration), I can’t test Hebbian learning (Hebb’s ‘fire together, wire together’ postulate, a large basis of both intrinsic and extrinsic memory) across a large neural matrix (the TFNN term for a neural nucleus, a ganglion, any local cluster of neurons). Without external stimulus and the ability to somehow manipulate the environment (no matter how small that manipulation may be), it is impossible to study any meaningful organization across the neural matrices. A neural network’s ability to mold itself to the universe around it depends on meaningful and continuous sensory input. In other words, I have the brain, but until now I had no body to test it out in.

Right now we’re working with a brilliant guy in Minnesota who received a grant through his university to build the prototype robot that will house a large-scale implementation of TFNN. That is still a ways away, though, and I’m itching to get some testing and experimentation done on a smaller scale – building from 10,000 to 100,000 neurons, from 10,000 to a million synapses. At the school district where I work we do a lot with Lego robotics, and I realized this was the perfect opportunity – Lego robots are a snap to build, have access to simple sensory inputs, and are easily interfaced from a C++ environment.

So I introduce Milo! (I seem to name everything this, I’m not quite sure why. ;))

Milo is pretty quick and dirty – no gearing or manipulative appendages. BUT, that’s not very important, since the test neural networks he will run will start out very rudimentary – more on that in a second.

What Milo does have is the ability to manipulate the motor in each front wheel independently, controlling both power (speed) and direction (forward or backward). Milo also has forward- and downward-pointing light sensors – one detects ambient light while the other scans the ground. Finally, Milo has two large antennae in the front that are connected to touch sensors.

This entry is a little longer than I wanted, but a few things –

I need to rewrite some code involved with the emulation of neurotrophin release (for axon terminal growth and pruning neurons). Currently each pre-synaptic neuron assumes it received virtual neurotrophins from the post-synaptic neuron, regardless of whether the post-synaptic neuron fired simultaneously. While this doesn’t affect individual neuron functionality, on a large scale it defeats the point of a neural network: unless a neuron receives no input at all, it never dies (so no pruning, no selectivity), and any looping within the network (which is necessary) leads to a cascade overload, since every synaptic pathway strengthens whenever it carries activity. After a few minutes every neuron is firing constantly and out of control – something akin to the computer going through the most massive seizure in history.
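The pruning half of the fix could be as simple as culling synapses whose weights have decayed below a floor, once neurotrophin delivery is properly gated on post-synaptic firing. A sketch only – the flat weight vector and the floor value are assumptions:

```cpp
#include <algorithm>
#include <vector>

// Selectivity sketch: synapses that never earn virtual neurotrophins decay,
// and anything below floorWeight gets pruned outright (erase-remove idiom).
void pruneSynapses(std::vector<float>& weights, float floorWeight) {
    weights.erase(std::remove_if(weights.begin(), weights.end(),
                                 [floorWeight](float w) { return w < floorWeight; }),
                  weights.end());
}
```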

Besides that code rewrite, the rest is down to testing. The first incarnations of Milo’s brain will be single and dual matrix constructions. I will continue to add matrices as time goes on, in the final test case I plan on a thalamus for each of the sensory inputs connected via a neural bridge to both a rudimentary respective cortex and a proto-amygdala. It is my hope to see neural configurations and activity akin to that observed by Joseph LeDoux et al in their neural connectivity mapping of danger input to motor-control output.

Things I will be experimenting with in every matrix configuration: different threshold values and time periods associated with neuron firing. This is akin to adjusting how much glutamate and other neurotransmitters/modulators a neuron secretes. In the virtual neuron world, it is a threshold number of pulses arriving within an arbitrary amount of time.

Also, adjusting how much to increase/degrade synaptic weight (application of virtual neurotrophins).

Eventually I hope to also incorporate tests that speak toward Roger Penrose’s theory of quantum mechanics taking place within synaptic or neural activity. This isn’t as important right now, especially since neural activity appears to be deterministic, and I don’t believe in indeterminism in quantum mechanics anyway, so I think it’s a needless test. But being a good scientist, I have to rule out all possibilities and not be biased by my own personal thoughts. ;)

There’s also going to be an efficiency issue with the Temporal Frame engine once I write the new code to fix the Hebbian plasticity problem I outlined above. The TFNN is already a little slow, and this is going to slow it down a little (maybe a lot) more. Good thing all these new smokin’ 64-bit processors are coming out. ;)

Anyway, I had a lot to talk about in the beginning to outline what’s going on – I promise not all the entries will be this long, boring, or confusing. I also hope for more pictures and diagrams! ;) Bye for now!

TFNN – In the Beginning…

Hello all reading – I’ve decided to start this blog for a few reasons. First, I thought it would be a neat way to share information surrounding TFNN (my AI research project, the Temporal Frame Neural Network) with anyone interested.

Also, there is such a large amount of information regarding the TFNN project – from planning to implementation to experimentation to results, plus changes along the way, observations, realizations – and up till now I’ve been keeping track of it privately in scattered places. I’d like one informal place where I record my thoughts and experiences as I go, and I thought this would be a great medium. Some of it I can’t share publicly due to the patent process, but unless you’re really into this stuff you wouldn’t find those aspects very exciting anyway.

I think it’s just all very exciting stuff and I wanted to share it with everyone! If you have any comments or questions about what you read, feel free to respond to any entries. Thanks!