If you’re anything like me, or like many of the programmers and hardware hackers out there, you have a deep urge to constantly be creating something. While this presents the opportunity to try new and fun stuff, it can also be a curse, in that sometimes it’s hard to complete projects before jumping into a new one. I constantly have this issue, and in general I’ve tried to be good about not starting a new project before completing my existing one. And if you’ve known me for any period of time, you know there is one project that is the big one for me – the one that I’ve been working on for years, and the one that really drives me as a computer scientist – my quest to fully emulate the biological neural network (easy, right?). Well, after years of constantly putting it aside while working on other projects, for the last 4 months I’ve been very good about focusing on it.
Goodbye TFNN, Hello SynthNet
The problem with emulating the biological brain is that it is extremely complicated, to say the least, and there is still a vast amount about neuroscience we don’t understand. However – there is also a huge amount of information we DO understand. I’ve had the disadvantage of not having a formal education in the biological sciences, let alone the specifics of neurophysiology. Because of that, the process of emulating it has been difficult for me. I have had to do a lot of catch-up research to match what the average graduate would know. This is very apparent looking at the work I’ve done now as compared to earlier versions of the emulator (TFNN) – you can see as much going back to older blog entries on this site. I am by no means an expert now, but I was even less of one back then. In the last year or two, I’ve really hit the books and tried to learn everything I can. And in doing so, I’ve learned that I got so much wrong before that it was easier to start over than to try to repair what I had. And with that comes the newest revision of the emulator, SynthNet.
What SynthNet Does So Far
At this point, SynthNet does the following:
- Emulates the major cellular structures, such as the neuron soma, dendrites and dendritic arbors, axons, terminals/boutons, synapses, etc. – each with the full functionality (when applicable) of the following:
- Physical properties such as position, surface area, and cellular membranes.
- The ability to contain substances, including ions such as Sodium, Potassium, Chloride, and Calcium, as well as neurotransmitters and modulators, such as Glutamate, Serotonin, Norepinephrine, etc, both intracellular and extracellular.
- For all substances, the current concentration (with resolution to nanomoles), the homeostatic concentration, and, for ions, the valence are stored
- Cellular membranes contain channels, both to the extracellular space, as well as gap junctions to the intracellular space of other cellular structures.
- Each channel stores its permeability, which substance it is permeable to, and tag information for synaptic tagging or other second-messenger processes.
- Both leak channels and active pumps are supported
- Channels can also have gates, including voltage gates, inactivation gates, and ligand gates. Voltage gates activate at a specified membrane potential, inactivation gates close either voltage or ligand gates after a certain amount of time, and ligand gates open in response to a specific concentration of a specific substance
- Membrane voltage is calculated using the Goldman-Hodgkin-Katz voltage equation, modified to include divalent ions (this may need a little tweaking, though – I may convert it to use Spangler’s equation from Ala J Med Sci, 9:218-223, 1972)
- Ion flux across the membrane is calculated using the Goldman-Hodgkin-Katz Flux Equation, with a membrane surface area coefficient.
- All substance flux is virtually processed in an N+1 parallel fashion across all neurons simultaneously
- The emulation of myelin sheaths via the elimination of channels/permeability in specific axonal segments, and an increase in intracellular trans-segment permeability across axonal segments.
- CSV export functionality for analysis within Excel, LiveGraph, or other tools
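The membrane-voltage calculation above can be sketched in a few lines. This is a minimal, monovalent-ion version of the GHK voltage equation (the divalent-ion extension mentioned in the list is omitted); the concentrations and relative permeabilities are textbook values for a mammalian neuron at rest, not values pulled from SynthNet itself:

```python
import math

# Physical constants
R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # body temperature, K

def ghk_voltage(p_k, p_na, p_cl, k_o, k_i, na_o, na_i, cl_o, cl_i):
    """Goldman-Hodgkin-Katz voltage equation for K+, Na+, and Cl-.

    Note that Cl-, being an anion, has its inside/outside
    concentrations swapped relative to the cations.
    """
    num = p_k * k_o + p_na * na_o + p_cl * cl_i
    den = p_k * k_i + p_na * na_i + p_cl * cl_o
    return (R * T / F) * math.log(num / den)  # volts

# Typical mammalian resting concentrations (mM); permeabilities relative to K+
vm = ghk_voltage(p_k=1.0, p_na=0.05, p_cl=0.45,
                 k_o=5.0, k_i=140.0,
                 na_o=145.0, na_i=12.0,
                 cl_o=110.0, cl_i=10.0)
print(f"Resting potential: {vm * 1000:.1f} mV")  # roughly -65 mV
```

With these values the equation lands near the familiar resting potential of a mammalian neuron, which is a handy sanity check for any implementation.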
So at this point, it handles ions and substances as a whole pretty well, calculating flux along a substance’s electrochemical gradient fairly accurately (for our purposes). We can set up typical ion concentrations for a mammalian neuron, set up leak, pump, and voltage channels, and trigger action potentials with the expected results (still tweaking some of the values).
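The per-ion flux calculation mentioned above can likewise be sketched with the GHK flux equation, scaled by a surface-area coefficient. This is a hypothetical illustration using textbook constants, not SynthNet’s internals; the permeability value is arbitrary:

```python
import math

R, F, T = 8.314, 96485.0, 310.0  # J/(mol*K), C/mol, K

def ghk_flux(p, z, vm, c_in, c_out, area=1.0):
    """GHK flux equation: molar flux of one ion species across the membrane.

    p      permeability (m/s)
    z      ion valence
    vm     membrane potential (V)
    c_in   intracellular concentration (mol/m^3)
    c_out  extracellular concentration (mol/m^3)
    area   membrane surface area coefficient (m^2)

    Sign convention: positive = net efflux (inside -> outside).
    """
    u = z * vm * F / (R * T)
    if abs(u) < 1e-9:  # avoid 0/0 as vm -> 0; limit is simple diffusion
        return p * (c_in - c_out) * area
    return area * p * u * (c_in - c_out * math.exp(-u)) / (1.0 - math.exp(-u))

# At a resting potential of -65 mV, sodium's electrochemical gradient
# points inward, so the computed flux is negative (influx).
flux_na = ghk_flux(p=1e-8, z=1, vm=-0.065, c_in=12.0, c_out=145.0)
print(flux_na)  # negative => Na+ influx
```

Checking the sign of the flux for each ion at rest (Na+ in, K+ out) is a quick way to catch convention errors in this kind of code.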
To Do:
What we don’t have yet, but will have:
- The regulation of extracellular substances via astroglia. This is the next thing I’m working on.
- Any kind of protein synthesis or activation, such as kinase phosphorylation. After I get some of the glial cell work done, this will be the next big addition to the emulator. This is critical for the mediation of Hebbian plasticity and other types of learning. The genetic engine of the emulator will allow any sequence of instructions to be run under the specified protein activation – so this will cover everything from the addition of AMPA receptors due to NMDA receptor activation, to neurite growth due to nitric oxide as a retrograde messenger, and the entire neurogenesis process as a whole. Very excited to get started on this.
- Visualization engine, as a kind of virtual fMRI, for the purposes of graphical analysis
- A separate engine to mutate genetic code across generations for the purposes of natural selection (more on this later, a whole different phase of the project)
- A lot of other details, those are the biggies for now
any susceptibility to myelin sheath disorders? Axon hillocks intact still?
Hi Kim – first off, thanks for the interest! I’ve changed the oligodendrocyte/Schwann cell functionality slightly from what’s documented above. I’m trying to model the electrotonic properties of the cell membrane a little more closely than I had been, so now I use the cable equation and incorporate capacitance (I’m in the middle of implementing this now). So axon myelination is emulated by increasing the resistance across the membrane and decreasing the capacitance, akin to what real myelin does.
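The resistance/capacitance effect described here can be illustrated with the passive cable constants. This is a hypothetical sketch (not SynthNet code) with made-up illustrative parameters: myelination is modeled as multiplying membrane resistance and dividing membrane capacitance by the same factor, which stretches the space constant while leaving the membrane time constant unchanged – so the passive signal spreads farther between nodes without charging more slowly:

```python
import math

def cable_constants(r_m, r_i, c_m):
    """Passive cable constants per unit length of axon.

    r_m  membrane resistance (ohm * cm)
    r_i  axial (intracellular) resistance (ohm / cm)
    c_m  membrane capacitance (F / cm)
    """
    lam = math.sqrt(r_m / r_i)  # space constant: how far voltage spreads
    tau = r_m * c_m             # time constant: how fast the membrane charges
    return lam, tau

# Bare axon segment (illustrative numbers, not measured values)
lam0, tau0 = cable_constants(r_m=2.0e4, r_i=1.0e2, c_m=1.0e-6)

# Myelinated segment: resistance up, capacitance down by the same factor
k = 100.0
lam1, tau1 = cable_constants(r_m=2.0e4 * k, r_i=1.0e2, c_m=1.0e-6 / k)

print(lam1 / lam0)  # sqrt(k) = 10x farther passive spread
print(tau1 / tau0)  # 1.0: charging time unchanged
```

That combination – farther spread at the same speed – is exactly why myelinated axons conduct so much faster between nodes of Ranvier.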
Right now no dynamic changes happen to the system outside of ion/substance transfer – so currently they are not susceptible to sheath disorders. However, in the future, the system will emulate protein kinases (I’m hoping to be ready for this in about a year or so), and it could definitely be possible for activation to trigger “deterioration”, e.g. changes to the resistance and capacitance values of the cell membrane, which in effect would be akin to deterioration of the myelin-producing neuroglia.
Axon hillocks are functioning well as we speak – they’re initiating action potentials really well (I have some great new graphs of these I need to post; I need to update the blog more often).
Thanks again! 🙂
Thank YOU for the response 🙂 Please keep me in the loop… I will be following your research. 🙂
Hello, and amazing work! I’m a biomedical engineer, and I always wanted someone capable enough to start a networked artificial neural network that ran online and grew slowly over the years into a pure AI, because I never believed a single piece of code running on one system could ever become an AI.
Anyway, I was wondering if you will set up some standard protocols of updating on shared systems before you let your model start being uploaded onto shared networks?
The reason I ask is that I was skeptical of implementing natural selection so early in the model, as I’m sure you know it’s not a perfect model yet. And if the basic algorithm of natural selection (or any other process, for that matter) were to be changed, you could change it on all systems in the network. I’m worried because I can somehow see your project skyrocketing in the next two years, and you will not be able to take steps back.
Hi Nitish – thanks for your kind words about the project! You definitely bring up some good points regarding standardization of protocols and methods. I think SynthNet (if successful) could have a few possible applications – one obviously being a distributed system like you described, which is one of the reasons I designed the TCP/IP engine into it.
There are a few more pieces that I’m going to be fleshing out before focusing on the distributed portion of it, but once I do focus on that piece, standardization will be an important part of it – I don’t think the engineer/OCD in me would allow me NOT to. 🙂
Thanks again for taking an interest – out of curiosity, are you still in school or are you in the biomed industry now? That’s another amazing field with so much potential in it.