Neural Logic Gates
As you may have read, I’ve been thinking about some crazy ideas related to neurons lately. So I thought I should dig in a bit more and get comfortable with the basic behavior of how they function and process information.
As someone with a background in computer science, mapping neurons to logic gates is a natural way to get started. So, my goal with this post is to simply get a feel for how neurons can act as logic gates. I have no idea if neurons actually act as logic gates inside a human brain (maybe they do?), I just think it’s fun that you can get Turing completeness with a bunch of neurons.
Creating our Neuron Model
In this post, we’re going to be working with a very simplified model of a neuron. It’s called the “Leaky Integrate-and-Fire” (LIF) neuron model.
This is going to sound a bit weird, but in this model, we can think of a single neuron as a system of devices that manipulate water. Each neuron consists of input pipes, where each pipe is connected to a faucet. Then, the faucets are pouring water into a leaky bucket with a small hole in it. When the bucket becomes too full, it tips over and pours the contents into a funnel which leads to a water pipe input of the next neuron. If the faucets don’t fill up the bucket fast enough, the bucket will drain without tipping over.
>=== PIPE A ===o           o=== PIPE B ===<
            FAUCET        FAUCET
               A             B
               |             |
               |             |
               v             v
           .                     .
           |                     |
           |    LEAKY BUCKET     .  <- fill threshold
           |                     |     (tips over into funnel)
           '===.______._____====='
                   ^
                   |
                 pivot
                   |
                   v
               .-===-.
              /       \
             /  FUNNEL \
             \         /
              '._____.'
                 ||
                 ||
          =======||========
          |   NEXT PIPE   |
          =================
Now, let’s take the same diagram and use fancy neuroscience words for all of the parts. The pipes become axons, the faucets become synapses, and the leaky bucket becomes the membrane. When the bucket tips over, we call that spiking. Conceptually, it works the same, just with different names for each part.
>=== AXON A ===o           o=== AXON B ===<
           SYNAPSE       SYNAPSE
               A             B
               |             |
               |             |
               v             v
           .                     .
           |                     |
           |      MEMBRANE       .  <- fill threshold
           |                     |     (tips over into axon)
           '===.______._____====='
                   ^
                   |
                 pivot
                   |
                   v
               .-===-.
              /       \
             /  AXON   \
             \         /
              '._____.'
                 ||
                 ||
          =======||========
          |      AXON     |
          =================
Instead of water, axons transmit electricity. Instead of a faucet handle, we have synaptic weights, and instead of filling the bucket with water, we fill it with electrical potential. When the bucket “tips” over and fires the electrical charge into the next neuron input, we have a spike.
Remember that the membrane is leaky. If the charge coming from the synapses doesn’t flow fast enough, the membrane will slowly leak without spiking.
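The mechanics above translate directly into code. Here’s a minimal sketch of the LIF update loop in Python; the function name and every constant (the 20ms time constant, the threshold of 1.0, the 1ms time step) are illustrative choices of mine, not values taken from the simulations in this post.

```python
# A minimal leaky integrate-and-fire loop. All constants here
# (time constant, threshold, time step) are illustrative.
def simulate_lif(inputs, weights, threshold=1.0, tau=20.0,
                 dt=1.0, t_max=100.0):
    """Simulate one LIF neuron and return its output spike times (ms).

    inputs:  one list of spike times (ms) per synapse
    weights: one synaptic weight per synapse (the faucet handles)
    """
    v = 0.0            # membrane charge: the water level in the bucket
    out = []
    t = 0.0
    while t < t_max:
        v -= (v / tau) * dt                      # the leak
        for times, w in zip(inputs, weights):
            if any(abs(t - s) < dt / 2 for s in times):
                v += w                           # a faucet pours in
        if v >= threshold:                       # the bucket tips over
            out.append(t)
            v = 0.0                              # tipping empties it
        t += dt
    return out
```

Each time step, the charge decays a little, incoming spikes pour in their weights, and crossing the threshold emits a spike and empties the bucket.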
Now we’ll walk through a few gates and work through how we can build each one as a neuron.
Here’s a cheat sheet for the visualizations you’ll see in the rest of this post:
- tA and tB: Input spike times (when water arrives out of the faucets)
- Blue line: v(t): Membrane charge (current water level)
- Purple dashed line: Threshold for spiking (tip the bucket over)
- Cyan and orange vertical lines: A and B input spikes (Faucets are on or off)
- Red vertical line: Output spike (water poured into funnel)
- Yellow vertical line: bias spike for NAND, NOR, NOT (explained later)
AND
Let’s start off with a basic AND gate. The idea here is to only spike the output if both A and B have a previous neuron spiking into them at the same time.
- 0,0 → no spike
- 0,1 → bump but no spike
- 1,0 → bump but no spike
- 1,1 → one spike
When only one of A and B spikes, v partially rises, then the membrane leaks away and the neuron never spikes. However, if both A and B spike together, they fill the membrane up to the threshold and cause the neuron to spike.
When both spikes come in at exactly the same time, it’s hard to see what’s happening. For better intuition, try increasing the delay of the B spike (tB) by a few ms. Observe the leak in v after A spikes, then observe how the B spike pushes v over the threshold and causes the neuron to spike.
Now try increasing tB to around 30ms. Once the delay between tA and tB becomes too great, the membrane leaks too much and the neuron never spikes. Spiking behavior is time-dependent.
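Here’s a self-contained sketch of this AND behavior using a LIF update loop. The numbers are mine (0.6 per input against a threshold of 1.0, with a 20ms leak), not the interactive simulation’s values.

```python
# AND as a LIF neuron: each input alone is sub-threshold;
# two together cross it. All constants are illustrative.
def lif(inputs, weights, threshold=1.0, tau=20.0, dt=1.0, t_max=100.0):
    """One LIF neuron; returns its output spike times in ms."""
    v, out, t = 0.0, [], 0.0
    while t < t_max:
        v -= (v / tau) * dt                      # the leak
        for times, w in zip(inputs, weights):
            if any(abs(t - s) < dt / 2 for s in times):
                v += w                           # a faucet pours in
        if v >= threshold:                       # bucket tips over
            out.append(t)
            v = 0.0
        t += dt
    return out

w = [0.6, 0.6]
print(lif([[10.0], []],     w))      # 1,0 -> []      bump, no spike
print(lif([[10.0], [10.0]], w))      # 1,1 -> [10.0]  one spike

# Timing matters: a small delay on B still fires...
print(lif([[10.0], [15.0]], w))      # -> [15.0]
# ...but a long delay lets the leak win.
print(lif([[10.0], [40.0]], w))      # -> []
```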
So, what’s really happening here? Why are we able to compute AND with a spike? The answer lies in the concept of linear separability. Let’s graph the inputs of the AND gate and separate them into spiking and non-spiking outputs.
As you can see, we can draw a simple line as our decision boundary between the inputs that spike and the inputs that don’t. For a neuron, that decision boundary is the same threshold we use to decide whether the membrane will spike or not.
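Strip away the time dynamics and the decision is just a weighted sum compared against a threshold. A tiny sketch, with weight and threshold values I picked for illustration:

```python
# AND as a linear decision boundary: spike iff w_a*a + w_b*b >= theta.
w_a, w_b, theta = 1.0, 1.0, 1.5   # illustrative values
for a in (0, 1):
    for b in (0, 1):
        fires = a * w_a + b * w_b >= theta
        print(a, b, "->", "spike" if fires else "no spike")
```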
OR
Let’s walk through another example, this time using an OR gate.
- 0,0 → no spike
- 0,1 → one spike
- 1,0 → one spike
- 1,1 → one spike (not two)
So, as you can see, it has the same basic mechanics as the AND gate, except that a single input is now large enough to cross the threshold by itself. Firing drains the membrane bucket, which prevents a second input from causing a second spike.
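A sketch of the OR case with simultaneous inputs. Since the spikes land together, we can check the bucket at that single instant; the weights (1.2 per input vs. a threshold of 1.0) are illustrative values of mine.

```python
# OR: each input alone crosses the threshold.
w_a, w_b, threshold = 1.2, 1.2, 1.0
for a in (0, 1):
    for b in (0, 1):
        v = a * w_a + b * w_b
        # Crossing the threshold tips (and empties) the bucket once,
        # so even v = 2.4 yields a single spike, not two.
        n_spikes = 1 if v >= threshold else 0
        print(a, b, "->", n_spikes, "spike(s)")
```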
We can also draw out the linear separability using the same method used for the AND gate.
NAND
The NAND gate gets a little tricky. To model a NAND gate, we need to introduce the concept of bias. Bias is simply a scheduled spike that the neuron performs on itself without external input.
- 0,0 → one spike
- 0,1 → one spike
- 1,0 → one spike
- 1,1 → no spike
In the simulation below, we’ve scheduled a bias spike at 60ms, so the neuron will fire at 60ms all on its own. The synaptic faucet weights are then set to negative (yeah, this is where our water analogy breaks down). So, when both inputs are applied, they cancel out the bias spike and prevent (inhibit) the scheduled spike from taking place; a single input isn’t enough to do so.
Play around with the simulation to get an intuition for how it works.
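Here’s a sketch of NAND with a bias, letting the inputs land at the same instant as the bias spike so the time dynamics drop out. The numbers (+1.5 bias, -0.4 per input, threshold 1.0) are illustrative values of mine: one veto isn’t enough to kill the bias, but two are.

```python
# NAND via a bias: each input subtracts from the scheduled bias spike.
bias, w_in, threshold = 1.5, -0.4, 1.0
for a in (0, 1):
    for b in (0, 1):
        v = bias + (a + b) * w_in
        print(a, b, "->", "spike" if v >= threshold else "no spike")
```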
Just like our previous gates, we can draw out how NAND is linearly separable.
NOR
NOR is similar to NAND and also uses a bias. Here, inputs act as vetoes: a single input is enough to cancel out the bias.
- 0,0 → one spike
- 0,1 → no spike
- 1,0 → no spike
- 1,1 → no spike
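The same bias sketch works for NOR; the only change is that each input’s weight (-1.2) now fully cancels the +1.2 bias on its own. Values are illustrative, not from the simulation.

```python
# NOR via a bias: a single veto cancels the bias spike entirely.
bias, w_in, threshold = 1.2, -1.2, 1.0
for a in (0, 1):
    for b in (0, 1):
        v = bias + (a + b) * w_in
        print(a, b, "->", "spike" if v >= threshold else "no spike")
```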
In the same way as before, we can linearly separate the inputs.
XOR
Up until now, we’ve been getting our neuron to separate out spikes from non-spikes by having a threshold to cross. But how does that work with XOR?
- 0,0 → 0
- 0,1 → 1
- 1,0 → 1
- 1,1 → 0
Let’s graph out an XOR gate.
How can we draw a line between the inputs that spike and the ones that don’t? Try to draw a straight line on the graph that separates the inputs that should spike from the ones that shouldn’t, and you’ll see the problem.
The answer is that we can’t. XOR is not linearly separable, so it cannot be modeled with a single LIF neuron.
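As a sanity check, we can brute-force a grid of weights and thresholds and confirm that none of them reproduces the XOR truth table. (The grid range and spacing are arbitrary choices of mine, and a grid search is an illustration, not a proof; the real argument is the picture above.)

```python
# Brute-force check: no single linear threshold unit computes XOR.
import itertools

def computes_xor(w_a, w_b, theta):
    """True iff this weight/threshold pair matches XOR on all inputs."""
    return all((a * w_a + b * w_b >= theta) == (a != b)
               for a, b in itertools.product((0, 1), repeat=2))

grid = [x / 4 for x in range(-8, 9)]   # -2.0 .. 2.0 in steps of 0.25
found = any(computes_xor(wa, wb, th)
            for wa in grid for wb in grid for th in grid)
print(found)  # False
```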
However, that’s not how real neurons work. A single biological neuron can in fact model an XOR gate!
Stay tuned for my next blog post where we’ll dive into the world of dendrites. We’ll talk about how we can extend our LIF model with dendritic compartments to simulate XOR gates and more closely model biological neurons.