
Timing Is The Bit

In the last post on neural logic gates, I treated a spike like a yes or no question. If a neuron crossed the threshold in time, we called that a one. If it didn’t, we called that a zero. Linear separability did the heavy lifting: put the threshold here and you get an AND, move it there and you get an OR. Add a scheduled nudge and you get NAND and NOR. XOR stood off to the side like Sonic tapping his foot, waiting for you to play again.

Today, let’s flip that story around and see what happens when the spike doesn’t represent the yes or no. Instead of asking “did the neuron spike?” we ask “when did it spike?”. Time becomes the bit. We still lean on the leaky integrator as it climbs toward a threshold, but the threshold becomes flow control.

This has some mind-blowing implications. During computation, information is carried in the latency between spikes. The spikes act as a sort of transport layer that provides backpressure and persistence (via synaptic state). If you tried to represent memory with nothing but discrete spikes, you’d run out of bandwidth fast.

Imagine each logical operation not as a box with a clock at the top and wires on the sides, but as a little play with three acts. The curtain rises when the first input arrives. That first arrival defines a local “now” (let’s call it T0). The moment T0 happens, two processes begin backstage. One is a short window where we keep listening to see if a second input also arrived early. The other is a slow, steady climb toward a fallback decision that will happen later on (whether we like it or not).

If the earlier blog post made you think in terms of amplitude (how high the water in the bucket got), this one wants you to think in terms of latency. We carve up a token’s lifetime (a token is one round of evaluation for a neuron) into two slots, an early slot and a late slot, with a bit of space in between so they don’t smudge together. If the spike lands in the early slot, we get a one. If it lands in the late slot, that’s a zero. Value is in placement, not existence.

Here’s an example for the visual learners. T0 starts at the first arrival. Time passes from left to right. E is the early slot. L is the late slot. This defines a “line”.

T0 |===== E =====|--> guard -->|===== L =====|
          ^                           ^
     "one" spike                "zero" spike

There are two questions to answer between T0 and the end of that late slot. “Did we see enough evidence to justify an early output?” and “If not, did we let the late default happen (exactly once)?” Both questions are answered with a threshold and a reset.

The coincidence detector opens at T0 and listens for a short window. If two inputs fall into that window, their combined effect is large enough to cross the control threshold that authorizes the early event on one output line (and prevents it on another). If only one input shows up early, a different control threshold gets crossed, authorizing a different early event. If nobody shows up early, nobody authorizes anything and the late default occurs.

Meanwhile, the late leaky integrators are climbing in the background (one per output line), each headed for its own threshold at T0 plus the separation Δ between early and late (Δ is the interval from the first input arrival T0 to the scheduled late default spike from the integrator). If an early output fires for a line, that line’s integrator is cancelled. If not, the default fires at T0 + Δ and we call that a zero.
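
Here is one way to sketch that cancellable fallback in Python. The `LateDefault` class is hypothetical and abstracts the climb down to the only property that matters here: it crosses threshold at T0 + Δ unless the early path cancels it first.

```python
class LateDefault:
    """Background climb for one output line: fires once at t0 + delta unless cancelled."""

    def __init__(self, t0, delta):
        self.fire_at = t0 + delta   # scheduled late default (the "zero" spike)
        self.cancelled = False
        self.fired = False

    def cancel(self):
        """Called when this line's early output fires instead."""
        self.cancelled = True

    def step(self, now):
        """Return True exactly once, at the scheduled time, if not cancelled."""
        if self.cancelled or self.fired or now < self.fire_at:
            return False
        self.fired = True
        return True
```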

You can think of this as an inversion of duty. In the neural logic gates concept, thresholds drew a line through a cloud of amplitudes and declared “true on this side, false on that side”. In the new story, thresholds inspect the temporal patterns and decide who gets to speak early. They are not the words, they are the stage manager. The words are the when.

Building a Time Coded Half Adder

The half adder is a nice, small play to stage with these props. In our half adder, A and B each send exactly one spike per token. If an input intends a one, it arrives early. If it intends a zero, it arrives late. The very first of the two to arrive sets T0. From that moment we start both the coincidence listening and the late climbs for sum and carry. During a brief interval right after T0 we ask if both A and B were early together.

In a LIF neuron you can implement that with two excitatory bumps and a threshold that sits high enough that one bump won’t do but two will (if this is confusing, see the neuron based AND gate simulator in my previous post). If both were early, carry is allowed to speak early and sum is told to wait. If exactly one was early, sum is allowed to speak early and carry is told to wait. If neither was early, nobody gets a head start and both lines will speak late on their own.
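
Here is a toy LIF sketch of that coincidence rule, a minimal version rather than the exact neuron from the previous post. The constants (`tau`, `bump`, `threshold`) are illustrative, chosen only so that one bump stays under threshold while two bumps inside the leak’s memory cross it.

```python
import math

def first_crossing(arrivals, t_end, dt=0.1, tau=2.0, bump=1.0, threshold=1.5):
    """Leaky integrate-and-fire membrane: return the first time the voltage
    crosses threshold, or None if it never does."""
    v, t = 0.0, 0.0
    pending = sorted(arrivals)
    while t <= t_end:
        v *= math.exp(-dt / tau)          # leak between steps
        while pending and pending[0] <= t:
            v += bump                     # one excitatory bump per input spike
            pending.pop(0)
        if v >= threshold:
            return t                      # early authorization happens here
        t += dt
    return None

print(first_crossing([1.0, 1.5], t_end=10.0))  # two close bumps: fires shortly after 1.5
print(first_crossing([1.0], t_end=10.0))       # one bump alone: None, no early output
```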

When we read the outputs, we don’t count how many spikes there were, because there is always one per line. We ask when they happened. Early sum means one on sum. Early carry means one on carry. Late spikes mean zeros.

Here’s a little simulator so you can build some intuition for how it works. The earliest enabled arrival becomes T0. Inputs inside the green window (Wc) are early (bit 1); otherwise they’re late (bit 0). If both are early, carry fires early. If exactly one is early, sum fires early. If neither is early, both lines fire late at T0 + Δ.

Inputs & Parameters

A enabled?
B enabled?

Timeline

Yellow = t₀ (earliest enabled). Green band = early window Wc. Blue dashed = late default (Δ). Solid spike = early (1), dashed = late (0), grey dashed = disabled. Hover for details.

Narration
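
If you’d rather read those rules as code than click through the widget, here is the same decision logic as a few lines of Python. The parameter values are placeholders, and the time I assign to an authorized early output (T0 + Wc) is only an illustration of “speaks early”, not a claim about the real circuit.

```python
def half_adder_timing(a_time, b_time, Wc=2.0, delta=10.0):
    """Time-coded half adder readout.

    a_time / b_time are arrival times; None means the input is disabled.
    Returns (sum_spike, carry_spike) as ("early", t) or ("late", t) pairs,
    where early reads as bit 1 and late reads as bit 0.
    """
    arrivals = [t for t in (a_time, b_time) if t is not None]
    if not arrivals:
        return None                                  # no token at all
    t0 = min(arrivals)                               # earliest enabled arrival sets T0
    is_early = [t is not None and (t - t0) <= Wc for t in (a_time, b_time)]

    late = ("late", t0 + delta)                      # late default = bit 0
    early = ("early", t0 + Wc)                       # authorized early output = bit 1

    if all(is_early):                                # both early: carry early, sum late
        return (late, early)
    if any(is_early):                                # exactly one early: sum early, carry late
        return (early, late)
    return (late, late)                              # none early: both default late

# Both inputs early together: sum is late (0), carry is early (1), i.e. 1 + 1 = 10.
print(half_adder_timing(0.0, 0.5))
```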

Note that the early listening window has to be big enough to tolerate the jitter between two early arrivals, but short enough to close before the late slot gets going. The late climb has to be steady and predictable (e.g. across temperature and voltage) so that Δ stays put. And the early authorization has to complete before the late climb crosses its threshold.
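
Those three constraints read naturally as inequalities. Here is a back-of-envelope sanity check; the names and the exact inequalities are my reading of the budget, not a spec.

```python
def check_timing_budget(Wc, guard, delta, jitter, authorize_time):
    """Wc: early window, guard: dead band between slots, delta: late default latency,
    jitter: worst-case skew between two 'early' arrivals,
    authorize_time: how long the early path needs to fire and cancel the climb."""
    assert Wc >= jitter, "early window must absorb jitter between early arrivals"
    assert Wc + guard <= delta, "early slot plus guard must close before the late slot"
    assert Wc + authorize_time <= delta, "early authorization must beat the late default"
```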

Liveness sneaks in for free. In the presence‑or‑absence story we had to squint at silence and ask whether it meant zero or failure; we snuck in a timeout to tell stages to move on. In this inverted model, a token always ends with a spike on each line. If a spike doesn’t come at all, it’s not a zero, it’s just broken. The scheme has a heartbeat that doesn’t require an explicit request or acknowledge. The first arrival is the “request”. The early or late outcome is the “acknowledge”. Explicit handshake wires disappear into the timing.

Additional Reading

None of this is new. Asynchronous circuit researchers have been squeezing out control overhead for a long time, and there are beautiful ideas like single‑track handshakes and GasP that share the spirit of “less rail, more data”. What changes here is the tightness of the binding. With neurons, leak replaces engineered delay lines, an integrator replaces an explicit completion detector, and a linear threshold applied to “early flags” is cheaper than a dedicated controller. We are letting the neurons take care of what would otherwise be additional control transistors and wires.

Next up, I’m going to attempt to chain this half adder to build a timing based ripple carry adder. Stay tuned.