Spiking Neural Network

What are spiking neural networks, and why are they the future?

A Spiking Neural Network (SNN) can be seen as a small network that mimics the brain. In our brain, neurons talk with each other through spikes. The figure below shows an experiment that was done with squids: a recording electrode was inserted into a neuron, and the voltage at that point was recorded. As the figure shows, a spike is a small pulse, a very short voltage peak rather than a constant signal. This is what makes spiking communication inherently energy-efficient. The membrane potential in this figure shows the small spike between the apical dendrites and the soma of the other neuron. Before going more in-depth: what is this biological principle, and how does a neuron in the brain work?

[Figure: membrane potential recording of a spike]

The neuron in the brain

The figure below shows a neuron itself. The (apical) dendrites are the input of the neuron. The nucleus is the core of the cell, where the magic happens. The apical dendrites receive pulses (signals) from different sources; the small spike shown in the figure above is such a signal. If enough spikes stimulate the neuron, it fires a spike of its own that travels through the axon (the wiring part: myelin sheath, Schwann cells, nodes of Ranvier) to the axon terminals. The axon terminals are the output of the neuron and connect to the apical dendrites of other neurons.

[Figure: anatomy of a biological neuron]

Neuron in the brain vs. neuron in an SNN

Neurons communicate with each other through spikes. This makes bio-inspired SNNs different from conventional neural networks: conventional networks pass numbers between neurons, while SNNs pass spikes. Like their biological counterparts, SNN neurons are inherently time-dependent. Multiple spikes arriving in a short period can stimulate a neuron enough to fire. However, if the gaps between spikes grow too long, the neuron loses interest and goes back to sleep.
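
To make the difference concrete, here is a minimal sketch (Python/NumPy; the function name and parameters are my own, not from any particular library) of rate coding, one common way to turn a number into a spike train: the larger the value, the denser the spikes.

```python
import numpy as np

def rate_encode(value, n_steps=100, max_rate=0.5, rng=None):
    """Encode a value in [0, 1] as a Poisson-like spike train.

    Each time step emits a spike (1) with probability value * max_rate,
    so larger inputs produce denser spike trains.
    """
    rng = rng or np.random.default_rng(0)
    return (rng.random(n_steps) < value * max_rate).astype(int)

# A conventional network would pass 0.8 along as a number;
# an SNN passes a train of spikes whose density encodes 0.8.
spikes = rate_encode(0.8)
print(spikes[:20])    # e.g. [1 0 1 1 0 ...]
print(spikes.mean())  # roughly 0.4 = 0.8 * max_rate
```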

Benefits of SNNs

The one significant benefit of Spiking Neural Networks is power consumption. A ‘normal’ neural network runs on big GPUs or CPUs that draw hundreds of watts. An SNN of the same network size, on dedicated hardware, uses only a few nanowatts.

Problems with SNNs

The biggest problem SNNs have is how to train them. At the moment of writing, it is possible to train a network with multiple layers at the cost of a few percentage points of accuracy. Training is also computationally expensive, because of the extra time dependency the network has.

Wait, time dependency and harder to train?

SNNs work through spikes. Those spikes start at the input layer, travel through the hidden layers, and end at the output layer. Some neurons take longer to activate, others respond faster, so signals need time to travel through the network. This gives SNNs an extra dimension: time. It is also exactly what makes them harder to train, as the sketch below shows.
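
A minimal sketch (Python/NumPy; the layer sizes, weights, and threshold are made-up illustration values) of why time matters: spikes need one simulation step per layer to travel, so the output only reacts several steps after the input, and credit assignment has to span time as well.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-layer network: 4 -> 5 -> 2 neurons, random weights.
W1 = rng.normal(0.5, 0.2, (5, 4))
W2 = rng.normal(0.5, 0.2, (2, 5))
threshold = 1.0

hidden_spikes = np.zeros(5)
for t in range(5):
    input_spikes = (rng.random(4) < 0.6).astype(float)  # random input spikes
    # Spikes take one step per layer: the hidden spikes used here
    # were produced by the input of the *previous* step.
    output_spikes = (W2 @ hidden_spikes >= threshold).astype(float)
    hidden_spikes = (W1 @ input_spikes >= threshold).astype(float)
    print(f"t={t}  in={input_spikes}  out={output_spikes}")
# At t=0 the output is silent: the first input spikes have not
# reached it yet.
```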

[Figure: neuron]

State-of-the-art

The state of the art is SNNs that are first trained as conventional neural networks, after which the parameters are copied to the spiking network. You pay a penalty of a few percentage points of accuracy for this transfer, but in return you get something much lower power. The time parameter gives the network an extra dimension; how to train it properly is currently an open research question. If that is solved, it would be possible to have low-power neural networks with a longer learning time but better overall performance.
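
As a rough illustration of why this conversion works at all (Python/NumPy; the neuron, constants, and names are my own simplification, not a specific published method): the firing rate of an integrate-and-fire neuron driven by a constant input closely tracks a ReLU activation, so trained ANN weights can be reused almost unchanged.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def if_firing_rate(x, n_steps=1000, threshold=1.0, dt=0.01):
    """Simulate an integrate-and-fire neuron driven by constant
    input x and return its average firing rate."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += x * dt
        if v >= threshold:
            spikes += 1
            v -= threshold  # reset by subtraction
    return spikes / (n_steps * dt)

# The spike rate tracks the ReLU output, which is why copying ANN
# parameters into an SNN works, minus a small accuracy penalty.
for x in [0.0, 0.5, 1.0, 2.0]:
    print(x, relu(x), round(if_firing_rate(x), 2))
```
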
Training SNNs directly is possible with unsupervised learning such as spike-timing-dependent plasticity (STDP), but this only works accurately for a single layer; a sketch of the rule follows below.
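
A minimal sketch of the classic pair-based STDP rule (Python; the time constants and learning rates are arbitrary illustration values): a synapse is strengthened when the presynaptic spike arrives shortly before the postsynaptic one, and weakened when it arrives after.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair.

    dt = t_post - t_pre (ms). Pre before post (dt > 0) potentiates,
    pre after post (dt < 0) depresses.
    """
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

for dt in [-40, -10, -1, 1, 10, 40]:
    print(f"dt={dt:+} ms  dw={stdp_dw(dt):+.4f}")
# Causal pairs (dt > 0) strengthen the synapse, anti-causal pairs
# weaken it; repeated over many spikes this tunes one layer of
# weights without any labels.
```
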
Current research focuses more on how to adapt SNNs to applications; that does not solve the training issue, it only postpones it.
I think there is still a lot to be done with Spiking Neural Networks. The near future looks grey, but in the long run, once there is a solution for training these networks, research in this area will spike up.
There is a possibility that evolutionary learning will be combined with conventional machine learning. This is only an idea of mine that still needs to be explored, as it is hard to test: many of the tools that exist for Spiking Neural Networks are slow or incomplete.

Neuron equation

Many papers explain what the formula for a bio-inspired neuron should be. However, one significant fact is mostly not considered: making the function more complicated adds an extra layer of computational complexity. A small example: a network with 400 nodes gives an accuracy of 82%. Adding an extra parameter to the neuron equation raises the accuracy to 87%, but training now has to fit three parameters per neuron instead of two, which makes the problem considerably bigger. In many cases, the following will hold: simply using 500 nodes with the simpler model gives an accuracy of ~89%. So a less complicated model, with fewer parameters to train, can give a better result. In summary: use the simplest equation there is, unless you have a good reason not to; it still depends on the application. At this moment, the leaky integrate-and-fire (LIF) model is the most commonly used. It is easy to implement on a dedicated SNN chip with a few resistors, capacitors, and transistors, instead of thousands of transistors per neuron and synapse.
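
For reference, here is a minimal sketch of the LIF neuron in Python (the time constant, threshold, and input current are illustration values): the membrane potential leaks toward rest, integrates incoming current, and fires and resets when it crosses the threshold.

```python
import numpy as np

def lif_neuron(current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, r=1.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + r * I(t).

    Returns the membrane trace and the spike times.
    """
    v, trace, spikes = v_rest, [], []
    for t, i_t in enumerate(current):
        v += dt / tau * (-(v - v_rest) + r * i_t)  # leak + integrate
        if v >= v_thresh:                          # fire...
            spikes.append(t)
            v = v_rest                             # ...and reset
        trace.append(v)
    return np.array(trace), spikes

# Constant input above threshold: the neuron charges up, fires,
# resets, and repeats at a regular interval.
trace, spikes = lif_neuron(np.full(100, 1.5))
print("spike times:", spikes)
```

In hardware this maps naturally onto a leaky capacitor: the capacitor integrates charge, a resistor provides the leak, and a simple comparator circuit fires and resets, which is why one LIF neuron needs so few components.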
