Spiking Neural Networks

What are spiking neural networks, and why are they the future?

A Spiking Neural Network (SNN) can be seen as a small network that mimics the brain. In our brain, neurons talk to each other through spikes. The figure below shows a classic experiment done with squids: a recording electrode, a thin pin, was inserted into a neuron, and the voltage at that spot was recorded. As the figure shows, a spike is a small pulse: a very short voltage peak rather than a constant signal. That property is what makes spikes so energy efficient. The membrane potential in this figure shows the small spike travelling between the apical dendrites of one neuron and the soma of the next. Before going more in depth, let's look at this biological principle and how a neuron in the brain actually works.

[Figure: brainspike]

The neuron in the brain

The figure below shows a neuron itself. The (apical) dendrites are the input of the neuron. The nucleus is the cell's core, where the magic happens. The apical dendrites receive pulses (signals) from different sources; the small spike shown in the figure above is such a signal. If enough spikes stimulate the neuron, it fires a spike of its own that travels through the axon (the wiring part: the myelin sheath, Schwann cells, and nodes of Ranvier) to the axon terminals. The axon terminals are the output of the neuron, and connect to the apical dendrites of other neurons.

[Figure: neuron]

Brain neuron vs. SNN neuron

Neurons communicate with each other through spikes. This is what makes SNNs, as bio-inspired neural networks, different from conventional neural networks: conventional networks communicate with numbers, while SNNs communicate through spikes. What brain neurons and SNN neurons have in common is that they are time dependent. Multiple spikes in a short period can stimulate a neuron enough to make it fire. However, if the gaps between spikes are too big, the neuron loses interest and goes back to sleep.
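
To make this concrete, below is a minimal sketch of such a neuron in Python, using the leaky integrate-and-fire model discussed further down. All constants, the 1 ms time step, and the spike times are made up for illustration:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All constants here are illustrative, not tuned to anything real.
TAU     = 10.0   # membrane time constant (ms): how fast the potential leaks away
V_TH    = 1.0    # firing threshold
V_RESET = 0.0    # potential right after a spike
W       = 0.45   # how much one input spike raises the potential

def run_lif(input_spike_times, t_max=100):
    """Simulate one LIF neuron in 1 ms steps; return the times it fires."""
    v, out_spikes = 0.0, []
    for t in range(t_max):
        v -= v / TAU                 # leak: decay back toward rest (0 here)
        if t in input_spike_times:   # integrate: an input spike bumps v up
            v += W
        if v >= V_TH:                # fire: cross threshold, spike, reset
            out_spikes.append(t)
            v = V_RESET
    return out_spikes

# Three closely spaced spikes push the neuron over its threshold...
print(run_lif({10, 12, 14}))   # -> [14]
# ...but the same spikes spread far apart leak away in between.
print(run_lif({10, 40, 70}))   # -> []
```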

Benefits of SNNs

The major benefit of Spiking Neural Networks is their power consumption. A ‘normal’ neural network runs on big GPUs or CPUs that draw hundreds of watts. An SNN of the same network size, on dedicated hardware, uses only a few nanowatts.

Problems with SNNs

The biggest problem SNNs have is how to train them. At the time of writing it is possible to train a network with multiple layers, but you lose a few percentage points of accuracy. Training is also computationally expensive, because of the extra time dependency the network has.

Wait, time dependency and harder to train?

SNNs work through spikes. Those spikes start at the input layer, travel through the hidden layers, and end at the output layer. Some neurons take longer to activate, others respond faster, so signals need time to travel through the network. This gives SNNs an extra dimension: time. It is exactly this extra dimension that makes SNNs harder to train.

[Figure: neuron]
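
As a toy illustration of that extra dimension, here is a hypothetical two-layer network where every connection has a transmission delay. The neuron names, the delays, and the 3 ms coincidence window are all invented for the example; the point is only that the output depends on when spikes are sent, not just which ones:

```python
# Transmission delay (ms) per connection in a made-up two-layer network.
DELAYS = {("in0", "h0"): 2, ("in1", "h0"): 5, ("h0", "out0"): 3}

def output_spike_time(input_spikes):
    """Propagate spike times through the toy network.

    input_spikes is a list of (source neuron, emission time) pairs.
    Returns the time of the output spike, or None if there is none."""
    # Times at which the hidden neuron h0 receives each input spike.
    arrivals = [t + DELAYS[(src, "h0")] for src, t in input_spikes]
    # h0 only fires if both inputs arrive within a 3 ms window.
    if max(arrivals) - min(arrivals) <= 3:
        return max(arrivals) + DELAYS[("h0", "out0")]
    return None

print(output_spike_time([("in0", 0), ("in1", 0)]))   # arrive 3 ms apart -> 8
print(output_spike_time([("in0", 10), ("in1", 0)]))  # arrive 7 ms apart -> None
```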

State-of-the-art

The state of the art is SNNs that are first trained as conventional neural networks, after which the parameters are copied into the spiking network. You pay a penalty of a few percentage points of accuracy for this transfer, but you end up with something much lower power. The time parameter gives an extra dimension to the network; how to train with it properly is currently an open research question. If that is solved, it should be possible to have low-power neural networks that take longer to learn but deliver better overall performance.
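
A rough sketch of that conversion idea, under simplifying assumptions (rate-coded Bernoulli input spikes, integrate-and-fire neurons with a soft reset, and made-up weights): the weights of a trained ReLU layer are copied over unchanged, and the output firing rates then roughly track the ReLU activations:

```python
import numpy as np

def snn_layer_rates(weights, in_rates, t_steps=2000, v_th=1.0):
    """One layer of integrate-and-fire neurons driven by rate-coded input.

    The weights are copied unchanged from a trained ReLU layer; the
    returned firing rates approximate that layer's activations."""
    v = np.zeros(weights.shape[0])
    spike_count = np.zeros(weights.shape[0])
    for _ in range(t_steps):
        # Bernoulli input spikes whose probability encodes the input value.
        in_spikes = (np.random.rand(len(in_rates)) < in_rates).astype(float)
        v += weights @ in_spikes      # integrate the weighted input spikes
        fired = v >= v_th
        spike_count += fired
        v[fired] -= v_th              # "soft reset" keeps the rate code linear
    return spike_count / t_steps

np.random.seed(0)
w = np.array([[0.5, 0.3], [-0.4, 0.8]])   # made-up trained ANN weights
x = np.array([0.6, 0.2])                  # input values in [0, 1]
print(snn_layer_rates(w, x))              # ~ [0.36, 0.0], up to sampling noise
print(np.maximum(w @ x, 0.0))             # the ReLU activations it approximates
```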

Training SNNs directly is possible with unsupervised learning such as spike-timing-dependent plasticity (STDP), but at the moment this only works accurately for a single layer.
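
A minimal sketch of the pair-based form of that rule; the learning rates and time constants below are illustrative, not taken from any particular paper:

```python
import math

# Pair-based STDP: the weight change depends on the time difference
# between a presynaptic and a postsynaptic spike (illustrative constants).
A_PLUS, A_MINUS = 0.01, 0.012   # learning rates for potentiation / depression
TAU_PLUS = TAU_MINUS = 20.0     # widths of the learning windows (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair.

    Pre before post -> strengthen the synapse (it helped cause the spike);
    post before pre -> weaken it (it fired too late to matter)."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)

print(stdp_dw(t_pre=10, t_post=15))  # small positive change (potentiation)
print(stdp_dw(t_pre=15, t_post=10))  # small negative change (depression)
```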

Current research focuses more on how to apply SNNs in practice; however, that does not solve the learning issue, it only postpones it.

I think there is still a lot to be done with Spiking Neural Networks. The near future looks grey, but in the long run, once the learning problem is solved, research in this area will spike up.

One idea I have that still needs exploring is combining evolutionary learning with conventional machine learning. It is hard to test, though, because many of the tools that exist for Spiking Neural Networks are slow or incomplete.

Neuron equation

Many papers explain what the formula of a bio-inspired neuron should look like. However, one big fact is mostly not considered: making the function more complex adds an extra layer of computational complexity.

A small example: say you have a network of 400 nodes that gives an accuracy of 82%. You could add an extra parameter to the neuron equation and reach 87%, but instead of 2 parameters per neuron you now train 3, which makes the training problem substantially bigger. In many cases something like the following will hold: simply using 500 nodes with the simpler neuron already gives an accuracy of ~89%. So the less complex model, with fewer parameters to train, can give the better result.

In summary: use the simplest equation there is, unless you have a good reason not to; it still depends on the application. At this moment the leaky integrate-and-fire (LIF) model is the most commonly used. It is easy to implement on a dedicated SNN chip with a few resistors, capacitors, and transistors, instead of thousands of transistors for one neuron plus synapse.
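
For reference, a common textbook form of the LIF model; notation varies from paper to paper, so take the symbols below as one convention among several:

```latex
% Leaky integrate-and-fire (LIF) membrane dynamics:
%   tau_m  : membrane time constant
%   V(t)   : membrane potential, V_rest its resting value
%   R_m    : membrane resistance, I(t) the input current
\tau_m \frac{dV}{dt} = -\bigl(V(t) - V_\mathrm{rest}\bigr) + R_m I(t)

% Spike-and-reset rule: when V crosses the threshold the neuron
% fires, and the potential is reset.
V(t) \ge V_\mathrm{th} \quad\Rightarrow\quad \text{spike},\; V(t) \leftarrow V_\mathrm{reset}
```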
