Neuromorphic Computing: Future of AI?

Autonomous Vehicles, more commonly known as Self-Driving Cars, are one of the most popular applications of Artificial Intelligence (AI) today. You can find countless articles where a self-driving car is used as the example to explain some aspect of AI, which gets monotonous for readers (myself included) over time. So let’s shift gears, pun intended, and look at another application that is just as interesting!

The Mars Rover Mission, slated to launch in July 2020, is a NASA project to explore the surface of Mars, collect samples from it, and look for evidence of past life. Well, that’s not much different from the missions of its predecessors, but what is evidently different is the technology behind this mission. The rover features advancements in every aspect, from its sensors and communication modules to its mechanics. But this is an AI blog, so I’ll focus on the AI aspect of the rover. Although there’s not much information about the rover’s “brain” on NASA’s mission website, the hardware specification is clearly stated: a radiation-hardened central processor with the PowerPC 750 architecture (BAE RAD750).

Despite its remarkable features, this processor is an upgraded version of hardware originally built for traditional computing. Using such hardware for AI, however fast, does not bring out the true potential of what AI can achieve. This gap calls for a new type of computing hardware that can enable a new generation of AI.

A Brief History of Neuromorphic Computing

Neuromorphic Computing, also known as Neuromorphic Engineering, has piqued the interest of the AI community for some time now, but the field itself has been around since the early 1990s. The original idea was to create VLSI systems that could mimic the neuro-biological architectures present in the nervous system. Hats off to Carver Mead, who coined the term “neuromorphic” in his 1990 paper [1]. Mead’s motivation for taking a new approach to understanding biological information-processing systems lies in the fundamental observation that they behave quite differently from the electronic information-processing systems engineers have designed, aka traditional computing.

Mead further underscores the importance of a paradigm shift in computing by demonstrating that biological information-processing systems solve ill-conditioned problems more efficiently than digital systems, showing that neuromorphic systems can handle complex problems and execute such computations in an energy-efficient manner. But the continued rise and success of Moore’s Law in the industry overshadowed the importance of neuromorphic computing, sending the field into a long hibernation. So let’s skip ahead a few years to when things get really interesting.

Neuromorphic computing research got an overhaul when the SyNAPSE project was established in 2008. Funded by DARPA, the project’s main goal was to explore “organizing principles” that could be used in practical applications. Its various collaborations with other organizations are covered in the ACM publication by Don Monroe [2]. The Human Brain Project, established in 2013 and funded by the European Union, took a different route and focused on understanding cognitive processes and conducting neuroscience research via modelling and simulation.

Despite the differing goals, both projects work on the physical realization of biological neurons and spikes [2] as the means of transmitting information between neurons. This is drastically different from traditional computing systems, which can only simulate neurons and spikes, making them computationally expensive compared to neuromorphic computing systems. Unlike traditional computing, which widely uses the Von Neumann architecture where the processing and memory units are located separately, neuromorphic computing employs a “non-Von Neumann” architecture where processing and memory are co-located in each neuron core on the neuromorphic chip.

Fantastic Neuromorphic Computing Systems and How to Use Them

Since the conception of Neuromorphic Computing, there have been numerous hardware implementations of neuromorphic chips. Some are purely analog, some purely digital, and others combine analog and digital components. A survey paper by Schuman et al. (2017) [3] covers the hardware implementations extensively; see Figure 1 for a visual overview from [3]. As you can guess, there’s a lot to discuss, so for now let’s focus on the most popular hardware implementations of neuromorphic computing.

Figure 1: Overview of Hardware Implementations of Neuromorphic Computing

IBM TrueNorth [4] is one of the most popular digital neuromorphic implementations. Developed in 2014, it is a fully custom chip with around a million neurons and 256 million synaptic connections. This is a significant accomplishment, although still modest next to an average human brain, which has around 86 billion neurons and on the order of 100-1000 trillion synaptic connections [5].

SpiNNaker [6], part of the Human Brain Project, is another digital neuromorphic system; it is massively parallel, with around a million ARM9 cores simulating neurons. Both TrueNorth and SpiNNaker are solid digital hardware implementations, but they come at a cost: energy. As the complexity and size of a neural network architecture increase, these systems use a huge amount of energy for computations such as learning. BrainScaleS [7], also part of the Human Brain Project, is a mixed analog-digital implementation that uses wafer-scale integration (like a microprocessor) along with analog components, with nearly 200 thousand neurons per wafer.

Although the systems above are great at realizing neurons at the hardware level, they still require a lot of space and energy to perform their computations. This is where Intel’s Loihi [8] neuromorphic chip enters. Announced in 2017, it is claimed to be 1,000 times more energy efficient than its competition. Currently in the research phase, Intel aims to release such neuromorphic chips for commercial use in the near future. We have only scratched the surface of this topic, but it is enough to appreciate the extent of ongoing research in the field. But what’s the point of neuromorphic computing when traditional computing systems such as GPUs are pushing AI to new heights?

Neuromorphic computing provides brain-inspired computation that is biologically plausible compared with the Artificial Neural Network (ANN) models run on traditional computing systems. This is made possible by the third generation of neural networks: Spiking Neural Networks (SNNs). The basic units of a neuromorphic computing system are neuron-like elements connected to each other via synaptic connections; such a network implements the kind of spiking neural network model found in biological brains. Because networks trained on such systems are biologically plausible, they drive AI research from a probabilistic viewpoint (see the Bayesian Brain Hypothesis), which attempts to address the fundamental uncertainty and noise present in the real world [9].
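To make spiking neurons a bit more concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking models: the membrane potential integrates weighted input spikes, leaks back toward rest over time, and emits a spike when it crosses a threshold. This is only an illustrative simulation in Python; it is not tied to any particular neuromorphic chip, and all parameter values are made up for the example.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Illustrative parameters only; real neuromorphic chips expose
# their own neuron models, units, and learning rules.
def simulate_lif(input_spikes, weight=0.5, tau=20.0,
                 v_threshold=1.0, v_reset=0.0, dt=1.0):
    """Simulate one LIF neuron driven by a binary input spike train."""
    v = v_reset                      # membrane potential
    output_spikes = []
    for s in input_spikes:
        # Leak: the potential decays toward the resting value.
        v += dt * (-(v - v_reset) / tau)
        # Integrate: each incoming spike adds a weighted contribution.
        v += weight * s
        # Fire: emit a spike and reset when the threshold is crossed.
        if v >= v_threshold:
            output_spikes.append(1)
            v = v_reset
        else:
            output_spikes.append(0)
    return output_spikes

# Example usage: drive the neuron with a random binary spike train.
rng = np.random.default_rng(0)
inputs = (rng.random(100) < 0.3).astype(int)
outputs = simulate_lif(inputs)
print(sum(outputs), "output spikes for", sum(inputs), "input spikes")
```

Note how information is carried by the timing and count of discrete spikes rather than by continuous activations, which is what allows neuromorphic hardware to stay idle (and save energy) whenever no spikes are arriving.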

Return of The Rover

Well, this article is almost at its end, so it’s time for a wrap-up, an “in a nutshell” session where the Mars Rover is back in focus. As I mentioned before, the current rover project uses exceptional computing hardware but still follows the traditional Von Neumann-like architecture to run AI algorithms for certain tasks. As this does not unlock the full potential of AI, a new form of computing is required, and this is where neuromorphic computing would be a perfect fit. Not only does it allow simulation of neurons at the hardware level, it also focuses on biologically plausible neural network models. Such models are more probabilistic in nature and thus highly beneficial in real-world applications where uncertainty and noise are prominent and cannot be avoided.

Intel also explains in their article [8] that tackling problems via a probabilistic approach means representing outputs as probabilities instead of the deterministic values that are common in areas such as Deep Learning. Working with probabilities also creates a path toward explainable and general AI, the next big milestones in this world of AI. So solving tasks such as maneuvering around the surface of Mars, and explaining the decisions taken by the rover to human operators back on Earth, could become a reality with neuromorphic computing systems. On top of this, a neuromorphic chip is more energy efficient than a traditional microprocessor.
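As a toy illustration of the difference (a hedged sketch of my own, not Intel’s implementation), consider a rover choosing a maneuver: a deterministic system returns a single action with no notion of confidence, while a probabilistic one returns a distribution over actions whose spread can be reported back to operators as a measure of how sure the system is. The action names and scores below are entirely hypothetical.

```python
import numpy as np

# Hypothetical scores for three candidate maneuvers; illustrative only.
actions = ["turn_left", "go_straight", "turn_right"]
scores = np.array([1.2, 1.0, 0.9])

# Deterministic output: a single action, no notion of confidence.
deterministic_choice = actions[int(np.argmax(scores))]

# Probabilistic output: a softmax distribution over the same actions.
probs = np.exp(scores) / np.exp(scores).sum()
confidence = probs.max()

print("deterministic:", deterministic_choice)
print("probabilistic:", dict(zip(actions, probs.round(2))),
      f"(confidence = {confidence:.2f})")
```

A low confidence value could, for example, prompt the rover to stop and ask for human input rather than committing to a maneuver.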

In conclusion, the field of Neuromorphic Computing is highly relevant today, when AI demands a new generation of hardware that can solve tasks with high performance and low energy consumption.

Disclaimer: Some sections of this article have been adapted from “Neuromorphic Computing”, a master’s AI course at Radboud University.

References:

[1] Mead, C. (1990). Neuromorphic electronic systems. Proceedings of the IEEE, 78(10), 1629-1636.

[2] Monroe, D. (2014). Neuromorphic computing gets ready for the (really) big time. Communications of the ACM, 57(6).

[3] Schuman, C. D., Potok, T. E., Patton, R. M., Birdwell, J. D., Dean, M. E., Rose, G. S., & Plank, J. S. (2017). A survey of neuromorphic computing and neural networks in hardware. arXiv preprint arXiv:1705.06963.

[4] “The brain’s architecture, efficiency on a chip”, IBM Research Blog, https://www.ibm.com/blogs/research/2016/12/the-brains-architecture-efficiency-on-a-chip/

[5] “Scale of the Human Brain”, AI Impacts, https://aiimpacts.org/scale-of-the-human-brain/

[6] “SpiNNaker Project”, APT, University of Manchester, http://apt.cs.manchester.ac.uk/projects/SpiNNaker/project/

[7] “The BrainScaleS Project”, http://brainscales.kip.uni-heidelberg.de/public/results/

[8] “Intel unveils Loihi neuromorphic chip, chases IBM in artificial brains”, 17 Oct. 2017, https://www.aitrends.com/future-of-ai/intel-unveils-loihi-neuromorphic/

[9] “Neuromorphic Computing”, Intel, https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html