The Future of AI is Neuromorphic.

Neuromorphic engineering, or neuromorphic computing, is not a brand-new topic, though many people don't realize it. The idea was developed by Carver Mead, an American scientist, in the late 1980s to describe the use of Very Large-Scale Integration (VLSI) systems containing analog electronic circuits that mimic the neuro-biological architectures of our nervous system. Today, the term neuromorphic covers analog, digital, and mixed-mode analog/digital VLSI, as well as the software systems that implement models of neural systems. At the hardware level, neuromorphic computing is often implemented with oxide-based memristors, spintronic memories, threshold switches, and transistors. To understand why this matters, it helps to know a little more about the brain.

The human brain is the most powerful supercomputer in today's world. It helps us navigate our environment by carrying out roughly one thousand trillion logical operations per second. It is compact, uses less power than a light bulb, and has vast storage capacity. It contains about 86 billion neurons, with up to 10,000 connections per neuron. For many years we have been studying the brain, and more recently trying to replicate it: scientists want to build a computer that works like our brain. We already have deep learning, a subset of machine learning whose algorithms try to mimic the human brain, but that happens at the software level; scientists are now trying to build chips that work like the brain at the hardware level. So the question arises: why do we need neuromorphic computers, and what is wrong with today's computers?

The computers we use today are based on the Von Neumann architecture; they consume a lot of power and are not nearly as powerful as the human brain. Hence the need for neuromorphic computers, built on a neuromorphic architecture that consumes far less power while being very powerful. For example, one neuromorphic chip made by IBM contains over five times as many transistors as a standard Intel processor, yet consumes only 70 milliwatts of power. An Intel processor would use anywhere from 35 to 140 watts, which is up to 2,000 times more power (140 W / 0.07 W = 2,000) than the neuromorphic chip.

Many researchers are working on such chips, but so far no one has been able to build one that completely mimics the human brain. And even once we can build brain-like chips, writing algorithms for them will be a great challenge. Some researchers are already building AI algorithms for these new chips. A recent success of those efforts is Nengo, a compiler (a compiler is a software tool that translates the code developers write into the low-level instructions that make hardware carry out a task) that developers can use to create their own algorithms for AI applications that will run on general-purpose neuromorphic hardware. What makes Nengo useful is its use of the familiar Python programming language, known for its intuitive syntax, and its ability to place those algorithms on many different hardware platforms, including neuromorphic chips.
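To make that concrete, here is a minimal sketch of what a Nengo model looks like in Python. It builds a small population of spiking neurons that represents a sine-wave input and computes its square on the way to a second population; the network size, the input signal, and the squaring function are illustrative choices, not details from the projects described here.

```python
import numpy as np
import nengo

# Nengo models are declared inside a Network context.
with nengo.Network(label="minimal example") as model:
    # A node supplying a time-varying input signal (a sine wave).
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))

    # An ensemble of 100 spiking neurons representing a 1-D value.
    a = nengo.Ensemble(n_neurons=100, dimensions=1)

    # A second ensemble that will hold a function of the first.
    b = nengo.Ensemble(n_neurons=100, dimensions=1)

    # Feed the input into the first ensemble.
    nengo.Connection(stim, a)

    # Connect a to b, asking the network to compute x**2 along the way.
    nengo.Connection(a, b, function=lambda x: x ** 2)

    # Record b's decoded output, low-pass filtered with a 10 ms synapse.
    probe = nengo.Probe(b, synapse=0.01)

# Run the model for one second on the reference (CPU) simulator.
with nengo.Simulator(model) as sim:
    sim.run(1.0)

print(sim.data[probe][-5:])  # the last few decoded output samples
```

The important design point is that the model description says nothing about the hardware it runs on; the same network can, in principle, be handed to a different Nengo backend instead of the reference CPU simulator, which is what lets the code target neuromorphic chips without rewriting the algorithm.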

Pretty soon, anyone who understands Python may be able to build sophisticated neural nets for neuromorphic hardware. The most impressive system built using Nengo so far is Spaun, a project that received international praise in 2012 for being the most complex functional brain model ever simulated on a computer. It demonstrated that computers can be made to interact fluidly with the environment and perform human-like cognitive tasks such as recognizing images, controlling a robot arm, writing down what they have seen, and more. Spaun wasn't perfect, but it was a striking demonstration that computers could one day blur the line between human and machine cognition. Recently, using neuromorphic hardware, most of Spaun has been run about 9,000 times faster, with less energy than it would use on conventional CPUs, and by the end of 2017 all of Spaun was running on neuromorphic hardware.

In July 2019, Intel launched Pohoiki Beach, an 8-million-neuron neuromorphic system containing 64 Loihi chips (Intel's self-learning neuromorphic chip for training and inference workloads at the edge and in the cloud). Intel designed Pohoiki Beach to facilitate research by its own researchers as well as partners such as IBM and HP, and academic researchers at MIT, Purdue, Stanford, and others. The system supports research into techniques for scaling up AI algorithms such as sparse coding, simultaneous localization and mapping, and path planning. It is also an enabler for the development of AI-optimized supercomputers an order of magnitude more powerful than those available today. With the rise of neuromorphic hardware, and tools like Nengo, we could soon have AI capable of exhibiting a stunning level of natural intelligence right on our phones and computers.
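As a rough illustration of how a Nengo model can target chips like Loihi, the sketch below assumes the separately distributed nengo-loihi backend package (and a Loihi board or its software emulator). The model definition stays the same as in the earlier sketch; only the simulator is swapped.

```python
import numpy as np
import nengo
import nengo_loihi  # backend package for Intel's Loihi chips (assumed installed)

# The same style of model definition as in the earlier sketch.
with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    nengo.Connection(a, b)
    probe = nengo.Probe(b, synapse=0.01)

# Swap the reference simulator for the Loihi backend; everything else is unchanged.
with nengo_loihi.Simulator(model) as sim:
    sim.run(0.5)
```

The backend-swapping pattern, rather than any specific chip, is the point: the algorithm is written once in Python and the compiler maps it onto whatever neuromorphic hardware is available.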
