IIIT-Delhi Institutional Repository

Implementation of neuromorphic computing framework using tunneling-based devices


dc.contributor.author Gupta, Abhinav
dc.contributor.author Saurabh, Sneh (Advisor)
dc.date.accessioned 2024-05-10T11:51:24Z
dc.date.available 2024-05-10T11:51:24Z
dc.date.issued 2024-04
dc.identifier.uri http://repository.iiitd.edu.in/xmlui/handle/123456789/1426
dc.description.abstract In recent years, Machine Learning (ML) and Artificial Intelligence (AI) have become prominent research topics and have found use in applications across sectors such as healthcare, automotive, marketing, finance, agriculture, and Natural Language Processing (NLP). However, training current state-of-the-art AI algorithms is highly energy intensive. For instance, an energy of 932 MWh is required to train OpenAI’s GPT-3 NLP model. The large power consumption stems from training these algorithms on conventional computing systems based on the von Neumann architecture, in which memory and computation are decoupled from one another, making it energy intensive. The human brain, comprising about 10^11 neurons and 10^15 synapses, operates at a power budget of just 20 W. Taking inspiration from the highly dense and energy-efficient architecture of the biological brain, Spiking Neural Networks (SNNs) aim to model the behavior of the biological neural network in an energy-efficient manner. The neurons in an SNN communicate via discrete action potentials, or “spikes,” which are sparse in time. In this work, an energy-efficient SNN is proposed that can be trained on-chip in an unsupervised manner using Spike Timing Dependent Plasticity (STDP). First, to implement an energy-efficient SNN, a Leaky Integrate and Fire (LIF) neuron has been proposed. The proposed neuron, comprising a Ge-based PD-SOI MOSFET, can directly receive the incoming voltage spikes and avoid energy dissipation in generating a summed potential. The smaller bandgap of Ge, with dominant direct tunneling, allows the device to operate at a lower voltage level. The energy consumption per spike of the proposed neuron is 0.07 fJ, which is lower than that of LIF neuron implementations (experimental or simulated) reported in the literature. A Ferromagnetic Domain Wall (FM-DW) based device has been employed to function as a synapse.
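The integrate-leak-fire behavior of an LIF neuron described above can be sketched in a minimal discrete-time model. This is an illustrative software abstraction only: the time constant, threshold, and input values below are placeholders, not parameters of the proposed Ge-based PD-SOI MOSFET device.

```python
# Discrete-time sketch of a Leaky Integrate-and-Fire (LIF) neuron.
# All parameter values are hypothetical, for illustration only.

def lif_neuron(inputs, tau=20.0, v_thresh=1.0, dt=1.0):
    """Integrate weighted input spikes into a leaky membrane potential;
    emit an output spike and reset when the threshold is crossed."""
    v = 0.0
    out = []
    for s in inputs:
        v += dt * (-v / tau) + s   # leak term plus input integration
        if v >= v_thresh:
            out.append(1)          # fire a spike
            v = 0.0                # reset membrane potential
        else:
            out.append(0)
    return out

# A burst of inputs drives the neuron over threshold on the third step.
print(lif_neuron([0.4, 0.4, 0.4, 0.0, 0.0]))  # → [0, 0, 1, 0, 0]
```

In the proposed hardware, this neuron role is played by the Ge-based PD-SOI MOSFET, with the FM-DW device acting as the synapse whose conductance stores the weight.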
The FM-DW synapse comprises a Magnetic Tunnel Junction (MTJ) with a Heavy Metal (HM) underlayer. The MTJ consists of a free FM (CoFe) layer, whose magnetization can be varied, and a pinned FM layer, whose magnetization is fixed, separated by a tunneling oxide barrier (MgO). A DW separates two oppositely polarized magnetic regions in the free FM layer. A programming current flowing through the HM layer moves the DW in the free FM layer, and a displacement in the position of the DW changes the conductance of the FM-DW synapse. Second, a Ge-based dual-pocket Fully-Depleted Silicon-on-Insulator (FD-SOI) MOSFET with dual asymmetric gates has been proposed that implements on-chip unsupervised learning using STDP in the SNN. Using a comprehensive device-to-system level simulation framework, it is demonstrated that a pair of the proposed dual-pocket FD-SOI MOSFETs with dual asymmetric gates can generate a current whose magnitude depends exponentially on the temporal correlation of spiking events between the pre- and post-synaptic neuronal layers. This current drives the HM layer in the FM-DW synapse and programs the conductance of the synapse in accordance with the STDP learning rule. The proposed implementation requires 2-3× fewer transistors and offers lower latency in implementing STDP than existing approaches in the literature. While SNNs have emerged as a suitable contender to Artificial Neural Networks (ANNs) due to their high energy efficiency, their use is still not prevalent. One of the major reasons preventing the widespread applicability of SNNs is the lack of training algorithms that efficiently utilize the temporal information embedded in discrete spikes. Moreover, the time required to train an SNN can be substantially longer than that for an ANN, because no learning occurs in the network until some spiking activity exists in the neurons. Thus, learning in deeper network layers is time-consuming and often requires multiple training epochs.
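The exponential dependence of the weight update on pre-/post-spike timing described above is the pair-based STDP rule, which can be sketched as follows. The amplitudes and time constants are illustrative placeholders, not the device parameters extracted in this work.

```python
import math

# Sketch of the pair-based STDP rule: the synaptic weight change depends
# exponentially on dt = t_post - t_pre. A_plus, A_minus, tau_plus, and
# tau_minus are hypothetical values for illustration only.

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Return the weight change for a pre/post spike pair separated by dt."""
    if dt > 0:    # pre fires before post: potentiation
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:  # post fires before pre: depression
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

# Closely correlated spike pairs produce larger updates than distant ones.
print(stdp_dw(5) > stdp_dw(15))   # → True
```

In the proposed hardware, this exponential timing dependence is realized physically by the current of the dual-pocket FD-SOI MOSFET pair, which programs the FM-DW synapse conductance.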
A ternary SNN comprises ternary neurons that output a VDD/2 spike when the membrane potential crosses a lower threshold, vthresh1, and a VDD spike when it crosses a higher threshold, vthresh2. Such a network can achieve a substantial speedup in training time, owing to the larger spiking probability of a ternary neuron compared to a conventional spiking neuron. Moreover, ternary encoding is a more accurate representation than binary encoding and can result in higher classification accuracy than a conventional SNN. A Dual-Pocket Tunnel Field-Effect Transistor (DP-TFET) has been proposed to implement the ternary spiking neuron. Two distinct tunneling mechanisms exist in the device, within-channel tunneling and source-channel tunneling, which are responsible for the generation of the VDD/2 and VDD voltage spikes, respectively. An FM-DW based device is employed as the synapse, and the network is trained on-chip in an unsupervised manner using STDP. Using a device-to-system level simulation framework, it is demonstrated that the ternary SNN can be trained to classify digits in the MNIST dataset with an accuracy of 82%, higher than the 75% obtained using a binary SNN. Moreover, the runtime required to train the proposed ternary SNN is 8× lower than that required for a binary SNN. To summarize, the goal of this work is to develop an energy-efficient framework for neuromorphic computing using an SNN. It involves developing insight into the state-of-the-art hardware required to implement an SNN and proposing novel devices that aid in implementing and training the SNN in an energy-efficient manner. en_US
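The two-threshold ternary spiking behavior of the abstract can be sketched as a simple mapping from membrane potential to output spike level. The VDD and threshold values below are illustrative placeholders, not the DP-TFET operating points.

```python
# Sketch of ternary spike generation: a membrane potential crossing the
# lower threshold yields a VDD/2 spike; crossing the higher threshold
# yields a full VDD spike. All values are hypothetical illustrations.

VDD = 1.0  # placeholder supply voltage

def ternary_spike(v_mem, vthresh1=0.5, vthresh2=1.0):
    """Map a membrane potential to a ternary output spike amplitude."""
    if v_mem >= vthresh2:
        return VDD        # full-amplitude spike (source-channel tunneling)
    if v_mem >= vthresh1:
        return VDD / 2    # half-amplitude spike (within-channel tunneling)
    return 0.0            # no spike

print(ternary_spike(0.7))  # → 0.5
```

Because a sub-threshold potential can still trigger the VDD/2 level, a ternary neuron spikes more often than a binary one with a single threshold at vthresh2, which is the source of the training speedup claimed above.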
dc.language.iso en_US en_US
dc.publisher IIIT-Delhi en_US
dc.subject Spiking Neural Network en_US
dc.subject Biological & Artificial Neural Network en_US
dc.title Implementation of neuromorphic computing framework using tunneling-based devices en_US
dc.type Thesis en_US

