Please use this identifier to cite or link to this item:
http://localhost:8081/jspui/handle/123456789/20143

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Manna, Subhashish | - |
| dc.date.accessioned | 2026-04-02T10:25:54Z | - |
| dc.date.available | 2026-04-02T10:25:54Z | - |
| dc.date.issued | 2022-05 | - |
| dc.identifier.uri | http://localhost:8081/jspui/handle/123456789/20143 | - |
| dc.guide | Das, Bishnu Prasad | en_US |
| dc.description.abstract | Nowadays, the Internet of Things (IoT) plays an important role in edge computing by enabling faster data transmission between the server and the edge device. Fast, low-latency computation ensures high-performance, reliable data processing on the server. Data from IoT sensors are sent to the cloud for computing, so on-chip computing is needed to minimize the latency between the cloud service and the edge device. Most ML algorithms have many neurons, and computing them on-chip in edge devices is challenging because of power constraints. IBM has announced a 2 nm technology node, but lower technology nodes suffer from higher leakage power, and further scaling may impose fabrication challenges due to quantum effects. Analogous to Moore's law, a new generation of deep learning models appears roughly every 3.5 months. Photonic channels also face difficulties such as crosstalk, jitter and distortion. Owing to negligible loss over intra-chip optical distances, photonics is efficient for the data-movement problem; waveguides can overcome the inefficiency of metal wires at the cost of E/O/E conversion over the same distances. This project aims to reduce power consumption and latency by performing the MAC operation within the memory macro. The MAC operation of a binary convolutional neural network (BNN) layer and of a full-precision network is carried out, achieving an HSPICE simulation accuracy of 98.67 % in the TSMC 180 nm technology node. Further, an architectural-level analysis of IMC is carried out with an on-chip implementation of the BNN. Finally, a 10T bit-cell for XNOR and BNN operations is proposed. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | IIT, Roorkee | en_US |
| dc.title | IN MEMORY COMPUTATION FOR CONVOLUTION NEURAL NETWORK (CNN) ALGORITHMS | en_US |
| dc.type | Dissertations | en_US |
| Appears in Collections: | MASTERS' THESES (E & C) | |
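The abstract above refers to realizing the MAC operation of a binary convolutional neural network layer with XNOR bit-cells inside the memory macro. As a minimal software sketch of the underlying arithmetic (not the thesis's circuit implementation; the function name and encoding are illustrative), a binary MAC over {-1, +1} values reduces to XNOR followed by popcount:

```python
# XNOR-popcount binary MAC: a software sketch of the arithmetic that
# in-memory-computing BNN macros (such as the 10T XNOR bit-cell mentioned
# in the abstract) implement in hardware. Illustrative only.

def binary_mac(activations, weights):
    """MAC over bipolar {-1, +1} vectors using XNOR and popcount.

    Encode -1 as bit 0 and +1 as bit 1; then a*w == +1 exactly when the
    two bits are equal, i.e. XNOR(a_bit, w_bit) == 1. The signed dot
    product is therefore 2 * popcount - N.
    """
    assert len(activations) == len(weights)
    n = len(activations)
    popcount = 0
    for a, w in zip(activations, weights):
        a_bit = 1 if a > 0 else 0
        w_bit = 1 if w > 0 else 0
        popcount += 1 - (a_bit ^ w_bit)  # XNOR of the two bits
    return 2 * popcount - n

# Example: dot product of two bipolar vectors
acts = [+1, -1, +1, +1]
wts  = [+1, +1, -1, +1]
print(binary_mac(acts, wts))  # matches sum(a*w) = 1 - 1 - 1 + 1 = 0
```

In hardware, the per-bit XNOR is computed inside each bit-cell and the popcount is accumulated on the bit-line, which is what avoids moving weights out of the memory array.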
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 20534009_Subhashish Manna.pdf | | 11.69 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
