
A Review of Computing in Memory: Devices and Architecture for Deep Learning

December 30, 2022 @ 11:00 - 11:30

Artificial Intelligence (AI) and Deep Learning (DL) are shaping the modern world. AI is driving advances in computer vision, natural language processing, autonomous vehicles, security, and industrial production. However, the existing AI infrastructure is largely built on the von Neumann computing model, which faces the memory wall, the heat wall, and a data-transfer bottleneck in meeting the overwhelming demands of big-data computing. Computing-in-Memory (CIM) is an emerging computing paradigm that addresses this data-transfer bottleneck in modern computing architectures for DL and AI applications. CIM promises higher throughput and energy efficiency than existing computing architectures. Emerging non-volatile memory (NVM) devices such as Resistive Random Access Memory (RRAM), along with conventional (volatile) Static Random Access Memory (SRAM), are the devices mainly considered for CIM architectures. This review starts with an introduction to the memory devices used for CIM. We then discuss the modes of operation of macro- and system-level architectures and review single- and multi-bit macro operations. Finally, the review discusses the limitations of CIM architectures and the prospects of the CIM research area.

Speaker(s): Shahanur Alam

Virtual: https://events.vtools.ieee.org/m/340625
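The core CIM operation the abstract alludes to can be sketched with a simple model (not from the talk itself): in an RRAM crossbar, weights are stored as cell conductances G[i][j], input activations are applied as row voltages V[i], and by Ohm's and Kirchhoff's laws each column current I[j] = Σᵢ V[i]·G[i][j] — so the matrix-vector multiply happens inside the memory array, with no weight movement. The values below are arbitrary illustrative numbers.

```python
def crossbar_mvm(conductances, voltages):
    """Ideal column currents of a crossbar array.

    conductances: rows x cols matrix G (weights mapped to conductances).
    voltages: per-row input voltages V (input activations).
    Returns column currents I[j] = sum_i V[i] * G[i][j].
    """
    rows = len(conductances)
    cols = len(conductances[0])
    return [
        sum(voltages[i] * conductances[i][j] for i in range(rows))
        for j in range(cols)
    ]

# Example: a 3x2 weight matrix mapped to conductances (arbitrary units)
G = [[1.0, 4.0],
     [2.0, 5.0],
     [3.0, 6.0]]
V = [1.0, 0.5, 2.0]
print(crossbar_mvm(G, V))  # → [8.0, 18.5]
```

In a real macro, the analog column currents are digitized by ADCs, and multi-bit operands are handled by slicing weights across cells and inputs across clock cycles; this idealized model ignores those quantization and device non-ideality effects.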