
Training Neural Networks with In-Memory-Computing Hardware and Multi-Level Radix-4 Inputs

June 7 @ 19:00 - 20:00

Training Deep Neural Networks (DNNs) requires a large number of operations, among which matrix-vector multiplies (MVMs), often of high dimensionality, dominate. In-Memory Computing (IMC) is a promising approach to enhancing MVM compute efficiency and throughput, but it introduces fundamental tradeoffs with the dynamic range of the computed outputs. While IMC has been successful in DNN inference systems, it has not yet been shown feasible for training, which is more sensitive to dynamic range. This work combines recent advances in alternative radix-4 number formats for DNN training on digital architectures with recent advances in high-precision analog IMC employing multi-level inputs, to enable IMC-based training. Furthermore, we implement a mapping of radix-4 operands to multi-level analog-input IMC that improves robustness to analog noise effects (a simplified sketch of such a digit-serial mapping follows the event details below). In simulations calibrated to silicon-measured IMC noise, the proposed approach trains DNNs on the CIFAR-10 dataset to within 10% of the testing accuracy of standard DNN training approaches, while analysis shows that further reducing IMC noise to feasible levels brings accuracy to within 2% of standard approaches.

Co-sponsored by: Wright-Patt Multi-Intelligence Development Consortium (WPMDC), The DOD & DOE Communities

Speaker(s): Christopher Grimm

Virtual: https://events.vtools.ieee.org/m/422920
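
To make the idea concrete, the sketch below is a minimal, hypothetical illustration of mapping radix-4 operands onto multi-level analog inputs; it is not the speaker's implementation. It assumes unsigned integer operands (a simplification of the radix-4 number formats discussed in the talk), models analog non-idealities as additive Gaussian noise on the column outputs (an assumption), and the function names to_radix4_digits and imc_mvm are invented for illustration.

import numpy as np

def to_radix4_digits(x, n_digits):
    # Decompose non-negative integer operands into radix-4 digits,
    # least-significant digit first. Each digit lies in {0, 1, 2, 3},
    # matching one 4-level analog input on the IMC array.
    digits = []
    for _ in range(n_digits):
        digits.append(x % 4)
        x = x // 4
    return digits

def imc_mvm(W, x_digits, noise_sigma=0.0, rng=None):
    # Digit-serial MVM: apply each radix-4 digit vector as a multi-level
    # analog input, then recombine the per-digit outputs digitally via
    # shift-and-add. IMC noise is modeled (as an assumption) by additive
    # Gaussian noise on each digit's analog column outputs.
    rng = rng or np.random.default_rng()
    acc = np.zeros(W.shape[0])
    for k, d in enumerate(x_digits):
        y = W @ d                                           # analog accumulate
        y = y + rng.normal(0.0, noise_sigma, size=y.shape)  # analog noise
        acc += y * (4 ** k)                                 # digital recombine
    return acc

# Usage: with zero noise, the digit-serial result matches the exact MVM.
rng = np.random.default_rng(0)
W = rng.integers(-2, 3, size=(8, 16)).astype(float)
x = rng.integers(0, 64, size=16)           # 3 radix-4 digits cover 0..63
digits = to_radix4_digits(x, n_digits=3)
print(np.allclose(imc_mvm(W, digits), W @ x))               # -> True

One motivation for digit-serial schemes of this kind is that each analog MVM only needs to resolve the dynamic range of a single 4-level input, deferring the full operand dynamic range to the digital shift-and-add.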