2024 Joint IEEE Region 1 / Region 2 Board Meetings

Stamford Marriott Hotel & Spa, 243 Tresser Blvd, Stamford, Connecticut, United States, 06901

R1/R2 Joint Board Meeting. The detailed agenda is available at https://agd.ieee.org/mpt/Agenda.aspx?eid=18803.

Ensure you register for:
1) This event, including companion registrations for dinners (as applicable)
2) The hotel (https://www.marriott.com/event-reservations/reservation-link.mi?id=1707864273422&key=GRP&app=resvlink)
3) The Saturday companion program, as desired (https://events.vtools.ieee.org/m/418568)

Agenda:
- R1/R2 Strategic Planning Committee - Friday at 1 PM
- R1/R2 Executive Committee Meeting - Friday at 5 PM
- For all other committee and external leaders/staff, the meeting begins Saturday morning with breakfast. Full agenda TBD.

Travel and attendance notes:
- R1/R2 SPC members should plan to arrive in time for the Friday, June 2 meeting beginning at 1 PM (they may stay Thursday night if needed due to travel timelines).
- R1/R2 ExCom members should plan to arrive in time for the Friday, June 2 meeting beginning at 5 PM (they may stay Thursday night if needed due to travel timelines).
- R1/R2 ExCom/SPC partners/spouses may attend the Friday dinner (cost for guests: $25).
- All section chairs (or their designees) and other committee members and invited guests should plan to arrive Friday afternoon/evening for the full committee meeting, which begins Saturday morning and ends by 1 PM Sunday.
- R1/R2 partners/spouses may attend the Saturday dinner (cost for guests: $25).
- There will be an optional partner/spouse program on Saturday (details above).
- All R1/R2 expense reports will be submitted through the IEEE Concur Travel Expense reporting system via your respective region.

Training Neural Networks with In-Memory-Computing Hardware and Multi-Level Radix-4 Inputs

Virtual: https://events.vtools.ieee.org/m/422920

Training Deep Neural Networks (DNNs) requires a large number of operations, among which matrix-vector multiplies (MVMs), often of high dimensionality, dominate. In-Memory Computing (IMC) is a promising approach to enhance MVM compute efficiency and throughput, but it introduces fundamental tradeoffs with the dynamic range of the computed outputs. While IMC has been successful in DNN inference systems, it has not yet shown feasibility for training, which is more sensitive to dynamic range. This work leverages recent advances in alternative radix-4 number formats for DNN training on digital architectures, together with recent progress on high-precision analog IMC with multi-level inputs, to enable IMC training. Furthermore, we implement a mapping of radix-4 operands to multi-level analog-input IMC in a manner that improves robustness to analog noise effects. In simulations calibrated to silicon-measured IMC noise, the proposed approach is shown to be capable of training DNNs on the CIFAR-10 dataset to within 10% of the testing accuracy of standard DNN training approaches, while analysis shows that further reduction of IMC noise to feasible levels yields accuracy within 2% of standard DNN training approaches.

Co-sponsored by: Wright-Patt Multi-Intelligence Development Consortium (WPMDC), The DOD & DOE Communities

Speaker(s): Christopher Grimm
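The radix-4 mapping described in the abstract can be illustrated with a toy model: each operand is decomposed into base-4 digits, each digit drives one multi-level (4-level) analog MVM, and the per-digit results are combined digitally with shift-and-add. The sketch below is only illustrative of the general idea, not the authors' implementation; the function names, the unsigned-activation assumption, and the simple Gaussian stand-in for silicon-measured IMC noise are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def radix4_digits(x, n_digits):
    """Decompose non-negative integers into radix-4 digits, least
    significant digit first. Each digit lies in {0, 1, 2, 3}, i.e.
    one 4-level analog input (an assumption for illustration)."""
    digits = []
    for _ in range(n_digits):
        digits.append(x % 4)
        x = x // 4
    return digits

def imc_mvm(W, x, n_digits=4, noise_std=0.0):
    """MVM via radix-4 inputs: one multi-level analog MVM per digit
    position, with optional Gaussian noise standing in for analog
    non-idealities, combined digitally by powers-of-4 shift-and-add."""
    acc = np.zeros(W.shape[0], dtype=float)
    for k, d in enumerate(radix4_digits(x, n_digits)):
        analog_out = W @ d                                    # 4-level-input analog MVM
        analog_out = analog_out + noise_std * rng.standard_normal(W.shape[0])
        acc += (4 ** k) * analog_out                          # digital shift-and-add
    return acc

# Noiseless sanity check: 4 radix-4 digits cover 8-bit unsigned
# activations (4**4 = 256), so the result matches the exact MVM.
W = rng.integers(-2, 3, size=(3, 8))
x = rng.integers(0, 256, size=8)
assert np.allclose(imc_mvm(W, x, n_digits=4), W @ x)
```

Setting `noise_std > 0` perturbs each per-digit analog MVM, which is the point in the talk's pipeline where dynamic-range and noise tradeoffs arise.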