William Hwang
About William’s research on Energy Efficient AI Enabled by Emerging Spintronic Memories:
The ever-growing computational demands of today’s artificial intelligence (AI) applications have necessitated cross-layer innovation across the computing stack, from algorithms to devices. Today’s mainstream AI applications rely heavily on application-specific architectures built around high-throughput multiply-accumulate units fed by operands fetched from off-chip memory (generally DRAM today). While inference-based AI applications have begun to percolate into consumer devices, edge-based training remains challenging because of the extreme computational and memory demands such applications place on system design. In addition, growing concerns about data privacy and security, coupled with the need to deploy applications in environments with unreliable access to the cloud, have motivated the development of edge-based, energy-efficient AI hardware.

My research focuses on leveraging the unique properties of emerging spintronic memories to enable the next generation of energy-efficient, edge-based AI hardware. We aim to use a cross-layer approach, spanning algorithms to devices, to architect AI-enabled systems that can operate in environments with limited cloud connectivity and power budgets, while simultaneously addressing data-privacy concerns.
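The multiply-accumulate (MAC) operation mentioned above can be illustrated with a minimal sketch. This is not code from the research; the function names `mac_dot` and `matvec` are illustrative, and the sketch simply shows how each output of a neural-network layer reduces to a running sum of weight-activation products, with the operands streamed in from memory:

```python
def mac_dot(weights, activations):
    """Compute a dot product as a sequence of multiply-accumulate steps,
    the basic operation repeated billions of times per inference."""
    acc = 0
    for w, a in zip(weights, activations):
        acc += w * a  # one MAC: multiply, then accumulate
    return acc

def matvec(weight_rows, activations):
    """A tiny matrix-vector product built from MAC operations: each row
    of the weight matrix feeds one accumulator. In an accelerator, the
    rows and activations would be fetched from off-chip DRAM."""
    return [mac_dot(row, activations) for row in weight_rows]

print(matvec([[1, 2], [3, 4]], [5, 6]))  # [17, 39]
```

In hardware, many such MAC units run in parallel, so overall throughput and energy are dominated by how quickly and cheaply the operands can be fetched, which is why the memory technology matters so much.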