VIRTUAL Thesis Defense: Ahana Gangopadhyay (Electrical and Systems Engineering Program)

July 27, 2021
1:00 pm - 2:00 pm
Zoom conference (Virtual)

Thesis lab: Shantanu Chakrabartty (WashU Electrical & Systems Engineering)

Abstract: As computation increasingly moves from the cloud to the source of data collection, there is a growing demand for specialized machine learning (ML) algorithms that can perform learning and inference at the edge in energy- and resource-constrained environments. In this regard, we can take inspiration from small biological systems like insect brains, which exhibit high energy efficiency within a small form factor and show superior cognitive performance using fewer, coarser neural operations (spikes) than the high-precision floating-point operations used in deep learning platforms. Attempts at bridging this gap using neuromorphic hardware have produced silicon brains that remain inefficient in both energy dissipation and performance. This is because spiking neural networks (SNNs) are traditionally built bottom-up, starting with neuron models that mimic the response of biological neurons and connecting them together to form a network. Neural responses and weight parameters are therefore not optimized with respect to any system objective, and it is not evident how individual spikes and the associated population dynamics are related to a network objective. Conversely, conventional ML algorithms follow a top-down approach, starting from a system objective (which usually models only task efficiency) and reducing the problem to the model of a non-spiking neuron with non-local updates and little or no control over the population dynamics.

I propose that a reconciliation of the two approaches may be key to designing scalable SNNs that optimize for both energy and task efficiency under realistic physical constraints, while enabling spike-based encoding and local learning in an energy-based framework, as in traditional ML models.

To this end, I will first present a neuron model implementing a mapping based on polynomial Growth Transforms, which allows for independent control over spike forms and transient firing statistics. I will show how spike responses are generated by constraint violations while minimizing an energy functional involving a continuous-valued neural variable that represents local power dissipation at a neuron. I will then show how the framework can be extended to coupled neurons in a network by remapping synaptic interactions in a standard spiking network. I will show how the network can be designed to perform a limited amount of learning in an energy-efficient manner, even without synaptic adaptation, through appropriate choices of network structure and parameters: through spiking SVMs that learn to allocate switching energy to neurons that are more important for classification, and through spiking associative memory networks that modulate their responses based on global activity. Lastly, I will describe a backpropagation-less learning framework for synaptic adaptation in which weight parameters are optimized with respect to a network-level loss function that represents spiking activity across the network but produces updates that are local. I will show how the approach can be used for unsupervised and supervised learning such that minimizing a training error is equivalent to minimizing the network-level spiking activity. I will build upon this framework to introduce end-to-end SNN architectures and demonstrate their applicability for energy- and resource-efficient learning using a benchmark dataset.
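For readers unfamiliar with Growth Transforms, the short Python sketch below illustrates the generic idea the abstract refers to: a multiplicative, normalization-based update that lowers an energy function while automatically keeping the variables inside a constraint set, so no separate projection step is needed. It is only a minimal illustration on a probability simplex with an arbitrary quadratic energy; the matrix Q, the vector b, and the margin constant lam are assumptions made for this example and are not taken from the thesis, whose neuron model uses its own bounded neural variables and energy functional.

    import numpy as np

    # Minimal sketch of a growth-transform (Baum-Eagon style) multiplicative
    # update that minimizes a quadratic energy
    #     H(p) = 0.5 * p^T Q p - b^T p
    # over the probability simplex {p : p_i >= 0, sum_i p_i = 1}.
    # Q, b and lam are illustrative assumptions, not quantities from the thesis.

    rng = np.random.default_rng(0)
    n = 5
    A = rng.standard_normal((n, n))
    Q = A @ A.T                      # positive semi-definite, so H is convex
    b = rng.standard_normal(n)

    def energy(p):
        return 0.5 * p @ Q @ p - b @ p

    def growth_transform_step(p, lam):
        # For sufficiently large lam the numerator stays positive, the update
        # keeps p on the simplex, and the energy decreases monotonically.
        grad = Q @ p - b
        num = p * (lam - grad)
        return num / num.sum()

    # Crude global bound ensuring lam - grad_i > 0 everywhere on the simplex.
    lam = np.abs(Q).max() + np.abs(b).max() + 1.0

    p = np.full(n, 1.0 / n)          # start at the center of the simplex
    for _ in range(200):
        p = growth_transform_step(p, lam)

    print("final energy:", energy(p))
    print("on the simplex:", np.isclose(p.sum(), 1.0) and np.all(p >= 0))

In the thesis framework, an analogous fixed-point mapping is applied to bounded, continuous-valued neural variables, with spikes produced when a constraint is violated; the example above only conveys the flavor of the underlying optimization update.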

For inquiries, contact Francesca Allhoff or Ahana Gangopadhyay.