Building Better AI Hardware at IEEE ISCAS 2021
We had four papers accepted at IEEE ISCAS 2021! Two of these are led by the first students I have supervised, so this is a special moment. All of the work relates to either building better AI hardware or using spikes to reduce overhead.
The first was led by Dennis Robey at UWA, who developed an interesting way to use generative neural networks to synthesize natural scenes from sparse, low-power neuromorphic event streams.
The second was led by Coen Arrow, also at UWA, in collaboration with Kia Nazarpour's group at the University of Edinburgh. We conjecture that controlling upper-limb prostheses directly with retinal spikes overcomes the computer-vision bottleneck that limits fast reaction-response times.
The third was led by Corey Lammie and Mostafa Rahimi-Azghadi at JCU on stochastic deep learning optimization with RRAM. This work generated awesome results. We show that by paying a small price in larger bit-widths, we gain huge returns in skipping local minima and saddle points, which translates to faster training runs.
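The intuition behind escaping saddle points with stochastic updates can be illustrated in a few lines, independent of any RRAM specifics. The sketch below is a generic noisy-gradient-descent toy example, not the method from the paper: on a function with a saddle at the origin, plain gradient descent initialized on the saddle's stable direction stalls there, while injecting noise into the gradient (here, Gaussian noise, a stand-in for RRAM device stochasticity) kicks the iterate onto the unstable direction and down to a true minimum. All names and constants are illustrative.

```python
import random

def grad(x, y):
    # f(x, y) = x**2 + y**4 - y**2: saddle point at the origin,
    # minima at (0, +1/sqrt(2)) and (0, -1/sqrt(2)).
    return 2 * x, 4 * y**3 - 2 * y

def descend(noise_scale, steps=5000, lr=0.01, seed=0):
    # Gradient descent with optional Gaussian noise added to each
    # gradient component (a toy stand-in for device stochasticity).
    rng = random.Random(seed)
    x, y = 0.5, 0.0  # start on the saddle's stable manifold
    for _ in range(steps):
        gx, gy = grad(x, y)
        x -= lr * (gx + noise_scale * rng.gauss(0, 1))
        y -= lr * (gy + noise_scale * rng.gauss(0, 1))
    return x, y

# Deterministic descent converges to the saddle at (0, 0); the noisy
# variant drifts off the unstable y-direction and reaches a minimum
# near y = +/-0.707.
sx, sy = descend(noise_scale=0.0)
nx, ny = descend(noise_scale=0.1)
```

The noise that makes analog RRAM writes imprecise is, in this picture, exactly what perturbs the iterate off flat regions, which is why a small accuracy price can buy faster convergence.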
Finally, the work led by me used 3D-stacked RRAM arrays to enable adaptive dataflow pathways that aim to hide die-to-die latency: while one stack serves as the readout, the other is used to program the next layer's parameters.
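The latency benefit of alternating the two stacks is essentially double buffering, and a back-of-the-envelope model makes it concrete. The sketch below is my own simplified timing model, not the analysis from the paper; `t_read`, `t_prog`, and both functions are hypothetical names, and it ignores die-to-die transfer details entirely.

```python
def serial_latency(n_layers, t_read, t_prog):
    # Baseline: program each layer's weights, then read it out,
    # strictly one after the other.
    return n_layers * (t_prog + t_read)

def pipelined_latency(n_layers, t_read, t_prog):
    # Two stacks: while one stack is read out for layer i, the other
    # is programmed with layer i+1's weights. After the initial fill,
    # only the longer of the two operations is exposed per layer.
    if n_layers == 0:
        return 0
    return t_prog + (n_layers - 1) * max(t_read, t_prog) + t_read

# Example: 8 layers, readout much faster than programming.
# serial_latency(8, 1, 4)    -> 40 time units
# pipelined_latency(8, 1, 4) -> 33 time units
```

Whenever programming dominates, the steady-state cost per layer drops from `t_prog + t_read` to roughly `t_prog`, which is the latency the alternating-stack scheme is trying to hide.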