Deep learning has established itself as a powerful AI paradigm for a range of applications such as computer vision and language modeling. A key challenge for deep learning is its computational inefficiency. In-memory computing (IMC) is arguably the most promising non-von Neumann compute paradigm to address this challenge. Attributes such as synaptic efficacy and plasticity can be implemented in place by exploiting the physical properties of memory devices such as phase-change memory (PCM). I will provide a status update on the most advanced IMC cores based on phase-change memory integrated in the 14 nm CMOS technology node. However, to further improve energy efficiency and to achieve more general AI, it is clear that one needs to transcend or augment deep learning. More bio-realistic deep learning and neuro-vector-symbolic architectures are two highly promising approaches in this direction that could also benefit from in-memory computing.
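The "in place" computation mentioned above refers to analog matrix-vector multiplication on a memory crossbar: weights are stored as device conductances, inputs are applied as voltages, and Ohm's law plus Kirchhoff's current law deliver the weighted sums as column currents in a single step. A minimal numerical sketch of this idea (all values, including the conductance range and noise level, are illustrative assumptions, not parameters of the HERMES core):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative weight matrix to be "programmed" onto a crossbar.
W = rng.standard_normal((4, 3))

# Map weights linearly to device conductances (assumed max conductance in siemens).
g_max = 1e-5
scale = g_max / np.abs(W).max()
G = W * scale

# Input activations applied as row voltages (volts).
x = rng.standard_normal(4)

# Ideal analog MVM: each column current is a dot product of its
# conductances with the input voltages (Ohm's law + Kirchhoff's law).
i_out = G.T @ x

# PCM devices are imperfect; model programming noise as a small
# Gaussian perturbation of the conductances (assumed 2% of g_max).
G_noisy = G + rng.normal(0.0, 0.02 * g_max, size=G.shape)
i_noisy = G_noisy.T @ x

# The ideal currents equal the digital MVM up to the mapping scale.
print(np.allclose(i_out, (W.T @ x) * scale))
```

The point of the sketch is that the multiply-accumulate happens in the physics of the array rather than by shuttling weights to a processor; the noisy variant hints at why linearized ADCs and device-aware training matter in practice.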
Some relevant publications:
 Sebastian et al., "Memory devices and applications for in-memory computing", Nature Nanotechnology, 2020
 Khaddam-Aljameh et al., "HERMES core: A 1.59 TOPS/mm2 PCM on 14nm CMOS in-memory compute core using 300ps/LSB linearized CCO-based ADCs", IEEE Journal of Solid-State Circuits, 2022
 Sarwat et al., "Phase-change memtransistive synapses for mixed-plasticity neural computations", Nature Nanotechnology, 2022
 Hersche et al., "A Neuro-vector-symbolic Architecture for Solving Raven's Progressive Matrices", arXiv, 2022