Figure 1 from "A 3.43TOPS/W 48.9pJ/pixel 50.1nJ/classification 512 Analog Neuron Sparse Coding Neural Network with On-Chip Learning and Classification in 40nm CMOS" (Semantic Scholar)
Figure 4 from "A 44.1TOPS/W Precision-Scalable Accelerator for Quantized Neural Networks in 28nm CMOS" (Semantic Scholar)
"TOPS, Memory, Throughput and Inference Efficiency"
"As AI chips improve, is TOPS the best way to measure their power?" (VentureBeat)
"A 1.32 TOPS/W Energy Efficient Deep Neural Network Learning Processor with Direct Feedback Alignment based Heterogeneous Core Architecture" (Semantic Scholar)
"When 'TOPS' are Misleading: Neural accelerators are often…" by Jan Werth (Towards Data Science)
"Sparsity engine boost for neural network IP core ..."