We’re releasing an analysis showing that since 2012 the amount of compute needed to train a neural net to the same performance on ImageNet classification has been decreasing by a factor of 2 every 16 months. Compared to 2012, it now takes 44 times less compute to train a neural network to the level of AlexNet (by contrast, Moore’s Law would yield an 11x cost improvement over this period). Our results suggest that for AI tasks with high levels of recent investment, algorithmic progress has yielded more gains than classical hardware efficiency.
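The quoted trends are easy to sanity-check with back-of-envelope arithmetic. The sketch below assumes a roughly 7-year window (2012 to ~2019, i.e. 84 months), a figure not stated in the excerpt, and compares the 16-month halving trend against Moore's law's ~24-month doubling:

```python
# Back-of-envelope check of the trends quoted above.
# Assumption (not in the excerpt): a ~7-year window, i.e. 84 months.
months = 84

# Training-efficiency trend: required compute halves every 16 months.
algorithmic_gain = 2 ** (months / 16)  # ~38x from the smooth trend alone

# Moore's law: cost-efficiency doubles roughly every 24 months.
moores_gain = 2 ** (months / 24)       # ~11x, matching the article's figure

print(f"algorithmic trend: ~{algorithmic_gain:.0f}x")
print(f"Moore's law:       ~{moores_gain:.0f}x")
```

The measured 44x sits a bit above the smooth 16-month extrapolation (~38x), which is expected since the trend line is a fit, not an exact law.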
Via Farid Mheir
WHY IT MATTERS: Great news: AI training methods are beating Moore's law! But in my experience, most organizations have yet to embrace AI, and not because it requires too much compute power (it does, but that is manageable). Most of my clients fail at AI for two reasons: 1- they cannot find the skilled people they need (or cannot pay enough to attract them), and 2- they lack access to clean data to do the work.
For those who like to reference Moore's law, the article below provides some novel measurements of the compute power and resources required to implement AI projects, as well as considerations for policy.