Hardware remains a bottleneck - but the issues run deeper
#AI22 is a series of articles highlighting what we believe to be 10 developments that will be impacting AI this year.
This series is co-written by Dr. Johannes Otterbach, Dr. Rasmus Rothe and Henry Schröder.
The chip shortage of the last few years has affected industries worldwide: from higher prices for consumer devices to empty car lots and longer delivery times. By October 2021, the average lead time between chip order and delivery had reached 22 weeks, almost double that of the year before.
While these shortages have put extreme stress on many industries, the AI industry was hit especially hard. A large proportion of AI models rely on powerful graphics processing units (GPUs) for training, and as models grow in size, so does the need for immense computing power. Between 1959 and 2012, the amount of compute used to train AI models doubled roughly every two years; since then it has doubled on average every 3.4 months, meaning compute usage today doubles at a rate about seven times faster than in the pre-MAMAA (Microsoft, Apple, Meta, Amazon, Alphabet) era. A recent example of these enormous supercomputers is the Meta-Nvidia partnership, which will produce a computer able to train AI models with more than a trillion parameters.
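To make the doubling figures concrete, the back-of-the-envelope arithmetic behind the "seven times faster" claim can be sketched as follows (the 24-month and 3.4-month doubling times are taken from the text above; the exact ratios are illustrative, not precise measurements):

```python
# Doubling times for AI training compute, as cited in the text:
# roughly every 24 months before 2012, every 3.4 months since.
pre_2012_doubling_months = 24.0
post_2012_doubling_months = 3.4

# Ratio of the doubling rates: ~7x, matching the claim in the text.
speedup = pre_2012_doubling_months / post_2012_doubling_months
print(f"Doubling rate speed-up: {speedup:.1f}x")  # ~7.1x

# Implied compute growth over a single year in each regime:
# growth factor = 2 ** (12 months / doubling time in months)
pre_growth = 2 ** (12 / pre_2012_doubling_months)
post_growth = 2 ** (12 / post_2012_doubling_months)
print(f"Yearly compute growth, pre-2012:  {pre_growth:.1f}x")  # ~1.4x
print(f"Yearly compute growth, post-2012: {post_growth:.0f}x")  # ~12x
```

The second pair of numbers shows why the shift matters so much in practice: a 3.4-month doubling time compounds to roughly an order of magnitude more compute per year, rather than the modest ~40% yearly growth of the earlier era.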
This development has increased not only the available computing power and the need for the corresponding hardware but also the price of training these models. While it cost as little as $19 to train an ImageNet model in 2019, training GPT-3 is estimated to have cost at least $4.6m in 2020, and Nvidia's Selene cost an estimated $56m to build and train in the same year. This expected exponential growth in cost, together with limited access to the necessary hardware, increases the danger that only a few very wealthy tech organizations will be able to develop and train such models, posing a growing risk as dependence on these companies deepens. It also acts as an enormous barrier to entry, not only for smaller startups, whose VCs are unwilling to finance such large capital expenditures (CAPEX), but also for research organizations and universities that lack the necessary means.
To avoid becoming highly dependent on these organizations, governments can be expected to take stronger measures: not only to secure sovereignty in this critical technology but also to encourage companies to share the hardware and knowledge behind these models, thereby leveling the playing field for all market participants. While the chip shortage has been an extreme challenge in this respect, it would be naive to call it the main driver. Public institutions around the globe have generally not provided the incentives and conditions needed for such models to be developed: there have been few public funding programs for the necessary hardware, and the amortization costs of such models remain unclear. Although the hardware difficulties have brought these challenges into focus, unless governments take decisive financial and legislative action, the problem will become even more acute in the coming years.