The high cost of training Artificial Intelligence (AI) models is limiting the involvement of non-industry players in the technology revolution, according to the Stanford Institute for Human-Centered Artificial Intelligence's 2024 AI Index Report.
The report noted that although AI companies rarely disclose the expenses involved in training large language models, the cost runs into millions of dollars and is still rising.
The institute said, “This escalation in training expenses has effectively excluded universities, traditionally centres of AI research, from developing their leading-edge foundation models.”
It also noted that Sam Altman, OpenAI's chief executive officer, said in 2023 that it cost over $100 million to train GPT-4, while Google's Gemini Ultra cost around $191 million. By contrast, it highlighted, the original Transformer model, which introduced the architecture that underpins modern LLMs, cost only around $900 to train.
The revelation comes as the Nigerian government recently launched its own Large Language Model (LLM) to improve AI inclusion. According to the institute, data scarcity may pose a challenge, as existing LLMs have already been trained on meaningful percentages of the data currently available.
“The growing data dependency of AI models has led to concerns that future generations of computer scientists will run out of data to further scale and improve their systems,” it added.