Wednesday, February 8, 2023

We could run out of data to train AI language programs 

Large language models are one of the hottest areas of AI research right now, with companies racing to release programs like GPT-3 that can write impressively coherent articles and even computer code. But there’s a problem looming on the horizon, according to a team of AI forecasters: we might run out of data to train them on.

Language models are trained using texts from sources like Wikipedia, news articles, scientific papers, and books. In recent years, the trend has been to train these models on more and more data in the hope that it’ll make them more accurate and versatile.

The trouble is, the types of data typically used for training language models may be used up in the near future—as early as 2026, according to a paper by researchers from Epoch, an AI research and forecasting organization, that is yet to be peer reviewed. The issue stems from the fact that, as researchers build more powerful models with greater capabilities, they have to find ever more texts to train them on. Large language model researchers are increasingly concerned that they are going to run out of this sort of data, says Teven Le Scao, a researcher at AI company Hugging Face, who was not involved in Epoch’s work.

Part of the problem is that language AI researchers filter the data they use to train models into two categories: high quality and low quality. The line between the two can be fuzzy, says Pablo Villalobos, a staff researcher at Epoch and the lead author of the paper, but text in the high-quality category is viewed as better written and is often produced by professional writers. 

Data in the low-quality category consists of texts like social media posts or comments on websites like 4chan, and it greatly outnumbers the data considered high quality. Researchers typically train models only on data that falls into the high-quality category, because that is the type of language they want the models to reproduce. This approach has produced some impressive results for large language models such as GPT-3.
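As a toy illustration of this kind of filtering, a quality filter can be sketched as a simple heuristic applied to each document. The function name, thresholds, and sample texts below are invented for illustration; real pipelines use far more sophisticated classifiers:

```python
# Toy sketch of a data-quality filter (illustrative heuristic only,
# not any lab's actual pipeline).
def looks_high_quality(text: str) -> bool:
    """Crude proxy for "well-written": long enough, mostly alphabetic,
    and ending in sentence punctuation."""
    words = text.split()
    if len(words) < 8:
        return False
    alpha_ratio = sum(c.isalpha() or c.isspace() for c in text) / max(len(text), 1)
    return alpha_ratio > 0.8 and text.rstrip().endswith((".", "!", "?"))

corpus = [
    "The committee published its findings in a detailed report on Tuesday morning.",
    "lol ok whatever u say",
]
# Keep only documents the heuristic accepts as "high quality".
high_quality = [t for t in corpus if looks_high_quality(t)]
```

A filter like this discards most web text, which is exactly why the high-quality pool is so much smaller than the raw data available.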

One way to overcome these data constraints would be to reassess what’s defined as “low” and “high” quality, according to Swabha Swayamdipta, a University of Southern California machine learning professor who specializes in dataset quality. If data shortages push AI researchers to incorporate more diverse datasets into the training process, it would be a “net positive” for language models, Swayamdipta says.

Researchers may also find ways to extend the life of data used for training language models. Currently, large language models are trained on the same data just once, due to performance and cost constraints. But it may be possible to train a model several times using the same data, says Swayamdipta. 
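The difference between today's single-pass training and reusing data over several passes can be sketched with a toy model. Everything below, from the one-variable regression to the learning rate, is an illustrative stand-in; real language-model training differs enormously in scale, but the loop structure of "epochs over the same data" is the same idea:

```python
# Toy sketch of multi-epoch training: the same dataset is revisited on every pass.
data = [(x, 2 * x + 1) for x in range(10)]  # toy targets: y = 2x + 1

def train(epochs, lr=0.01):
    """Fit w, b by stochastic gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):            # each epoch is one full pass over the data
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x          # gradient step for the weight
            b -= lr * err              # gradient step for the bias
    return w, b

def loss(w, b):
    return sum(((w * x + b) - y) ** 2 for x, y in data) / len(data)

single_pass = loss(*train(1))   # data seen once, as in current LLM practice
multi_pass = loss(*train(5))    # the same data reused five times
```

In this toy setting the multi-pass model fits the data better, which is the intuition behind extending the life of training data, though whether repetition helps or hurts at language-model scale is an open question.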

Some researchers believe bigger may not equal better when it comes to language models anyway. Percy Liang, a computer science professor at Stanford University, says there is evidence that making models more efficient, rather than simply larger, may improve their abilities. 
“We’ve seen how smaller models that are trained on higher-quality data can outperform larger models trained on lower-quality data,” he explains.
