
Science and technology | Anything that can’t continue, won’t

The bigger-is-better approach to AI is running out of road

If AI is to keep getting better, it will have to do more with less 


When it comes to “large language models” (LLMs) such as GPT—which powers ChatGPT, a popular chatbot made by OpenAI, an American research lab—the clue is in the name. Modern AI systems are powered by vast artificial neural networks, bits of software modelled, very loosely, on biological brains. GPT-3, an LLM released in 2020, was a behemoth. It had 175bn “parameters”, as the simulated connections between those neurons are called. It was trained by having thousands of GPUs (specialised chips that excel at AI work) crunch through hundreds of billions of words of text over the course of several weeks. All that is thought to have cost at least $4.6m.


But the most consistent result from modern AI research is that, while big is good, bigger is better. Models have therefore been growing at a blistering pace. GPT-4, released in March, is thought to have around 1trn parameters—nearly six times as many as its predecessor. Sam Altman, the firm’s boss, put its development costs at more than $100m. Similar trends exist across the industry. Epoch AI, a research firm, estimated in 2022 that the computing power necessary to train a cutting-edge model was doubling every six to ten months (see chart).


This gigantism is becoming a problem. If Epoch AI’s ten-monthly doubling figure is right, then training costs could exceed a billion dollars by 2026—assuming, that is, models do not run out of data first. An analysis published in October 2022 forecast that the stock of high-quality text for training may well be exhausted around the same time. And even once the training is complete, actually using the resulting model can be expensive as well. The bigger the model, the more it costs to run. Earlier this year Morgan Stanley, a bank, guessed that, were half of Google’s searches to be handled by a current GPT-style program, it could cost the firm an additional $6bn a year. As the models get bigger, that number will probably rise.
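
The arithmetic behind that billion-dollar figure is straightforward. The sketch below extrapolates from the roughly $100m Mr Altman cited for GPT-4, using Epoch AI's slower, ten-month doubling time; the starting cost and dates are assumptions for illustration only.

```python
# Rough extrapolation of frontier training costs: start from ~$100m in
# early 2023 (the figure given for GPT-4) and double every 10 months
# (Epoch AI's slower estimate). Both starting values are assumptions.
start_cost_usd = 100e6
doubling_months = 10

for year in range(2023, 2027):
    months_elapsed = (year - 2023) * 12
    cost = start_cost_usd * 2 ** (months_elapsed / doubling_months)
    print(f"{year}: ~${cost / 1e9:.1f}bn")
# 2026: ~$1.2bn -- past the billion-dollar mark mentioned above
```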


Many in the field therefore think the “bigger is better” approach is running out of road. If AI models are to carry on improving—never mind fulfilling the AI-related dreams currently sweeping the tech industry—their creators will need to work out how to get more performance out of fewer resources. As Mr Altman put it in April, reflecting on the history of giant-sized AI: “I think we’re at the end of an era.”


Quantitative tightening

Instead, researchers are beginning to turn their attention to making their models more efficient, rather than simply bigger. One approach is to make trade-offs, cutting the number of parameters but training models with more data. In 2022 researchers at DeepMind, a division of Google, trained Chinchilla, an LLM with 70bn parameters, on a corpus of 1.4trn words. The model outperforms GPT-3, which has 175bn parameters trained on 300bn words. Feeding a smaller LLM more data means it takes longer to train. But the result is a smaller model that is faster and cheaper to use.
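
A back-of-the-envelope comparison makes the trade-off concrete. The sketch below uses the common rule of thumb that training compute is roughly six floating-point operations per parameter per training token; that approximation is not from the article and is only indicative.

```python
# Compare the two training runs mentioned above, using the rough
# approximation (an assumption, not from the article) that
# training FLOPs ~= 6 * parameters * tokens.
def train_flops(params, tokens):
    return 6 * params * tokens

gpt3_like = train_flops(175e9, 300e9)        # GPT-3: 175bn params, 300bn words
chinchilla_like = train_flops(70e9, 1.4e12)  # Chinchilla: 70bn params, 1.4trn words

print(f"GPT-3-style run:      {gpt3_like:.2e} FLOPs")
print(f"Chinchilla-style run: {chinchilla_like:.2e} FLOPs")
# Chinchilla spends more compute up front (it "takes longer to train"),
# but the resulting 70bn-parameter model is cheaper to run thereafter,
# since serving cost scales with parameter count.
```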


Another option is to make the maths fuzzier. Tracking fewer decimal places for each number in the model—rounding them off, in other words—can cut hardware requirements drastically. In March researchers at the Institute of Science and Technology Austria showed that rounding could squash the amount of memory consumed by a model similar to GPT-3, allowing the model to run on one high-end GPU instead of five, and with only “negligible accuracy degradation”.
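
The basic idea can be shown in a few lines. The sketch below is plain round-to-nearest 8-bit quantisation of a weight matrix; the Austrian method is considerably more sophisticated, so treat this only as an illustration of what rounding the numbers off buys.

```python
import numpy as np

# Minimal round-to-nearest quantisation: store each weight as an 8-bit
# integer plus a single float scale, instead of a 32-bit float.
def quantize_int8(weights):
    scale = np.abs(weights).max() / 127.0           # map the largest weight to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale             # approximate reconstruction

w = np.random.randn(4096, 4096).astype(np.float32)  # one toy weight matrix
q, s = quantize_int8(w)
print("memory saving:", w.nbytes / q.nbytes, "x")   # 4.0x
print("worst rounding error:", np.abs(w - dequantize(q, s)).max())
```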


Some users fine-tune general-purpose LLMs to focus on a specific task such as generating legal documents or detecting fake news. That is not as cumbersome as training an LLM in the first place, but can still be costly and slow. Fine-tuning LLaMA, an open-source model with 65bn parameters that was built by Meta, Facebook’s corporate parent, takes multiple GPUs anywhere from several hours to a few days.


Researchers at the University of Washington have invented a more efficient method that allowed them to create a new model, Guanaco, from LLaMA on a single GPU in a day without sacrificing much, if any, performance. Part of the trick was to use a rounding technique similar to the Austrians’. But they also used a technique called “low-rank adaptation”, which involves freezing a model’s existing parameters, then adding a new, smaller set of parameters in between. The fine-tuning is done by altering only those new variables. This simplifies things enough that even relatively feeble computers such as smartphones might be up to the task. Allowing LLMs to live on a user’s device, rather than in the giant data centres they currently inhabit, could allow for both greater personalisation and more privacy.
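
A minimal version of the low-rank idea fits in a short PyTorch class: freeze the pretrained weight, and train only two small matrices whose product is added to the layer's output. The class name and dimensions below are illustrative assumptions, and the Washington method additionally quantises the frozen weights.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a small trainable low-rank update (illustrative)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))        # up-projection
        self.scale = alpha / rank

    def forward(self, x):
        # Original output plus the low-rank correction; only A and B get gradients.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")  # well under 1%
```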


A team at Google, meanwhile, has come up with a different option for those who can get by with smaller models. This approach focuses on extracting the specific knowledge required from a big, general-purpose model into a smaller, specialised one. The big model acts as a teacher, and the smaller as a student. The researchers ask the teacher to answer questions and show how it comes to its conclusions. Both the answers and the teacher’s reasoning are used to train the student model. The team was able to train a student model with just 770m parameters, which outperformed its 540bn-parameter teacher on a specialised reasoning task.
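
The Google work trains the student on the teacher's answers and written reasoning. A much-simplified stand-in is classic "soft-label" distillation, sketched below, in which the student is nudged towards the teacher's output distribution; the function and toy tensors are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

# Classic logit distillation (a simplification of the approach above,
# which also trains on the teacher's written reasoning).
def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)     # teacher's "soft" answers
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # KL divergence pulls the student's distribution towards the teacher's.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

teacher_logits = torch.randn(32, 1000)                       # from the big, frozen model
student_logits = torch.randn(32, 1000, requires_grad=True)   # from the small model
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()                  # gradients flow only into the student's side
```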


Rather than focus on what the models are doing, another approach is to change how they are made. A great deal of AI programming is done in a language called Python. It is designed to be easy to use, freeing coders from the need to think about exactly how their programs will behave on the chips that run them. The price of abstracting such details away is slow code. Paying more attention to these implementation details can bring big benefits. This is “a huge part of the game at the moment”, says Thomas Wolf, chief science officer of Hugging Face, an open-source AI company.


Learn to code

In 2022, for instance, researchers at Stanford University published a modified version of the “attention algorithm”, which allows LLMs to learn connections between words and ideas. The idea was to modify the code to take account of what is happening on the chip that is running it, and especially to keep track of when a given piece of information needs to be looked up or stored. Their algorithm was able to speed up the training of GPT-2, an older large language model, threefold. It also gave it the ability to respond to longer queries.
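
For reference, the unmodified computation looks like the sketch below: it writes an n-by-n score matrix out to the GPU's main memory. The Stanford reworking computes the same output, but in tiles small enough to stay in the chip's fast on-chip memory, which is where the speed-up and the longer queries come from. The code is a generic illustration, not the researchers' implementation.

```python
import math
import torch

# "Naive" attention for reference: it materialises the full n-by-n score
# matrix in GPU main memory. The memory-aware version computes the same
# output in on-chip tiles instead, never writing that matrix out.
def naive_attention(q, k, v):
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])   # n x n
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 1024, 64)     # one attention head, sequence length 1,024
print(naive_attention(q, k, v).shape)    # torch.Size([1, 1024, 64])
```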


Sleeker code can also come from better tools. Earlier this year, Meta released an updated version of PyTorch, an AI-programming framework. By allowing coders to think more about how computations are arranged on the actual chip, it can double a model’s training speed by adding just one line of code. Modular, a startup founded by former engineers at Apple and Google, last month released a new AI-focused programming language called Mojo, which is based on Python. It too gives coders control over all sorts of fine details that were previously hidden. In some cases, code written in Mojo can run thousands of times faster than the same code in Python.
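
In PyTorch 2.0 that one line is torch.compile, which traces a model and generates fused kernels for the chip it runs on. A minimal example follows; the toy model itself is an assumption for illustration.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 1024), nn.GELU(), nn.Linear(1024, 1024))

# The single added line: PyTorch 2.0 traces the model and compiles it
# into fused kernels tailored to the chip it is running on.
model = torch.compile(model)

x = torch.randn(64, 1024)
y = model(x)   # the first call triggers compilation; later calls use the fast version
```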


A final option is to improve the chips on which that code runs. GPUs are only accidentally good at running AI software—they were originally designed to process the fancy graphics in modern video games. In particular, says a hardware researcher at Meta, GPUs are imperfectly designed for “inference” work (ie, actually running a model once it has been trained). Some firms are therefore designing their own, more specialised hardware. Google already runs most of its AI projects on its in-house “TPU” chips. Meta, with its MTIAs, and Amazon, with its Inferentia chips, are pursuing a similar path.


That such big performance increases can be extracted from relatively simple changes like rounding numbers or switching programming languages might seem surprising. But it reflects the breakneck speed with which LLMs have been developed. For many years they were research projects, and simply getting them to work well was more important than making them elegant. Only recently have they graduated to commercial, mass-market products. Most experts think there remains plenty of room for improvement. As Chris Manning, a computer scientist at Stanford University, put it: “There’s absolutely no reason to believe…that this is the ultimate neural architecture, and we will never find anything better.”

