Jumped on the AI hype wagon yet? Good for you – or maybe not. What key factors might put the brakes on AI’s growth? Are large language models a solution in search of a problem?

As AI’s capabilities expand at breakneck speed, businesses face a crucial decision: jump on the AI bandwagon now or wait and see. Both options have their risks.

The release of ChatGPT in late 2022, built on the GPT-3.5 large language model (LLM), started the current wave. Other now-popular tools use different deep learning models, such as Generative Adversarial Networks and Diffusion Models. It is important to note that all of these models have limitations.


What Could Stop The AI Train

At a top level, what could stop the growth of AI technology? We suggest:

  • Money
  • Power
  • Hardware (Chips)
  • Regulation

We will look at each of those in turn, but first, let’s look at another issue – Are deep-learning-based tools a technology looking for an application?

The End Use Case

AI has a long history. Alan Turing’s paper “Computing Machinery and Intelligence” appeared in the journal Mind in 1950 and introduced the imitation game. Most accept that the term Artificial Intelligence was first used by John McCarthy in 1956.

AI came back into wide public consciousness in late 2022 with the introduction of ChatGPT.

Some believe (and we concur) that the explosion of interest in ChatGPT was unexpected. It was not a typical product release: no “look at this, isn’t it great, it can do X, Y and Z” promotion. Users have largely been the guinea pigs.

When interest in LLMs exploded, various versions appeared in quick succession. Some were driven by former employees of OpenAI (the developer of ChatGPT). Others were versions of technology that had (allegedly) been sitting on a shelf gathering dust for years (Google) and some were new.

Technologies for large-scale quantitative and data analysis tasks have existed for years. LLMs, as the name suggests, work best with text. They can simplify a wide range of tasks, but most are small-scale and user-specific. Is there a killer application for LLMs? Not yet, we suggest; more likely they are one step on a long road.


Money

Many factors affect how much it costs to train a large-scale LLM like GPT-3.5, but it is at least £10m. As new models are released, the costs will only increase.
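To see how a training bill can reach eight figures, consider a rough sketch of the arithmetic. Every number below (cluster size, training duration, hourly rental price) is an illustrative assumption, not a measured figure:

```python
# Illustrative breakdown of an LLM training bill.
# All figures are assumptions chosen to show the arithmetic, not real quotes.

GPUS = 5_000                 # assumed number of GPUs running in parallel
TRAINING_DAYS = 30           # assumed wall-clock training time
COST_PER_GPU_HOUR = 2.50     # assumed rental price per GPU-hour, in pounds

gpu_hours = GPUS * TRAINING_DAYS * 24
compute_cost = gpu_hours * COST_PER_GPU_HOUR

print(f"{gpu_hours:,} GPU-hours -> £{compute_cost:,.0f}")
# 3,600,000 GPU-hours -> £9,000,000
```

Even with these modest assumptions, compute alone approaches the £10m figure, before staff, data and failed experiments are counted.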

Training an LLM is only part of the issue. There are organisational costs, the costs of maintaining a large group of (highly paid) engineers, the costs of hardware and more.

The overall costs to a single business could run to multiple billions of pounds, depending on the model, architecture, hardware and everything required to support that hardware. Hardware costs alone can be significant (more below): a single state-of-the-art GPU can cost $30,000 or more.
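A quick sketch shows how the per-chip price scales up. The $30,000 figure is from above; the cluster size and the overhead for networking, storage and racks are assumptions for illustration:

```python
# Back-of-envelope hardware cost for a hypothetical training cluster.
# Only the per-GPU price comes from the text; the rest are assumptions.

GPU_UNIT_COST_USD = 30_000   # state-of-the-art GPU, per the estimate above
GPU_COUNT = 10_000           # assumed cluster size for a large-scale LLM
SUPPORT_OVERHEAD = 0.5       # assumed 50% extra for networking, storage, racks

gpu_cost = GPU_UNIT_COST_USD * GPU_COUNT
total_cost = gpu_cost * (1 + SUPPORT_OVERHEAD)

print(f"GPUs alone: ${gpu_cost:,}")                    # $300,000,000
print(f"With supporting hardware: ${total_cost:,.0f}")
```

On these assumptions, the chips alone cost hundreds of millions of dollars before a single model is trained.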

The money has to come from somewhere. There could be significant funds available from various investors, but is it enough? The obvious question investors will ask is exactly what they are investing in (see the use case above). What is the upside? Some of those investors might have been around during the dot-com bubble. They may be cautious.


Power

Running a large language model (LLM) like GPT-3.5 requires significant computational power. As discussed below, the computations are typically performed by high-end GPUs. Each chip can consume up to 500W under full load, and thousands are required to run an LLM.

The GPUs generate high levels of heat, so more power is needed for cooling, and they need associated hardware to function. It all adds up to a significant power requirement.
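The scale of the requirement can be sketched with a simple sum. The 500W per-chip figure is from above; the cluster size and the PUE (Power Usage Effectiveness, the standard ratio of total facility power to compute power) are assumptions:

```python
# Rough power estimate for a GPU cluster running an LLM.
# Only the 500 W per-chip figure comes from the text; the rest are assumptions.

WATTS_PER_GPU = 500   # full-load draw per chip (from the article)
GPU_COUNT = 10_000    # assumed cluster size
PUE = 1.5             # assumed Power Usage Effectiveness (cooling, overhead)

it_load_mw = WATTS_PER_GPU * GPU_COUNT / 1_000_000   # megawatts of compute
facility_mw = it_load_mw * PUE                       # including cooling etc.

print(f"GPU load: {it_load_mw:.1f} MW, facility total: {facility_mw:.2f} MW")
# GPU load: 5.0 MW, facility total: 7.50 MW
```

Several megawatts of continuous draw is comparable to a small town, which is why dedicated power generation for data centres is being discussed at all.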

The power required to support a single data centre depends on multiple factors. To give an idea of scale, some have suggested that a future data centre, built to support a large-scale LLM, will require its own miniature nuclear power station.

Supplying power has a cost, and there are environmental issues to consider. The solutions are not trivial and somebody has to pay (refer to the use case and money sections above). Remember, LLMs like ChatGPT, Claude and Llama are currently either free to use or very low cost.

Hardware (Chips)

Use case aside, the problems outlined above are primarily driven by hardware and model training issues. The hardware is complex, expensive and difficult (and costly) to run and maintain. Worse still, the hardware is in short supply.

Moving forward may require new architectures (a move away from LLMs to something else) and hardware. It appears a large amount of time and effort is being allocated to the next stage.

One area of research is model compression and distillation: squeezing very large models into smaller, more efficient forms. Improved training techniques, including better optimisation algorithms, are another area under investigation.
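The core idea of distillation can be sketched in a few lines: a small "student" model is trained to match the softened output distribution of a large "teacher", rather than just the hard labels. The logits below are made up purely for illustration:

```python
import math

# Minimal sketch of knowledge distillation's core loss.
# A student is trained to match the teacher's temperature-softened outputs.

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature = softer targets."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's soft targets and the student."""
    teacher_p = softmax(teacher_logits, temperature)
    student_p = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_p, student_p))

teacher = [4.0, 1.0, 0.5]   # large model's output logits (hypothetical)
student = [3.0, 1.5, 0.2]   # small model's output logits (hypothetical)

loss = distillation_loss(teacher, student)
print(f"distillation loss: {loss:.4f}")
```

The loss shrinks as the student's distribution approaches the teacher's, which is what lets a much smaller (and cheaper to run) model inherit much of a large model's behaviour.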

Better hardware utilisation is an obvious area to evaluate. Quantum computing could help but that technology has been in development for years with no major breakthroughs so far.

Can the architecture issues be resolved? Probably, IF those involved are allowed enough time and money.


Regulation

Given it took over a decade to deliver (at least partial) social media regulation, it’s not realistic to expect significant AI regulation in the short term. That said, the EU is making progress, but to work, regulation needs to be worldwide. What about the USA? What about China?

Without regulation (given the limitations discussed above) development will (most likely) continue at its current pace.

That said, it’s possible in the next few years (as AI agents are introduced) AI technology could cause something really bad to happen (like, really bad for humanity on a global scale). That would focus minds and bring about massive worldwide regulation. It could stop AI development in its tracks.

Stick or Twist

So what to do as a business? Is it best to go all in on AI now or wait to see what happens? The AI genie is out of the bottle; it is either going to fly and change everything, or it’s going to crash spectacularly. There’s nothing (in our humble opinion) in between.

Go all in now and you might have a real competitive advantage, but there are costs involved. Hold off and you might allow yourself a pat on the back if it does fail. The risk is you get left behind and with the technology (and capabilities) developing at such a fast rate it will be difficult to catch up.

You could go for a middle ground and start to develop some knowledge and capability while minimising costs. In one scenario that leaves you not too far behind, though not in the leading group, and at least ahead of the laggards. The choice is yours.
