There is no artificial intelligence without a data center

By now, artificial intelligence (AI) based on machine learning has become available to everyone, and it became a particularly hot topic with the spread of ChatGPT. However, we have been using AI-based solutions for some time. For example, banks as well as the Estonian Tax and Customs Board are implementing artificial intelligence systems that analyse transactions and behavioural patterns to detect fraud and suspicious activities. Many companies use chatbots to automate customer service, and machine learning methods also determine which advertisements users are shown online. Yet despite all these and dozens of other applications, very little attention has been paid to how an AI is actually set up.

Photo: Pexels.com

It all starts with technology

Broadly speaking, implementing artificial intelligence requires two components in large quantities: computing power and data. Throughout the decades over which AI has been developed, it is precisely the lack of computing power that has occasionally brought progress to a standstill. There is a term for these pauses in development – ‘AI winter’ – and they have sometimes lasted for years. However, each stagnation has been followed by progress, and it is safe to say that there is currently no sign of a technology-induced ‘winter’.

The scarcity of data has also played a role in AI winters. Training artificial intelligence algorithms requires large amounts of high-quality data, and this resource has become increasingly abundant in recent years. Today, for example, information from social media, IoT devices, and public digital databases can be used to train artificial intelligence. Synthetic data is also created to train the machines, and people are hired to give the AI feedback on the decisions it makes, according to specific criteria.

Where are the big AIs located?

Large-scale AIs are often located in data centers, because they require large amounts of power, a controlled environment, and a high level of security. Of course, the demands and expectations on infrastructure grow over time. For example, the US tech giant Meta has announced plans to double the number of its data centers by 2028. The company already owns 21 ‘data campuses’ around the world, representing a total of 16 billion dollars in investments and over 3.7 million square metres of floor space in data centers. As the success of artificial intelligence applications depends on their capability to process data, this is the only way forward.

In each next-generation data center, Meta will add more computing power using a new chip designed in-house, which the company claims is twice as efficient as the graphics processing units (GPUs) used in most AI infrastructure.

What demands does artificial intelligence impose on data centers?

‘The more complex and demanding the IT equipment, the higher the demands it imposes on the environment in which it operates. An average office building cannot provide the power and cooling needed to run a large-scale AI. That is why, in some places, separate data centers are even being built for this purpose,’ says Kert Evert, the author of the idea for the largest data center in the Baltics. According to him, companies that develop and implement artificial intelligence are already operating in their data center. One great and internationally successful example of this is R8 Technologies, which essentially started out in a garage and now operates in 15 countries across Europe, using AI to manage the indoor climate and energy use of large, technically complex buildings. Read more about their success story in another blog post.

‘Considering that we have the electricity capacity of an average Estonian city, we will be able to accommodate even more AIs with high energy consumption,’ says Evert.

The most common hardware used to power AI is the graphics processing unit, or GPU, mentioned earlier in the Meta example, because GPUs can perform a very large number of calculations in parallel. This is why, in the context of AI, GPUs often complete their tasks several hundred times faster than classical central processing units (CPUs). Today, of course, very efficient (and expensive) specialised versions of these GPUs have already been developed.
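To illustrate the difference, here is a minimal sketch that runs the same matrix multiplication on a CPU and, if one is available, on a GPU. It assumes PyTorch is installed and is an illustration only – the actual speed-up depends entirely on the hardware and the workload.

```python
# Minimal sketch: the same matrix multiplication on a CPU and on a GPU.
# Assumes PyTorch is installed; the measured speed-up depends on the hardware.
import time
import torch

size = 4096
a = torch.randn(size, size)
b = torch.randn(size, size)

# CPU: the work is spread over a handful of general-purpose cores.
start = time.time()
_ = a @ b
print(f"CPU: {time.time() - start:.3f} s")

if torch.cuda.is_available():
    # GPU: the same multiplication is split across thousands of smaller cores.
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    start = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()
    print(f"GPU: {time.time() - start:.3f} s")
```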

The same applies to storage devices, which must be able to handle both structured and unstructured data. Connectivity is also critical in this equation: the connections between the computing units working in parallel have to carry enormous volumes of data.

The rule here is that the more specialised the use and the newer the device, the more you have to pay for it. To take GPUs alone as an example: 2,048 Nvidia A100 GPUs were used to train Meta's LLaMA artificial intelligence earlier this year. At the time of writing, the price of one of these GPUs is in the region of €10,000 – which means that the processors alone cost over 20 million euros. ‘In this context, it can be argued that, when it comes to the training and deployment of artificial intelligence, physical security is, without a doubt, important in addition to reliability. Both the devices themselves and the intellectual property on them must be protected,’ Evert adds.
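For a quick back-of-the-envelope check of that figure, using only the numbers quoted above (real-world prices vary):

```python
# Rough cost estimate based on the figures quoted in the text; actual prices vary.
gpu_count = 2048
price_per_gpu_eur = 10_000
total_eur = gpu_count * price_per_gpu_eur
print(f"{total_eur:,} EUR")  # 20,480,000 EUR, i.e. over 20 million euros
```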

Ultimately, the AIs that will reshape our future world rely heavily on two components: computing power and data. The first of these requires specialised hardware and data centers to host it. And while more and more data is being created all the time, data centers and the technology that runs in them will not come into existence on their own – this is an area where we can expect rapid progress in the near future.
