Success story – why is a major bank protecting a new data center from EMP weapons?

When we make a bank transfer or pay by card, we never think about the systems and devices that make it possible. The digitalisation of the financial sector is taken for granted today. You will probably only notice it when the system does not work – when your online bank is down or card payments fail. Then, for a moment, we realise how much we depend on it. The IT infrastructure is the foundation of business continuity, and ultimately it all begins there. This is why the IT infrastructure has been designed with scenarios in mind that still sound like science fiction.

Andrus Tamm, Head of Product Development and Technology at SEB

We spoke to Andrus Tamm, Head of Product Development and Technology at SEB, one of the largest banks in Scandinavia and the Baltics, about how the bank plans and builds its IT infrastructure, why it needs to protect itself against electromagnetic pulse weapons, how the server fleet has been halved, how to move equipment without disruption to services, and the wider IT world.

How has the role of IT in the financial sector grown over time? Is this development different from other sectors? How?

There are no differences in terms of technological possibilities. If we add another layer, though – risk management and regulatory measures – it is a whole different world. In the financial sector, many of the requirements have been applied earlier than in other sectors and are monitored much more closely. The same applies to how widely these issues are discussed. Take fraud and cyber security, for example. There is relatively little talk of attack vectors against industrial undertakings compared to the financial sector. And understandably so, as a much larger part of society is directly exposed to banking services, and the security of funds is important to everyone.

What is the approach of the bank to building its IT infrastructure to meet the high expectations set for it?

First and foremost, we focus on the risks and the three key components of confidentiality, integrity, and availability. In short, the data of the customer must be protected and remain private, the status of the financial instruments on one’s account must be aligned with the transactions made and the instruments must remain available at all times. We certainly also take into account previous experience, such as business continuity plans, which means that we think a lot in advance about what risks we need to protect ourselves against.

To what extent have risks changed over the past decade, which new threats have emerged?

There have been many developments here. For example, a decade or more ago, it was unlikely that someone would attack the IT infrastructure with an electromagnetic pulse weapon. Back then, the risk was very low. It is much more real now. In addition to weapons, we are also affected by natural factors, such as solar storms. Therefore, it was our specific requirement that the recently commissioned server room at Greenergy Data Center should be protected against such attacks, and a special protection layer was installed.

Have any financial institutions been attacked with electromagnetic pulse weapons to your knowledge?

I cannot point to anything evidence-based. However, we should keep in mind that most organisations keep information about these types of security incidents to themselves.

How are banks prepared for crisis situations? Including national and regional ones, such as a power cut?

This was also topical twenty years ago, and perhaps even more so. When it comes to security of power supply, standard measures are a given for us by now. UPSs – in simple terms, battery banks – bridge short-term outages. In addition to those, we also have diesel generators or other on-site power generation capacity for longer outages.

Power supply is a constant focus for us, and we make sure several times a year that there is sufficient fuel supply for the generators, that the generators start, and that the switch-overs are carried out as planned. At least once a quarter, we also test them. The data centers that provide the service obviously deal with this area independently.

What is the logic behind the IT infrastructure of SEB?

The largest unit of SEB is located in its home country, Sweden, but we also have a presence in many other places around the world. We are represented in Europe, the US, and Asia, but we do not have an IT infrastructure everywhere.

The approach is different for the Baltic entities. The IT infrastructure in this region is operated independently and forms a single entity – Estonia, Latvia, and Lithuania.

Are you independent in your planning?

The Baltic States plan independently. In the past, we have had plans to merge with the group, but this has to be approached in the context of the data architecture and the complexity of the systems, which becomes the key issue when merging very large systems. For example, it would not make sense in terms of time or, indirectly, cost to take the SEB Baltic unit and place it somewhere in the Swedish system. This is where the differences in legislation and in the structure of services come into play.

Has the Baltic region historically been a single entity for SEB?

No. It took us eight years to form this entity. Our systems in those countries were different and the reasons for moving towards one system were quite practical. The Baltic markets and product selections were quite similar. For example, when a new requirement or product was developed, it had to be dealt with separately at the Estonian, Latvian, and Lithuanian level and the IT team in each country had to develop and install the change separately. However, the original requirement was the same.

Let us take debit cards, for example. If each country has its own debit card system with several different cards, you end up with dozens of debit cards across the Baltics that differ very little in nature. Taking the Baltic States as a single region, however, a much smaller number is feasible, as there is no duplication. This saves a significant amount of resources.

Did the move from independent countries to a pan-Baltic system in terms of IT imply a major restructuring?

Ten years ago, we had seven server rooms and today, we have two, plus some smaller units that another organisation would probably also call server rooms. However, we do not apply the same requirements to them as we do to real server rooms. Smaller units are important elements of business continuity which will ensure our survival in the event of a failure of the main data centers.

When it comes to the overall number of servers, though, there has been a very large contraction. We once thought that our one main data center was going to be cramped and made plans to expand, but life turned out differently. Our current virtualisation skills are so good that we could even give some of these spaces away. Currently, I think we will continue on a downward trend in terms of the physical fleet.

In moving from seven spaces to two, the number of our services and applications has stayed about the same or grown slightly, yet there are now half as many servers.

How is this optimisation possible?

Much can be automated and done more efficiently. Another important trend is the decommissioning of end-of-life assets and systems. This is another thing that has improved significantly over the last decade. We are more careful and leave less digital waste lying around, especially things kept only just in case. No, let us not keep anything just in case. Give us a reason within the organisation and we will match it with a price. If the justification is weak and the cost is high, we quickly decide that there is no real need to keep it.

It makes sense that optimising the system in this way saves a lot of resources.

I would also add that attitudes towards the environment and sustainability have changed drastically over the past ten years. In recent years, I myself have been keeping a close eye on our energy consumption in the data centers and on the related energy efficiency. Here, it is important to balance energy efficiency against risk mitigation. For example, we always keep the temperature in the server rooms within a certain range, which requires either heating or cooling. Over time, however, we have changed the rules on what an appropriate server room temperature is. On the one hand, it has to be suitable for the IT equipment; on the other hand, it should not consume too much power.

Sustainability and environmental friendliness are very important factors, and I am very pleased to see that we have a partner working towards the same goal. For example, PUE (power usage effectiveness, a measure of how efficiently a data center uses energy – editor) is one of the key indicators here, and it is very important to me.
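
For readers unfamiliar with the metric, PUE is simply the ratio of the total energy a facility draws to the energy consumed by the IT equipment itself. A minimal sketch with invented figures (not SEB's or GDC's actual numbers):

```python
# Power Usage Effectiveness (PUE) = total facility energy / energy used by the IT equipment.
# A PUE of 1.0 would mean every kilowatt-hour goes to the IT load; anything above that
# is spent on cooling, power distribution losses, lighting, and so on.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

# Invented month: 1,200 MWh drawn by the whole facility, 1,000 MWh of it by the IT load.
print(pue(1_200_000, 1_000_000))  # 1.2 -> 20% overhead on top of the IT load
```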

When a bank moves its IT backbone from one data center to another, there should be no disruption to services. How did you manage this during your last move?

First, our infrastructure is duplicated. In fact, even more than twice. If you are aware of the risks involved, you may even work with only one set at a certain point, simply with no margin. We did not, though, as we have more than two sets.

If we now look at the lifetime of a server room, it is ten years or more. The lifespan of a server, however, is three to five years, and we stick to that quite tightly. By taking the life cycles of the server room and of the other IT assets into account, we can also reduce the risks associated with a move by timing our activities to match those cycles.

So is it fair to say that the need to replace equipment is already built into the system and does not usually lead to an emergency?

Right, exactly.

What were the criteria for your choice when you decided to replace one of the data centers?

I remember the first meeting in our building. The main reason for choosing Greenergy Data Centers was their very high level of professionalism – and this is not just my opinion. While other service providers started to sweat when they heard our questions and took them away to think about their answers, GDC had all the solutions prepared in advance. This left a very professional impression and was one of the key points in the entire procurement process.

Speaking purely personally about what impresses me, it is the certificates and the diligence that went into obtaining them – both the ISO 27001 information security certificate and EN 50600, which takes a holistic view of a data center. The efficiency, reliability, duplicated support systems, energy security, and modern security solutions of the data center are naturally also important. Very impressive.

How does the bank assess the risks associated with the IT infrastructure?

The general principles are the same for banks and are partly prescribed by legislation.

Does it go all the way to the level of RTO and RPO? In other words, how quickly the service must be restored and how much data loss is acceptable in the event of an incident?

Yes, it does. This is also a result of the requirements that apply in Estonia to the critical service providers, that is, how quickly they have to restore the service and to what level. A similar law exists in Latvia, for example.
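
To make the two terms concrete: RTO (recovery time objective) caps how long the service may stay down, while RPO (recovery point objective) caps how old the last recoverable state may be. A minimal sketch of how such targets might be expressed and checked – the figures are illustrative, not SEB's actual targets:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class RecoveryTarget:
    rto: timedelta  # Recovery Time Objective: how long the service may stay down
    rpo: timedelta  # Recovery Point Objective: how much data (by age) may be lost

def recovery_met(target: RecoveryTarget, incident_start: datetime,
                 service_restored: datetime, last_good_copy: datetime) -> bool:
    downtime = service_restored - incident_start
    data_loss_window = incident_start - last_good_copy
    return downtime <= target.rto and data_loss_window <= target.rpo

# Illustrative target: the service must be back within 2 hours, losing at most 15 minutes of data.
target = RecoveryTarget(rto=timedelta(hours=2), rpo=timedelta(minutes=15))
print(recovery_met(target,
                   incident_start=datetime(2024, 1, 1, 3, 0),
                   service_restored=datetime(2024, 1, 1, 4, 30),
                   last_good_copy=datetime(2024, 1, 1, 2, 50)))  # True
```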

Will the bank go further and more detailed from here?

It definitely will. As it is a large organisation, good cooperation between different units is needed to maintain high standards across the different services. If, for example, your online bank is not working, it is not enough that the rest of your systems run smoothly. So we have much more detailed plans and operational guidelines. We test our recovery plans regularly. The diesel generators mentioned earlier are a good example.

Talking about server rooms, we conduct internal reliability tests and play out different scenarios – for example, one in which a particular server room is completely out of the picture – and, to a limited extent, we test this in real life. During the move to the GDC data center, for instance, we ran a test where we switched off the equipment being moved and redirected the load. Everything worked. Our reliability in this respect is very well guaranteed.

To what extent are the systems separated within the bank? Can some of them be sacrificed in a crisis, so to speak?

Our systems are classified into four categories. The highest level includes high-availability services that are fully automated. If one half or part of the duplicated solution is lost, the work will continue automatically and without interruptions. It should be kept in mind here that high availability is expensive and it is therefore not appropriate to categorise everything as such. Other categories have lower requirements and are defined by how quickly the IT systems must be restored. Some of them are simply restored at the earliest opportunity.
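
As an illustration of what such a classification can look like in practice – the tier names and recovery expectations below are assumptions for the sake of the example, not SEB's internal scheme:

```python
from enum import Enum

class AvailabilityTier(Enum):
    # Highest tier: fully automated failover; losing one half of a duplicated
    # solution must not interrupt the service at all.
    HIGH_AVAILABILITY = "continues automatically, no interruption"
    # Lower tiers are defined by how quickly the IT system must be restored.
    CRITICAL = "restore within hours"
    STANDARD = "restore within one business day"
    BEST_EFFORT = "restore at the earliest opportunity"

# Invented mapping of services to tiers, purely for illustration.
services = {
    "online-bank": AvailabilityTier.HIGH_AVAILABILITY,
    "card-payments": AvailabilityTier.HIGH_AVAILABILITY,
    "internal-reporting": AvailabilityTier.BEST_EFFORT,
}
print(services["online-bank"].value)
```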

How have you tackled backups?

Here, we have clear rules on how backups are made and how long a backup lives. Archiving is based on its own policies; its life cycle is understandably longer and the availability requirements are lower. This means that you do not necessarily need an answer from archived materials within minutes – within hours or days is enough. Broadly speaking, backup rules cover three questions: what is retained, how long it is retained, and how quickly it needs to be accessible. We naturally also take our legal retention obligations into account.
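
A minimal sketch of how a retention policy along those three dimensions could be written down – the data sets, retention periods, and access times are invented for illustration:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class RetentionRule:
    data_set: str           # what is retained
    retention: timedelta    # how long it is retained
    access_time: timedelta  # how quickly it must be retrievable

# Invented rules, not SEB's actual policy.
rules = [
    RetentionRule("operational backups", retention=timedelta(days=35),
                  access_time=timedelta(minutes=30)),
    RetentionRule("archived transaction records", retention=timedelta(days=7 * 365),
                  access_time=timedelta(days=2)),
]
for rule in rules:
    print(rule.data_set, rule.retention, rule.access_time)
```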

Finally, what do you see as the value of Greenergy Data Centers for this region?

I am very pleased that we have found a highly motivated team who understood that a high-quality data center is very much needed in the region. Impressive. All previous solutions had severe limitations. In some ways, a modern, highly reliable data center can be compared to insurance: if you have built your business on a strong IT foundation, you will be able to cope with the unexpected and sleep better for it.
