Nationwide Should Have Paid More Attention To Capacity Management

January 6, 2022

The use of legacy systems and negligence by the management team of Nationwide Building Society could be behind the repeated payment outages during the Christmas holiday.

During the crucial Christmas holiday period, customers of the UK’s largest building society struggled to send and receive payments. The outages on December 21, December 30 and January 4 left many customers in distress as they waited for wages they needed for last-minute purchases before Christmas and New Year’s Eve.

Nationwide apologised for the delays on Twitter, blaming the volume of inbound payments for the latest glitch.

On January 4, the building society’s spokesperson informed the press that it had temporarily queued inbound faster payments after seeing “extremely high volumes of transactions on the first working day of the year”. The spokesperson explained there were “more than 10 million payments processed overnight”.

Some 10m payments over eight hours works out to roughly 350 transactions per second.
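
As a quick back-of-the-envelope check (a sketch in Python, using the article’s figures and its assumed eight-hour overnight window):

```python
# Implied throughput from the reported overnight volume
payments = 10_000_000   # "more than 10 million payments processed overnight"
window_hours = 8        # assumed overnight processing window
tps = payments / (window_hours * 3600)
print(f"{tps:.0f} transactions per second")  # ~347, i.e. roughly 350
```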

This would be a lot of operations to process and could cause an outage if the building society is still running on mainframes, Kevin Reed, CISO of Acronis, told VIXIO.

According to the latest Faster Payments data, there were 3.4bn transactions made across all scheme participants in 2021. This represents an average of 107 transactions per second.

Although these payments would not have been spread evenly across the year, it does highlight the relatively high volume going through Nationwide during this eight-hour period. Nevertheless, this concentration of payments volume should not cause a capacity issue for a modernised payments system.
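
Putting the two figures side by side (again a rough sketch; the roughly 3x ratio is our arithmetic, not a published figure):

```python
# Scheme-wide average rate implied by the 2021 Faster Payments volume
annual_payments = 3_400_000_000   # 3.4bn transactions across all participants
seconds_per_year = 365 * 24 * 3600
avg_tps = annual_payments / seconds_per_year
print(f"{avg_tps:.1f} TPS scheme-wide average")  # ~107, the article's figure

# Nationwide's implied overnight rate vs the scheme-wide average
nationwide_tps = 10_000_000 / (8 * 3600)   # ~347, from the figures above
print(f"{nationwide_tps / avg_tps:.1f}x the scheme-wide average")  # ~3.2x
```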

Over a Black Friday weekend or the weekend before Christmas, banks sometimes process in excess of 20m transactions per hour, said Andrew Abbotsford-Smith, UK and Ireland banking software sales consultant.
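
Converted to the same unit (the 20m-per-hour figure is Abbotsford-Smith’s; the conversion and the comparison are ours):

```python
# Peak hourly rate quoted above, expressed in transactions per second
peak_per_hour = 20_000_000
peak_tps = peak_per_hour / 3600
print(f"{peak_tps:.0f} transactions per second")  # ~5,556 TPS
# Roughly 16x the ~350 TPS Nationwide reported struggling with
```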

The fact that the same issue occurred several times within such a short period suggests that quick-fix patches were applied, which in turn led to recurring issues, he added.

Relying on legacy systems is a pressing issue in the digital world.

Many banks are still using outdated core banking and payment systems that are prone to error due to their legacy nature and years of accumulated spaghetti code and rule additions, according to Abbotsford-Smith.

Some financial organisations claim the decades-old technology ensures the reliability of their operations, but legacy systems often do not scale to today's needs, and recent outages call their reliability into question too, Reed noted.

Nationwide should have been prepared to deal with an increased volume of transactions in the first place. “Capacity management is a formal process that should account for such events and should be able to support the elevated load,” Reed explained.
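
To illustrate Reed’s point, here is a minimal sketch of the kind of pre-peak headroom check such a process would include; every number below is a hypothetical placeholder, not Nationwide’s actual configuration:

```python
# Hypothetical capacity headroom check ahead of a forecast peak
PROVISIONED_TPS = 500   # hypothetical: what the system is sized to handle
SAFETY_MARGIN = 0.30    # hypothetical: keep 30% headroom above the forecast

def has_headroom(forecast_tps: float) -> bool:
    """True if the forecast peak, plus a safety margin, fits within capacity."""
    return forecast_tps * (1 + SAFETY_MARGIN) <= PROVISIONED_TPS

# Forecast for the first working day of the year, e.g. the ~350 TPS implied above
forecast = 10_000_000 / (8 * 3600)
if not has_headroom(forecast):
    print("Forecast peak exceeds headroom: scale up or queue inbound payments.")
else:
    print("Forecast peak fits within provisioned capacity.")
```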

The fact that Nationwide repeatedly experienced delays could mean one of two things: either a capacity management process was not implemented at the company, which would be negligence by the management team, or it was implemented but produced the wrong results, so the firm did not anticipate such an increased load.

The latter would indicate insufficient expertise within the organisation and, again, would be negligence by the management team, Reed said.

It is also possible that Nationwide had a capacity management process but decided not to bear the costs of acting on it, something the regulator could look into.
