Why legacy data systems fail during a market crisis

The Data Behind The Data

At the height of the Covid-19 pandemic market frenzy, over $5 trillion was wiped off stock indices within five days. It's easy to imagine the mountains of market data recorded in those five days alone.

Behind the scenes, on-premises legacy data systems were put under a strain they had never faced before by extreme fluctuations in trade volume, volumes they are simply not equipped to handle.

Most data management infrastructures today are built on 'bare metal' infrastructure, which means there are physical limits on how much data they can process and how quickly they can process it. No legacy system could have been ready for the Covid-19 market crisis, because only the cloud provides the scalability and adaptability required.

SLOW PROCESSING SPEED

When a legacy system cannot ingest data as quickly as it arrives, it drops that data and loses it. Off-cloud systems cannot scale their ingestion because they simply don't have the capacity to add processing power on demand. The result is longer processing times, missed ingestion SLAs (service level agreements), or both.
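To make that failure mode concrete, here is a minimal sketch (illustrative only, not TickSmith's actual pipeline; the buffer size and tick rates are made up) of what happens when data arrives faster than a fixed-capacity system can drain it: anything beyond the hardware limit is simply dropped.

```python
from collections import deque

# Illustrative sketch only -- not TickSmith's ingestion code.
# A fixed-capacity buffer on "bare metal": when ticks arrive faster
# than the consumer drains them, new records are simply lost.
BUFFER_CAPACITY = 1_000      # hardware-bound limit (made up for the example)
buffer = deque()
dropped = 0

def ingest(tick):
    """Accept a tick if there is room; otherwise drop it."""
    global dropped
    if len(buffer) >= BUFFER_CAPACITY:
        dropped += 1         # the data is lost -- the failure mode described above
    else:
        buffer.append(tick)

def drain(batch_size):
    """The downstream processor can only keep up at a fixed rate."""
    for _ in range(min(batch_size, len(buffer))):
        buffer.popleft()

# Simulate a crisis-level burst: far more ticks per cycle than the system can drain.
for cycle in range(10):
    for tick_id in range(5_000):
        ingest(("trade", cycle, tick_id))
    drain(batch_size=1_000)

print(f"still buffered: {len(buffer)}, dropped: {dropped}")
```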

With batch processing, we see two scenarios play out on legacy systems: 1) Without the infrastructure to read data inputs in real time, the whole system can crash because memory usage climbs too high. 2) A more resilient system may manage to read the data, but will miss its SLAs, the agreed expectations for how quickly data should be processed.
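A hedged sketch of those two scenarios, assuming a plain flat-file feed rather than any specific exchange format: the first function holds the entire batch in memory and risks crashing the machine on a crisis-sized file, while the second streams record by record, which keeps memory flat but may still finish past the SLA deadline on fixed hardware.

```python
# Illustrative only: two ways a legacy job might process a large batch file.

def process_batch_in_memory(path):
    """Scenario 1: read the whole batch at once.
    On a crisis-sized file this can exhaust RAM and crash the job."""
    with open(path) as f:
        records = f.readlines()          # entire file held in memory
    return sum(1 for record in records if record.strip())

def process_batch_streaming(path):
    """Scenario 2: stream one record at a time.
    Memory stays flat, but on fixed hardware the run may still
    finish after the SLA deadline when volumes spike."""
    count = 0
    with open(path) as f:
        for record in f:                 # constant memory footprint
            if record.strip():
                count += 1
    return count
```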

Most enterprise firms built their legacy systems without the capability to get data out at the speed a crisis demands, so they are watching those systems break down as they try to process volumes they simply can't handle. During the crisis, for example, one financial exchange has seen over fifty percent of its files delivered to users later than the agreed SLAs.
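For illustration, a short sketch of how a late-delivery rate like that is measured; the file names and timestamps below are invented, not the exchange's actual logs. Each delivery time is compared against its SLA deadline, and the miss rate is the share of files delivered late.

```python
from datetime import datetime

# Hypothetical delivery log: (file, SLA deadline, actual delivery time).
deliveries = [
    ("trades_20200309.csv.gz", datetime(2020, 3, 10, 6, 0), datetime(2020, 3, 10, 5, 40)),
    ("quotes_20200309.csv.gz", datetime(2020, 3, 10, 6, 0), datetime(2020, 3, 10, 9, 15)),
    ("trades_20200312.csv.gz", datetime(2020, 3, 13, 6, 0), datetime(2020, 3, 13, 11, 5)),
    ("quotes_20200312.csv.gz", datetime(2020, 3, 13, 6, 0), datetime(2020, 3, 13, 5, 55)),
]

late = [(name, delivered - deadline)
        for name, deadline, delivered in deliveries
        if delivered > deadline]

miss_rate = len(late) / len(deliveries)
print(f"SLA miss rate: {miss_rate:.0%}")     # 50% for this invented sample
for name, delay in late:
    print(f"  {name} delivered {delay} past its deadline")
```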

INABILITY TO SCALE

On an average day, one of our exchange clients has about half a terabyte of compressed data processed. During the crisis, that has grown to a little over three terabytes per day, more than six times the amount of data ingested previously.
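A rough back-of-the-envelope check on those figures (a sketch using the rounded numbers quoted above, not the client's actual metrics): the jump is roughly six-fold, and even spread evenly across 24 hours it implies a sustained ingest rate of tens of megabytes of compressed data per second.

```python
# Back-of-the-envelope arithmetic on the volumes quoted above.
NORMAL_TB_PER_DAY = 0.5
CRISIS_TB_PER_DAY = 3.0                      # "a little over three terabytes"

growth = CRISIS_TB_PER_DAY / NORMAL_TB_PER_DAY
bytes_per_day = CRISIS_TB_PER_DAY * 1e12     # decimal terabytes
avg_rate_mb_per_s = bytes_per_day / 86_400 / 1e6

print(f"growth factor: ~{growth:.0f}x")
print(f"average crisis ingest rate: ~{avg_rate_mb_per_s:.0f} MB/s compressed")
```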

It takes a long time to make legacy systems ready for a scenario like the Coronavirus Correction we are seeing now. The process is slow: order more machines, connect them into the existing system, and make sure everything processes the data properly. The whole approach is expensive and demands a lot of time and effort.

It's clear why legacy systems can't adapt to out-of-the-ordinary situations. Big Data technology, however, has evolved over the years for exactly these situations. How has TickSmith harnessed this technology in the GOLD Platform and risen to the challenge?

Read more in the third article in our series, 'The Data Behind the Data.'
