Finance Focus
Lev Lesokhin
Executive Vice President, Strategy and Market Development, CAST
Financial Services and Glitches – more to come
The glitch at RBS earlier this year left customers unable to pay their bills or move money around, causing considerable damage to the bank’s reputation – not to mention the £125m set aside for compensation and the £80m spent on an IT overhaul, a £205m bill in total. So what has been done by other financial institutions to ensure this does not happen again? Nothing. In fact, in the period since then we have seen high-profile outages at NASDAQ, forcing the cancellation of trades; at BATS, halting Apple’s share trading; and at Knight Capital, whose algorithmic glitch cost the firm $440m. So why is so little being done to address the underlying technology issues which cause these problems?
Software has long been integral to running core business functions, and it is getting more complex every day, meaning we are going to continue to see more of these high-profile outages.
This is an industry where microseconds can be the difference between winners and watchers, so remaining competitive requires constant upgrading of software capabilities. There is an arms race for speed among the major financial centres and exchanges worldwide, which makes new software rollout decisions riskier, as quality becomes a secondary issue.
The result is that software development is often stretched to its limits to satisfy these demands, and the pressure to cut corners to meet the ever-increasing ‘need for speed’ is one cause of the damaging, costly and, nowadays, high-profile failures in financial systems.
The tragedy is that most, if not all, of these issues could be avoided by introducing code quality management processes. Software is complex and getting more so; the factors which lead to these problems lie chiefly in a lack of insight into the application as a whole. For developers dealing with systems which have been added to, cropped and changed over the years, it is often a struggle simply to understand the ‘ins’ and ‘outs’ of the whole system.
For example, the glitch at Knight Capital occurred when the company updated its software. Most IT applications contain dead code, lying dormant in systems because none of the live modules use it. The update brought some of that dead code back to life, causing the system to spit out wrong trades. If a company has no structural oversight of its systems, it cannot know when a new update might suddenly start calling on dead code.
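To make the hazard concrete, here is a minimal sketch (the flag name, handlers and order quantities are invented for illustration, not taken from Knight Capital's actual system) of how a configuration change during an update can silently route orders back into a dormant code path:

```python
# Hypothetical illustration: a legacy routine is unreachable under the
# current configuration, but re-enabling a forgotten flag revives it.

LEGACY_MODE_FLAG = "legacy_fill"  # invented flag name


def legacy_order_handler(order):
    # Dormant path: its behaviour no longer matches live market rules,
    # here exaggerated as multiplying the order quantity.
    return {"symbol": order["symbol"], "qty": order["qty"] * 1000}


def current_order_handler(order):
    # Live path: passes the order through unchanged.
    return {"symbol": order["symbol"], "qty": order["qty"]}


def route_order(order, active_flags):
    # If an update re-enables a flag nobody remembers, the dead code runs.
    if LEGACY_MODE_FLAG in active_flags:
        return legacy_order_handler(order)
    return current_order_handler(order)
```

With no flags set, an order for 10 shares goes through as 10; with the stale flag accidentally re-enabled, the same order becomes 10,000 – the kind of structural blind spot the analysis described above is meant to catch.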
An upgrade like the one which caused the RBS glitch should not have been run without proper analysis of what was being added, especially on a system as interlinked as those which handle bank payments. The real problem is that many organisations have no measurement procedures in place to guide management on the risks in an application. Without hard software risk measures, it is often difficult for an IT executive to stand in the way of a determined CEO who wants to roll out applications similar to the competition’s. This is particularly difficult in consumer financial services, where there is constant pressure to move towards mobile applications and to create new offers.
However, with so much information spread across systems, it becomes increasingly difficult to create stable applications, and problems begin to pile up on top of each other. Even more worrying, IT departments are often largely unaware of these problems, only discovering them when a user complains about performance. With so many companies in a rush to out-innovate the competition and add new features, the quality of the code underneath it all is quickly forgotten.
Of course, no organisation can remove all the defects in its software before it is rolled out. The problem is that many do not do enough analysis to make strategic decisions about which defects present the most serious risks to operations.
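The kind of triage described above can be sketched very simply. This is a hypothetical scoring scheme (the defect list, severity scale and weighting are invented for illustration): defects are ranked by business risk rather than fixed in the order they were found, so a moderate defect on a payment path outranks a severe one on a peripheral feature.

```python
# Hypothetical defect backlog: severity on a 1-5 scale, plus a flag
# for whether the defect sits on a business-critical payment path.
defects = [
    {"id": "D1", "severity": 3, "in_payment_path": True},
    {"id": "D2", "severity": 5, "in_payment_path": False},
    {"id": "D3", "severity": 4, "in_payment_path": True},
]


def risk_score(defect):
    # Weight defects on business-critical paths more heavily
    # (the 10x multiplier is an arbitrary illustrative choice).
    return defect["severity"] * (10 if defect["in_payment_path"] else 1)


# Fix the highest business risk first, not the highest raw severity.
triaged = sorted(defects, key=risk_score, reverse=True)
print([d["id"] for d in triaged])  # → ['D3', 'D1', 'D2']
```

Note that D2, the most severe defect in isolation, ranks last: it sits off the payment path, so its business exposure is lowest. That inversion is exactly the strategic decision the analysis is meant to support.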
If financial institutions are going to stem the flood of costly failures, they must sacrifice some speed for quality software engineering and adopt at least the minimum quality practices to reduce business risk. Put another way, IT executives cannot leave structural quality to chance, or delegate responsibility to their minions of programmers – they must manage structural software risk top-down.
In the meantime no doubt there will be more outages, many of which will make the headlines, cause consumers more disruption and damage brand reputations. We should take the failures to date as a wake-up call to look at our own development and testing practices, and ask ourselves if our teams are doing all they can to ensure the utmost quality, performance and reliability.
Lev Lesokhin is Executive Vice President for Strategy and Marketing at CAST, providers of Software Analysis and Measurement solutions