Artificial Intelligence: Smart Systems Need Smart Resource

Whenever there is a major market event, surveillance teams are not only expected to be on heightened alert, but the volume of work also increases substantially. This comes on top of ensuring compliance with new and often complex regulations and products. However, with a high proportion of trade surveillance hits being dismissed as false positives, banks and regulators agree that this is not an efficient use of resource, and it risks abusive behaviour going undetected.

Trade surveillance has transformed in recent years and huge progress has been made in this space, moving from basic manual reports reviewed by junior compliance staff to largely automated solutions analysed by skilled, informed resources. In addition, greater collaboration between front office product experts and 1LOD and 2LOD teams has fed systems with more intelligent data. While this has vastly improved scenario-based surveillance, it is widely acknowledged that there is a pressing need to move to a more behavioural approach.

COVID-19 has further propelled the need to work smarter; global regulators have been clear that their expectations around surveillance have heightened as supervision teams and front office staff have been forced to work remotely. The buzz around artificial intelligence and machine learning is real and warranted, but is it the sole answer for financial institutions?

“Market abuse is a global concern and global regulators consider market participants to be the ‘first line of defence’ in the identification and prevention of market abuse,” said FIA’s Head of Europe Bruce Savage. [1]

Artificial Intelligence (AI) is being used to programme automated systems to learn the behaviours of traders and clients, highlighting when something is atypical and subsequently raising an alert. However, all technology comes with limitations, including the availability of good data, the risk of software failure and application across multiple systems.
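
As an illustration only, the sketch below shows one way such behavioural profiling could work, assuming per-trader activity features (daily notional, order-to-trade ratio, cancel rate) and an off-the-shelf isolation forest; the feature names and synthetic data are our own assumptions, not a description of any particular vendor system.

    # Minimal sketch: flag atypical trader behaviour with an unsupervised model.
    # Assumes a table of per-trader daily activity; feature names are illustrative.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Columns: daily notional, order-to-trade ratio, cancel rate (hypothetical features)
    history = rng.normal(loc=[1_000_000, 5.0, 0.2], scale=[200_000, 1.0, 0.05], size=(500, 3))

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(history)  # learn what "typical" activity looks like for this trader

    today = np.array([[3_500_000, 14.0, 0.6]])  # unusually large, cancel-heavy day
    if model.predict(today)[0] == -1:           # -1 means the model scored it as atypical
        print("Raise alert: behaviour deviates from the learned profile")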

To overcome these challenges, AI requires highly skilled teams to continually maintain and refine the systems.

Rather than searching for a needle in a haystack, machine learning in surveillance aims to burn the haystack away entirely: by teaching the system what a true positive alert looks like, everything that is not relevant can be removed.

It is the daily human interaction with surveillance models that allows AI to identify trends and patterns within certain events; in other words, the human is making the surveillance system smarter. Data validation is therefore key, which reiterates the need for expert human input to determine what content is appropriate to flow back into a machine learning system.
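
A minimal sketch of that feedback loop is shown below, assuming analysts label closed alerts as true or false positives and those labels train a simple classifier that scores new alerts; the feature names, labels and logistic regression model are illustrative assumptions, not any firm's actual pipeline.

    # Minimal sketch: learn from analyst dispositions to down-rank likely false positives.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical features per alert: price deviation (bps), volume vs ADV, prior alerts
    closed_alerts = np.array([
        [5.0, 0.1, 0],   # dismissed by analyst
        [4.0, 0.2, 1],   # dismissed
        [60.0, 2.5, 3],  # escalated (true positive)
        [55.0, 3.0, 2],  # escalated
    ])
    analyst_labels = np.array([0, 0, 1, 1])  # 0 = false positive, 1 = true positive

    clf = LogisticRegression().fit(closed_alerts, analyst_labels)

    new_alert = np.array([[6.0, 0.3, 0]])
    risk = clf.predict_proba(new_alert)[0, 1]
    # Low-scoring alerts can be queued for QA sampling rather than silently dropped.
    print(f"Estimated true-positive probability: {risk:.2f}")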

A benefit of rule-based platforms is the transparency and auditability of the systems: whether an alert triggers or not, the reasoning is easily determined. Having a clear understanding of your surveillance system is key to explaining the reasons behind its triggers; moreover, it is a regulatory expectation that firms should be able to clearly describe how their systems and controls operate.
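
The deliberately simple sketch below illustrates why rule-based logic is auditable, using a hypothetical wash-trade style rule; the thresholds and field names are made up for illustration.

    # Minimal sketch: a deterministic rule that records exactly why it did or did not fire.
    from dataclasses import dataclass

    @dataclass
    class Order:
        account: str
        buy_qty: int
        sell_qty: int
        window_seconds: int

    def wash_trade_rule(order: Order, max_window: int = 60, min_overlap: int = 1000) -> dict:
        overlap = min(order.buy_qty, order.sell_qty)
        fired = order.window_seconds <= max_window and overlap >= min_overlap
        # The audit record states the rule, the inputs and the threshold comparison,
        # so reviewers and regulators can reproduce the decision.
        return {
            "rule": "WASH_TRADE_V1",
            "fired": fired,
            "reason": f"overlap={overlap} vs min_overlap={min_overlap}, "
                      f"window={order.window_seconds}s vs max_window={max_window}s",
        }

    print(wash_trade_rule(Order("ACC-1", buy_qty=5000, sell_qty=4800, window_seconds=30)))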

Although machine learning systems should reduce false positives, the black-box method of determining what is deemed suspicious increases the risk of true positives being missed. In a black-box system, if an alert does not trigger, the reasoning as to why is clouded. Many black-box systems use risk percentages to trigger alerts, which are subjective, with some systems over-analysing or looking for the “perfect case”. There must be a strategy for what happens if something is missed, and there should also be an easy way of adding in other true positives that have been found through ad hoc searches. This presents a need for Quality Assurance to ensure nothing is missed and that the systems are operating effectively.
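
One possible Quality Assurance pattern is sketched below under our own assumptions: a random sample of below-threshold, non-alerted activity is routed for independent review, and any true positives found are fed back as labelled data. The threshold and sample rate are purely illustrative.

    # Minimal sketch: sample non-alerted activity for QA review so missed cases can surface.
    import random

    random.seed(42)
    # Hypothetical scored events: (event_id, model risk score); the threshold decides alerts.
    scored_events = [(f"EVT-{i}", random.random()) for i in range(1000)]
    ALERT_THRESHOLD = 0.9
    QA_SAMPLE_RATE = 0.02  # review 2% of suppressed events, for illustration only

    alerts = [e for e in scored_events if e[1] >= ALERT_THRESHOLD]
    suppressed = [e for e in scored_events if e[1] < ALERT_THRESHOLD]
    qa_sample = random.sample(suppressed, int(len(suppressed) * QA_SAMPLE_RATE))

    print(f"{len(alerts)} alerts raised, {len(qa_sample)} suppressed events sampled for QA")
    # True positives found in the QA sample can be added back as labelled training data.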

Within a rapidly changing environment, shaped not only by the COVID-19 landscape but also by the need to adapt to new threats, struggling technology systems are pushed to their limits. Technology is not yet where it needs to be to deal with the demands of surveillance and, most pressingly, it cannot change and respond as quickly as regulators expect in order to catch abusive behaviour. Of course, this is due to the nature of how surveillance works: it is backward looking. The view is that this event has happened, so how do we mitigate it in the future, and is a hybrid of machine learning and rules-based systems the way forward?
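
The sketch below shows what such a hybrid could look like, combining an auditable rule with an ML risk score; the threshold, score function and rule are purely illustrative assumptions rather than a recommended design.

    # Minimal sketch: a hybrid decision that keeps deterministic rules alongside an ML score.
    def rule_based_check(order_to_trade_ratio: float, max_ratio: float = 10.0) -> bool:
        # Transparent, auditable rule: fires whenever the ratio breaches the threshold.
        return order_to_trade_ratio > max_ratio

    def ml_risk_score(features: dict) -> float:
        # Placeholder for a trained model's estimated probability of abusive behaviour.
        return 0.5 * features.get("price_deviation_bps", 0.0) / 100.0

    def hybrid_alert(features: dict) -> dict:
        rule_hit = rule_based_check(features.get("order_to_trade_ratio", 0.0))
        score = ml_risk_score(features)
        # Rule hits always alert (explainable baseline); the model adds behavioural coverage.
        return {
            "alert": rule_hit or score >= 0.7,
            "rule_hit": rule_hit,
            "ml_score": round(score, 2),
        }

    print(hybrid_alert({"order_to_trade_ratio": 12.5, "price_deviation_bps": 40.0}))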

How FinTrU can help

FinTrU offers experience in what is considered a niche market, partnering with clients to provide expertise which in turn eases pressure on senior surveillance staff. FinTrU has the proven ability to augment, support and integrate seamlessly into surveillance teams within international Investment Banks.

We cover all levels of surveillance, from BAU alert analysis through to regulatory submissions across all asset classes, alongside change projects such as the initial set-up and testing of surveillance systems. We have experience with a variety of surveillance systems, from those with a simple rules-based approach to those employing sophisticated analytics, and we focus on understanding exactly how these systems work in order to optimise alert quality. With experience in both areas, FinTrU aims to close the gap between surveillance and compliance IT within banks.

FinTrU has assisted in the creation of gap analysis reviews and reports to ensure clients meet their regulatory requirements. We are highly experienced in supporting global surveillance programmes in the prevention and detection of market abuse.

References

[1] FIA releases surveillance and market abuse guidelines | FIA
