Can AI catch rogue traders? 

Behold the dawn of the fourth industrial revolution: a potentially transformative wave of emerging technology involving increased automation, greater connectivity and machine learning. Microsoft’s chief envisioning officer, Dave Coplin, has stated that AI (artificial intelligence) will be preeminent within this new era, and that it may change our relationship with technology and even ‘how we perceive what it means to be human.’[i] Evolution and adaptation amongst businesses and the wider economy will be inevitable.


AI, as defined by Tecuci et al., refers to ‘systems that exhibit the characteristics we associate with intelligence in human behaviour, such as perception, natural language processing, problem solving and planning, learning and adaptation and acting on the environment.’[ii]

The following diagram shows AI’s effects across numerous sectors, with financial services amongst the most affected:

Source: Boston Consulting Group and MIT Sloan Management Review, Reshaping Business With Artificial Intelligence.[iii]


AI is still developing, but its task-oriented approach is already reducing costs and improving decision making. In financial services, AI is being used to optimise trade execution and to analyse credit quality, for example. AI’s potential contribution to compliance and market surveillance is particularly interesting.

In terms of current surveillance spending, Opimas reported that banks worldwide spent almost $100 billion on compliance talent and consulting in 2016, an increase of between 15% and 25% over the previous four years.[iv] According to PwC’s 2016 market abuse surveillance survey, banks expected their investment in surveillance solutions to increase significantly over the following 12-18 months. Effective surveillance aims to identify market abuse, such as that perpetrated by Nick Leeson, who caused the collapse of Barings Bank in 1995, or Kweku Adoboli, whose unauthorised trading led to UBS being issued a £29.7 million FSA fine in 2012. The first nine months of 2017 saw FCA suspicious transaction reports increase by 24% compared with the whole of 2016.[v] Whilst new regulatory requirements have contributed to this rise, it may also be that current deterrents are ineffective. Market abuse causes collateral damage: the integrity of the stock markets influences spending, consumer confidence and fiscal health.

Surveillance programs include Nasdaq’s SMARTS, which has been named best market surveillance provider by Waters for five consecutive years, and Trading Hub’s MAST. These programs are monitored by analysts, who review their findings. SMARTS works by generating alerts through rule-based algorithms that identify trading patterns based on tailored parameters. Generally, SMARTS automates only part of the process, and its limited analytical capacity can create numerous false positives. The aforementioned PwC report identified widespread dissatisfaction with the number of errors and the cost of reviewing inaccurate alerts: at least 65% of tier 1 banks consider the number of false positives created by surveillance platforms unacceptably high. It seems that improvement is necessary.
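A rule-based check of this kind can be sketched in a few lines. The sketch below is purely illustrative: SMARTS’s actual rules and parameters are proprietary, and the 5%-of-average-daily-volume threshold is an invented assumption.

```python
# A minimal sketch of a rule-based surveillance alert, in the spirit of
# platforms such as SMARTS. The threshold below is a made-up assumption,
# not a real parameter from any vendor's product.

def flag_trades(trades, max_pct_adv=0.05):
    """Flag any trade larger than max_pct_adv of the symbol's average daily volume."""
    alerts = []
    for t in trades:
        pct_adv = t["quantity"] / t["avg_daily_volume"]
        if pct_adv > max_pct_adv:
            alerts.append({"trader": t["trader"], "pct_adv": round(pct_adv, 3)})
    return alerts

trades = [
    {"trader": "A", "quantity": 10_000, "avg_daily_volume": 1_000_000},  # 1% of ADV
    {"trader": "B", "quantity": 80_000, "avg_daily_volume": 1_000_000},  # 8% of ADV
]
print(flag_trades(trades))  # only trader B breaches the threshold
```

Note that a large but entirely legitimate institutional order would trip the same rule, which is exactly how rigid thresholds generate the volume of false positives the PwC survey describes.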



The aforementioned Opimas report indicated that spending on compliance will begin to plateau in 2020 as banks automate processes and improve IT capacity. Accordingly, expenditure on regulatory technology, known as RegTech, is predicted to reach $80 billion by 2020. RegTech, a subcategory of FinTech, is innovative technology used to facilitate compliance with regulatory requirements while delivering efficiency gains. AI in particular will provide a significant opportunity for RegTech firms.


Some of these opportunities are quantified in the diagram below:

According to a 2017 Financial Stability Board report, ‘AI and machine learning techniques are being used by regulated institutions for regulatory compliance, and by authorities for supervision’.[vi] In November 2017, Nasdaq acquired Sybenetix, which takes a behavioural science approach to analysing conduct risk, with a view to incorporating these techniques into its surveillance suite.

Frank Hatheway and Tony Sio from Nasdaq stated that ‘the regulatory process is still very reliant on humans, but artificial intelligence can supplement human capabilities. In the short term, machines are unlikely to be able to ban someone from the market on the basis of their conduct or levy a fine. However, a surveillance system such as SMARTS can leverage artificial intelligence to generate better actionable information for human analysts.’[vii]
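One way machine assistance could generate ‘better actionable information’ is by ranking rule-based alerts so that analysts review the most anomalous first. The sketch below scores each alert by how far the trade size deviates from that trader’s own history, measured in standard deviations (a z-score); this scoring scheme is an illustrative assumption, not Nasdaq’s actual method.

```python
import statistics

# Hedged sketch: prioritise alerts by how unusual the trade is relative to
# the trader's own history, so human analysts triage the riskiest first.
# The z-score heuristic is an illustrative assumption, not a vendor's method.

def score_alerts(alerts, history):
    scored = []
    for a in alerts:
        past = history[a["trader"]]
        mean = statistics.mean(past)
        stdev = statistics.stdev(past)
        # Standard deviations above the trader's normal trade size
        z = (a["quantity"] - mean) / stdev
        scored.append({**a, "score": round(z, 2)})
    return sorted(scored, key=lambda s: s["score"], reverse=True)

alerts = [{"trader": "A", "quantity": 120}, {"trader": "B", "quantity": 500}]
history = {"A": [100, 110, 90, 100], "B": [400, 450, 500, 480]}
print(score_alerts(alerts, history))  # trader A's trade is the bigger outlier
```

Trader B’s trade is larger in absolute terms, but trader A’s is far more unusual for that trader, so it ranks first, the kind of context a flat threshold alone cannot provide.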

AI’s growing influence is shown in the Chartis Research diagram below, which presents survey responses to the question: what do you consider to be a technology priority for trader surveillance?

Another example of AI’s use for surveillance is IBM’s Watson. Watson combines AI with advanced analytical software. Used for surveillance, Watson should be capable of analysing trader communications alongside trading data and market data to identify possible market abuse. Marc Andrews, vice president of Watson Financial Services Solutions, said that ‘We take all of traders' emails and chats and run them through our personality insights and tone analyzer and identify whether there’s anger, are they happy, are they sad?’ and also ‘We’re analysing the behavioural patterns that are associated with misconduct: How do people start behaving right before they get involved in misconduct?’[viii]
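The communications-screening idea can be sketched very simply. Production systems like Watson rely on trained NLP models for tone and behavioural analysis; the phrase watch-list below is a hypothetical stand-in used only to show the shape of the approach.

```python
# Toy sketch of trader-communications screening. Real systems (e.g. Watson's
# tone analyser) use trained NLP models; this hypothetical watch-list of
# phrases is an assumption for illustration only.

SUSPICIOUS_PHRASES = [
    "keep this between us",
    "move the price",
    "before the announcement",
]

def screen_message(text):
    """Return whether a message matches any watch-list phrase, and which ones."""
    text_lower = text.lower()
    hits = [p for p in SUSPICIOUS_PHRASES if p in text_lower]
    return {"flagged": bool(hits), "matches": hits}

print(screen_message("Let's move the price before the close."))
```

In practice such text signals would be combined with the trading and market data mentioned above, since a phrase alone is weak evidence of misconduct.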

IBM believes that the need to process vast amounts of information, together with the limited ability of current tools to analyse language for signs of suspicious conduct, will make Watson essential. According to Bridget van Kralingen, a vice president of IBM Industry Platforms, the tool was already being trialled in 2017.

Furthermore, AI may even sniff out new types of chicanery, according to Tom Gira of the Financial Industry Regulatory Authority (FINRA). ‘The biggest concern we have is that there is some manipulative scheme that we are not even aware of’, he told Reuters. ‘It seems like these tools have the potential to give us a better window into the market for those types of scenarios.’[ix]

In terms of AI’s effectiveness, Wendy Jephson, chief behaviour scientist at Sybenetix, stated that in a number of cases, ‘traders are now coming to compliance officers before they trade and checking with them’. ‘One of our clients said this is an unheard-of behavioural change, for the front office to come in and talk to compliance.’[x]

Regarding AI’s future, Scott W. Bauguess of the SEC’s Division of Economic and Risk Analysis (DERA), speaking at the 2017 Annual Operational Risk North America Conference, stated that whilst ‘major advances in machine learning’ have been made, ‘it is premature to think of AI as our next market regulator. The science is not yet there. The most advanced machine learning technologies used today can mimic human behavior in unprecedented ways, but higher-level reasoning by machines remains an elusive hope.’[xi]

AI’s implementation will also create risks, such as mismanagement resulting from interpretation issues, environmental harm and unpredictability regarding AI’s capacity to learn organically. Ethical concerns include the fear that AI will cause job losses. However, ‘data suggests that nearly 12 per cent of jobs could be replaced’, while in ‘nearly three-quarters of all jobs’ humans will work alongside ‘cobots’ (collaborative robots) ‘with AI as partner.’[xii] Theoretically, collaboration between robots and humans will produce better results than either working alone.

In conclusion, AI may already be capable of delivering improvements, but given that it remains at a rudimentary stage of development, its surveillance potential is largely untapped. AI may eventually out-process human cognition and consistently detect new and complex chicanery.


[ii] Tecuci et al., Knowledge Engineering: Building Cognitive Assistants for Evidence-Based Reasoning, p. 29






[viii]




[xii]

Ryan Maguire

Ryan Maguire joined FinTrU through our third Financial Services Academy after graduating from Dundee University with an MA (Hons) in English Literature. Ryan also completed an MA in English Literature at the University of Bristol.

Since joining FinTrU, Ryan has worked as a member of a Trade Surveillance team for a global investment bank. Through this team, he gained experience of analysing trader communications and investigating trades flagged for breaching approved thresholds. More recently, Ryan has joined the Client Lifecycle Management team as a Know Your Customer (KYC) analyst for a Tier 1 investment bank. As part of this team, he conducts independent research to gather and record data from internal systems, commercial databases and the internet. He executes sanction and negative news checks and PEP screening, and traces Ultimate Beneficial Ownership through complex ownership structures.

Ryan liaises with internal stakeholders to ensure that KYC requirements are met. His experience has given him insight into the monitoring of clients that takes place from the point of onboarding through to when they begin trading using the bank’s products.