BCN-08, 09 Financial markets embrace brave new world of AI


ZCZC

BCN-08

BRITAIN-BANKING-INTERNET-AI

Financial markets embrace brave new world of AI

LONDON, Dec 11, 2019 (BSS/AFP) – Artificial Intelligence has spread rapidly
across markets in recent years as traders constantly strive to gain the upper
hand, while regulators have given a guarded welcome to the cutting-edge
technology.

High-frequency trading propelled by algorithms has reigned over the past
decade, as banks and funds take advantage of small price fluctuations on many
markets to carry out thousands of deals in a fraction of a second.

Complex mathematical equations have long been used to conduct certain
operations — for example, selling or buying a security if it breaches a
certain level.
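As an illustration of that kind of preset trigger, the short sketch below shows a minimal threshold rule in Python; the price values and buy/sell levels are hypothetical assumptions, not figures drawn from any actual trading system.

```python
# Illustrative only: a toy threshold rule of the kind described above.
# The price feed and the levels are hypothetical assumptions.

def threshold_signal(price: float, sell_above: float, buy_below: float) -> str:
    """Return a trading signal when the price breaches a preset level."""
    if price >= sell_above:
        return "SELL"
    if price <= buy_below:
        return "BUY"
    return "HOLD"

# Example: act when a hypothetical security crosses 105 or 95.
for price in (101.2, 106.3, 94.8):
    print(price, threshold_signal(price, sell_above=105.0, buy_below=95.0))
```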

Yet algorithms have come under fierce criticism over “flash crashes”, such
as a dizzying slump in the British pound in October 2016 that was widely
blamed on high-frequency deals.

Artificial Intelligence now seeks to take trading into new realms, where
“machine learning” (ML) software compares dozens of databases in the blink of
an eye to monitor risk.

A computer identifies trends and market correlations, runs models,
forecasts outcomes, and arrives at the decision to buy or sell by itself.
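That decision loop can be pictured as a generic machine-learning pipeline. The sketch below is purely illustrative: it uses synthetic data and an off-the-shelf classifier, and does not reflect how any particular trading firm builds its models.

```python
# Illustrative only: a generic "learn from history, then decide" pipeline.
# The synthetic features and labels are assumptions; real systems use far
# richer market data and risk controls.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "history": two toy features (a recent-return proxy and a
# volatility proxy) and a label for whether the next move was up
# (1 = buy) or down (0 = sell).
X = rng.normal(size=(1000, 2))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# The model learns patterns from the historical data...
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# ...and then makes the buy/sell call by itself for new market conditions.
latest_features = np.array([[0.8, -0.2]])
print("BUY" if model.predict(latest_features)[0] == 1 else "SELL")
```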

AI can help investment funds and portfolio managers manage risk, and pick which stocks are best suited to which clients.

Banks deploy AI to help detect fraudulent activity, stop computer attacks
and lower costs, while they also use it to set product interest rates — and
analyse risk profiles of loan applicants.

– Anticipating storms –

Survey evidence suggests AI will gain more traction in the financial
services sector in the coming years.

Data analytics firm Greenwich Associates, which recently surveyed market professionals, says more than half of respondents expect to have incorporated AI within the next two years.

Israeli technology startup SparkBeyond has built a data-crunching platform that seeks to harness AI for problem-solving.

MORE/HR/1015

ZCZC

BCN-09

BRITAIN-BANKING-COMPUTERS-INTERNET-AI 2 LAST LONDON

SparkBeyond uses machine learning to think outside the box and test outcomes that might not seem obvious, according to Edward Janvrin, who heads its Europe, Middle East and Africa division.

For example, most people might assume that proximity to a hospital is the best predictor of survival after a telephone call to emergency services.

Yet SparkBeyond software analysed millions of pieces of data within a few
minutes and concluded that the best predictive factor is proximity to a fire
station, according to Janvrin, who applies the same logic to trading.
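SparkBeyond's platform is proprietary, so the sketch below only illustrates the general idea of automated feature ranking on synthetic data, in which software surfaces the less obvious predictor (fire-station distance) ahead of the expected one (hospital distance); every number in it is an assumption.

```python
# Illustrative only: generic feature ranking on synthetic data, not
# SparkBeyond's actual method. Distances and the survival model are
# made-up assumptions for the toy example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5000

dist_hospital = rng.uniform(0, 20, n)      # km, hypothetical
dist_fire_station = rng.uniform(0, 20, n)  # km, hypothetical

# Assumed ground truth for the toy data: survival depends mostly on how
# quickly first responders arrive, i.e. on fire-station distance.
survival = (rng.uniform(size=n)
            < 1 / (1 + 0.3 * dist_fire_station + 0.05 * dist_hospital)).astype(int)

X = np.column_stack([dist_hospital, dist_fire_station])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, survival)

# Rank the candidate predictors by how much the model relies on them.
for name, importance in zip(["distance_to_hospital", "distance_to_fire_station"],
                            model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```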

Global financial regulators can also deploy AI to try to anticipate storms
on the horizon.

The Commodity Futures Trading Commission (CFTC) warns the process is not without pitfalls, but acknowledges that AI still possesses “strengths” in monitoring risk.

“Predicting catastrophic market events, such as the cascading defaults of
2008, is like predicting the weather,” the CFTC said in a recent report.

– Amplified risks –

“There are many variables, which can generate diverging predictions, and
some key information may be overlooked or not available. This can impede
corrective action.

“A strength of AI is its ability to identify correlations in vast data sets. Such correlations can be helpful in systemic risk monitoring. It’s clear that a solid majority of market participants … soon will be using AI in the securities trading process.”

The Bank of England, meanwhile, has also offered a cautious assessment.

“In the financial services industry, the application of machine learning
methods has the potential to improve outcomes for both businesses and
consumers,” it said in a separate report.

“At the same time, existing risks may be amplified if governance and
controls do not keep pace with technological developments.”

Vasant Dhar, professor of information systems at New York University Stern School of Business, argued, however, that AI-driven trading would always be safer than purely human-led decisions.

Dhar added that any AI system has a human safeguard as a fallback option.

But he cautioned that humans would not necessarily recognise a system
failure or make the right call. “Consider the fact that humans can make bad
decisions as well. You can’t assume that the human in the loop will make the
right decision,” Dhar told AFP.

BSS/AFP/HR/1020