Last month Barclays’ credit card business struck a deal with Amazon to offer seamless customized shopping and payment services in Germany. The announcement drew little attention amid the US election, pandemic pain – and the cancellation of Ant Financial’s putative $37bn initial public offering (IPO). But investors and regulators should pay attention. That is not because of what the deal shows about German shopping habits, Amazon’s voracious expansion or Barclays’ strategy, per se.
Instead, the German tie-up’s real significance, in our opinion, is that it is a tiny but unusually visible sign of a feverish race under way at banks and tech companies to find ways to use big data and artificial intelligence (AI) in finance. Essentially, Barclays and Amazon are linking data with AI analysis to approve credit – or not – and to predict what customized services clients will want next.
What happens next in this AI race could soon matter enormously – helping to determine the future winners in finance and the next big set of regulatory risks.
The AI platforms now being deployed in finance are exponentially more powerful than anything seen before. In particular, the capabilities unleashed by a subset of AI called “deep learning” represent, in our opinion, a fundamental discontinuity from the past.
Jack Ma, founder of Ant’s parent company, Alibaba, was arguably one of the first to spot the potential. Ant uses data on consumer and corporate digital activity to predict credit risk and provide customized services. That is a key reason why the Chinese finance group has expanded at such a dizzying pace. But western companies are racing to catch up, both in retail – witness Barclays’ German deal – and in wholesale finance.
In theory, this could be beneficial as a way to “democratize finance”. More specifically, these innovations should enable financial companies to offer consumers more choice, better-targeted services and keener pricing. They should also cut corporate borrowing costs. Ant has used its vast data troves and AI to analyze credit risks in a way that it says enables the company to offer cheaper loans. Marshalled correctly, AI could also help regulators and risk controllers spot fraud more easily, and improve bank stress tests.
But there are, in our opinion, enormous potential costs too. One of these is the propensity of AI programs to embed bias, including racism, into decision making. Another revolves around privacy risks.
A third is antitrust: since having a huge database offers a compelling advantage in AI, there is a tendency for dominant companies to become ever more dominant. A fourth, related issue is herding: since AI programs are often constructed on similar lines, their use could reduce institutional diversity and undermine the resilience of finance.
However, the biggest problem of all is opacity. The lack of “interpretability” or “auditability” of AI and machine learning methods could become a macro-level risk. Applications of AI and machine learning could result in new and unexpected forms of inter-connectedness between financial markets and institutions.
So what should be done?
One obvious and tempting idea, in our opinion, might be for politicians to press the “pause” button. Indeed, that is what Beijing seems to be trying to do with Ant (although it is unclear how far the decision to halt the IPO reflects grand policy concerns, as opposed to politics).
However, it will not be easy to stuff the AI genie back into the bottle. Nor is it necessarily a good idea, given the potential benefits. What would be far better is for policymakers and financiers to embrace four ideas.
First, companies engaged in AI-enabled financial activities must be regulated within a finance framework. That does not mean transposing all the old banking rules on to fintech; as Mr Ma has argued, these are not all appropriate. But central bankers and regulators must retain oversight of fintech and maintain a level playing field, even if that requires them to expand their oversight into new areas, such as the data being plugged into AI platforms.
Second, regulators and risk managers must bridge information silos. Very few people understand both AI and finance; instead, the people with these skills typically sit in different institutions and departments. This, in our opinion, is alarming.
Third, we cannot hand all the creation and control of AI-enabled finance to geeks with tunnel vision: instead, the people crafting strategy must have a holistic view of their societal impact.
But for this to happen, there needs to be a fourth development: politicians and the wider public must pay attention to what is under way, instead of outsourcing it to technical experts.
None of this will be easy, given that AI is hard to understand. But the 2000s showed what can happen when geeks with tunnel vision run amok in finance and politicians ignore them. We cannot allow that again. If you thought the 2008 financial crisis was bad, just imagine one that moves faster and goes further because it is enabled by AI. That prospect should scare us into a policy debate right now.