“When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time-consuming.”
How can we act when dealing with known unknowns?
That is the dilemma facing regulators today as they try to assess the impact of decentralized finance and artificial intelligence on the finance industry. As two recent reports from the Bank for International Settlements (BIS) and the OECD make clear, we have now reached a critical juncture. While the benefits of AI are obvious, in terms of increased efficiency and improved service, the risks, in our opinion, are often obscure.
We at Calvin•Farel believe that the broad adoption of deep-learning AI models might even increase the fragility of the financial system. There is, of course, no shortage of views about the principles that should govern AI.
According to AlgorithmWatch, a non-profit, at least 175 sets of AI principles have been published around the world. It is, in our opinion, hard to disagree with the worthy intentions contained in these guidelines, which promise fairness, accountability and transparency. But the challenge is precisely to translate lofty principles into everyday practice, given the ubiquity, complexity and opacity of so many AI use cases.
Automated decision-making systems are approving mortgage and consumer loans and allocating credit scores. Natural language processing systems are conducting sentiment analysis on corporate earnings statements and writing personalized investment advice for retail investors. Insurance companies are using image recognition systems to assess the cost of our car repairs.
Although the use of AI in these cases might affect the rights and the wealth of individuals and clients, they do not pose a systemic risk. Many of these concerns are covered by forthcoming legislation, including the EU’s AI rules. These legislative initiatives sensibly put the onus on any organization deploying an AI system to use appropriate and bias-free data, to ensure that its outputs are aligned with its goals, to explain how it operates and to help determine accountability if things go wrong.
In decentralized finance (DeFi), trust is created through public consensus: community members must themselves agree on the validity of transactions, rather than relying on third parties.
In principle, these features make DeFi invulnerable to hacks of particular computer nodes or malfeasance by individuals or institutions. DeFi also enables “permissionless composability”: a developer can easily connect multiple DeFi applications built on open-source technology to create new financial products and services, without having to seek permission. Several innovative DeFi products are already available. Flash loans, for example, enable borrowing without collateral, using that money for a transaction and returning the borrowed amount, all for a small fee. A flash loan is initiated, executed and completed in the blink of an eye, using just computer code. Flash loans have many uses, from arbitraging price differences across markets to increasing market efficiency. Since they are instantaneous, default and liquidity risks are reduced.
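The mechanics described above can be sketched in a few lines of Python. This is a minimal illustration of flash-loan atomicity under assumed conditions: the pool, its fee, and the price quotes are all hypothetical, not any real protocol’s API. The key property is that the loan, its use and its repayment either all succeed in a single step, or the pool’s state reverts as if the loan never happened.

```python
# Hypothetical lending pool illustrating flash-loan atomicity.

class InsufficientRepayment(Exception):
    pass

class LendingPool:
    def __init__(self, liquidity, fee_rate=0.0009):
        self.liquidity = liquidity
        self.fee_rate = fee_rate      # small fee charged on the loan

    def flash_loan(self, amount, use_funds):
        """Lend `amount` with no collateral; revert everything on failure."""
        snapshot = self.liquidity         # state to restore on revert
        self.liquidity -= amount
        try:
            proceeds = use_funds(amount)  # borrower's own trading logic
            owed = amount * (1 + self.fee_rate)
            if proceeds < owed:
                raise InsufficientRepayment("cannot repay loan plus fee")
            self.liquidity += owed        # pool keeps principal plus fee
            return proceeds - owed        # borrower keeps any profit
        except Exception:
            self.liquidity = snapshot     # "revert": as if nothing happened
            raise

# Usage: arbitrage an assumed price gap between two venues.
def arbitrage(borrowed):
    buy_price, sell_price = 100.0, 101.0  # hypothetical quotes
    return (borrowed / buy_price) * sell_price

pool = LendingPool(liquidity=1_000_000.0)
profit = pool.flash_loan(10_000.0, arbitrage)
```

Because an unprofitable trade makes the whole operation revert, the lender never holds an unsecured exposure, which is why default risk is reduced despite the absence of collateral.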
Then there are smart contracts, which allow financial and other assets to be exchanged using computer code, with no attorney or escrow agent involved. Computer tools can perform rigorous economic risk assessments of smart contracts and specific DeFi products, and the open-source nature of these applications helps uncover and eliminate security and other weaknesses.
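To make the idea concrete, here is a toy escrow written as plain Python: the code itself, rather than an attorney or escrow agent, holds the buyer’s funds and releases them only when the conditions written into it are met. All names and the settlement rule are illustrative assumptions; real smart contracts run on a blockchain virtual machine.

```python
# Toy "smart contract" escrow: release rules enforced by code.

from dataclasses import dataclass

@dataclass
class EscrowContract:
    seller: str
    buyer: str
    price: float
    deposited: float = 0.0
    delivered: bool = False

    def deposit(self, payer, amount):
        if payer != self.buyer:
            raise PermissionError("only the buyer may fund the escrow")
        self.deposited += amount

    def confirm_delivery(self):
        self.delivered = True       # e.g. a signed receipt posted on-chain

    def settle(self):
        # Funds move according to the coded rule, not a trusted middleman.
        if self.delivered and self.deposited >= self.price:
            return self.seller      # conditions met: pay the seller
        return self.buyer           # otherwise the deposit is refundable

deal = EscrowContract(seller="alice", buyer="bob", price=500.0)
deal.deposit("bob", 500.0)
deal.confirm_delivery()
payee = deal.settle()
```

Because the settlement logic is public code, anyone can audit exactly when and to whom funds are released, which is the open-source advantage mentioned above.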
Still, sophisticated hackers have been able to take advantage of vulnerabilities in DeFi products. Malevolent agents can exploit the large “attack surface” that is created when multiple applications are combined. DeFi products are also more vulnerable to software bugs, and to users who do not fully understand the risks. The absence of a central authority to police bad behavior also carries risks. Researchers at Cornell University found that automated bots could front-run certain trades – for instance, executing open orders at an unfavorable price before they can be cancelled when prices change. With no one to report this flaw to, they published a blog post detailing the risk, assuming the community would protect itself. Instead, a cottage industry of bots emerged to exploit the loophole before it could be closed.
It is, in our opinion, worth remembering that while DeFi may rest on libertarian ideals, such as its own rule of law, with the community creating and enforcing rules in the broad interests of stakeholders, in reality nascent blockchain systems are vulnerable to governance capture by small groups of stakeholders, who could twist the rules in their own favour.
Moreover, while blockchains are self-contained, they need information about prices and ownership of assets to execute certain transactions. For instance, on-chain hog futures contracts need access to hog prices from commodity exchanges. Computer programs called “oracles” obtain such off-chain information and pass it on-chain; they can also relay on-chain information back to the real world. These oracles are vulnerable to technical risks, including hacks and even problems with external data providers.
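The oracle pattern, and one common mitigation of the data-provider risk just mentioned, can be sketched as follows. The feeds, the price figures and the tolerance threshold are all illustrative assumptions: several independent (hypothetical) exchange feeds for a hog futures price are aggregated, and the median is accepted on-chain only if the feeds roughly agree, a crude defense against a single hacked or faulty external data provider.

```python
# Sketch of a median-aggregating oracle with a divergence check.

from statistics import median

def aggregate_oracle(feeds, max_spread=0.05):
    """Return the median of all feed prices, or refuse the update
    if any feed deviates from the median by more than max_spread."""
    prices = [feed() for feed in feeds]
    mid = median(prices)
    if any(abs(p - mid) / mid > max_spread for p in prices):
        raise ValueError("feeds diverge; refusing to update on-chain price")
    return mid

# Three illustrative exchange feeds quoting a hog futures price.
feeds = [lambda: 84.10, lambda: 84.25, lambda: 83.95]
on_chain_price = aggregate_oracle(feeds)
```

Aggregation only mitigates, not eliminates, the risk: if most feeds share an upstream source, or the divergence check halts updates at a critical moment, the on-chain contract is still exposed.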
Given all this, regulators are in a quandary – even open-minded ones who see potential in DeFi but worry about financial-stability risks. At present they can intercede only at the point where these products intersect with institutions they oversee. As decentralized finance and AI grow in size and scope, regulators, in our opinion, will have to pay attention to risks building up in these markets and their spillovers into traditional financial markets.
Updated regulatory frameworks that encompass DeFi and AI will eventually be needed, although they should strike a reasonable balance in the innovation-risk trade-off. At a minimum, naïve retail investors swept up by the technological razzle-dazzle must, in our opinion, be protected from taking on outsized risks.