
Moving first on AI offers competitive gains, but higher risks too: WEF



Published: Oct 23, 2019, 11:28 PM IST

New Delhi: Financial services firms that move first on implementing artificial intelligence (AI) have the most to gain, but also face higher risks by deploying emerging technologies without regulatory clarity, the World Economic Forum (WEF) said in a report on Wednesday.

The often-opaque nature of AI decisions, along with related concerns about algorithmic bias, fiduciary duty and uncertainty, has left the implementation of the most cutting-edge AI uses at a standstill, the report pointed out.

The Geneva-based WEF, which describes itself as an international organisation for public-private cooperation, also proposed frameworks in the report to help financial institutions and regulators explain AI decisions, understand the emerging risks from the use of AI, and identify how those risks might be addressed.

According to the study, using AI responsibly is about more than mitigating risks; its use in financial services presents an opportunity to raise the ethical bar for the financial system as a whole. It also offers financial services firms a competitive edge over their peers and new market entrants.

"AI offers financial services providers the opportunity to build on the trust their customers place in them to enhance access, improve customer outcomes and bolster market efficiency," said Matthew Blake, Head of Financial Services at WEF.

"This can offer competitive advantages to individual financial firms while also improving the broader financial system if implemented appropriately," Blake added.

Some forms of AI are not interpretable even by their creators, posing concerns for financial institutions and regulators who are unsure how to trust solutions they cannot understand or explain.

This requires evolving past 'one-size-fits-all' governance ideas to specific transparency requirements that consider the AI use case in question.

To illustrate, the WEF said it is important to explain clearly and simply why a customer was rejected for a loan, a decision that can significantly affect their life.

It is less important to explain a back-office function whose only objective is to convert scans of various documents to text. For the latter, accuracy matters more than transparency, as the ability of this AI application to cause harm is limited.
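The report itself contains no code, but a rough sketch can illustrate the kind of use-case-specific transparency the WEF describes: below, a hypothetical lender scores an application with a simple logistic-regression-style model and turns the per-feature contributions into plain-language reason codes for a rejection. All feature names, weights and the score_and_explain helper are invented for illustration and are not taken from the report.

```python
# Illustrative sketch only (not from the WEF report): surfacing plain-language
# "reason codes" for a rejected loan application from a hypothetical,
# directly inspectable logistic-regression-style credit model.

import math

# Hypothetical model weights (illustrative values, not a real credit model).
WEIGHTS = {
    "credit_score": 0.004,      # higher credit score raises approval odds
    "debt_to_income": -3.0,     # higher debt burden lowers approval odds
    "years_employed": 0.15,
    "recent_defaults": -1.2,
}
INTERCEPT = -2.5
APPROVAL_THRESHOLD = 0.5


def score_and_explain(applicant: dict) -> tuple[bool, list[str]]:
    """Return (approved, reasons), where reasons lists the features that
    pushed the decision most strongly toward rejection."""
    contributions = {
        name: weight * applicant[name] for name, weight in WEIGHTS.items()
    }
    logit = INTERCEPT + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    approved = probability >= APPROVAL_THRESHOLD

    # Rank features by how negatively they contributed to the score, so a
    # rejected customer can be told the main drivers of the decision.
    negatives = sorted(contributions.items(), key=lambda kv: kv[1])[:2]
    reasons = [
        f"{name} lowered your score (contribution {value:.2f})"
        for name, value in negatives
        if value < 0
    ]
    return approved, reasons


approved, reasons = score_and_explain({
    "credit_score": 580,
    "debt_to_income": 0.55,
    "years_employed": 1,
    "recent_defaults": 2,
})
print("approved" if approved else "rejected", reasons)
```

In this toy setup, the decision and its explanation come from the same inspectable arithmetic, which is the kind of transparency the report argues is warranted for high-impact decisions such as loan approvals, whereas a document-scanning back-office model would not need it.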

According to the report, algorithmic bias is another top concern for financial institutions, regulators and customers surrounding the use of AI in financial services.

The widespread adoption of AI also has the potential to alter the dynamics of the interactions between human actors and machines in the financial system, creating new sources of systemic risk, it said.

As AI systems take on an expanded set of tasks, they will increasingly interact with customers. As a result, fiduciary requirements to always act in the best interests of the customer may soon arise, raising the question of whether AI systems can be held 'responsible' for their actions and, if not, who should be held accountable, the WEF said.

Given that AI systems can act autonomously, they may plausibly learn to engage in collusion without any instruction from their human creators, and perhaps even without any explicit, trackable communication. This challenges the traditional regulatory constructs for detecting and prosecuting collusion and may require revisiting existing legal frameworks, it added.

The report was prepared in collaboration with Deloitte.

