The Monetary Authority of Singapore (MAS) has updated its software toolkit to help financial institutions evaluate their responsible use of artificial intelligence (AI). The assessment toolkit, built around the FEAT principles (Fairness, Ethics, Accountability, and Transparency), was first launched in February 2020. It provides a checklist and methodologies for businesses in the financial sector to define the objectives of their AI and data analytics use and to identify potential bias. The consortium behind the toolkit, called Veritas, includes industry players such as Bank of China, BNY Mellon, Google Cloud, Microsoft, Goldman Sachs, Visa, OCBC Bank, Amazon Web Services, IBM, and Citibank. The updated version adds review methodologies for the other three principles (ethics, accountability, and transparency) alongside an improved fairness assessment methodology. The open-source toolkit is available on GitHub and can be integrated with financial institutions’ IT systems.
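To give a sense of what a fairness assessment involves, the sketch below shows the kind of group-fairness check such a methodology typically covers. It is a minimal, hypothetical illustration in plain pandas and NumPy, not the Veritas Toolkit’s actual API; the data, column names, and thresholds are invented for demonstration only.

```python
# Hypothetical illustration of a group-fairness check on model decisions.
# This is NOT the Veritas Toolkit's API; it uses only pandas/NumPy.
import numpy as np
import pandas as pd

# Invented example: model approval decisions for loan applications, plus a
# protected attribute (here, gender) used only for the fairness audit.
df = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0, 1, 1],
    "gender":   ["F", "F", "F", "M", "M", "M", "M", "F", "M", "F"],
})

# Approval (selection) rate per group.
rates = df.groupby("gender")["approved"].mean()

# Two common group-fairness metrics:
# - demographic parity difference: gap between highest and lowest group rate
# - disparate impact ratio: lowest group rate divided by highest group rate
parity_diff = rates.max() - rates.min()
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"Demographic parity difference: {parity_diff:.2f}")
print(f"Disparate impact ratio: {impact_ratio:.2f}")

# In practice, such metrics would be compared against thresholds agreed in the
# institution's governance framework (e.g., flagging low ratios for review).
```

The point of the toolkit’s methodologies is less the arithmetic itself than tying metrics like these to documented objectives, thresholds, and sign-offs within an institution’s existing systems.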
To demonstrate the toolkit’s application, Veritas developed new use cases, including a transparency assessment of Swiss Reinsurance’s predictive AI-based underwriting function and Google’s application of the FEAT methodologies to the fraud detection systems of its payments business in India. Veritas also released a whitepaper outlining lessons shared by financial institutions, including Standard Chartered Bank and HSBC, on integrating the AI assessment methodology with their internal governance frameworks. The document emphasizes the need for a responsible AI framework that spans geographies and for a risk-based model to determine the level of governance each AI use case requires. It also covers responsible AI practices and training for AI professionals in the financial sector.
MAS Chief Fintech Officer Sopnendu Mohanty highlighted the importance of robust frameworks for the responsible use of AI, given its rapid development. He stated that the Veritas Toolkit version 2.0 would enable financial institutions and fintech firms to effectively assess their AI use cases for fairness, ethics, accountability, and transparency, promoting a responsible AI ecosystem.
The Singapore government has identified six top risks associated with generative AI and proposed a framework to address them. It has also established a foundation that will work with the open-source community to develop test toolkits and mitigate the risks of adopting AI. During a recent visit to Singapore, OpenAI CEO Sam Altman emphasized the need for public consultation and human control in the development of generative AI to mitigate potential risks or harm. Altman also highlighted the importance of addressing bias and data localization challenges as AI gains traction globally. OpenAI aims to train its generative AI platform, ChatGPT, on diverse datasets that span multiple cultures, languages, and values.
In conclusion, the updated Veritas Toolkit aims to help financial institutions evaluate their responsible use of AI, providing methodologies and guidelines based on the principles of fairness, ethics, accountability, and transparency. The toolkit has been tested by several banks and is available as open source. The Singapore government has also taken steps to address the risks associated with AI and to promote a responsible AI ecosystem, while OpenAI emphasizes public consultation and diverse training datasets as ways to mitigate potential risks and biases in generative AI.