Fidelity - Rise of artificial intelligence underlines role of ESG analysis

Society is calling for protection against the threats posed by AI. Incorporating AI considerations into investors’ ESG analysis of companies will help.

The spectrum of risks arising from AI is wide. At one end lie doomsday scenarios involving ‘super-intelligent’ AIs that their creators cannot understand or control. More immediate threats include the spread of misinformation by Large Language Models (LLMs), which are liable to ‘hallucinate’ - presenting invented facts or misinterpretations as truth.

The complexity of the technology and difficulties in containing it are reflected in the efforts of regulators, which are mobilising but with little global cohesion. Industry-wide attempts to self-regulate have also gained little traction.

But the burden isn’t just regulatory. Large holders of capital have proven their ability to move the needle on existential issues by engaging with companies on environmental, social, and governance (ESG) issues such as climate change or employee welfare, and holding firms to account for transgressions. Given the potential dangers related to artificial intelligence, now is the time for investors to assess their investees’ use of this powerful tool.

The future is not yet written

Industry leaders such as OpenAI’s Sam Altman and tech pioneers like Elon Musk have already voiced their desire for improved AI safety. In a recent study, the Centre for the Governance of AI (GovAI) proposed a 50-point list of best practices for artificial intelligence. The measures were shared with academics and business leaders from companies including Google’s AI-focused DeepMind and Anthropic. The vast majority agreed with the proposals, which included evaluating models for dangerous capabilities and developing safety restrictions.

This seems encouraging. Yet companies have been slow to implement such measures. Few organisations currently have an AI risk committee or an internal audit team, for instance. Worrying too are recent cuts to ‘responsible AI’ teams at companies including Microsoft, Meta, Google, and Amazon. Today’s tech transformations have arrived at a time of cost-cutting and recession-proofing.

The need for greater safety measures will only grow as the technology becomes more sophisticated. OpenAI, for example, has disclosed less about the architecture behind its latest ChatGPT model than it did for earlier iterations, partly for competitive reasons but also, it says, to minimise the risk of the underlying technology being exploited by bad actors. Companies could soon find themselves caught in an uncomfortable bind, compromising transparency for the sake of safety and market leadership.

Nevertheless, there are ways to mitigate these dangers. Though wary of sharing the secrets of ChatGPT, OpenAI has announced that it plans to commission third-party audits of its AI models in the future. Participants in GovAI’s study were also willing to employ third-party audits and risk assessments. Greater auditing of AI companies could well be a necessary step as concerns over the abuse of AI increase. 

Past, present, and future

This is not the first time companies have had to change their behaviour in response to an existential threat. In 1988, the NASA scientist James Hansen said: “the greenhouse effect has been detected, and it is changing our climate now”.

Responding to the environmental crisis has seen sustainable capitalism emerge as a serious force for effecting positive change. Regulators set the tone, but money talks. The threat that it can also walk when companies fail to adhere to standards has forced businesses to think more seriously about their environmental impact.

Questions around artificial intelligence sit naturally alongside current ESG analyses, through easily measured disclosures. On recruitment, for instance, investors can ask about the number of employees focused on AI, what proportion of them focus on AI safety, and the degree of authority they have within the company. Analysts can investigate what companies are doing within subfields like monitoring, robustness, and capability control. Incident reporting, too, was an area highlighted in the GovAI study, and something many analysts would be familiar with through standard ESG frameworks. Then there are more explicit checks on risk - how are companies monitoring threats, and what are they doing to guard against them?

Recent work by the Collective Impact Coalition for Digital Inclusion represents one of the few concerted efforts by financial institutions to engage with technology companies on AI. The coalition [1], which Fidelity International co-leads, recently found that just over 10 per cent of 150 leading AI companies had disclosed commitments to ethical AI.

Those that had disclosed commitments offered a glimpse of what best practice could look like. Some had devised a ‘kill switch’ to deactivate AI systems, while others had drawn up ethics guidelines for employees to follow. Incorporating these as prerequisites for safe AI usage would be reasonable steps for most companies - and ones which investors are well positioned to ask about. 

Artificial Intelligence meets Active Investing

Arguably, investors are yet to fully appreciate the dangers that artificial intelligence poses to portfolios. Many will tacitly hold AI leaders in their sustainable holdings - companies like Microsoft and Nvidia invariably feature in the largest ESG tracker funds. The question is, what do such holdings represent? Existential threat, or sustainable investment? Artificial intelligence could pave the way for an array of advancements, or it could dismantle democracies. By integrating AI analysis into their ESG assessments, investors can spot the difference and hold bad actors to account.

[1] The coalition was created by the World Benchmarking Alliance (WBA) and is guided by the Digital Inclusion Benchmark (DIB).

Important Information

This document is for Investment Professionals only and should not be relied on by private investors.

This document is provided for information purposes only and is intended only for the person or entity to which it is sent. It must not be reproduced or circulated to any other party without prior permission of Fidelity.

This document does not constitute a distribution, an offer or solicitation to engage the investment management services of Fidelity, or an offer to buy or sell or the solicitation of any offer to buy or sell any securities in any jurisdiction or country where such distribution or offer is not authorised or would be contrary to local laws or regulations. Fidelity makes no representations that the contents are appropriate for use in all locations or that the transactions or services discussed are available or appropriate for sale or use in all jurisdictions or countries or by all investors or counterparties.

This communication is not directed at, and must not be acted on by persons inside the United States and is otherwise only directed at persons residing in jurisdictions where the relevant funds are authorised for distribution or where no such authorisation is required. Fidelity is not authorised to manage or distribute investment funds or products in, or to provide investment management or advisory services to persons resident in, mainland China. All persons and entities accessing the information do so on their own initiative and are responsible for compliance with applicable local laws and regulations and should consult their professional advisers.