How to harness the power of artificial intelligence for good

Rathbones’ Matt Crossman looks at how machine learning is supporting renewable energy and helping to bridge cultural barriers

Matt Crossman, stewardship director, Rathbones 

Over the last few quarters we’ve been looking at some of the ways that artificial intelligence (AI) is transforming how our lives and economies work. The green shoots of AI’s potential for helping to solve existential problems, such as climate change, are already starting to appear. 

However, these possibilities don’t come without accompanying ESG risks. The main ESG risks associated with AI fall within the social and governance elements, and this is where Rathbones’ stewardship team is focusing its engagement with companies in clients’ portfolios to guard against them. 


AI is playing an important role in delivering energy efficiency gains across a range of industries. Data centres use vast amounts of energy and water (for cooling) to crunch the huge volumes of data AI algorithms run on, but the trade-off is worthwhile. In the utilities sector, for example, machine learning models draw on meteorological data to help grid operators make better predictions about the availability of renewable energy supply, as well as demand from customers. 
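To make that forecasting idea concrete, the sketch below trains a simple regression model to predict renewable output from weather-forecast features. It is purely illustrative: the synthetic data, the feature names (wind_speed_ms, temperature_c, hour_of_day) and the choice of a gradient-boosted model are assumptions made for demonstration, not a description of any grid operator’s actual system.

```python
# Illustrative sketch only: a toy model that predicts wind-farm output (MW)
# from weather-forecast features. The synthetic data, feature names and the
# choice of a gradient-boosted regressor are assumptions for demonstration,
# not a description of any grid operator's actual forecasting system.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)
n = 2_000

# Synthetic "historical" records: hourly weather forecasts for a wind farm
weather = pd.DataFrame({
    "wind_speed_ms": rng.uniform(0, 25, n),    # forecast wind speed (m/s)
    "temperature_c": rng.uniform(-5, 30, n),   # forecast air temperature
    "hour_of_day": rng.integers(0, 24, n),     # proxy for demand patterns
})

# Toy power curve: output grows with wind speed up to a plateau, plus noise
observed_mw = 0.5 * np.clip(weather["wind_speed_ms"], 3, 15) ** 3 \
    + rng.normal(0, 20, n)

X_train, X_test, y_train, y_test = train_test_split(
    weather, observed_mw, test_size=0.2, random_state=0
)

# Fit the model on past forecasts vs. observed output, then evaluate it
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
predictions = model.predict(X_test)
print(f"Mean absolute error: {mean_absolute_error(y_test, predictions):.1f} MW")
```

In practice, grid operators combine much richer inputs, such as numerical weather prediction ensembles and plant availability, and often forecast a range of outcomes rather than a single number, but the basic pattern of learning from past forecasts and observed output is the same.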

On the social front, traditional AI tools, such as real-time language translation applications, have for some time been bringing us closer together by helping to bridge cultural barriers. Generative AI tools are now empowering people across the world to gain better access to education. 

What about the risks? 

When asked in an interview at the start of 2023 what the worst-case scenario for AI could be, OpenAI CEO Sam Altman said that it could be “lights out for us all.” Other experts in the field, such as Ian Hogarth, chair of the UK government’s AI Foundation Model Taskforce, have issued similarly catastrophic warnings about the speed of the current arms race to develop AI, and specifically artificial general intelligence (AGI): a form of AI capable of performing many tasks as well as, if not better than, any human. There are clearly social and governance risks for companies developing these new tools of the 21st century.

In an essay published in the Financial Times last year, Hogarth, an entrepreneur and tech investor himself, argued that the speed at which money is pouring into the development of AI tools, whose outputs developers do not yet fully comprehend, is worrying. He also expressed concern over the paucity of resources currently being allocated towards making AI safer. For example, he noted that teams involved in this work made up just 2% and 7% of headcount at two of the major developers, DeepMind and OpenAI, respectively. 

On the AGM agenda 

Transparency around the risks a company’s use of AI poses to both its customers and business, as well as board-level oversight of such risks, is expected to come into sharp focus this year at the annual general meetings (AGMs) of major US technology companies and providers of streaming services, such as Netflix, Walt Disney and Comcast. 

As a recent example, Rathbones voted in favour of shareholder resolutions at the AGMs of Microsoft and Apple asking each company to produce a report assessing the risks generative AI poses to its business and to public welfare. Neither resolution received sufficient votes to pass, but both garnered enough support to indicate that plenty of other investors share similar concerns around these ESG risks. 

By using our votes and influence as shareholders in the companies we own, we hope to play our part in mitigating some of the potential downside to which they may be exposed from AI-related risks.