Responsible investment specialist speaks out on ESG risks of nascent generative AI technology
After making waves in the stock market and across industries, AI is set to be a major focus for stakeholders moving forward … and ESG investing is no exception, argues one top advisor.
“AI may not feel like a responsible investment issue at first, but I think it’ll be one of the most important issues in RI moving forward,” says Sonia Leroy, senior wealth advisor at The Leroy Wealth Management Group with IPC Securities.
At last year’s Responsible Investment Association (RIA) Conference, Leroy says AI emerged as a core theme and concern among the Canadian investment community. Among the positives, the technology has major potential for improving business productivity and driving significant economic growth.
In a recent blog post published by the RIA, Kate Tong, an analyst with TD Asset Management, discussed how generative AI – a flavour of artificial intelligence based on large language models (LLMs) trained on massive data sets that could include text, images, and other media – has exploded following the success of OpenAI’s ChatGPT.
“There are various benefits to incorporating generative AI in a business – process improvements, cost reduction and value creation, to name a few,” Tong said, pointing to examples such as chatbots being tested at financial institutions to give customers financial advice, tools that automatically generate medical documentation at healthcare institutions, and other test cases in marketing, customer service, and product development.
However, Leroy says AI also comes with several concerns and ethical considerations.
“We have to think about the potential for job displacement. What plans are being developed for a fair transition to take place?” she says. “There are also privacy concerns – think about the collection and use of our personal data that supports that. And there are implications for human rights with possible misuses of surveillance and facial recognition technologies, for example.”
The fact that AI has to be trained on colossal amounts of data, Leroy says, also has other ESG implications. The intensive computation needed to develop LLMs implies tremendous energy consumption and, by extension, a not-insignificant environmental impact and carbon footprint.
Turning back to the ethical front, Tong pointed out that AI models have also been known to generate “hallucinations,” delivering false outputs that, upon closer inspection, are not justified by the data used to train them. Those errors can arise from a range of issues, including improper model architecture or noise and disparities in the training data.
“Opaqueness about the generation of model outcomes is also an issue,” she said. “With billions to trillions of model parameters that determine the probabilities of each part of its response, it is exceedingly difficult to map model outputs to the source data, including in cases of hallucination.”
Even if the training data is error-free, Tong said there’s still a risk of generative AI models being corrupted by human bias, which can enter through societal prejudices baked into real-world data or the code used to train them. At worst, she said, generative AI platforms could end up propagating and amplifying those biases through their outputs.
As they say: with great power comes great responsibility … and so it goes for AI. With all those concerns in mind, Leroy asserts that it will be more important than ever for investors to understand AI not just in terms of its impact today, but also the trends that will drive the space over time.
“We’ve got to be aware of the role we can play as investors in engaging with companies … to increase awareness and action on these kinds of issues,” Leroy says.