Investors may trust machines to manage their portfolios more than they trust human advisors, but that bias raises several concerns
For a client choosing between a robo platform and a human advisor, the pros and cons are fairly straightforward: digital platforms offer low costs and greater convenience, while humans provide more sophisticated and empathetic service. Although both sides are slowly converging toward a hybrid model of advice, that basic trade-off has generally remained the same.
But research from the US suggests that investors may trust automated advice more, and that bias could carry serious consequences. In a survey of 800 Americans, Dr. Nizan Geslevich Packin, an assistant professor of law at Baruch College’s Zicklin School of Business at the City University of New York, asked participants to consider different hypothetical investment situations. Some were told the advice came from a human advisor, while others were told it came from an automated online algorithm.
“The survey takers who thought they were getting advice from an algorithm consistently reported having more confidence in the recommendations than those who thought they were being advised by a human expert,” Packin said in a column for The Wall Street Journal.
All the participants were subsequently told that the advice they had received led to disappointing investment performance. Yet even when asked to re-evaluate their confidence in the advice sources, as well as the likelihood that they’d use those sources again, participants’ preference for robo advisors over humans persisted.
Packin dismissed the possibility that the preference stemmed from algorithms being cheaper or more accessible, since participants were told nothing about cost or access. Instead, she said, it was likely because “many people perceive algorithms to be a superior authority” that delivers an objectively correct answer through rational, emotionless calculation.
But such a preference for algorithms, which she referred to as “automation bias,” is misguided. While algorithms are indeed free from the conflicts of interest and selfish motivations that might sway human beings, Packin pointed out that they are designed by humans based on a specific data set and a chosen method of using the data. Just like human advisors, coders have biases, which they can consciously or unconsciously etch into their algorithms.
“[A]lthough algorithmic advisers certainly have contributed to the field of personal finance, consumers’ increasing deference to algorithmic results also raises several concerns,” Packin continued. She argued that people’s growing tendency to outsource decisions to algorithms, which they view as objective and error-free, weakens their inclination to seek a second opinion. But since investment algorithms vary in performance, shopping around for different advice remains important.
Another concern, she said, is the possibility that investors will engage in “less risk-taking, creativity and critical thinking in finance and in society overall.” If people blindly defer to algorithmic outputs, they will be less willing to take long-shot bets on promising start-ups or to back innovative solutions to problems, even though such gambles can lead to tremendous success.
To safeguard against such risks, Packin urged lawmakers, educators, and media representatives to encourage consumers to seek second opinions on algorithmic recommendations. Regulators, she added, should consider promoting informed scepticism by requiring providers of automated consumer-finance products to disclose that the algorithms they use may have biases. A possible step further would be to require disclosure of the assumptions and data sets behind the algorithms that underlie digital platforms, or to inform people that competing algorithms exist.