Challenges and Risks of Generative AI: Key Considerations for Investors

Published: 23/02/2024

By Kate Tong, Analyst, Portfolio Research, ESG, TDAM


Investment Insights + 5 Minutes = New Thinking

Since OpenAI's ChatGPT went viral in late 2022 for its unprecedented ability to engage in human-like conversations and provide articulate responses in wide-ranging domains of knowledge, several competitors have begun introducing their own iterations of the technology. This type of AI technology, known as generative AI, is based on large language models that are trained on massive amounts of data, which could include text, images or other media. The models identify the patterns and structures of the training data and then generate new content that has similar characteristics based on user prompts.

There are various benefits to incorporating generative AI in a business - process improvements, cost reduction and value creation, to name a few. Leveraging these opportunities, companies across different sectors have already begun testing and implementing generative AI tools. Examples range from financial institutions deploying chatbots trained on internal databases to provide financial advice to customers to healthcare institutions automating the generation of medical documentation based on conversations between patients and physicians. Across industries, companies are also incorporating generative AI tools in marketing, customer service and product development.

As such, investors have to pay attention not only to large tech companies that are building the foundational models, but also to companies that are starting to incorporate generative AI tools into their business. As with most new technologies, potential risks need to be carefully considered and addressed before widespread adoption. Regulation will be important in helping reduce these risks. But because regulation develops at a much slower pace than the development and application of AI, investors should actively consider its risks and seek stewardship opportunities in companies involved in generative AI to address these risks.

Challenges and Risks in Generative AI

Generative AI models have various known issues. These models tend to "hallucinate," generating false outputs that are not justified by the training data and presenting them as fact. These errors can be caused by various factors, such as improper model architecture or noise and divergences in the training data. Opaqueness about how model outcomes are generated is also an issue. With billions to trillions of model parameters that determine the probabilities of each part of its response, it is exceedingly difficult to map model outputs to the source data, including in cases of hallucination.

In addition, if the training data contains societal prejudices or if the algorithm design is influenced by human biases, the model may learn and propagate these biases in its outputs. Enterprise applications could also be vulnerable to data privacy issues and cybersecurity threats. These include leakage of sensitive information within the training data if the model is customer- or public-facing, use of personal or sensitive data in model training that may have required explicit consent, as well as malicious attacks from hackers that aim to manipulate model outputs. These issues give rise to various legal and reputational risks, the scale of which depends on the criticality of the use case and the company's industry. For example, the financial and healthcare industries may be subject to severe consequences if problems do arise, due to the high-stakes nature of these industries.

Sample Use Cases in the Financial Industry

In financial advisory use cases, model hallucinations could produce inappropriate advice or recommend the wrong product to undiscerning clients, which could undermine public trust in AI systems and the financial institutions using them. Opaqueness about how model outcomes are generated is also a key issue for financial institutions, as these institutions are required to be able to explain their decisions internally and to external stakeholders. Considering all this, it is best practice to implement a degree of separation between direct model outputs and the customer, where internal staff could be trained to recognize potential errors and inconsistencies in model outputs and assume ultimate responsibility for the decision-making process.

Generative AI could also offer a quick and low-cost way for financial institutions to profile their clients for marketing campaigns, risk management and identification of suspicious transactions. However, overreliance on generative AI profiling could violate anti-discrimination laws due to potential bias embedded within the models. Appropriate human judgment will need to complement generative AI models that perform client profiling. Financial institutions will also need to have strong data privacy policies and robust cybersecurity measures to address generative AI's risks to their sensitive client information and proprietary data.

Questions for Investors to Consider

In view of all these issues and risks, below are questions investors should consider when assessing companies employing generative AI tools:

  • What are the risk-mitigating mechanisms and/or circumstances? Solutions include having trained internal staff act as an intermediary between direct model outputs and the customer; working to understand potential biases in the training data and address them in model design; regular and proactive monitoring of model output to promptly identify and address any signs of hallucinations; implementing robust cybersecurity measures; etc.
  • What is being done to enhance model performance? Solutions include ensuring that training data is high quality, accurate and up to date; implementing iterative feedback loops to refine and improve model performance; etc.
  • Are there transparency and oversight mechanisms for ethical AI principles? This pertains to providing transparency on data sourcing and data privacy concerns; defining clear policies and procedures to ensure compliance with ethical standards and emerging regulations; outlining the roles and responsibilities of individuals involved in the development, operation and oversight of the generative AI model; etc.

The information contained herein has been provided by TD Asset Management Inc. and is for information purposes only. The information has been drawn from sources believed to be reliable. The information does not provide financial, legal, tax or investment advice. Particular investment, tax, or trading strategies should be evaluated relative to each individual's objectives and risk tolerance.

This material is not an offer to any person in any jurisdiction where unlawful or unauthorized. These materials have not been reviewed by and are not registered with any securities or other regulatory authority in jurisdictions where we operate.

Any general discussions or opinions contained within these materials regarding securities or market conditions represent our view or the view of the source cited. Unless otherwise indicated, such view is as of the date noted and is subject to change. Information about portfolio holdings, asset allocation or diversification is historical and is subject to change.

Certain statements in this document may contain forward-looking statements ("FLS") that are predictive in nature and may include words such as "expects", "anticipates", "intends", "believes", "estimates" and similar forward-looking expressions or negative versions thereof. FLS are based on current expectations and projections about future general economic, political and relevant market factors, such as interest and foreign exchange rates, equity and capital markets, and the general business environment, assuming no changes to tax or other laws or government regulation or catastrophic events. Expectations and projections about future events are inherently subject to risks and uncertainties, which may be unforeseeable. Such expectations and projections may be incorrect in the future. FLS are not guarantees of future performance. Actual events could differ materially from those expressed or implied in any FLS. A number of important factors, including those set out above, can contribute to these digressions. You should avoid placing any reliance on FLS.

TD Asset Management Inc. is a wholly-owned subsidiary of The Toronto-Dominion Bank.

® The TD logo and other TD trademarks are the property of The Toronto-Dominion Bank or its subsidiaries.

The statements and opinions contained herein are those of Kate Tong and do not necessarily reflect the opinions of, and are not specifically endorsed by, TD Asset Management Inc.


TDAM Connections at a Glance:

You may also want to explore:

TDAM Talks Podcast

The Path to Wealth Creation

Market Commentaries