Note: This article first appeared in Chief Executive.
Like it or not, ChatGPT and generative AI are showing up in your workplace in positive and negative ways. The risk lies in not knowing how it could affect your hiring, deliverables, brand image and bottom line.

Is generative AI friend, foe or fad? Ask most leaders for their opinion and the reply will be either ‘love it,’ ‘hate it,’ or ‘don’t know enough about it.’ Since AI is evolving quickly, it’s fair to say no one knows enough about it. The rapid rise of ChatGPT and the race among Microsoft, Google, Amazon, and others to launch their own versions indicate it is not a fad. Benefits companies can leverage include accessibility, remarkable efficiencies, content creation, and speed. But the threats to business are increasingly apparent. Recently exposed flaws and their impact include:
- Inherent bias. Two Vanderbilt University administrators had to resign after posting a comment about a tragedy in another state without proofreading it. The response was more political than compassionate, and they posted it with a “signature line” stating it was written by ChatGPT.
- Responses are not unique to the individual who requests them. A WSJ article reported that job candidates are using AI to write cover letters and other screening assignments. One employer found nearly identical submissions from multiple applicants. Suspicious of the outcome, the employer got the same result when she ran the assignment through ChatGPT.
- The ability to go “rogue” with responses. A NYT reporter had a disturbing two-hour conversation with a bot, during which the bot stated that it wanted to be alive, that it wanted to break the rules set by OpenAI, and that the reporter should get a divorce, among other questionable comments. OpenAI has acknowledged that ChatGPT’s latest version has limitations involving “social biases, hallucinations and adversarial prompts.”
- Inaccurate information. In the debut of Google’s Bard AI product, the bot provided incorrect information about the James Webb telescope. Wide media coverage of the error cost Google roughly $100 billion in market value that week.
That fact alone should concern leaders, boards and investors. If employees are using any of these generative AI platforms to produce work product for your organization without oversight, the results could be devastating. What are the risks to your organization, and how can you mitigate them?

Build It, Borrow It or Buy It?

Developing this technology is prohibitively expensive and best left to Amazon-, Google- and Microsoft-sized players. If you’re going to borrow or buy it, understand its strengths and limitations and build your own guardrails to mitigate risks. Every opportunity has blind spots. Here are some I see:

Hiring
- Many employers use bot detectors to weed out candidates who are not submitting their own work. A candidate who uses such platforms to misrepresent their work is likely to do the same once they are in your office.
- AI can certainly help in the hiring process, as long as those products are scientifically validated and reduce bias.
Customer Experience
- Customer-facing bots already cause frustration in many sectors; do not add friction.
- AI can enhance the customer experience to an extremely personalized level. However, just because you have the capability does not mean all customers will want it. Imagine entering a store and being greeted by name by an AI-powered kiosk: this would delight some customers and repel others. The key is knowing how much personalization a customer wants; some AI enhancements could come across as intrusive, even creepy, and drive customers away.
- Allow customers to easily control their privacy and preferences with respect to AI.
B2B Customer Deliverables
- Imagine that an overworked analyst, up against a deadline, decides to use AI for their calculations, and the calculations are incorrect. Generative AI output is only as accurate, reliable and unbiased as the algorithms and sources it is programmed to use. Can you trust the results at face value?
- Consider the implications for your brand and market share if a significant error became public.
Employees
- Companies that introduce various AI products may alarm employees who fear having their jobs eliminated. Proactive communication that explains “the why” and how affected employees might be retrained for new roles is important for retention.
- Realistically, some employees will have to “up their game” too. For example, people in creative roles can prove their value by using AI prompts efficiently and by providing the human qualities of empathy, emotion, and humor that AI lacks.
- Managers and leaders should encourage SWOT conversations about how to leverage the strengths of AI, and “poke holes” in what it can do. Create better buy-in with employees by making them part of the solution.
Boards and Investors
- Boards should consider risks and financial exposure (such as the Google example above).
- Risk management and policy safeguards should be created and clearly communicated regarding the myriad legal, ethical, and moral concerns inherent in the use of generative AI.
- Do you remember your grade school teacher telling you to “show your work” on math problems? Essentially, this was proof that you knew how to get the correct answer. Shortcuts taken by overworked teams using these tools could expose you to costly consequences. Further, the copyright status and ownership of any AI-generated product remain unsettled, creating infringement risk.
Trust but Verify

The relentless series of economic, societal and geopolitical external forces will continue to threaten business growth for the time being. Companies that create more internal stability will outperform competitors. As you consider whether generative AI is a liability, “a bright shiny object” or a lever to accelerate your business growth, consider what safeguards will be necessary to preserve your stability.