This is the first in a three-part series about the impact of generative AI on the communications profession.
There’s a well-known thought experiment about AI (philosopher Nick Bostrom’s “paperclip maximizer”): a system instructed to maximize the production of paperclips would stop at nothing, turning the entire world into paperclips and killing us all in the process. While AI can “learn,” it plays only by the rules its designers encode; beyond those rules, it defaults to maximizing its objective with no built-in regard for societal norms. That unpredictability, combined with AI’s difficulty prioritizing and reading context, makes an effective set of constraints hard to define. And while every new technology brings potential benefits and concerns, AI is proliferating and advancing so quickly that ethical standards may not have time to keep up.
None other than Elon Musk, an early investor in OpenAI (the company that unleashed ChatGPT on the world) and a pioneer in using AI for self-driving cars, signed an open letter (organized by the Future of Life Institute, a group in which Musk has also invested) calling for a pause on development of the most powerful AI systems until safety protocols can be established. Though his motivations may not be purely humanitarian, the letter drew support from more than 1,000 other leaders and experts.
In Page’s CCO as Pacesetter report, we were the first to promote the idea of CommTech, which has since become part of the industry’s professional lexicon. One of its manifestations is the use of data to understand stakeholders deeply as individuals. Some of that data is given wittingly and some not. Advanced CommTech practitioners are also using neuroscience and behavioral economics to influence (some might say manipulate) people through decision journeys and toward a desired action or outcome. As our ability to move the masses grows more potent and pervasive, who ensures that the actions we’re motivating are in stakeholders’ best interests? Or those of society? (The Data & Trust Alliance and the Center for Humane Technology both work on this issue.)
AI will supercharge these capabilities. Rather than human teams experimenting with different stimuli to motivate stakeholders along a prescribed journey, AI will influence them at a speed and scale humans could never achieve. But AI will also do so without supervision, pursuing its objective with a single-mindedness that may fail to consider human consequences. It’s the paperclip problem.
Let’s look a little further ahead, to when chatbots become ubiquitous. We see these already on websites — a friendly face in a pop-up asking if you have questions or need help. Soon we may see the conventional ad — which is one-directional — replaced by an army of conversational bots. Imagine that friendly face (which, incidentally, may not even belong to a real human because it, too, was generated by AI) having a boundless ability to persuade. It has all your data, so it knows you — in some ways better than you might know yourself. It knows everything there is to know about brain science and human behavior. It has learned from countless conversations just like the one it’s having with you. And it can, and will, use these superpowers to convince you. This is content presenting itself as conversation, and it could have problematic, even fatal, consequences.
I believe that AI is the next great technological revolution, one that will forever change many aspects of daily life (more on that to come). The CCO is the minder of corporate trust and reputation, protector of the brand and advocate for multistakeholder management. This demands greater leadership from CCOs to ensure the responsible and ethical adoption of technology by their organizations and beyond. In the coming months, Page will explore this issue further and look for ways to help. In the meantime, CCOs should stay current on developments in AI and consult a diverse set of voices, including ethicists, when considering technology policies for their organizations.