This is the second in a three-part series about the impact of generative AI on the communications profession.

The first thing most of us did when we got to try ChatGPT was play with it, a shiny new toy with fun parlor tricks. Ask it to explain the Russia-Ukraine conflict as if it were an episode of Friends, or to write a ballad about inflation in the style of Bob Dylan, and instantly, it does. On the more practical end, you can ask it to summarize a lengthy document, or to write an email, a thousand tweets or a business plan. Soon the days of generating all that material ourselves will be gone; we’ll forever have a first draft of everything, ready for editing.

[Images: AI-generated Bob Dylan inflation ballad; Friends analogy on the Russia-Ukraine conflict]

Writing is literal communication, of course, but it is also an exercise in thinking - about what one means to say, about the reader, about narrative, context and meaning, and about the desired effect of the words chosen. There’s something lost when we can skip that process; it becomes too easy to accept what the computer decides is relevant, too easy to be complacent about ideas, impact or nuance. 

That said, having a first draft of everything will save tons of time and should make us better editors. Communicators will spend more time thinking about what to ask for and how, and about the substance of the message and the audience, than about style, grammar and form. As I’ve heard a hundred times now: you won’t lose your job to the robot; you’ll lose it to the human who knows how best to work with the robot.

There’s an inherent problem, though, with relying on systems trained on everything written so far in human history: eventually, much of what these systems “learn” from will be text they have themselves created. At best, this will slow the evolution of their capabilities, make their output stagnant and entrench existing biases. But it may also place an even greater premium on genuinely original ideas. Though it can be argued that generative AI will stifle creativity, I think it’s more likely to unleash it. When creators aren’t constrained by the skills required to create - to use visual or video editing software, for example - they are bound only by their imagination. For better or worse, anyone will be able to bring their ideas to life.

But there’s a more mundane effect to all this that could have profound implications. It’d be hard to find someone who isn’t drowning in emails and meetings. I expect these AI tools will soon serve as our eyes and ears. Missed a meeting? Your AI can review the video or transcript and pull out the salient points and action items. Buried under your inbox? AI can read all your emails and provide a summary, and you can then direct it on how to respond to various threads. I won’t lie: that all sounds amazing. But are we entering a world where AIs communicate and attend meetings on our behalf? What is lost when we rely on these systems to listen to and speak with one another? At what point are the machines just talking to themselves?