As AI tools continue to transform how teams operate, communicators are rethinking not just what they do—but how they work.
Christina Twomey, Chief Communications Officer at S&P Global, kicked off a recent Collaboration Hour by sharing how her team is reimagining the way communications work gets done with AI, grounding the conversation in what’s possible today and what communicators should be preparing for next. Here’s a look at the key themes she surfaced.
What you should know:
- AI proficiency is becoming a baseline expectation: Many teams are now pushing toward universal AI capability, with some reaching over 95% proficiency completion through intentional training programs, clear standards, and hands-on practice. The biggest unlock? Treating AI like a required skill, not an optional tool.
- Create a simple proficiency ladder (Beginner → Intermediate → Power User) and require each level for specific workflows.
- Efficiency gains are real: Tasks that once took a full day now take 20 minutes using internal, firewalled LLMs and automated workflows. The real win isn’t speed; it’s the ability to redirect that time into more strategic, creative work.
- “Excellence at scale” is emerging as a new mandate: AI isn’t just boosting efficiency inside comms teams; it’s also letting best practices spread beyond the function. Self-service chatbots and reusable templates help non-comms employees produce stronger outputs without hand-holding, while freeing communicators for bigger priorities.
- Build a small library of “always-needed” tools (e.g., announcement drafts, briefing templates) and make them accessible beyond your team.
- Optimize content for humans and AI agents: As more people discover information through AI-powered systems, content needs to be structured in a way LLMs can interpret reliably. That means clear language, strong metadata, consistent formats, and eliminating ambiguity so content is “LLM-ready.”
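One way to make “LLM-ready” concrete is a lightweight content check. The sketch below is illustrative only (not something described in the session): it assumes a document is represented as a dict with hypothetical `metadata` and `body` fields, and flags the kinds of gaps — missing metadata, ambiguous placeholders, no clear heading — that make content hard for an AI system to interpret reliably.

```python
# Minimal sketch of an "LLM-ready" content check. The required field names
# below are illustrative assumptions, not a standard.

REQUIRED_METADATA = {"title", "summary", "audience", "last_updated"}

def llm_ready_issues(doc: dict) -> list[str]:
    """Return problems that would make this content ambiguous to an LLM."""
    issues = []
    missing = REQUIRED_METADATA - doc.get("metadata", {}).keys()
    if missing:
        issues.append(f"missing metadata: {sorted(missing)}")
    body = doc.get("body", "")
    if "TBD" in body or "???" in body:
        issues.append("ambiguous placeholder text in body")
    if not body.strip().startswith("#"):
        issues.append("body should open with a clear heading")
    return issues

doc = {
    "metadata": {"title": "Q3 Update", "summary": "Quarterly recap",
                 "audience": "all employees", "last_updated": "2025-01-15"},
    "body": "# Q3 Update\nHighlights from the quarter...",
}
print(llm_ready_issues(doc))  # → []
```

A check like this could run as part of a publishing workflow, so consistency doesn’t depend on every author remembering the rules.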
- Human judgment remains non-negotiable, especially around risk and reputation: AI may act as a “junior intern” for first drafts and task automation, but high-stakes areas—like reputation monitoring, crisis response, and interpreting misinformation or deepfakes—require deliberate human oversight. AI amplifies capacity, not accountability.
- Establish a clear rule: AI may start the work, but humans finish it. Require human review for all external-facing content.
- The communicator’s role is expanding: As organizations adopt AI broadly, comms is increasingly expected to help shape the company’s external AI narrative, advise on ethical risks, and prepare for new threats such as synthetic media or deepfake-driven misinformation.
- Build a cross-functional AI working group (Comms, Legal, Technology, Risk) to coordinate governance, messaging, and rapid-response protocols.
With the floor open, participants surfaced plenty of practical experience and insight. Here are the key takeaways from that conversation.
- Measure adoption, engagement, and impact: Tracking success goes beyond task completion. Metrics like course completion rates, chatbot usage, and overall employee adoption and sentiment show how well AI tools are being integrated. Efficiency gains are also directly measurable, and the time they free up can be redirected into more strategic initiatives.
- Use a mix of quantitative metrics (usage stats, task times) and qualitative feedback (employee sentiment, adoption stories) to evaluate AI integration.
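The quantitative side of that mix can start as a couple of simple ratios. A sketch with hypothetical numbers (the only figure taken from the session is the full-day-to-20-minutes example):

```python
# Illustrative sketch of the quantitative metrics mentioned above:
# training completion rate and per-task time savings. Numbers are made up,
# except the full-day (480 min) to 20-minute task example cited in the talk.

def completion_rate(completed: int, enrolled: int) -> float:
    """Share of staff who finished AI proficiency training."""
    return completed / enrolled

def time_saved_pct(before_min: float, after_min: float) -> float:
    """Percent reduction in task time after AI-assisted workflows."""
    return (before_min - after_min) / before_min * 100

print(f"completion: {completion_rate(48, 50):.0%}")   # → completion: 96%
print(f"time saved: {time_saved_pct(480, 20):.0f}%")  # → time saved: 96%
```

Even rough numbers like these make it easier to pair usage statistics with the qualitative adoption stories.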
- Equip future communicators: Students and junior staff should learn how to collaborate effectively with AI, experiment creatively with tools, and anticipate which traditional tasks may be automated. The focus shifts from executing routine work to contributing strategic value.
- Build curricula or training modules that combine hands-on experimentation with practical scenarios, encouraging curiosity and problem-solving.
- Overcome skepticism and resistance: Initial doubt about AI—whether due to job security concerns or uncertainty about its longevity—is common. Transparency, trust-building, and leading by example are key to adoption. Interestingly, junior staff often become early champions and may even “reverse-mentor” more senior colleagues.
- Balance AI efficiency with human judgment: AI can feel like “cheating” to some team members, but the real value comes from using it to free up time for strategic and creative work. Teams should reward productivity gains rather than view them as cutting corners.
- Maintain caution in high-stakes areas: While AI is broadly useful, it’s not a substitute in critical domains like reputation monitoring or crisis management. Human oversight remains essential to manage nuance, risk, and potential reputational impact.
- Establish clear boundaries for AI use, reserving sensitive tasks for curated human-led processes.
Check this out:
- Educating the Human Advantage – Just issued last week, this article is a great primer for students on preparing for AI-driven workplaces: Read “The Human Advantage”
- The GenAI Transformation of the Communications Function – BCG explores how generative AI is reshaping comms teams and workflows: Read the report
- AI in Communications – Frank Shaw from Microsoft shares practical insights on how AI is changing the field: View on LinkedIn
- Navigating AI “Workslop” – A piece on fast-tracking success in human-AI collaboration and avoiding common pitfalls: Read on Fast Company
- Virtual Chief of Staff (VCoS) – Explore how communications leaders can build a virtual chief of staff function: View on LinkedIn
Continue the conversation with Christina (Christina.Twomey@spglobal.com).