AI for Communicators: Lessons on AI, PR, and Crisis Management from Peter Heneghan
July 30, 2025
Last Thursday and Friday, we spent two days diving deep into the future of communications with Peter Heneghan, founder of The Future Communicator. Peter isn’t just talking theory—he has lived through the evolution of media and technology, and his insights land exactly where communicators feel both excited and anxious about AI.
His opening message was simple but unforgettable: AI is changing everything, yet the fundamentals of communication—trust, relevance, and storytelling—remain the same.
From the very start, Peter pushed us to think differently. AI can now write copy, edit videos, generate images, and even clone voices in seconds. But all of that power means nothing if we don’t understand how to use it safely, ethically, and strategically. He reminded us that communicators are no longer just writers or spokespeople—we’re navigators in a world of hyper-speed information and deepfakes.
One of the most talked-about parts of the course was Peter’s AI cheat sheet. Tools like ChatGPT, Fireflies, Canva, Sora, Midjourney, and ElevenLabs are already transforming the way we work. But as he pointed out, 78% of employees are already bringing their own AI tools to work without guidance or policies. This “Shadow AI” world is full of opportunity—but also real risk to privacy, security, and brand reputation. Peter’s advice was clear: don’t ban these tools. Train your teams. Build guardrails. Turn risk into advantage.
A recurring theme over the two days was prompting. Writing a good prompt, Peter said, is the new power skill for communicators. Instead of asking AI to “write a press release,” you teach it to think like a strategist: give it a role, a scenario, and a clear objective. When we tested this in real exercises, the difference in the output was night and day. As Peter put it, “AI is only as smart as the brief you give it.”
Ethics, of course, ran through everything we discussed. Peter introduced his AI Trust Triangle—Transparency, Privacy, and Accountability. If any of these three are ignored, trust collapses. AI can assist in a crisis, but it can also make one worse if teams aren’t prepared to handle deepfakes, misinformation, and data risks.
Day two of the course was all about AI in crisis communications, and it was eye-opening. We looked at real scenarios, from a finance worker tricked by a deepfake CFO into transferring $25 million, to a viral concert clip that brought down a CEO. These stories weren’t just cautionary tales—they were proof that AI is no longer futuristic. It’s happening now.

The communicators who succeed will be those who combine human judgment, creativity, and empathy with AI-enabled speed and insight.
Peter closed the course with a reminder that stuck with everyone:
“Soft skills and human empathy are now at a premium. AI won’t replace communicators—but communicators who master AI will replace those who don’t.”
We left the course with a clear sense of both responsibility and excitement. AI isn’t something to fear—it’s a tool that, when used thoughtfully, makes us better at what we already do best: telling stories that matter and protecting the reputations we serve.
If you missed this edition, the good news is you have another chance. We’re hosting the next AI for Communicators Boot Camp on 13–14 November in New York City. It’s training by practitioners, for practitioners—led by experts from Amazon, Verizon, Stanford, IBM, and more. No fluff. No theory. Just tools, insights, and strategies that work.