Most organisations believe they have a crisis communications capability. They have Microsoft Teams. They have email. They have a phone list somewhere, last updated in 2022. Some have a WhatsApp group for the senior leadership team, created hurriedly during the last major incident and never properly governed since.
That is not a crisis communications stack. That is a collection of dependencies - and dependencies fail precisely when the pressure is highest.
The uncomfortable truth is that most organisations have built their crisis communications capability on top of the very infrastructure that will be at the centre of the next incident. When the incident hits, the comms and the crisis are sitting in the same place. And when that infrastructure goes down - or worse, when it is actively compromised - your ability to coordinate a response goes with it.
Building a proper crisis communications stack means thinking differently. Not more tools, but the right architecture.
The failure pattern is consistent, and it plays out in almost every major incident post-mortem.
A cyber attack or infrastructure failure takes down primary systems. The response team reaches for their usual channels - email, Teams, Slack - and finds them unavailable, degraded, or untrustworthy. Someone creates a WhatsApp thread. Someone else starts calling people directly. A third stream of communication forms over personal email. Within an hour, there are four parallel conversations with different information, no single source of truth, and no way to confirm who has seen what.
The problem is compounded when organisations rely on Managed Security Service Providers. If your MSSP operates on the same cloud environment or network segment that has been compromised, you may lose your security partner and your communications infrastructure simultaneously. This is not a hypothetical risk - it is an increasingly common feature of sophisticated attacks.
The root cause is almost always the same: crisis communications was never treated as its own infrastructure. It was assumed to be covered by existing tools. It was not.
A well-designed crisis communications stack is not a single platform or a list of features. It is a set of distinct capabilities that together form a resilient, coherent system - one that can operate independently of whatever else is failing around it.
An out-of-band communications environment is the non-negotiable foundation. It operates entirely separately from your primary systems - different network, different hosting, different authentication path - and it cannot be taken down by the same incident that is triggering your crisis response.
Without this, everything else is built on sand. If your crisis comms tool relies on the same Microsoft 365 tenant that ransomware has just encrypted, or the same network that an attacker is currently traversing, you have no foundation at all. The independence of your crisis communications infrastructure is not a nice-to-have. It is the whole point.
Once you have an independent environment, the communications within it must be trustworthy. This means encrypted channels, verified identities, and no reliance on consumer applications or corporate SSO that may itself be compromised during an incident.
The problem with WhatsApp, Signal, or personal messaging apps is not just governance - it is that you cannot verify, in the heat of an incident, that the person messaging you is who they claim to be. Sophisticated attackers increasingly target the human communications layer of an incident response, impersonating executives or IT leads to misdirect the response team. Authenticated messaging closes that vector.
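To see why this closes the impersonation vector, it helps to look at the mechanism underneath: ordinary public-key signing. The sketch below is illustrative only - Python, using the open-source `cryptography` library, with the enrolment step assumed rather than drawn from any particular vendor's product:

```python
# Minimal sketch: verifying that a crisis message really came from a
# pre-enrolled sender, using Ed25519 signatures. Illustrative only -
# not a substitute for a managed platform.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# Enrolment (done in advance, before any incident): each crisis-team
# member generates a key pair; the public key is registered in the
# out-of-band environment.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# During an incident: the sender signs each message with their private key.
message = b"Activate the crisis team. Bridge call at 02:30."
signature = private_key.sign(message)

# The receiving side checks the signature against the enrolled public key.
# An impersonator without the private key cannot produce a valid signature.
def is_authentic(pub: Ed25519PublicKey, msg: bytes, sig: bytes) -> bool:
    try:
        pub.verify(sig, msg)
        return True
    except InvalidSignature:
        return False

assert is_authentic(public_key, message, signature)
assert not is_authentic(public_key, b"Stand down, all clear.", signature)
```

The design point is the enrolment: keys are exchanged before the incident, over a channel the attacker has never touched, so verifying a sender during the incident does not depend on trusting the compromised network.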
The ability to reach people quickly and at scale is a different capability from secure team messaging, and it needs to be treated as such.
When an incident escalates, you may need to alert hundreds or thousands of staff, contractors, or stakeholders simultaneously - across SMS, push notification, and voice - without manual intervention and without relying on systems that may be down. Confirmation of receipt matters too. Knowing that a message has been delivered and read is operationally significant when you are making time-critical decisions.
Manual call trees are not a substitute. They are slow, error-prone, and dependent on individuals being reachable and willing to cascade information accurately under pressure.
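Stripped to its essentials, this capability is a fan-out plus a per-recipient state machine. A minimal sketch, assuming hypothetical channel names and stubbed gateways rather than any real SMS or push integration:

```python
# Conceptual sketch of automated fan-out with receipt tracking, using
# only the standard library. A real platform would call SMS, push, and
# voice gateways where the stub comment appears.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class Status(Enum):
    SENT = "sent"
    DELIVERED = "delivered"
    READ = "read"


@dataclass
class Alert:
    recipient: str
    channel: str            # e.g. "sms", "push", "voice"
    status: Status = Status.SENT
    history: list = field(default_factory=list)

    def record(self, status: Status) -> None:
        # Every state change is timestamped, so the response team can
        # see in real time who has actually received and read the alert.
        self.status = status
        self.history.append((datetime.now(timezone.utc), status))


def broadcast(recipients: list[str], channels: list[str]) -> list[Alert]:
    """Fan one message out to every recipient on every channel at once -
    no call tree, no manual cascade."""
    alerts = [Alert(r, c) for r in recipients for c in channels]
    for alert in alerts:
        alert.record(Status.SENT)   # a real system would call a gateway here
    return alerts


alerts = broadcast(["ops-lead", "comms-lead", "legal"], ["sms", "push"])
unconfirmed = [a for a in alerts if a.status is not Status.READ]
print(f"{len(unconfirmed)} of {len(alerts)} alerts awaiting read confirmation")
```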
A crisis response is not just a communications task - it is a decision-making task, conducted under time pressure, with incomplete information, often by people who are not in the same location.
The crisis team needs a shared space to work: to review documents, assign actions, track decisions, and maintain a common operating picture. This capability needs to exist within the secure, out-of-band environment - not bolted on from outside it. Using a Google Doc or a shared drive that sits on potentially compromised infrastructure defeats the purpose entirely.
Video conferencing, collaborative whiteboards, and real-time document access are not luxuries. They are the difference between a crisis team that functions cohesively and one that is operating from fragmented, out-of-date information.
Everything that happens during a crisis response should be logged, timestamped, and retrievable.
This matters for three reasons. First, for post-incident review - understanding what was known, when, and what decisions were made on the basis of that information is essential for learning and improvement. Second, for regulatory reporting - under frameworks like DORA and the FCA's updated cyber reporting requirements, organisations face obligations to demonstrate how they responded and communicated during a significant incident. Third, for legal protection - in the event of litigation, a clear, verifiable record of decision-making is considerably better than reconstructed memory and incomplete email threads.
An audit trail is not simply a log. It includes read receipts, decision records, message timestamps, and evidence of who was notified of what and when.
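The distinction can be made concrete. A bare log can be edited after the fact; a credible audit trail resists that. Below is a minimal sketch - field names are illustrative - of a tamper-evident record in which every entry is timestamped and chained to the hash of the one before, so retrospective edits break the chain:

```python
# Sketch of a tamper-evident audit record. Each entry carries a UTC
# timestamp and the hash of the previous entry; editing any earlier
# entry invalidates every hash that follows it.
import hashlib
import json
from datetime import datetime, timezone


def append_entry(log: list[dict], actor: str, event: str, detail: str) -> dict:
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,          # who acted or was notified
        "event": event,          # e.g. "decision", "notification", "read"
        "detail": detail,
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)
    return body


log: list[dict] = []
append_entry(log, "crisis-lead", "decision", "Isolate finance VLAN")
append_entry(log, "all-staff", "notification", "Mass alert issued via SMS")

# Verification: recompute every hash and confirm the chain is intact.
for i, entry in enumerate(log):
    body = {k: v for k, v in entry.items() if k != "hash"}
    recomputed = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    assert recomputed == entry["hash"]
    assert body["prev_hash"] == (log[i - 1]["hash"] if i else "genesis")
```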
Five distinct capabilities does not mean five separate tools. A mature crisis communications stack is a coherent system, not a patchwork.
The test is simple: can your team activate the full capability when your primary systems are dark? If the answer involves any dependency on infrastructure that could be affected by the incident - cloud tenants, corporate VPNs, shared identity providers - then you have a gap.
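That test can be made almost mechanical. A toy sketch - every component and infrastructure name here is invented for illustration - of checking a stack's declared dependencies against what an incident could plausibly take down:

```python
# Toy dependency check for the "dark systems" test: list what the
# crisis stack relies on, list what an incident could take down, and
# flag any overlap. All names are illustrative.
crisis_stack_dependencies = {
    "hosting": "independent-cloud",
    "identity": "dedicated-idp",
    "network": "public-internet",
}
primary_infrastructure = {"corp-m365-tenant", "corp-vpn", "corp-sso"}

shared = {
    component: dependency
    for component, dependency in crisis_stack_dependencies.items()
    if dependency in primary_infrastructure
}
if shared:
    print(f"Gap: crisis comms depends on primary infrastructure: {shared}")
else:
    print("No shared dependencies - the stack can run while primary systems are dark.")
```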
There are four questions worth asking of any stack:
Can it be activated independently? Your out-of-band environment must be reachable without touching your primary infrastructure.
Is it tested regularly? A capability that exists only in documentation is not a capability. Regular exercises - including scenarios where primary systems are assumed unavailable - are what separate a plan from a practice.
Can it scale? A small incident might involve a five-person crisis team. A major ransomware event or operational disruption might require coordinating hundreds of people across multiple sites. The stack needs to handle both.
Does it support your compliance obligations? If your sector requires regulatory notification within specific timeframes, your communications infrastructure needs to support that - with the evidence to prove it.
The contrast is easiest to see side by side:

| Bad | Good |
|---|---|
| Crisis comms run on the same infrastructure as the incident | Independent out-of-band environment, fully separate |
| WhatsApp group for the leadership team | Encrypted, authenticated platform with verified identities |
| Manual call trees for staff notification | Automated mass alerting across SMS, push, and voice with read confirmation |
| Shared documents in a potentially compromised environment | Secure document access and collaboration within the isolated environment |
| No post-incident record | Full audit trail with timestamps, read receipts, and decision logs |
| Untested capability | Regular exercises, including simulated primary-system failure |
The framing matters here. Crisis communications infrastructure is not a line item in the IT budget. It is a strategic resilience asset - one that determines whether an organisation can function and communicate under the exact conditions that most threaten it.
Regulators increasingly see it that way. DORA requires financial entities to demonstrate operational resilience including communications continuity. The FCA's tightening cyber reporting rules place obligations on firms around how they communicate during and after incidents. Martyn's Law, now moving through the UK legislative process, places new duties around emergency communications for venues and events. The direction of travel is clear: regulators expect organisations to have thought carefully about how they communicate when everything else is under pressure.
Organisations that invest in this before an incident recover faster, communicate more credibly with stakeholders and regulators, and face fewer downstream legal and reputational consequences. Those that try to improvise with consumer tools and compromised infrastructure find that the communications failure compounds the operational failure.
Having a crisis communications stack on paper is not the same as having one that works at 2am on a Sunday when your primary systems are dark, your IT team is scattered, and the clock is already running on your regulatory notification window.
The question is not whether your organisation could communicate in a crisis. It is whether you have built the infrastructure to ensure that it can.