When Certes dropped their announcement last week, it got me thinking about a gap that most data protection and cyber resilience teams haven’t fully reckoned with yet — and it’s not the one you’re probably thinking of.

Server-to-server data flows are largely unprotected in most environments, and sophisticated adversaries are already harvesting that traffic today with the intention of decrypting it later. Your backup strategy doesn’t address that. Your DR plan doesn’t either.

So who owns that problem — and have your teams even asked the question yet?

Video transcript

To all my data protection and cyber resilience colleagues: when was the last time you and your teams had a serious conversation about your server-to-server data flows?

Most of us in this space spend a lot of time thinking about how data gets backed up, how it gets recovered, and increasingly, how we prevent ransomware from encrypting it into oblivion. Those are all legitimate and important questions. But there’s a gap in that thinking that I don’t hear enough people talking about.

When data moves between servers — workload to workload, application to application, across your hybrid and multi-cloud environments — that traffic is largely unprotected. Not because anyone made a bad decision, but because the tools we’ve traditionally used stop at the network perimeter or the cloud provider boundary. What’s happening inside that environment, between your own systems, has been a bit of an assumed safe zone.

It’s not safe. And the threat isn’t just today’s ransomware actor grabbing what they can and running. Sophisticated adversaries are harvesting that traffic today, with the intention of unpacking and using it later — potentially sooner than most organizations are planning for. Your backup strategy doesn’t protect you from that. Your DR plan doesn’t address it either. This is a different problem.

Looking at what Certes announced around data-in-motion protection got me thinking about how few organizations have even asked this question internally.

So, here’s what I’d be asking if I were in an enterprise security or resilience role right now. What is actually protecting our server-to-server flows? And if the answer is “not much,” then who owns that problem? It sits awkwardly between the network team, the security team, and the data protection team, and it tends to fall through the cracks.
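One concrete way to start answering that question is to inventory your internal flows and triage each one by its transport protection. Here’s a minimal sketch of that triage logic; the flow records, field names, and buckets are illustrative assumptions on my part, not any vendor’s product or schema:

```python
# Hypothetical sketch: triage a server-to-server flow inventory by
# transport protection. In practice the records would come from netflow
# data, a service mesh, or packet captures -- these are made up.

def classify_flow(flow):
    """Bucket a flow by how well it's protected in transit."""
    tls = flow.get("tls_version")
    if tls is None:
        return "unprotected"   # plaintext: harvestable as-is
    if tls in ("SSLv3", "TLSv1", "TLSv1.1"):
        return "legacy"        # deprecated protocol versions
    return "protected"         # TLS 1.2+ (still check the key exchange)

flows = [
    {"src": "app-01", "dst": "db-01",    "port": 5432, "tls_version": None},
    {"src": "app-01", "dst": "cache-01", "port": 6379, "tls_version": None},
    {"src": "web-01", "dst": "app-01",   "port": 8443, "tls_version": "TLSv1.3"},
    {"src": "etl-01", "dst": "db-01",    "port": 5432, "tls_version": "TLSv1"},
]

for flow in flows:
    print(f'{flow["src"]} -> {flow["dst"]}: {classify_flow(flow)}')
```

Even the “protected” bucket deserves a second look for harvest-now-decrypt-later risk: a flow whose key exchange isn’t forward-secret can still be recorded today and attacked later.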

And for those of you serving mid-market organizations through managed services: your clients are even more exposed here, because they’re typically relying on you to flag these gaps before they become material events. Most folks likely don’t have good answers to this yet, which is an indicator that more of us ought to be asking the question.

Leave your thoughts below
