Down in sunny Florida, between great sessions, ocean sunsets, and a well‑timed happy hour, Convene 2026 reminded me why this conference has become “home base” for Human Risk Management (HRM) practitioners.
It wasn’t just the weather or the venue—though both were excellent. It was the feeling of being surrounded by people who genuinely care about how humans experience security: CISOs, security awareness leaders, behavioral science nerds, and folks who’ve just recently had HRM added to their already full plates.
A 2026 Florida Sunset.
Convene has become my number one choice for a purely HRM‑focused event. It consistently brings together a mix you don’t often see in one place: C‑suite leaders, seasoned practitioners, and people brand new to the space, all sharing what’s working and what isn’t in their organizations. That mix matters, because risk lives everywhere—from the boardroom to the front line—and you need all of those perspectives in the same room to move the field forward.
What struck me this year was how global and diverse the community is becoming. I met people who had flown in from England and even from Alaska, which is not a quick hop to Florida. When folks travel that far to talk about “the human side” of security, you’re reminded this isn’t a niche concern anymore; it’s a core part of modern security strategy. Convene isn’t just a conference in our calendar—it’s one of the ways we invest in keeping our Human Risk Management practice grounded in what real teams are facing.
Unsurprisingly, AI was everywhere at Convene 2026. It showed up as both the shiny new problem we have to defend against and the shiny new tool we get to use. There were sobering discussions about deepfakes being used to target executives and high‑risk roles, and how convincing these attacks can be when they borrow a leader’s face and voice. At the same time, there was a lot of pragmatic excitement about using AI to accelerate content creation, personalize learning, and help small teams do more with the limited time and budget they have.
One of my favorite talks framed the real tension many of us feel: “How do we use AI without being replaced by it?” The answer that resonated most with me is simple but easy to overlook: AI doesn’t care about humans, context, or culture—we do.
The human risk practitioner still has to decide what’s appropriate, ethical, and effective. In our own work, AI has become a genuine accelerator—helping us get to first drafts and new angles faster—but we keep humans firmly in charge of the judgment calls and the storytelling. AI can propose copy, generate scenarios, or even produce draft training modules, but it doesn’t understand your internal politics, your risk appetite, or the emotional weight of that last incident your team went through.
One of the strongest themes running through hallway conversations was this: we can’t allow AI to distract us from the basics.
If we chase every new AI‑powered threat or tool at the expense of consistent, human‑centered fundamentals, we’ll lose ground.
The most thoughtful people I spoke with weren’t asking, “What can AI replace?” They were asking, “Where can AI give us back time so we can invest more in relationships, culture, and strategic communication?”
That’s exactly how we’re approaching our Human Risk Management services at Reveal Risk. We use AI to speed up the mechanical parts—drafting copy, brainstorming scenarios, sorting data—so that we can spend more of our energy on the work no model can do: listening to employees, co‑designing experiences with stakeholders, and aligning HRM to real business objectives.
Compared with a couple of years ago, one change really stood out: champions programs are no longer a rare experiment. When I first attended Convene, only a handful of attendees had active security champions programs running in their organizations. This year, far more people were not just planning one but actively running one and learning from it. That shift is important. It signals that more organizations are starting to meet people where they are—embedding security voices in different departments instead of broadcasting everything from a central silo.
Champions programs are where we see the theory of “human risk” turn into lived reality. When a champion in Finance tells you that a particular process doesn’t work in month‑end close, that’s not a failure—it’s insight. When a champion in Sales explains why a “simple” security nudge feels like friction in the middle of a live deal, that’s invaluable design feedback. As a consulting partner, we’re increasingly helping clients not just launch champions programs, but shape them into two‑way communication channels that surface what’s really happening in the business.
One of the most thought‑provoking sessions I attended was from Alex Panaretos, who challenged us to think more critically about phishing and just‑in‑time pop‑ups. In our field, we often treat these “nudges” as an obvious good: a little reminder here, a warning banner there, and behavior will improve. The session introduced the concept of “amygdala hijacking”—the idea that, in high‑stress, high‑stakes environments, these prompts can actually trigger stress responses and work against us.
As someone who sees the value in nudges but also cares deeply about the psychology behind them, that tension stuck with me. It’s a topic I’ll be expanding upon (check out my first thoughts here!), because it has serious implications for how we design HRM programs that support people under pressure instead of adding to it.
For our team, attending Convene isn’t just about collecting stickers or posting sunshine photos... though I’m open to that, too. It’s about being in the same rooms as the people wrestling with human risk every day, hearing what’s actually working, and pressure‑testing our own ideas against the realities they’re living.
The conversations we had in Florida—about AI as both risk and accelerator, about the maturing of champions programs, and about the psychological nuances of nudging people toward safer behavior—are already shaping how we design and deliver our Human Risk Management services. If you’re carrying HRM as a “side of desk” responsibility or trying to modernize a long‑standing awareness program, this is the kind of thinking we want to bring when we work with you: grounded in practice, energized by community, and always centered on the humans at the heart of security.