
You Paid $50K for a Cybersecurity Maturity Assessment. You Got a Number, and Someone Else’s Shopping List.

This article originally appeared on February 24, 2026, as the first edition of The Risk Realist Newsletter.


I’m going to say something that might make some of my peers in the consulting world uncomfortable: the cybersecurity maturity assessment market is broken.

Not because the frameworks are wrong. NIST CSF 2.0 is a solid option.

Not because the people doing assessments are bad. Most assessors are competent and well-intentioned.

It’s broken because the incentive structures that drive the market produce outputs that often serve the assessor more than the organization being assessed.


I’ve seen this on all sides of the table. On the corporate side, I held cyber leadership roles where I was accountable for driving and sourcing an external cyber maturity assessment. I’ve been the person coordinating, receiving, and trying to action these assessments. And yes, I’ve been the one sitting in conference rooms watching the consultants walk through 80 slides of gap analysis that told me everything that was wrong and nothing of substance about what to do next or how to do it.

On the consulting side, I’ve held nearly full-time interim CISO roles where my first job was figuring out what to do with the last assessor’s report, which had often resulted in two years of cyber tool purchases and integration projects that lacked process, a plan for scale, or a comprehensive vision.

And lastly, on the consulting service side, I’ve seen the temptation to protect project margins by delegating work to cheaper resources, or to chase efficiency by reusing “accelerator” deliverables that teams forget to tailor to the client’s reality.

Here’s what I’ve learned:

The problems live on both sides of the table: in the focus, depth of experience, and bias of the consulting provider, and on the corporate side, in what happens (or doesn’t happen) after the consultants leave.

The six failure modes we need to talk about:

1. The VAR Play.

You call your technology reseller for a “NIST CSF maturity assessment.” They look like the budget-friendly play (pay no attention to the reason why). They send a consultant or two. The consultant identifies gaps. Shockingly, every gap maps tightly to a product they sell. Your assessment just became a purchase order with a professional services wrapper. The real incentive isn’t finding what most needs improvement; it’s finding what improves their bottom line (i.e., generates revenue for their software and hardware portfolio). You didn’t get an assessment. You got a sales pipeline beachhead disguised as professional advice.

2. The MSP Limitations.

Much of the small business and lower-mid-market relies on an IT Managed Service Provider (MSP), which often offers a cybersecurity assessment as part of its bundle. But MSPs are infrastructure generalists. They can be good at what they do: managing uptime, patching, and the helpdesk. Their cyber depth often stops at the bolt-on tools: MDR, endpoint protection, and vulnerability scanners are the typical hat trick. They typically don’t focus on developing robust, practitioner-minded strategy, on cyber program processes, on whether your governance model works, on whether your program aligns with business risk, or on whether your workforce culture is your biggest vulnerability. You get green checkmarks for tools and a blind spot for everything else that actually matters.

3. The Shelf Report.

A consulting firm delivers a thorough NIST CSF assessment. It’s 80 pages. It says you’re a 2.3 out of 5. It lists 47 gaps across all six functions. And then… nothing. No prioritized roadmap. No budget alignment. No “here’s what to do first with the money you actually have.” No “here’s HOW to tangibly improve maturity in your deficient areas, with clear scope and objectives.” The report is technically accurate but practically useless. Six months later, someone asks, “What happened with that assessment?” and nobody has a good answer.

4. The Proprietary Lock-In.

Some firms skip NIST CSF entirely and use their own scoring methodology. Ooh! Appealing! Something unique and proprietary, just for you! Custom dashboards, branded maturity models, and unique scales. The problem: you can’t benchmark against peers (beyond the redacted peers they showed you during the assessment), you can’t compare results across future assessors (unless you go back to the same firm in two years), and you can’t manage the findings without them. Want a second opinion? The frameworks don’t translate. Want to switch firms? Start over. You didn’t buy an assessment; you bought a dependency. They are sticky… and you are stuck.

5. The Big Firm Bait-and-Switch.

A partner from a major consulting firm pitches your assessment (probably to your boss first). They’re impressive! Deep experience, polished methodology, name-brand credibility. You sign the SOW. And then the partner disappears. The work is delivered by a team of associates and analysts. Sure, they are smart people, but often 2-3 years out of school, running a standardized methodology they were well trained on but haven’t lived. They ask good questions from a checklist. They don’t ask the follow-up questions that only come from experience. The deliverable is polished / well-formatted / expensive. Pay no attention to the fact that it was built from a template that sometimes still contains details from the last client it was used on. If AI helped streamline and cut some corners, you may spot the issues even faster. You paid for the partner’s judgment; you got the associate’s execution. And nobody connected the findings to YOUR business, YOUR risk appetite, or YOUR budget reality.

6. The Assessor Roulette.

Even when the framework and the intent are right, maturity scoring is disturbingly subjective. Three qualified assessors evaluating the same environment will produce three different scores. One assessor’s “partially implemented” is another’s “largely in place.” Are they scoring the tools, tech, and knowledge, or how far those tools have been adopted across the organization? When the criteria for each maturity level aren’t precisely defined, everyone invents their own rubric. Your board-level maturity score depends on who held the clipboard, not on how mature your program actually is.

The real questions that often go unasked (and unanswered)

Your board shouldn’t care that you’re a 2.3 on a NIST CSF maturity scale. They need to care about: How much risk do we carry? What happens if we get hit? What should we spend, on what, in what order? What progress will we make with the efforts? And how do we know it’s working?

A maturity score (by itself) can’t answer ANY of those questions. It tells you where you are on a spectrum. It doesn’t tell you what it means in dollars, business impact, or operational priority. Organizations are spending real money on assessments and still can’t connect the results to a budget conversation, a board narrative, or a three-year roadmap.

The assessment answered a single question (“what’s our NIST CSF score?”) while ignoring the broader questions everyone actually needs answered: “What should we do next? What will we get from it? How can we sanction and manage the work? What can we realistically get done with our resources? And what will it cost?”

What “good” actually looks like

The organizations I’ve seen get this right share a few things in common — and none of them involve a fancier scoring methodology:

1. Use a standard framework, not a proprietary one. Your results should be transparent/benchmarkable/portable. You should be able to take them to any qualified advisor, and they should make sense. If your assessment only works inside one firm’s ecosystem, that’s a feature for them and a liability for you.

2. Demand a roadmap, not just a report. An assessment that tells you what’s wrong without telling you what to do about it (in what order, with what resources, with actionable details of what actually needs to be done) is only half the job. If your output is a list of problems without a sequenced, achievable, and actionable plan to address them, you received a diagnosis with no treatment plan.

3. Insist on business translation. Your board doesn’t need a heat map. They need: here’s where we’re exposed, here’s what it means for the business, here’s what we recommend, and here’s how we’ll know it’s working. If the assessment can’t produce THAT conversation, it’s a technical exercise pretending to be a strategic one.

4. Find someone accountable for what comes AFTER. A maturity assessment should be the beginning of a strategic relationship, not the end of a consulting engagement. The organizations that actually improve are those where someone owns execution (week over week, quarter over quarter), managing the people/process/technology changes required to sustain improvement. Because improving a maturity score isn’t a technical project. It’s an organizational change management initiative. Most assessors just don’t think that way.

If your last assessment gave you a number and a list of problems, but no plan, no priorities, and no one accountable for what comes next, that wasn’t an assessment. That was a receipt.

About the author
Aaron Pritz
Aaron Pritz is a veteran cybersecurity professional with experience spanning IT, Six Sigma, privacy, insider threat, and risk management. He is the CEO and Co-Founder of Reveal Risk, a boutique cybersecurity, privacy, and risk consultancy, and has over 20 years of experience in the field. He held various leadership roles in the pharmaceutical industry for 17 years before pivoting to a client advisory role and co-founding Reveal Risk. He applies deep knowledge, his industry networks, and creativity to solve some of the toughest challenges in the field.