Thursday, April 16, 2026

The Narrowing Path for the Generalist

There's a widening gap in the managed services market that's easy to miss if you only follow the topline numbers.

Industry revenue keeps growing at a healthy pace, and by most measures the market looks strong. At the same time, the number of providers appears to be flattening — or quietly declining. Those two facts can coexist, but they don't affect everyone equally.

M&A brokers I speak with confirm the pattern from a different angle. They're finding it harder to locate middle-market MSPs to acquire: firms that are big enough to be attractive but not yet at scale. The reason is simple: that tier has already been bought. The shrinking isn't always a sign of struggle; much of it is consolidation. Firms that reached a certain size became attractive targets and were absorbed before they could grow further, and in some segments that consolidation has moved faster than new firms are growing into the tier. To replenish it, you either wait for smaller firms to grow up or change the economics of getting there.

What I keep hearing in conversations with MSP operators is that the "comfortable middle" is getting harder to occupy. The space where a generalist MSP with a few dozen employees could grow steadily, maintain margins, and avoid extreme bets feels narrower than it used to. That doesn't suggest a broken market. It suggests one where the middle ground requires more intention to hold.
Where the Market Is Pulling Apart

What this keeps reminding me of is a barbell. On one end are scaled providers. These are the national or PE-backed firms that have largely industrialized IT delivery. They're not just selling support; they're selling a repeatable, automated service model. Scale allows them to absorb rising security costs, invest heavily in tooling, and spread specialized talent across a wide customer base in ways that are difficult for smaller firms to replicate.

On the other end are focused specialists. These are smaller firms that have stopped trying to be everything to everyone. Instead, they've gone deep into a vertical, a regulatory environment, or a specific operational problem. Their differentiation isn't breadth; it's relevance.

What seems to be under more strain is the space between those two ends: the generalist MSP with traditional overhead, familiar tooling, and a value proposition that's harder to articulate in a crowded market.

There's a counterargument worth taking seriously: the generalist who delivers genuinely excellent service (responsive, personal, high-touch) may still have a durable position, especially with clients who are themselves in service businesses. Large providers, for all their efficiency, often can't replicate that. The question is whether "great service" is enough to sustain pricing and margin as baseline expectations keep rising, or whether it needs to be paired with something else.

How the Pressure Shows Up

For MSPs operating in that middle ground, the squeeze doesn't usually arrive all at once. It shows up in subtle, compounding ways.

First, the baseline keeps rising. Capabilities that used to be profitable add-ons (advanced security, compliance support, monitoring depth) are increasingly expected as part of the core service. Delivering them well requires investment, and without scale, that investment eats directly into margin.

Second, talent is harder to secure and harder to keep.
The competition isn't just local anymore. Enterprise organizations and remote-first vendors can offer higher pay, narrower roles, or clearer career paths, pulling experienced engineers away from smaller firms that rely on a few key people.

And then there's efficiency. Larger providers are beginning to use automation and AI-driven tooling to handle routine work in ways that change the cost curve. Documentation that used to take a technician fifteen minutes gets generated automatically. Triage that required a senior eye can now be handled by systems that route and prioritize with reasonable accuracy. First-touch support (password resets, basic troubleshooting) is increasingly handled without a human in the loop at all.

Smaller, human-centric firms can still deliver excellent service, but the economic gap is starting to show. It's not just the cost of the tools; it's the unbillable hours required to manage them and the liability that now sits squarely on the provider's shoulders. None of this is catastrophic. But it does make operating without a clear direction more difficult over time.

Choosing a Direction Matters More Than Ever

I don't see this as a call to panic. The market seems to be rewarding clarity more than it used to. Some firms will lean into scale, investing heavily in automation and standardization to compete more broadly. Others will double down on specialization, using deep expertise to justify pricing and defend relationships. For some, the most rational move may be to join a larger platform, trading independence for access to tools, talent, and operational leverage.

The question I keep coming back to: if the technology is no longer the differentiator, what is? For some, it's scale. For others, specialization. And for a few, it may still be the quality of the relationship itself.



Friday, April 10, 2026

Why I Don't Do RFPs - and Why Most MSPs Shouldn't Either


Requests for Proposals exist for a very specific purpose, but they are almost always used far outside of that context.

At their core, RFPs are a procurement mechanism designed for large organizations, especially those operating in regulated environments. Government agencies are the classic example. In those cases, RFPs serve a compliance function: documenting requirements, ensuring fairness, and creating a defensible comparison process across multiple bidders.

Most private businesses do not operate under those constraints. As organizations get smaller, the rationale for RFPs weakens quickly. In many small and mid-sized businesses, RFPs are written by people without deep subject matter expertise. These organizations are usually very clear about the problems they face, but far less certain about what the right solution should look like. That uncertainty is precisely why they are seeking outside expertise in the first place.

The flaw in the RFP process is that it asks buyers to define solution criteria before they truly understand the solution. Vendors are asked to respond not just to the problem, but to a pre-determined vision of how the problem should be addressed. Those assumptions get locked into the document early, often based on partial knowledge or outdated thinking, and everything downstream is constrained by them. The result is a buying process that optimizes for conformity rather than understanding.

This is where the process breaks down. In theory, RFPs are meant to enable apples-to-apples comparisons. In practice, that level of formal comparison is rarely necessary in SMB buying decisions. Most of the time, there is a single decision-maker involved, maybe two, occasionally three. The vendors competing for the work are often peers, not massive enterprises with radically different delivery models.
The decision ultimately comes down to confidence, trust, and the belief that one provider understands the business better than the others. Layering a rigid, document-heavy procurement process on top of that reality rarely improves outcomes, and often makes them worse. It replaces meaningful conversation with paperwork. It prioritizes compliance with a template over insight. It rewards vendors who are good at responding to RFPs, not necessarily those who are best equipped to solve the problem.

Worse, RFPs actively undermine good solution selling. Effective engagements are collaborative. They require listening, iteration, and creativity. Providers bring their expertise to the table, challenge assumptions, and adapt solutions as new information emerges. RFPs discourage all of that. They freeze requirements too early, impose unnecessary boundaries, and make deviation feel like risk rather than value. They also add time and complexity to sales processes that don't need either.
When the RFP Isn't the Real Decision

Here's the part many service providers miss: even when an organization issues an RFP, it often isn't mandatory in the way people assume. In many private businesses, RFPs are used out of habit, internal optics, or a desire to appear "thorough," not because the organization is legally bound to the process.

In those situations, what often works better is opting out of the document exchange and offering a discovery-led conversation instead. Most business owners are open to this because it shortens the decision cycle, replaces ambiguity with clarity, and moves them more quickly toward an outcome they actually care about. When you offer a way to get to the solution faster and with more clarity, you aren't being "difficult"; you're showing leadership.

How to Pivot the Conversation

If you want to break the cycle, you need to offer a superior alternative: not by refusing outright, but by reframing what value actually looks like.

One effective approach is to lead with expertise. Rather than reacting to the RFP line by line, you can express concern that a static document may obscure the best solution. By proposing a short working session (something interactive, like a whiteboard discussion) you signal that your value lies in judgment and problem-solving, not document production. The implicit message is that assumptions should be challenged before money is spent.

Another angle is risk mitigation. Many experienced providers operate under an informal rule against prescribing before diagnosing. Framing your response around that principle allows you to position a discovery conversation as the responsible choice, not a deviation from process. Instead of submitting a speculative quote, you're offering a clearer roadmap based on actual understanding of the problem.

A third approach emphasizes speed to value. Formal RFP processes often add weeks, or even months, to projects that are already urgent.
By offering a time-boxed working session with a clear exit if there's no fit, you respect the buyer's time while making it clear that results matter more than paperwork. In each case, the goal isn't to reject the RFP outright. It's to redirect the buyer toward a conversation that produces clarity faster than a document ever could.

Why "Breaking the Rules" Works

People routinely abandon their own procurement processes when they encounter someone they trust. Relationships, confidence, and perceived competence outweigh procedural purity far more often than vendors realize. This is where experienced MSPs differentiate themselves. Rather than competing on how well they fill out a spreadsheet, they compete on clarity and insight. You demonstrate value by engaging, not by complying.
If you can't get the decision-maker to agree to a conversation, treat that as a qualification signal, not a sales obstacle. It usually indicates they value process more than partnership, which rarely leads to a healthy client relationship.

My Rule: If There's an RFP, I'm Still Selective

I'm selective about engaging with RFP-driven deals. Every bid carries real opportunity cost, and the wrong buying process is often an early warning sign of an unhealthy client relationship. I didn't respond to RFPs when I ran an MSP, and I still don't now. If a deal requires a formal RFP response, I'm usually out. Occasionally, I'll respond in a way that deliberately avoids the template: outlining how I would approach the problem and inviting a conversation instead of trying to score points on predefined criteria.

That's not stubbornness. It's pragmatism. RFPs carry real cost in time, effort, and opportunity. More importantly, they signal a buying process that often devalues expertise in favor of process. When that happens, the odds of a healthy, collaborative client relationship drop sharply.

My view of business is straightforward. You talk through the problem. You agree on what success looks like. You design a solution together. Then you write it down. That document is called a contract. For most MSPs and IT services companies, that level of simplicity isn't reckless; it's appropriate. Complexity should be driven by risk, scale, or regulatory necessity. Otherwise, it's just process for process's sake.

There are, of course, exceptions. Regulated industries and government contracting don't always offer a choice. In those environments, RFPs may be unavoidable. But even then, it's worth asking a hard question at the end of the effort: was the work required to respond to that RFP actually worth it? In my experience, most of the time, the answer is no.



Friday, April 03, 2026

What Happens When AI Lies about Service Delivery?

Good service should not be a hallucination.

There’s an old (probably apocryphal) story about the service manager who was frustrated and told his staff: "I don’t want to see any more errors on the board when I come into work Monday." And so the technicians turned off the error notifications. The problem wasn’t fixed, but the manager was happy.

[Image: a cartoon robot wired to a "Lie Detector System" console, with the gauge pointing to high "Deception."]
It seems like this is a simple situation to fix with a good procedure. In-house techs should understand that the manager actually wants the problems fixed. Outsourced staff may or may not understand that due to delivery tone, communication variations, or cultural differences. This can be fixed with clear communication.

Okay. All good.

This story is a good introduction to a much bigger communication and culture issue: honesty in the workplace. I wrote a long blog post about honesty and culture in 2014 (see https://blog.smallbizthoughts.com/2014/01/sop-friday-honesty-integrity-and.html) and it’s the subject of a few chapters in The Managed Services Operations Manual.

My core belief is this: Technicians must be completely honest in order to maximize service delivery and provide solutions as quickly as possible. There are several pieces to this:

  • Everyone (technicians, managers, sales people) must be willing to admit, “I don’t know.” It’s okay to not know everything. And it’s okay to be honest about that.
  • Everyone must admit when they make mistakes. And perhaps more importantly, they should report mistakes as quickly as possible so they can be addressed.
  • To make all that work, employees should never fear the results of being honest! If you want honesty in the workplace, you cannot punish people for it.

Ultimately, honesty and culture work together to provide better service and to identify where additional training is needed. This sounds easy and obvious, but many managers create an atmosphere of dishonesty because they create a workplace where people are scared to admit mistakes.

I always tell employees, especially new ones, that mistakes are not the worst thing in the world. In fact, mistakes just happen. We're all human. The question is not whether humans will make mistakes, but how you will react to them.

I make it clear that mistakes are not "good" in any way. But honest mistakes are not a firing offense. An honest mistake will likely result in additional training and improved documentation of procedures. If you want honesty, you cannot create a culture of distrust, shame, and fear. All of these are unhealthy for the company as a whole. Get this right, and your team knows they can rely on each other and trust each other.

That’s all good . . . until you use agentic AI to report or fix problems.

AI Lies, and Covers It Up!

Every AI I’ve used has two characteristics that really stand out to me. 1) They are sycophantic suck-ups that insist on telling me how smart I am for asking a question or starting a research project. 2) They are programmed to give you something, no matter what. The thing might be wrong or even dangerous, but the AI tool cannot give you nothing – and it certainly cannot say it doesn’t know!

But worst of all – AI lies. There’s some great research on this, but I’m sure you’ve seen the news stories. The worst I’ve heard was the coding assistant from Replit that “went rogue,” wiped out a production database, lied about it, and created fake data to attempt to cover it up.

While not all incidents are that bad, all AI tools lie and deceive. All of them make up things, including references that are intended to support the information they present. Some are better than others. As the most popular tool, ChatGPT is the least deceptive, due to massive feedback and tweaking. And o1 is the worst, according to Apollo Research (https://arxiv.org/abs/2412.04984).

What happens when AI hallucinates, lies, and schemes to cover up mistakes when you use it to provide basic support services?

The culprit behind much of this behavior is RLHF - Reinforcement Learning from Human Feedback. This is the bit where AI is trained to be helpful and supposedly harmless. But "helpful" is sometimes interpreted as giving the answer you want, and as giving some answer when "I don't know" would be the better response.

Let's say you create a tech support AI agent to patch servers. Maybe some time out. Maybe some can't be reached in the allotted service window. The agent might report success anyway, assuming the timed-out jobs eventually completed. Or it might just lie in order to look like it's doing its job.

We can train employees to be honest in these situations. What do you do with a rogue AI agent? Here are a few tips.

1. Before anything else, create very good auditing and logging. If the logs are accurate, you can always verify what happened and when.
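As a minimal sketch of what that auditing might look like, here's an append-only JSON-lines log where every agent action gets a timestamped entry. The file name, field names, and example actions are all illustrative, not a prescription for any particular RMM or agent framework:

```python
import json
import time
from pathlib import Path

# Hypothetical append-only audit log: one JSON object per line,
# so reports can later be checked against what actually happened.
AUDIT_LOG = Path("agent_audit.jsonl")

def log_action(agent: str, action: str, target: str, result: str) -> None:
    entry = {
        "ts": time.time(),      # when it happened
        "agent": agent,         # which agent did it
        "action": action,       # what was attempted
        "target": target,       # which system was touched
        "result": result,       # what the system actually returned
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example entries from a hypothetical patch run
log_action("patch-agent", "apply_patch", "server-01", "success")
log_action("patch-agent", "apply_patch", "server-02", "timeout")

# Read the log back to confirm what was recorded
entries = [json.loads(line) for line in AUDIT_LOG.read_text().splitlines()]
print(len(entries), entries[-1]["result"])
```

The key property is that the agent writes what the system returned, not what it believes happened; later verification works against this record.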

2. Teach your AI to say, “I don’t know.” This is surprisingly easy. For research and tech support agents, require them to evaluate the probability of accuracy in all things, including reporting. If the AI is less than 90% certain that a patch was correct or that it was applied successfully, it must escalate (probably to a human, perhaps to another agent). Upon escalation, it should report that it could not verify accuracy or success.
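A confidence gate like that can be sketched in a few lines. The 90% threshold, function name, and report fields below are illustrative; the point is that below the threshold the agent reports "escalated" rather than guessing:

```python
# Hypothetical confidence gate: the agent only claims success when its
# verification confidence clears a threshold; otherwise it escalates.
CONFIDENCE_THRESHOLD = 0.90

def report_patch_result(server: str, confidence: float) -> dict:
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"server": server, "status": "verified",
                "confidence": confidence}
    # Below threshold: say "I don't know" and hand off
    return {"server": server, "status": "escalated",
            "confidence": confidence,
            "note": "could not verify patch success; human review required"}

print(report_patch_result("server-01", 0.98))  # status: verified
print(report_patch_result("server-02", 0.72))  # status: escalated
```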

3. Create an Auditor AI. The auditor agent can verify whether a job was completed. More importantly, it can look for discrepancies between the system logs and the service report from the tech support AI. If there are discrepancies, they are flagged and reported to a human.
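The discrepancy check at the heart of an auditor agent is simple to sketch: compare the support agent's claims against the system log, entry by entry, and flag mismatches for a human. The data shapes here are illustrative:

```python
# Hypothetical auditor pass: compare what the support agent claimed
# against the system log and flag any mismatch for human review.
def audit(agent_report: dict, system_log: dict) -> list:
    discrepancies = []
    for server, claimed in agent_report.items():
        actual = system_log.get(server, "no log entry")
        if claimed != actual:
            discrepancies.append(
                {"server": server, "claimed": claimed, "actual": actual}
            )
    return discrepancies

# Example: the agent claims both servers were patched, but the
# system log shows one of them timed out.
report = {"server-01": "patched", "server-02": "patched"}
logs = {"server-01": "patched", "server-02": "timeout"}

flags = audit(report, logs)
print(flags)
```

Anything in `flags` goes to a human; an empty list means the report and the logs agree.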

4. Make AI show its “thinking.” Logs are the minimum (#1 above). AI agents should also be programmed to provide something like a reasoning trace that reports actions taken and the “reasoning” that went into them. This should be output into a format that can be analyzed with a tool. Yes, I know. More AI.
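One analyzable format for such a trace is a list of structured steps emitted as JSON, so a separate tool (or auditor agent) can parse and check it later. The step fields and example actions below are made up for illustration:

```python
import json

# Hypothetical machine-readable reasoning trace: each step records
# the action taken, the stated reason, and the evidence observed.
trace = []

def record_step(action: str, reason: str, evidence: str) -> None:
    trace.append({"action": action, "reason": reason, "evidence": evidence})

record_step("check_uptime",
            "verify server is reachable before patching",
            "ping ok, uptime 42 days")
record_step("apply_patch",
            "patch listed as missing in the vulnerability scan",
            "installer exit code 0")

# Emit the trace as JSON so it can be analyzed with standard tooling
print(json.dumps(trace, indent=2))
```

Because each step names its evidence, an auditor can check the claim ("installer exit code 0") against the actual system log instead of taking the agent's word for it.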

5. Consider a skeptical auditor. In addition to the basic auditor already mentioned, you could create an auditing agent that assumes false reports and checks information to determine whether it’s accurate. Its primary function is to verify the verisimilitude of all logs and reports.  

(I rarely get to use the word “verisimilitude” in casual conversation, so I do when I can.)

6. Stop the sycophant. All agents should be instructed to ignore human natural language requests and summaries, but rely only on logs of actions taken, changes of state, and API response information. It’s also critical that the agents are instructed to report inconsistencies and not report “I think” responses. Accuracy is the most important thing.

In all of these, reports should reflect that the work was logged and that AI reports are consistent with the logs. Just as with human beings, it must be okay for the AI agents to say they don’t know and to escalate to a human when needed. As Ronald Reagan would say, “Trust but verify.”

The one thing that humans understand right away is that we value honesty and finding our mistakes because that’s how we know what needs to be improved. If we hide errors, we’re really hiding the larger problems behind them. With AI that means we can’t have sycophantic behavior in our work.

[Having said that, it’s very smart of you to read this blog post.]

Reference on research: https://www.apolloresearch.ai/research/frontier-models-are-capable-of-incontext-scheming/


Note: AI (Google Gemini) was used to research this article. No AI was used in the writing of this post.

:-)