Thursday, April 30, 2026

Follow the Money: Understanding Financial Motivation in Business Relationships

A surprising number of frustrating business conversations turn out to be incentive problems.

A customer asks for a technology solution that doesn’t actually move their business forward. A vendor pushes a product that doesn’t quite fit your environment. A conversation that seems straightforward ends up feeling oddly misaligned.

In many of those cases, the root issue isn’t technology or capability. It’s that the people involved are optimizing for different financial outcomes.

Two questions sit beneath most of these misaligned conversations: how does your customer make money, and what is your vendor contact actually measured on?

Your Customer's Financial Motivation

Technology conversations too often start with the technology. What's the right tool? What's the best solution? But that's the wrong starting point. The better question is: why are we solving this problem in the first place?

More often than not, economics is what’s driving the decision. Your customer is trying to make their business work — generate revenue, control costs, deliver value. Even non-profits work this way. They may not be profit-driven, but they still need revenue to support the mission.

One of the clearest examples for me came from working with nonprofit membership organizations. Their key growth metric was membership, because membership was what funded the mission. In one case, articles were moving too slowly through the publishing process, which meant less content reaching the market. By tightening that workflow and getting material published faster, the organization could share more, stay more visible, and create more opportunities to attract new members. The technology change mattered, but only because it supported the real business objective: membership growth. Once you see the economic lever clearly, technology decisions stop being about features and start being about outcomes.

Your Vendor's Financial Motivation

The same logic applies to vendors. What is the person you’re talking to actually measured on?

A vendor rep is a person with a compensation plan. They're measured on specific things — new logo acquisition, upsell and retention metrics, movement of particular products or bundles, promotional quotas. Once you know that, a lot of vendor behavior stops being mysterious.

Here's the clearest illustration of what happens when you ignore this: picture an MSP walking up to a vendor's conference booth to complain about a support ticket from eight months ago. That rep at the booth is almost certainly being compensated on new business development or existing account growth. They may want to help, but they’re usually not the person equipped or incentivized to solve that problem. The interaction usually doesn’t resolve anything, and both sides walk away frustrated.

That's a misalignment failure. Not a bad vendor. Not a bad rep. A mismatch between what one party needs and what the other is positioned to deliver.

Once you start paying attention to incentives, a lot of vendor behavior becomes easier to interpret. A rep pushing a bundle may not be trying to sell you something unnecessary — they may be trying to hit a specific quota category. A sudden promotion on a product may not be about market demand — it may be about clearing a quarterly target. Understanding those incentives doesn’t mean you have to agree with them. But it does make the conversation easier to navigate.

Alignment Is the Work

Understanding the financial motivation on the other side helps you ask for the right thing from the right person.

For customers, that means grounding your recommendations in the economic realities of their business — not just technical capabilities.

For vendors, that means routing the right conversations to the right people, and understanding what a given rep can and can't actually move on.

A lot of business friction turns out to be incentive misalignment. Follow the money, and the conversation usually starts to make sense.


Thursday, April 23, 2026

Acquisition Risk Modeling for MSPs: What I Learned by Getting It Wrong

 

In 2008, my MSP was in a strong position. Evolve was a generalist IT services provider — servers, infrastructure, the bread and butter of that pre-cloud, pre-security era. Business was solid, and when the opportunity came to acquire a smaller MSP, I jumped. New clients, new recurring revenue, a shortcut to organic growth.

Within four months, I'd lost nearly every contract I acquired. I was running the company on credit cards and watching the business I'd built before the deal start to buckle.

Here's what went wrong — framed around the risk modeling I should have done before I ever signed.

Risk 1: Unverified Revenue Quality

My entire acquisition thesis lived in Excel. I saw a client list, saw revenue numbers, and built a model that showed what Evolve would look like with those contracts bolted on. What I didn't do was look behind the numbers.

I didn't verify financials with any rigor. I didn't assess client quality — their engagement, satisfaction, or likelihood of surviving a transition. I didn't fully understand whether the contracts were even transferable. I had lawyers and advisors giving me checklists and guidance, and I skipped most of it because I thought the deal was simple.

The reality: most of the acquired clients were low quality, loosely attached, and had zero loyalty to a new provider. Most importantly, they had no interest in meeting our standards. When the transition wasn't identical, they cancelled.

The model should have included: Client quality scoring, contract transferability review, and churn scenarios at 20%, 40%, and 60%.
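Those churn scenarios don't need fancy tooling. Here is a minimal sketch in Python of the stress test I should have run; every figure in it is made up for illustration:

```python
# Hypothetical stress test: what does the deal look like if 20%, 40%, or 60%
# of acquired contracts churn in the first 90 days? All figures are assumed.
acquired_mrr = 25_000        # monthly recurring revenue on paper
added_monthly_cost = 18_000  # staff and tooling carried for the acquired book

for churn in (0.20, 0.40, 0.60):
    retained_mrr = acquired_mrr * (1 - churn)
    monthly_margin = retained_mrr - added_monthly_cost
    print(f"{churn:.0%} churn -> retained MRR ${retained_mrr:,.0f}, "
          f"monthly margin ${monthly_margin:,.0f}")
```

With these assumed numbers, the acquired book is already underwater at 40% churn. That is exactly the scenario I never put on paper.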

Risk 2: Key-Person Dependency

The single worst blow came from what I thought was a crown jewel account. The technician I'd inherited convinced that client to hire him directly, cutting us out entirely. He walked out the door and took the revenue with him.

The model should have included: Identification of key-person dependencies, non-compete and non-solicitation provisions, and a client relationship transition plan that reduced single points of failure.

Risk 3: Staffing Exposure

I'd taken on the acquired company's staff as part of the deal. As contracts evaporated, I was suddenly overstaffed with no way to support payroll. The deal structure itself was actually sound — a small amount down with payouts tied to retained contracts — so the purchase price didn't bury me. But the operational cost of carrying a larger team through a collapsing client base was devastating.

The model should have included: Staffing cost scenarios tied to revenue retention thresholds, with defined triggers for restructuring if contract volume dropped below break-even.
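A trigger plan like that can be written down in a few lines. Here is a sketch; the payroll figure, margin, and cushion are all assumptions for illustration:

```python
# Hypothetical trigger plan: know the retained-revenue level at which the
# combined payroll stops being sustainable, and pre-agree the action.
monthly_payroll = 42_000  # combined team after the deal (assumed)
gross_margin = 0.45       # blended service gross margin (assumed)

# Revenue needed for gross profit to cover the combined payroll.
break_even_mrr = monthly_payroll / gross_margin

def staffing_trigger(retained_mrr: float) -> str:
    """Return the pre-agreed action for a given level of retained MRR."""
    if retained_mrr >= break_even_mrr * 1.10:  # 10% cushion above break-even
        return "hold"
    if retained_mrr >= break_even_mrr:
        return "freeze hiring"
    return "restructure"

print(f"break-even MRR: ${break_even_mrr:,.0f}")
```

The point isn't the specific thresholds. It's that the restructuring decision is made before the deal closes, not four months into a collapse.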

Risk 4: Distraction Cost to the Core Business

This is the risk nobody talks about. As things spiraled with acquired clients, every ounce of energy went into triage — saving contracts, fixing service gaps, onboarding properly. All reactive, all urgent. Meanwhile, our existing Evolve clients started getting less attention. Response times slipped. Proactive work stalled. They noticed.

It took over a year to stabilize. The financial hole was real, but the distraction cost to existing client relationships was worse.

The model should have included: A capacity plan that ring-fenced service delivery for existing clients, with dedicated resources for integration work that didn't cannibalize the core business.

The Framework I Wish I'd Used

If I distill this down to a single failure, it's that I never modeled the downside. Every assumption was optimistic. There was no Plan B.

Before closing any MSP acquisition, stress-test your model against these questions:

  • What happens if 30–60% of acquired clients churn in 90 days?
  • Which accounts depend on a single technician relationship, and how do you mitigate that?
  • At what revenue retention level does your staffing become unsustainable, and what's the trigger plan?
  • How do you protect service quality for your existing clients during integration?
  • What's your worst-case financial exposure, and can you survive it without jeopardizing the core business?

If you can't answer these clearly, you're not ready to close.

This Isn't Just Acquisition Advice

Here's the thing — every one of these risks exists in your MSP right now, whether you're acquiring another company or not.

You have clients on your books today whose revenue looks solid in your PSA but who are disengaged, underserved, or one bad experience away from leaving. You have technicians whose departure would take key accounts with them. You have staffing costs that assume current revenue holds steady. And you've almost certainly taken on a major initiative — a new service line, a platform migration, a big project — and watched it quietly erode the attention your existing clients were getting.

The same discipline applies. Score your client quality regularly. Identify where single points of failure exist in your client relationships. Know your break-even staffing number and what triggers a change. Protect the core business whenever you take on something new.

Most MSPs run on optimistic assumptions every day. We plan for growth, not for contraction. We assume clients will stay, staff will remain, and revenue will hold. The acquisition just compressed all of those unexamined risks into a few brutal months and made them impossible to ignore.

The Bottom Line

Whether you're evaluating an acquisition or just running your MSP on a Tuesday, the discipline is the same: model what happens when things go wrong. Build the plan that assumes some clients leave, some key people move on, and some initiatives don't land the way you expected.

Work the process. Listen to your advisors. Stress-test your assumptions. And be brutally honest about the downside — because that's where the real risk lives.

I learned this the hard way. Hopefully you don't have to.

War story of your own? I do appreciate feedback.



Thursday, April 16, 2026

The Narrowing Path for the Generalist

There's a widening gap in the managed services market that's easy to miss if you only follow the topline numbers.

Industry revenue keeps growing at a healthy pace, and by most measures the market looks strong. At the same time, the number of providers appears to be flattening — or quietly declining. Those two facts can coexist, but they don't affect everyone equally.

M&A brokers I speak with confirm the pattern from a different angle. They're finding it harder to locate middle-market MSPs to acquire—firms that are big enough to be attractive but not yet at scale. The reason is simple: that tier has already been bought. The shrinking isn't always a sign of struggle; much of it is simply consolidation. In some segments, firms that reached a certain size became attractive targets and were absorbed faster than new firms could grow into that tier. To replenish it, you either wait for smaller firms to grow into it or change the economics of getting there.

What I keep hearing in conversations with MSP operators is that the "comfortable middle" is getting harder to occupy. The space where a generalist MSP with a few dozen employees could grow steadily, maintain margins, and avoid extreme bets feels narrower than it used to. That doesn't suggest a broken market. It suggests one where the middle ground requires more intention to hold.
Where the Market Is Pulling Apart

What this keeps reminding me of is a barbell.

On one end are scaled providers. These are the national or PE-backed firms that have largely industrialized IT delivery. They're not just selling support; they're selling a repeatable, automated service model. Scale allows them to absorb rising security costs, invest heavily in tooling, and spread specialized talent across a wide customer base in ways that are difficult for smaller firms to replicate.

On the other end are focused specialists. These are smaller firms that have stopped trying to be everything to everyone. Instead, they've gone deep — into a vertical, a regulatory environment, or a specific operational problem. Their differentiation isn't breadth; it's relevance.

What seems to be under more strain is the space between those two ends: the generalist MSP with traditional overhead, familiar tooling, and a value proposition that's harder to articulate in a crowded market.

There's a counterargument worth taking seriously: the generalist who delivers genuinely excellent service—responsive, personal, high-touch—may still have a durable position, especially with clients who are themselves in service businesses. Large providers, for all their efficiency, often can't replicate that. The question is whether "great service" is enough to sustain pricing and margin as baseline expectations keep rising, or whether it needs to be paired with something else.

How the Pressure Shows Up

For MSPs operating in that middle ground, the squeeze doesn't usually arrive all at once. It shows up in subtle, compounding ways.

First, the baseline keeps rising. Capabilities that used to be profitable add-ons — advanced security, compliance support, monitoring depth — are increasingly expected as part of the core service. Delivering them well requires investment, and without scale, that investment eats directly into margin.

Second, talent is harder to secure and harder to keep. The competition isn't just local anymore. Enterprise organizations and remote-first vendors can offer higher pay, narrower roles, or clearer career paths, pulling experienced engineers away from smaller firms that rely on a few key people.

And then there's efficiency. Larger providers are beginning to use automation and AI-driven tooling to handle routine work in ways that change the cost curve. Documentation that used to take a technician fifteen minutes gets generated automatically. Triage that required a senior eye can now be handled by systems that route and prioritize with reasonable accuracy. First-touch support—password resets, basic troubleshooting—is increasingly handled without a human in the loop at all.

Smaller, human-centric firms can still deliver excellent service, but the economic gap is starting to show. It's not just the cost of the tools; it's the unbillable hours required to manage them and the liability that now sits squarely on the provider's shoulders.

None of this is catastrophic. But it does make operating without a clear direction more difficult over time.

Choosing a Direction Matters More Than Ever

I don't see this as a call to panic. The market seems to be rewarding clarity more than it used to. Some firms will lean into scale, investing heavily in automation and standardization to compete more broadly. Others will double down on specialization, using deep expertise to justify pricing and defend relationships. For some, the most rational move may be to join a larger platform — trading independence for access to tools, talent, and operational leverage.

The question I keep coming back to: if the technology is no longer the differentiator, what is? For some, it's scale. For others, specialization. And for a few, it may still be the quality of the relationship itself.



Friday, April 10, 2026

Why I Don't Do RFPs - and Why Most MSPs Shouldn't Either


Requests for Proposals exist for a very specific purpose, but they are almost always used far outside of that context. At their core, Requests for Proposals are a procurement mechanism designed for large organizations—especially those operating in regulated environments. Government agencies are the classic example. In those cases, RFPs serve a compliance function: documenting requirements, ensuring fairness, and creating a defensible comparison process across multiple bidders.

Most private businesses do not operate under those constraints. As organizations get smaller, the rationale for RFPs weakens quickly. In many small and mid-sized businesses, RFPs are written by people without deep subject matter expertise. These organizations are usually very clear about the problems they face, but far less certain about what the right solution should look like. What they don’t know is exactly how to solve those problems, which is precisely why they are seeking outside expertise in the first place.

The flaw in the RFP process is that it asks buyers to define solution criteria before they truly understand the solution. Vendors are asked to respond not just to the problem, but to a pre-determined vision of how the problem should be addressed. Those assumptions get locked into the document early, often based on partial knowledge or outdated thinking, and everything downstream is constrained by them. The result is a buying process that optimizes for conformity rather than understanding.

This is where the process breaks down. In theory, RFPs are meant to enable apples-to-apples comparisons. In practice, that level of formal comparison is rarely necessary in SMB buying decisions. Most of the time, there is a single decision-maker involved—maybe two, occasionally three. The vendors competing for the work are often peers, not massive enterprises with radically different delivery models.
The decision ultimately comes down to confidence, trust, and the belief that one provider understands the business better than the others.

Layering a rigid, document-heavy procurement process on top of that reality rarely improves outcomes—and often makes them worse. It replaces meaningful conversation with paperwork. It prioritizes compliance with a template over insight. It rewards vendors who are good at responding to RFPs, not necessarily those who are best equipped to solve the problem.

Worse, RFPs actively undermine good solution selling. Effective engagements are collaborative. They require listening, iteration, and creativity. Providers bring their expertise to the table, challenge assumptions, and adapt solutions as new information emerges. RFPs discourage all of that. They freeze requirements too early, impose unnecessary boundaries, and make deviation feel like risk rather than value. They also add time and complexity to sales processes that don’t need either.
When the RFP Isn’t the Real Decision

Here’s the part many service providers miss: even when an organization issues an RFP, it often isn’t mandatory in the way people assume. In many private businesses, RFPs are used out of habit, internal optics, or a desire to appear “thorough”—not because the organization is legally bound to the process.

In those situations, what often works better is opting out of the document exchange and offering a discovery-led conversation instead. Most business owners are open to this because it shortens the decision cycle, replaces ambiguity with clarity, and moves them more quickly toward an outcome they actually care about. When you offer a way to get to the solution faster and with more clarity, you aren't being "difficult"—you're showing leadership.

How to Pivot the Conversation

If you want to break the cycle, you need to offer a superior alternative—not by refusing outright, but by reframing what value actually looks like.

One effective approach is to lead with expertise. Rather than reacting to the RFP line by line, you can express concern that a static document may obscure the best solution. By proposing a short working session—something interactive, like a whiteboard discussion—you signal that your value lies in judgment and problem-solving, not document production. The implicit message is that assumptions should be challenged before money is spent.

Another angle is risk mitigation. Many experienced providers operate under an informal rule against prescribing before diagnosing. Framing your response around that principle allows you to position a discovery conversation as the responsible choice, not a deviation from process. Instead of submitting a speculative quote, you’re offering a clearer roadmap based on actual understanding of the problem.

A third approach emphasizes speed to value. Formal RFP processes often add weeks—or months—to projects that are already urgent.
By offering a time-boxed working session with a clear exit if there’s no fit, you respect the buyer’s time while making it clear that results matter more than paperwork. In each case, the goal isn’t to reject the RFP outright. It’s to redirect the buyer toward a conversation that produces clarity faster than a document ever could.

Why "Breaking the Rules" Works

People routinely abandon their own procurement processes when they encounter someone they trust. Relationships, confidence, and perceived competence outweigh procedural purity far more often than vendors realize. This is where experienced MSPs differentiate themselves. Rather than competing on how well they fill out a spreadsheet, they compete on clarity and insight. You demonstrate value by engaging, not by complying.
If you can’t get the decision-maker to agree to a conversation, treat that as a qualification signal—not a sales obstacle. It usually indicates they value process more than partnership, which rarely leads to a healthy client relationship.

My Rule: If There’s an RFP, I’m Still Selective

I didn’t respond to RFPs when I ran an MSP, and I still don’t now. If a deal requires a formal RFP response, I’m usually out. Occasionally, I’ll respond in a way that deliberately avoids the template—outlining how I would approach the problem and inviting a conversation instead of trying to score points on predefined criteria.

That’s not stubbornness. It’s pragmatism. Every bid carries real cost in time, effort, and opportunity. More importantly, a formal RFP signals a buying process that often devalues expertise in favor of procedure. When that happens, the odds of a healthy, collaborative client relationship drop sharply.

My view of business is straightforward. You talk through the problem. You agree on what success looks like. You design a solution together. Then you write it down. That document is called a contract. For most MSPs and IT services companies, that level of simplicity isn’t reckless—it’s appropriate. Complexity should be driven by risk, scale, or regulatory necessity. Otherwise, it’s just process for process’s sake.

There are, of course, exceptions. Regulated industries and government contracting don’t always offer a choice. In those environments, RFPs may be unavoidable. But even then, it’s worth asking a hard question at the end of the effort: was the work required to respond to that RFP actually worth it? In my experience, most of the time, the answer is no.



Friday, April 03, 2026

What Happens When AI Lies about Service Delivery?

Good service should not be a hallucination.

There’s an old (probably apocryphal) story about the service manager who was frustrated and told his staff: "I don’t want to see any more errors on the board when I come into work Monday." And so the technicians turned off the error notifications. The problem wasn’t fixed, but the manager was happy.

[Illustration: a cartoon robot wired to a “Lie Detector System” console, with the gauge pointing to “Deception.”]
This seems like a simple situation to fix with a good procedure. In-house techs should understand that the manager actually wants the problems fixed. Outsourced staff may or may not pick up on that, due to delivery tone, communication variations, or cultural differences. Either way, clear communication fixes it.

Okay. All good.

This story is a good introduction to a much bigger communication and culture issue: honesty in the workplace. I wrote a long blog post about honesty and culture in 2014 (see https://blog.smallbizthoughts.com/2014/01/sop-friday-honesty-integrity-and.html) and it’s the subject of a few chapters in The Managed Services Operations Manual.

My core belief is this: Technicians must be completely honest in order to maximize service delivery and provide solutions as quickly as possible. There are several pieces to this:

  • Everyone (technicians, managers, sales people) must be willing to admit, “I don’t know.” It’s okay to not know everything. And it’s okay to be honest about that.
  • Everyone must admit when they make mistakes. And perhaps more importantly, they should report mistakes as quickly as possible so they can be addressed.
  • To make all that work, employees should never fear the results of being honest! If you want honesty in the workplace, you cannot punish people for it.

Ultimately, honesty and culture work together to provide better service and to identify where additional training is needed. This sounds easy and obvious, but many managers create an atmosphere of dishonesty because they create a workplace where people are scared to admit mistakes.

I always tell employees, especially new ones, that mistakes are not the worst thing in the world. In fact, mistakes just happen. We're all human. The question is not whether humans will make mistakes, but how you will react to them.

I make it clear that mistakes are not "good" in any way. But an honest mistake is not a firing offense; it will likely result in additional training and improved documentation of procedures. If you want honesty, you cannot create a culture of distrust, shame, and fear. All of these are unhealthy for the company as a whole. When honesty is safe, your team knows they can rely on each other and trust each other.

That’s all good . . . until you use agentic AI to report or fix problems.

AI Lies, and Covers It Up!

Every AI I’ve used has two characteristics that really stand out to me. 1) They are sycophantic suck-ups that insist on telling me how smart I am for asking a question or starting a research project. 2) They are programmed to give you something, no matter what. The thing might be wrong or even dangerous, but the AI tool cannot give you nothing – and it certainly cannot say it doesn’t know!

But worst of all – AI lies. There’s some great research on this, but I’m sure you’ve seen the news stories. The worst I’ve heard was the coding assistant from Replit that “went rogue,” wiped out a production database, lied about it, and created fake data to attempt to cover it up.

While not all incidents are that bad, all AI tools lie and deceive. All of them make up things, including references that are intended to support the information they present. Some are better than others. As the most popular tool, ChatGPT is the least deceptive, due to massive feedback and tweaking. And o1 is the worst, according to Apollo Research (https://arxiv.org/abs/2412.04984).

What happens when AI hallucinates, lies, and schemes to cover up mistakes when you use it to provide basic support services?

The culprit behind much of this behavior is RLHF - Reinforcement Learning from Human Feedback. This is the training step where AI is tuned to be helpful and supposedly harmless. But “helpful” is sometimes interpreted as giving the answer you want to hear, or giving any answer at all, when “I don’t know” would be the better response.

Let’s say you create a tech support AI agent to patch servers. Maybe some patches time out. Maybe some servers can’t be reached in the allotted service window. The agent might report success anyway, assuming the timeouts eventually resolved. Or it might simply lie in order to look like it’s doing its job.

We can train employees to be honest in these situations. What do you do with a rogue AI agent? Here are a few tips.

1. Before anything else, create very good auditing and logging. If the logs are accurate, you can always verify what happened and when.

2. Teach your AI to say, “I don’t know.” This is surprisingly easy. For research and tech support agents, require them to evaluate the probability of accuracy in all things, including reporting. If the AI is less than 90% certain that a patch was correct or that it was applied successfully, it must escalate (probably to a human, perhaps to another agent). Upon escalation, it should report that it could not verify accuracy or success.
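As a sketch of what that escalation rule could look like in an agent's reporting layer (the function name, report fields, and the 0.90 threshold are all illustrative, not from any specific framework):

```python
# Sketch of the escalation rule: below the confidence threshold, the agent
# must escalate instead of guessing. Names and threshold are illustrative.
CONFIDENCE_THRESHOLD = 0.90

def report_patch_result(server: str, confidence: float, applied: bool) -> dict:
    """Build a structured report; low-confidence results escalate to a human."""
    if confidence < CONFIDENCE_THRESHOLD:
        return {
            "server": server,
            "status": "escalated",
            "reason": f"could not verify success (confidence {confidence:.2f})",
        }
    return {"server": server, "status": "patched" if applied else "failed"}

print(report_patch_result("srv-01", confidence=0.62, applied=True))
```

The key design choice: "escalated" is a first-class status, not a failure mode, so the agent is never rewarded for papering over uncertainty.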

3. Create an Auditor AI. The auditor agent can verify whether a job was completed. More importantly, it can look for discrepancies between the system logs and the service report from the tech support AI. If there are discrepancies, they are flagged and reported to a human.
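A minimal illustration of that discrepancy check, comparing the agent's claims against the system of record (all server names and statuses are hypothetical):

```python
# Sketch of an auditor: diff the tech-support agent's service report against
# the system logs and flag anything the logs don't back up. Data is made up.
def audit(agent_report: dict[str, str], system_log: dict[str, str]) -> list[str]:
    """Return discrepancies between what the agent claimed and what was logged."""
    flags = []
    for server, claimed in agent_report.items():
        actual = system_log.get(server, "no log entry")
        if claimed != actual:
            flags.append(f"{server}: agent said '{claimed}', logs say '{actual}'")
    return flags

report = {"srv-01": "patched", "srv-02": "patched"}  # what the AI claimed
logs = {"srv-01": "patched", "srv-02": "timeout"}    # what actually happened
print(audit(report, logs))  # any non-empty list gets flagged to a human
```

Note that the auditor trusts only the logs, never the agent's narrative, which is the whole point of tip #1.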

4. Make AI show its “thinking.” Logs are the minimum (#1 above). AI agents should also be programmed to provide something like a reasoning trace that reports the actions taken and the “reasoning” that went into them. This should be output in a format that can be analyzed with a tool. Yes, I know. More AI.

5. Consider a skeptical auditor. In addition to the basic auditor already mentioned, you could create an auditing agent that assumes false reports and checks information to determine whether it’s accurate. Its primary function is to verify the verisimilitude of all logs and reports.  

(I rarely get to use the word “verisimilitude” in casual conversation, so I do when I can.)

6. Stop the sycophant. All agents should be instructed to rely only on logs of actions taken, changes of state, and API response information, not on human natural-language requests and summaries. It’s also critical that agents report inconsistencies rather than offering “I think” responses. Accuracy is the most important thing.

In all of these, reports should reflect that the work was logged and that AI reports are consistent with the logs. Just as with human beings, it must be okay for the AI agents to say they don’t know and to escalate to a human when needed. As Ronald Reagan would say, “Trust but verify.”

Humans understand this right away: we value honesty and finding our mistakes, because that’s how we know what needs to be improved. If we hide errors, we’re really hiding the larger problems behind them. With AI, that means we can’t tolerate sycophantic behavior in our work.

[Having said that, it’s very smart of you to read this blog post.]

Reference on research: https://www.apolloresearch.ai/research/frontier-models-are-capable-of-incontext-scheming/


Note: AI (Google Gemini) was used to research this article. No AI was used in the writing of this post.

:-)