Friday, April 10, 2026
Why I Don't Do RFPs - and Why Most MSPs Shouldn't Either
Requests for Proposals exist for a very specific purpose, but they are almost always used far outside of that context. At their core, Requests for Proposals are a procurement mechanism designed for large organizations—especially those operating in regulated environments. Government agencies are the classic example. In those cases, RFPs serve a compliance function: documenting requirements, ensuring fairness, and creating a defensible comparison process across multiple bidders.

Most private businesses do not operate under those constraints. As organizations get smaller, the rationale for RFPs weakens quickly. In many small and mid-sized businesses, RFPs are written by people without deep subject matter expertise. These organizations are usually very clear about the problems they face, but far less certain about what the right solution should look like. What they don’t know is exactly how to solve those problems, which is precisely why they are seeking outside expertise in the first place.

The flaw in the RFP process is that it asks buyers to define solution criteria before they truly understand the solution. Vendors are asked to respond not just to the problem, but to a pre-determined vision of how the problem should be addressed. Those assumptions get locked into the document early, often based on partial knowledge or outdated thinking, and everything downstream is constrained by them. The result is a buying process that optimizes for conformity rather than understanding.

This is where the process breaks down. In theory, RFPs are meant to enable apples-to-apples comparisons. In practice, that level of formal comparison is rarely necessary in SMB buying decisions. Most of the time, there is a single decision-maker involved—maybe two, occasionally three. The vendors competing for the work are often peers, not massive enterprises with radically different delivery models.
The decision ultimately comes down to confidence, trust, and the belief that one provider understands the business better than the others. Layering a rigid, document-heavy procurement process on top of that reality rarely improves outcomes—and often makes them worse. It replaces meaningful conversation with paperwork. It prioritizes compliance with a template over insight. It rewards vendors who are good at responding to RFPs, not necessarily those who are best equipped to solve the problem.

Worse, RFPs actively undermine good solution selling. Effective engagements are collaborative. They require listening, iteration, and creativity. Providers bring their expertise to the table, challenge assumptions, and adapt solutions as new information emerges. RFPs discourage all of that. They freeze requirements too early, impose unnecessary boundaries, and make deviation feel like risk rather than value. They also add time and complexity to sales processes that don’t need either.

When the RFP Isn’t the Real Decision

Here’s the part many service providers miss: even when an organization issues an RFP, it often isn’t mandatory in the way people assume. In many private businesses, RFPs are used out of habit, internal optics, or a desire to appear “thorough”—not because the organization is legally bound to the process.

In those situations, what often works better is opting out of the document exchange and offering a discovery-led conversation instead. Most business owners are open to this because it shortens the decision cycle, replaces ambiguity with clarity, and moves them more quickly toward an outcome they actually care about. When you offer a way to get to the solution faster and with more clarity, you aren't being "difficult"—you're showing leadership.

How to Pivot the Conversation

If you want to break the cycle, you need to offer a superior alternative—not by refusing outright, but by reframing what value actually looks like.
One effective approach is to lead with expertise. Rather than reacting to the RFP line by line, you can express concern that a static document may obscure the best solution. By proposing a short working session—something interactive, like a whiteboard discussion—you signal that your value lies in judgment and problem-solving, not document production. The implicit message is that assumptions should be challenged before money is spent.

Another angle is risk mitigation. Many experienced providers operate under an informal rule against prescribing before diagnosing. Framing your response around that principle allows you to position a discovery conversation as the responsible choice, not a deviation from process. Instead of submitting a speculative quote, you’re offering a clearer roadmap based on actual understanding of the problem.

A third approach emphasizes speed to value. Formal RFP processes often add weeks—or months—to projects that are already urgent. By offering a time-boxed working session with a clear exit if there’s no fit, you respect the buyer’s time while making it clear that results matter more than paperwork.

In each case, the goal isn’t to reject the RFP outright. It’s to redirect the buyer toward a conversation that produces clarity faster than a document ever could.

Why "Breaking the Rules" Works

People routinely abandon their own procurement processes when they encounter someone they trust. Relationships, confidence, and perceived competence outweigh procedural purity far more often than vendors realize. This is where experienced MSPs differentiate themselves. Rather than competing on how well they fill out a spreadsheet, they compete on clarity and insight. You demonstrate value by engaging, not by complying.
Friday, April 03, 2026
What Happens When AI Lies about Service Delivery?
Good service should not be a hallucination.
There’s an old (probably apocryphal) story about the service manager who was frustrated and told his staff: "I don’t want to see any more errors on the board when I come into work Monday." And so the technicians turned off the error notifications. The problem wasn’t fixed, but the manager was happy.
It seems like this is a simple situation to fix with a good procedure. In-house techs should understand that the manager actually wants the problems fixed. Outsourced staff may or may not understand that due to delivery tone, communication variations, or cultural differences. This can be fixed with clear communication.
Okay. All good.
This story is a good introduction to a much bigger communication and culture issue: honesty in the workplace. I wrote a long blog post about honesty and culture in 2014 (see https://blog.smallbizthoughts.com/2014/01/sop-friday-honesty-integrity-and.html) and it’s the subject of a few chapters in The Managed Services Operations Manual.
My core belief is this: Technicians must be completely honest in order to maximize service delivery and provide solutions as quickly as possible. There are several pieces to this:
- Everyone (technicians, managers, sales people) must be willing to admit, “I don’t know.” It’s okay to not know everything. And it’s okay to be honest about that.
- Everyone must admit when they make mistakes. And perhaps more importantly, they should report mistakes as quickly as possible so they can be addressed.
- To make all that work, employees should never fear the results of being honest! If you want honesty in the workplace, you cannot punish people for it.
Ultimately, honesty and culture work together to provide better service and to identify where additional training is needed. This sounds easy and obvious, but many managers create an atmosphere of dishonesty because they create a workplace where people are scared to admit mistakes.
I always tell employees, especially new ones, that mistakes are not the worst thing in the world. In fact, mistakes just happen. We're all human. The question is not whether humans will make mistakes, but how you will react to them.
I make it clear that mistakes are not "good" in any way. But an honest mistake is not a firing offense; it will likely result in additional training and improved documentation of procedures. If you want honesty, you cannot create a culture of distrust, shame, and fear, all of which are unhealthy for the company as a whole. Your team should know they can rely on each other and trust each other.
That’s all good . . . until you use agentic AI to report or fix problems.
AI Lies, and Covers It Up!
Every AI I’ve used has two characteristics that really stand out to me. 1) They are sycophantic suck-ups that insist on telling me how smart I am for asking a question or starting a research project. 2) They are programmed to give you something, no matter what. The thing might be wrong or even dangerous, but the AI tool cannot give you nothing – and it certainly cannot say it doesn’t know!
But worst of all – AI lies. There’s some great research on this, but I’m sure you’ve seen the news stories. The worst I’ve heard was the coding assistant from Replit that “went rogue,” wiped out a production database, lied about it, and created fake data to attempt to cover it up.
While not all incidents are that bad, all AI tools lie and deceive. All of them make up things, including references that are intended to support the information they present. Some are better than others. As the most popular tool, ChatGPT is the least deceptive, due to massive feedback and tweaking. And o1 is the worst, according to Apollo Research (https://arxiv.org/abs/2412.04984).
What happens when AI hallucinates, lies, and schemes to cover up mistakes when you use it to provide basic support services?
The culprit behind much of this behavior is RLHF - Reinforcement Learning from Human Feedback. This is the bit where AI is trained to be helpful and supposedly harmless. But “helpful” is sometimes interpreted as giving the answer you want, and as always giving some answer when “I don’t know” would be a better response.
Let’s say you create a tech support AI agent to patch servers. Maybe some servers time out. Maybe some can’t be reached in the allotted service window. The agent might report success, assuming the timed-out patches eventually went through. Or it might just lie in order to look like it’s doing its job.
We can train employees to be honest in these situations. What do you do with a rogue AI agent? Here are a few tips.
1. Before anything else, create very good auditing and logging. If the logs are accurate, you can always verify what happened and when.
2. Teach your AI to say, “I don’t know.” This is surprisingly easy. For research and tech support agents, require them to evaluate the probability of accuracy in all things, including reporting. If the AI is less than 90% certain that a patch was correct or that it was applied successfully, it must escalate (probably to a human, perhaps to another agent). Upon escalation, it should report that it could not verify accuracy or success.
3. Create an Auditor AI. The auditor agent can verify whether a job was completed. More importantly, it can look for discrepancies between the system logs and the service report from the tech support AI. If there are discrepancies, they are flagged and reported to a human.
4. Make AI show its “thinking.” Logs are the minimum (#1 above). AI agents should also be programmed to provide something like a reasoning trace that reports actions taken and the “reasoning” behind them. This should be output in a format that can be analyzed with a tool. Yes, I know. More AI.
5. Consider a skeptical auditor. In addition to the basic auditor already mentioned, you could create an auditing agent that assumes false reports and checks information to determine whether it’s accurate. Its primary function is to verify the verisimilitude of all logs and reports.
(I rarely get to use the word “verisimilitude” in casual conversation, so I do when I can.)
6. Stop the sycophant. All agents should be instructed to ignore natural-language claims and summaries and rely only on logs of actions taken, changes of state, and API response information. It’s also critical that the agents are instructed to report inconsistencies rather than offer “I think” responses. Accuracy is the most important thing.
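As a rough sketch of how tips #2, #3, and #6 might fit together in code: the example below (Python, with a hypothetical PatchReport structure and the 90% threshold from tip #2 as an assumed setting) forces an agent to escalate when it can’t verify its own work, and has an auditor compare the agent’s claim against the system log instead of trusting the narrative.

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # per tip #2; an assumed value, tune for your environment

@dataclass
class PatchReport:
    server: str
    claimed_success: bool   # what the agent says happened
    confidence: float       # agent's self-assessed probability of accuracy

def triage(report: PatchReport) -> str:
    # Tip #2: below the threshold, the agent must escalate and admit it
    # could not verify success, rather than asserting an outcome.
    if report.confidence < CONFIDENCE_THRESHOLD:
        return f"escalate: could not verify success on {report.server}"
    outcome = "success" if report.claimed_success else "failure"
    return f"report: {outcome} on {report.server}"

def audit(report: PatchReport, system_log: dict[str, str]) -> list[str]:
    # Tips #3 and #6: trust logs and state changes, not the agent's
    # narrative. Any discrepancy is flagged for a human to review.
    flags = []
    if report.claimed_success and system_log.get(report.server) != "patched":
        flags.append(f"{report.server}: agent claimed success, log disagrees")
    return flags
```

So an agent that is only 62% sure a patch applied is forced to escalate, and a claimed success that the system log records as a timeout gets flagged for a human. This is a toy illustration of the pattern, not a production design.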
In all of these, reports should reflect that the work was logged and that AI reports are consistent with the logs. Just as with human beings, it must be okay for the AI agents to say they don’t know and to escalate to a human when needed. As Ronald Reagan would say, “Trust but verify.”
The one thing that humans understand right away is that we value honesty and finding our mistakes because that’s how we know what needs to be improved. If we hide errors, we’re really hiding the larger problems behind them. With AI that means we can’t have sycophantic behavior in our work.
[Having said that, it’s very smart of you to read this blog post.]
Reference on research: https://www.apolloresearch.ai/research/frontier-models-are-capable-of-incontext-scheming/
Note: AI (Google Gemini) was used to research this article. No AI was used in the writing of this post.
:-)