Every company says it wants to be “AI-first” now. That phrase is everywhere, floating through board decks, Slack threads, and executive keynotes like confetti after a product launch. But once the confetti hits the floor, somebody still has to answer a harder question: what does an AI-first company actually look like when real customers, real workflows, and real mistakes are involved?
Zapier CEO Wade Foster has a more grounded answer than most. His company, long known for connecting apps and automating repetitive work, has become one of the clearest case studies in how AI rollouts work in practice. The flashy version of the story is easy to tell: Zapier is a $5 billion company, it has pushed hard into AI orchestration, and it talks openly about agents, automation, and scaling work without scaling chaos. The useful version of the story is more interesting: Foster does not argue that AI should replace humans outright. He argues that the best systems combine deterministic automation, agentic intelligence, and human judgment.
That is where the “90% rule” comes in. In Foster’s framing, many AI workflows are good enough to do about 90% of the work, but not reliable enough to own the final mile alone. In other words, AI can do the heavy lifting, but humans still need to make the last call when the stakes are real. It is less “fire everyone and let the bots roam free” and more “let the machine carry the piano, but maybe do not let it conduct the orchestra unsupervised.”
This is a surprisingly practical strategy at a moment when too many AI conversations swing between two extremes: blind hype and blanket cynicism. Foster’s view lands in the middle. It accepts that agents are powerful. It also accepts that they are imperfect, expensive to misuse, and dangerous when dropped into important workflows without structure. For leaders trying to turn AI from a demo into an operating model, that middle path may be the smartest path.
What the 90% Rule Actually Means
The 90% rule is not an argument for mediocre work. It is an argument for realistic system design. Foster’s point is that the more agentic a workflow becomes, the more likely it is to produce outputs that are highly useful but not fully trustworthy on their own. That final 10% matters because it is usually where nuance, judgment, accountability, and customer trust live.
Think about a renewal workflow for a B2B account team. An AI system can gather usage data, summarize support tickets, scan call transcripts, spot product adoption patterns, and draft a sharp brief faster than any human rep juggling a crowded book of business. That is the 90%. But deciding whether a customer should receive a renewal pitch, an upsell motion, a softer retention strategy, or a problem-solving conversation still benefits from human context. A machine can assemble the ingredients. The human decides what kind of meal should actually be served.
This distinction matters because many companies still confuse capable with autonomous. A system can be capable of drafting, summarizing, ranking, and recommending without being ready to operate independently. Foster’s framework treats AI as a force multiplier, not a magical replacement engine. That hybrid design is not a compromise. It is the product.
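To make the hybrid design concrete, here is a minimal sketch of the 90% rule as a two-stage pipeline: the AI assembles the renewal brief (the 90%), and a human owns the final call (the 10%). This is an illustration, not Zapier's implementation; every name here (RenewalBrief, draft_brief, human_review) is hypothetical.

```python
# A sketch of the 90% rule: the machine drafts, the human decides.
# All function and field names are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RenewalBrief:
    account: str
    usage_summary: str
    open_tickets: int
    recommended_motion: str            # what the model suggests
    approved_motion: Optional[str] = None  # what the human decides

def draft_brief(account: str) -> RenewalBrief:
    """Stand-in for the agentic 90%: gather data, draft a recommendation."""
    # A real system would call usage APIs, summarize tickets, scan transcripts.
    return RenewalBrief(
        account=account,
        usage_summary="Seat usage down 20% quarter over quarter",
        open_tickets=3,
        recommended_motion="softer retention play",
    )

def human_review(brief: RenewalBrief, decision: str) -> RenewalBrief:
    """The final 10%: a person makes the call, informed by the draft."""
    brief.approved_motion = decision
    return brief

brief = draft_brief("Acme Corp")
final = human_review(brief, decision="problem-solving conversation")
print(final.approved_motion)  # the human's call, not the model's
```

The point of the shape is that the model's recommendation and the human's decision are separate fields. The system never confuses a draft with an approval.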
Why Zapier’s AI Strategy Makes Sense
Zapier’s advantage is that it already lived in the world of workflows before AI agents became the obsession of the month. The company spent years helping businesses automate repeatable tasks across software stacks. That experience matters because it taught Zapier an old lesson that still applies to new AI systems: some work needs predictable rules, and some work needs flexible reasoning. Smart companies know the difference.
Foster has repeatedly emphasized that leaders should not choose between deterministic workflows and agentic workflows as if they are competing religions. They solve different problems. Deterministic automation is best when the task must happen the same way every time, such as routing data into a CRM, triggering a notification, or syncing records between systems. Agentic workflows shine when interpretation, summarization, or adaptive reasoning are needed. The real value comes from combining both.
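The combination described above can be sketched as one workflow with both kinds of steps. The deterministic step runs a fixed rule; the agentic step stands in for a model call; a human checkpoint flag closes the loop. The names (sync_to_crm, summarize_with_llm) are invented for this sketch and do not correspond to any real API.

```python
# A hedged sketch of mixing deterministic and agentic steps in one workflow.
# All names are hypothetical; the "LLM" call is a stub.

def sync_to_crm(record: dict) -> dict:
    """Deterministic: same input, same output, every time."""
    record["synced"] = True
    return record

def summarize_with_llm(text: str) -> str:
    """Agentic stand-in: a model call that interprets free text."""
    return text[:60] + "..." if len(text) > 60 else text

def run_workflow(record: dict) -> dict:
    record = sync_to_crm(record)                              # rule-based step
    record["summary"] = summarize_with_llm(record["notes"])   # adaptive step
    record["needs_human_review"] = True  # agentic output gets a checkpoint
    return record

result = run_workflow({"id": 42, "notes": "Customer asked about renewal terms."})
print(result["synced"], result["needs_human_review"])
```

The design choice worth noticing is that the human checkpoint is attached to the agentic step, not the deterministic one. Routing a record does not need review; an interpretation does.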
That is also why Zapier’s product direction around AI orchestration is logical. Instead of pitching agents as mystical digital coworkers who can handle everything, Zapier positions them inside broader systems that include apps, approvals, data sources, triggers, and human checkpoints. That is much closer to how work actually happens inside companies. Nobody runs a sales team on vibes alone. Nobody should run one on pure agent autonomy either.
There is also a cultural reason the strategy fits. Zapier’s long-standing “build the robot” mindset made it more likely that employees would see AI as leverage rather than as an alien invasion wearing a productivity badge. Foster has been candid that cultural bias toward automation helped the company move faster than organizations where AI adoption feels like a threat rather than a skill. That does not mean fear disappeared overnight. It means the company had a better starting point for channeling that fear into experimentation.
From Memo to Muscle: How Zapier Turned AI Into a Habit
One of the most revealing parts of Zapier’s AI journey is that it did not start with a perfect master plan. It started with urgency. When stronger models made it obvious that AI was moving from curiosity to operational reality, Foster issued what Zapier internally called a “code red.” That message did not pretend the company had every answer. It made the case that standing still was the bigger risk.
But the real turning point was not the memo. It was the hackathon. Zapier paused normal work and got the whole company building with AI. Not just engineers. Marketers, operators, support staff, recruiters, and everyone else. Foster has described this as the moment that pushed weekly AI usage from under 10% to over 50% in one week. Later, the company said AI use became embedded across the organization, eventually reaching 97% of the team in day-to-day work.
That progression explains a lot about successful AI rollouts. People do not become AI-fluent because leadership posts a strong opinion in Slack. They become AI-fluent because they touch the tools, test workflows, break things, compare results, and start seeing where the technology helps or hurts. In short, adoption sticks when it becomes behavior, not branding.
Zapier also paired experimentation with guardrails. Legal and security teams worked on guidelines. Procurement helped reduce friction in accessing approved tools. Internal champions surfaced wins. Show-and-tells, repeated communication, and structured enablement turned one-time excitement into shared practice. That combination matters. Without experimentation, teams stay skeptical and abstract. Without structure, teams get excited and create a glorious mess that nobody wants to own later.
The Four Mistakes That Kill AI Rollouts
1. Believing the Agent Hype
Foster’s first warning is simple: do not confuse online noise with operational reality. Social feeds make it seem like every company has agents running every process already. They do not. Many organizations are still in pilot mode, and plenty of projects never make it past proof of concept. When leaders buy the hype, they start chasing theatrics instead of business value.
The smarter move is to ask a boring but powerful question: what specific workflow gets better here? If the answer is fuzzy, the rollout is probably theater. AI should solve a real problem, not decorate a strategy slide.
2. Skipping Function-by-Function AI Fluency
“We are an AI-first company” sounds bold, but it is useless if nobody knows what that means for sales, marketing, support, engineering, HR, or finance. Foster argues that AI fluency has to be defined by function. The use cases, expectations, and skills for a recruiter should not look identical to those for a product manager or a support lead.
This is where many rollouts go sideways. Leadership declares a new era, employees nod politely, and then everybody improvises in different directions. One team overuses AI. Another ignores it. A third creates secret workflows in the shadows. Soon the company has policy ambiguity, uneven quality, and a lot of confident nonsense.
3. Choosing Full Automation Too Early
This is the mistake the 90% rule is designed to prevent. Leaders get excited by speed, see a decent early result, and decide the human should disappear from the process. That is when the expensive errors show up. AI-generated recommendations can sound polished while missing context, confusing priorities, or making the kind of wrong call that annoys customers and embarrasses teams.
Human-in-the-loop design is not a temporary crutch. In many workflows, it is the whole trust architecture. Especially in customer-facing work, final approval, exception handling, and judgment calls still matter. Removing that too early is like taking the training wheels off while the bicycle is still on fire.
4. Ignoring the Cultural Foundation
Technology does not roll out into a vacuum. It rolls out into a culture. If a company punishes experimentation, hides tools behind endless procurement delays, or quietly makes employees feel that automation equals self-erasure, adoption will stall. Foster’s experience suggests that hands-on leadership, fast access to tools, and recurring hackathons help lower fear and build momentum.
Culture is not fluffy in AI transformation. It is infrastructure. Teams need permission to learn, permission to share, and permission to admit that version one of a workflow may be ugly before it becomes useful.
Why This Matters Beyond Zapier
Zapier’s playbook lines up with a broader shift in enterprise AI. Major research and industry reporting increasingly point to the same conclusion: organizations get more value when AI is built into workflows with human oversight, role clarity, and governance. The winning pattern is not random automation. It is hybrid work design.
That matters because the market is moving fast while organizational maturity is not always keeping up. Leaders want productivity gains. Teams want practical help. Vendors want to sell autonomy. Somewhere in the middle, companies still have to manage trust, security, cost, and accountability. Foster’s approach cuts through the fantasy by treating AI as part tool, part teammate, and part unfinished experiment.
His emphasis on “prompts for humans” is especially smart. That phrase captures something many AI teams miss: the best output from a system is not always a final answer. Sometimes the best output is a sharper brief, a ranked list of options, a better starting point, or a more informed human decision. AI does not have to be the closer to be wildly valuable. Sometimes it just needs to be the world’s fastest setup person.
A Practical AI Rollout Playbook for Leaders
If you want to borrow from Zapier without copying it blindly, start here:

1. Identify workflows where AI can do the heavy prep work but humans still own approval.
2. Separate deterministic tasks from adaptive tasks.
3. Define AI fluency role by role instead of issuing vague company-wide slogans.
4. Create a safe space for experimentation, ideally with a real build event rather than another meeting about future opportunities.
5. Invest in governance early enough that teams can move fast without creating compliance nightmares later.
Most of all, stop asking whether AI should replace people in broad terms. That question is too dramatic to be useful. Ask instead: where can AI remove drudgery, expand capacity, improve preparation, and surface better options while keeping human judgment where it matters most? That is the question behind the 90% rule, and it is a much better question for the real world.
Experience and Lessons From the Field
One of the biggest lessons from watching companies roll out AI is that the first reaction is rarely technical. It is emotional. Some employees are excited because they can already see the shortcuts. Others are uneasy because every new AI announcement sounds like a corporate version of, “Great news, we found a more efficient way to worry you.” Foster’s approach works because it addresses that emotional layer without turning the whole thing into therapy disguised as transformation.
In practice, the best AI rollouts usually begin with a few very ordinary wins. A support team uses AI to summarize tickets before a human responds. A sales leader gets cleaner account prep before a renewal call. A recruiter uses an agent to organize interview feedback so the hiring team spends less time hunting through notes and more time discussing candidates. None of this looks like a science-fiction trailer. That is precisely why it works. The value is concrete, repeatable, and visible.
Another important experience is that adoption becomes easier when people realize AI can make previously ignored work economically viable. There are many tasks inside a company that everybody agrees would be useful, but nobody does them because they are too tedious. Weekly internal summaries. Pattern detection across customer comments. Drafting first-pass documentation. Pulling useful signals out of transcripts and support data. AI often shines here because it unlocks work that was valuable but never urgent enough to justify the human hours.
There is also a leadership lesson hiding inside all this. Executives cannot outsource curiosity. Employees notice very quickly whether leadership is actually using the tools or just speaking in polished keynote language about transformation. When leaders show up at hackathons, demo their own workflows, and openly discuss what failed, the company gets a signal that experimentation is real. When leaders only talk in slogans, people tend to respond with the ancient corporate art of smiling and waiting for the next priority shift.
The final experience is that AI maturity is less about a single launch and more about repetition. Teams need recurring chances to build, share, compare, and refine. One workshop creates excitement. A system of ongoing demos, role-based expectations, safe governance, and visible business wins creates capability. That is what makes Zapier’s story useful. It is not just a story about product strategy. It is a story about organizational muscle. And that muscle is what separates companies that merely buy AI tools from companies that actually change how work gets done.
Conclusion
Wade Foster’s most useful contribution to the AI rollout conversation is not that agents matter. Everybody says that now. It is that agents matter most when they are placed inside thoughtfully designed systems where humans still own the final mile, culture supports experimentation, and each function knows what good AI use actually looks like.
The 90% rule is a refreshingly honest framework for a messy moment. It says AI can do a lot, sometimes a startling amount, but the last 10% is often where trust, accuracy, and business value are decided. Companies that understand that will build stronger AI systems and make fewer avoidable mistakes. Companies that ignore it may end up with expensive demos, nervous teams, annoyed customers, and a very advanced way to produce confusion at scale.
If there is a deeper lesson in Zapier’s strategy, it is this: the future of AI at work is not humans versus agents. It is humans with agents, designed on purpose.
