By Aytekin Tank
Two tennis players are given the chance to train for a day with a world-class pro. The expert covers service grips, how to judge an opponent’s topspin, and when to stay at the baseline versus serve and volley. It quickly becomes clear there’s a problem. One student is an experienced tournament player. She absorbs the lessons and puts them into practice. The other is a complete novice. She finds the instruction confusing—and it ends up making her already shaky strokes even worse.
The takeaway: the value of performance-enhancing tools depends largely on the experience of the person using them.
Researchers are finding the same pattern when it comes to AI. For entrepreneurs with solid business expertise, AI improves performance. For those with less experience and judgment, it can actually make outcomes worse. At the end of the day, human judgment is still critical.
In today’s increasingly AI-powered business landscape, whether to use the latest tools isn’t really a choice—if you don’t, your competitors will. The real question is how leaders can ensure employees at every level get the most from AI.
Teach How To Use AI Analytically
Researchers looked at how a generative AI assistant helped small business entrepreneurs in Kenya. For those who were already doing well, the study found, the AI tool boosted profits and revenues by roughly 10–15%. For low performers, on the other hand, it reduced results by about 8%.
The researchers noted a difference in the type of advice that users accepted from the generative AI tool. In short, low performers took worse advice—generic recommendations like lowering prices.
The lesson for business leaders is pretty clear: organizations must provide training and instructions on how to work with AI’s output.
For starters, it’s common knowledge that generative AI tools like Gemini and ChatGPT tend to hallucinate—confidently make up answers rather than admit they’re unsure. Beyond clear-cut hallucinations, you can’t always tell the quality of a response. That’s why it’s important to start with a mindset of evaluation, not assumption.
For example, at Jotform, I encourage employees to ask questions before accepting an AI tool’s answer. Questions like: What assumptions are being made? Is any context missing? Is this advice tailored to our specific [business/product/pain point]?
Generative AI can be a powerful brainstorming, writing, and research partner, but never accept an AI result at face value.
Define AI Points In Workflows
The standard leadership advice—provide employees with training—sounds like an obvious way to level the AI playing field. But speaking from experience at my own company, employees already work hard. They’re deeply committed to the mission. They also have rich personal lives, and that’s a good thing. Rolling out training programs that require after-hours learning or cut into personal time can be a tough sell.
One alternative is to integrate AI directly into existing workflows, so employees build proficiency and confidence on the job. But as teams decide where to incorporate AI, leaders must be explicit about how it fits within each workflow and where human judgment remains essential. This helps establish ground rules for use, such as consulting AI for first drafts or working analyses while leaving final revisions and sign-offs to people. AI can offer guidance, but employees ultimately own the decisions.
AI can take over the tedious parts of a process, but humans should stay in the loop at the consequential moments. That’s how employees continue to hone their judgment and build business acumen.
Reward Great Ideas, Not Quantitative Output
The buzzword that’s sending chills down the spines of today’s leaders is “workslop.” Harvard Business Review defines it as “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.” It’s the rapid-fire list of ideas that ignores key considerations. It’s the first draft that falls completely flat, requiring a return to the drawing board.
Research confirms the cost of workslop: it can add nearly two hours of extra work and hurt productivity, collaboration, and trust. The onus is on leaders to set clear expectations for effective AI use—and to proactively discourage low-quality output.
Here’s the refrain I repeat often at my company: quantity matters little. Substantive quality is everything.
The sheer number of ideas generated or tasks completed is not a measure of success. What matters is output that moves the needle, such as proposing workable solutions. Even when an idea doesn’t ultimately fly, it still has value if it shows real ingenuity and clear thinking.
Leaders can reinforce this by rewarding great ideas and encouraging transparency around AI use. For example, even if an employee starts with an AI-generated suggestion, I want to understand the original idea, how they evaluated it, and how they revised it.
This drives an important shift, away from rewarding those who use AI the fastest and toward those who use it most thoughtfully. As employees build better judgment about when and how to rely on AI, organizations can cut back on workslop and fully harness the technology’s potential. Ideally, they can also even out AI’s uneven effect on performance, so that everyone gets the most from it.