AI is everywhere at work—but quality isn’t. A Snowflake leader and the co-founder of Unframe AI share how companies can prevent workslop from costing their business millions in lost productivity and trust.
The rise of AI has pushed organizations into a race for speed and efficiency. Every company wants its teams delivering quick results and driving the business forward.
Ever since ChatGPT arrived on the scene, AI use at work has doubled. Yet, as with any powerful tool, speed isn’t always the same as progress. AI can research, write, summarize, and automate at lightning pace, but it still needs human judgment to turn output into real impact.
In many workplaces, that human oversight is missing. The result is AI-generated content that looks polished at a glance but lacks substance, context, and quality, a pattern often described as “AI workslop.”
This growing phenomenon is hurting businesses and is one of the reasons as many as 95% of organizations are seeing no measurable return on their AI investments. The resulting “slop tax,” where AI creates more work than it saves, is quietly eating into productivity across industries.
Understanding AI workslop
In a recent survey of more than 1,100 U.S. employees, the Harvard Business Review found that 40% of respondents had received low-quality AI-generated work from colleagues in the past month. On average, they said, 15% of all content they received at work qualified as “workslop.”
The problem shows up across projects, from presentations to code. But according to Larissa Schneider, COO and co-founder of Unframe AI, which builds tailored AI solutions for enterprises, it’s most visible when teams try to summarize or extract insights from large, unstructured datasets like contracts, reports, and technical documentation.
“The problem gets even worse when teams try to analyze multiple long-form documents or generate analytics from aggregated data… In enterprise settings, this leads to misleading summaries, incomplete insights, and ultimately a lot of rework. Experts have to go back and verify everything or redo the analysis manually,” she told Future Nexus.
This rework comes at a real cost.
First, there’s the productivity tax. People who would otherwise be focusing on strategic or creative work spend hours fixing AI-generated content, reversing AI’s promise of efficiency. HBR’s respondents estimated that each instance of workslop takes nearly two hours to rework, translating to around $9 million in lost productivity per year for a 10,000-person company.
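To see how those hours compound into millions, here is a minimal back-of-envelope sketch in Python. The 40% share and the roughly two hours of rework come from the HBR survey; the per-employee incidence rate and loaded hourly cost are illustrative assumptions chosen only to show the arithmetic, not figures from the study.

```python
# Back-of-envelope sketch of the "productivity tax" arithmetic.
# Incidence rate and hourly cost below are illustrative assumptions,
# not numbers reported by the HBR survey itself.

EMPLOYEES = 10_000
AFFECTED_SHARE = 0.40        # 40% reported receiving workslop in the past month (HBR)
HOURS_PER_INSTANCE = 2       # ~2 hours of rework per instance (HBR)
INSTANCES_PER_MONTH = 1.9    # assumed instances per affected employee per month
HOURLY_COST = 50             # assumed fully loaded cost per employee-hour, USD

monthly_instances = EMPLOYEES * AFFECTED_SHARE * INSTANCES_PER_MONTH
annual_cost = monthly_instances * 12 * HOURS_PER_INSTANCE * HOURLY_COST
print(f"Estimated annual productivity tax: ${annual_cost:,.0f}")
# -> roughly $9.1M, in the ballpark of HBR's estimate
```

Even small shifts in the incidence rate move the total by millions per year, which is why the slop tax scales so quickly with headcount.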
“When people have to double-check AI outputs constantly, productivity drops instead of rising,” said Jeff Hollan, Head of Cortex AI Agents at data major Snowflake. “It also creates cognitive drag as teams spend time parsing whether an AI-generated summary or code suggestion is accurate rather than moving work forward.”
Beyond lost time and resources, workslop damages trust, both in colleagues and in the technology itself. Teams grow skeptical of AI-generated work and wary of peers who rely on it too heavily, viewing them as less reliable and less creative.
That lack of confidence erodes collaboration and makes employees resistant to integrating AI into their workflows.
The root cause of the problem
Before jumping to fixes, it’s worth understanding the root cause of workslop in business environments.
At its core, the problem is not that employees are using AI too much; it’s that they are adopting it unthinkingly. More often than not, they’re using it under pressure to appear fast and productive, rather than to collaborate with it and genuinely improve outcomes.
“A lot of this comes down to pressure to be fast rather than thoughtful,” Schneider pointed out. “People feel like they need to show they’re using AI or prove they’re being ‘productive,’ even when the output isn’t actually helping them get better results.”
Compounding the problem is a lack of guidance and fit-for-purpose tools. In many organizations, employees are simply told to “use AI” to get a project done, with little training, access to the right tools, or clarity on how to do it well.
“Companies are trying everything right now — top-down mandates, bottom-up experiments — and honestly, that curiosity is a healthy sign. But reality is starting to set in… The rush to ‘do something with AI’ often happens before there’s a clear understanding of what actually works in a complex enterprise environment,” she added.
Many teams also rely on generic tools or consumer-grade LLMs that can’t handle enterprise-grade data or domain-specific needs. The output looks polished on the surface but misses key details and loses important context.
“The reality is that high-quality results just aren’t possible from generic AI tools that are built to handle everything. Enterprise data, systems, and workflows are too specific for that approach. In practice, one-size-fits-all really means one-size-fits-none. That’s exactly why tailored, context-aware AI solutions matter so much,” she said.
Setting the tone right
To get AI right and turn it into a true value driver for the business instead of a slop generator, the change must start at the top. Leaders must model what responsible AI use looks like, and define when and how AI should assist rather than replace human work.
“Leaders can set the tone by encouraging AI adoption as a trusted assistant to humans, not a replacement. At Snowflake, we encourage teams to use AI to augment judgment, not automate it. Whether it’s helping analysts generate hypotheses faster or assisting engineers in writing cleaner code, AI becomes most powerful when it expands human thinking rather than shortcuts it,” Hollan said.
Once that mindset is clear, the next step is defining the right use cases and selecting tools, backed by trusted data, transparent operations, and clear validation processes.
“Start with quick wins,” Schneider advised. “Select one or two workflows where AI can deliver visible, measurable value within weeks. Early success builds confidence and gets buy-in from the rest of the organization.”
She emphasized the importance of defining clear KPIs tied to actual business outcomes, such as accuracy, turnaround time, or employee efficiency, rather than vanity metrics like “AI usage.”
“Focus on a few specific use cases you can closely monitor through testing and rollout, rather than trying to automate everything all at once,” she said, adding that leaders should prioritize giving their teams the resources they need to solve the problem, be it tools, data, or guided experimentation.
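Neither executive prescribed a specific format, but as a sketch of what outcome-tied KPI tracking for a single piloted workflow might look like, consider the following; the metric names, example values, and thresholds are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class WorkflowKPIs:
    """Outcome-tied KPIs for one piloted AI workflow.

    Field names are hypothetical; the point is that each metric maps
    to a business outcome rather than to raw "AI usage" counts.
    """
    workflow: str
    accuracy_rate: float         # share of AI outputs accepted without rework
    avg_turnaround_hours: float  # request-to-approved-output time
    rework_hours_saved: float    # expert hours saved vs. the manual baseline

    def meets_targets(self, min_accuracy: float = 0.9,
                      max_turnaround: float = 24.0) -> bool:
        # A pilot "wins" only when outcome metrics clear their bars,
        # no matter how often the tool was invoked.
        return (self.accuracy_rate >= min_accuracy
                and self.avg_turnaround_hours <= max_turnaround)

contract_summary = WorkflowKPIs("contract-summarization", 0.93, 6.5, 120.0)
print(contract_summary.meets_targets())  # True
```

Framing the pilot this way keeps the KPI conversation anchored to the visible, measurable value Schneider describes, rather than to adoption numbers.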
Finally, Schneider and Hollan both urged organizations to treat AI hygiene as seriously as data hygiene, ensuring every output is traceable, explainable, and verifiable.
“Employees should be able to see where the information comes from — clear source references, links to data locations, and accuracy scores. Explainability should be built right in so teams can quickly judge whether an AI response is trustworthy before acting on it,” Schneider emphasized.
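Schneider didn’t share a concrete schema, but a minimal sketch of what such a traceable response might look like follows; the field names, the example URL, and the `is_reviewable` check are hypothetical illustrations of the source-reference and accuracy-score pattern she describes.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class SourceReference:
    document: str  # e.g. the contract or report the answer draws on
    location: str  # link or pointer to where the passage lives

@dataclass
class TraceableAnswer:
    """Hypothetical shape for an explainable AI response.

    Every answer carries its evidence and an accuracy score, so a
    reviewer can judge trustworthiness before acting on it.
    """
    answer: str
    sources: list[SourceReference] = field(default_factory=list)
    accuracy_score: float = 0.0  # e.g. a retrieval-grounding or eval score

    def is_reviewable(self) -> bool:
        # An answer with no sources is workslop by default:
        # there is nothing for a human to verify.
        return bool(self.sources) and self.accuracy_score > 0.0

# Hypothetical usage: the link is a placeholder, not a real system.
result = TraceableAnswer(
    answer="Clause 14.2 caps liability at 12 months of fees.",
    sources=[SourceReference("MSA_v3.pdf",
                             "https://intranet.example.com/contracts/msa-v3")],
    accuracy_score=0.91,
)
print(result.is_reviewable())  # True
```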
Hollan added that this kind of monitoring and transparency, from accuracy scoring to prompt documentation, creates a feedback loop that catches errors early and turns AI into a continuously improving system — not a siloed tool — that drives business growth.

