Fireworks AI CEO Lin Qiao builds the case for autonomous intelligence, a system where a model evolves based on the product’s use, powered by open-source.
An IBM survey found that just 1 in 4 AI initiatives delivered expected ROI over the past few years, while MIT’s State of AI in Business report found a whopping 95% of companies saw zero return on $30-40 billion in AI investment.
These headlines have worried CIOs and CTOs in charge of enterprise AI projects, but they’re diagnosing the wrong problem. We’ve been sold AI chatbots as general-purpose when they’re not. AI is incredibly capable, but we need to be honest about the level of customization these systems need to succeed.
Walk into most enterprises today, and you’ll find the same approach: teams plug into APIs from OpenAI, Anthropic, or Google, send their queries through someone else’s infrastructure, and pay based on usage. It’s the path of least resistance, but this convenience comes with hidden costs that compound as usage scales. Some companies are literally scaling their AI products into bankruptcy.
Consider what happens when your customer service team builds an AI assistant using a closed API. Their prompts get optimized for that specific model’s quirks and capabilities, workflows depend on proprietary features, and your data accumulates in systems you don’t control.
When your vendor updates their model, you can’t influence how those differences impact your specific use case or evaluate what’s changed. Your carefully tuned prompts might produce inconsistent results, your eval benchmarks break, and your performance metrics shift in ways you can’t predict or debug. Prompts can be rewritten and workflows rebuilt, but the true cost has deeper roots: you’ve built a production system on infrastructure that simply isn’t yours.
There’s also the huge problem of data ownership. Every query you send through a closed API hands your competitive intelligence to a third party: user interactions that reveal how your customers think, domain-specific workflows that represent years of operational refinement, and behavioral patterns that distinguish your service from competitors. This is core business logic that model developers can’t otherwise scrape off the web, and when it flows through someone else’s infrastructure, you’re outsourcing your competitive moat. Some vendors promise not to train on your data, but even if they keep that promise, you’re still siphoning the insights that make your AI product uniquely valuable into a black box.
Foundation labs train on publicly available data because that’s what they have access to, which also means their models can’t learn the specific patterns that matter to your business. A healthcare AI that doesn’t learn from actual patient interactions will miss critical diagnostic nuances. A legal research tool that can’t adapt to a firm’s attorney caseloads will never hold a competitive advantage. Generic models trained on generic data will never be able to solve domain-specific problems, no matter how large they get.
This is why companies that are seeing real returns have shifted from renting to owning their AI. They build what I call Autonomous Intelligence: systems where the model and product evolve together. When users interact with the application (correcting outputs, ignoring suggestions, discovering new approaches), that feedback loops back into the model through continuous evaluation and reinforcement learning. The model becomes sharper, which generates better training data, which further improves the model. Autonomous Intelligence learns from your core business logic: the data patterns that no one can scrape from the web.
The technology to build Autonomous Intelligence already exists. Open-source models like Kimi K2 Thinking, MiniMax-M2, and DeepSeek V3.2 now match closed models in capability, and advanced customization techniques like reinforcement fine-tuning are accessible to developers outside of frontier labs.
So, why aren’t more companies building Autonomous Intelligence? In short, it sounds more complex than it actually is. CTOs and CIOs are thinking about AI the way they thought about websites in 1998 or cloud computing in 2008: something specialists handle while the core business focuses on other priorities. Companies that delayed moving to the cloud fell behind cloud-native competitors who could iterate faster, scale more efficiently, and experiment more freely. The same dynamic is playing out today, except the pace is much faster and the stakes much higher.

The disparity between companies succeeding with open models and those stagnating with closed providers emerged as early as last December, when an IBM study found that 51% of surveyed companies using open-source AI tools reported positive ROI, compared with roughly 4 in 10 of those that did not use them. Perhaps the companies worried about AI ROI should ask themselves a different question: when your AI finally delivers the returns everyone is predicting, will you own the system generating those returns, or will you be renting it from someone else?

