Box CEO Aaron Levie on the Strategic Value of Token Waste in AI Development
In the fast-moving world of artificial intelligence, efficiency is often the goal, but Box CEO Aaron Levie is offering a counterintuitive take. During a recent episode of the “a16z Show,” Levie argued that companies shouldn’t be overly precious about their AI spending. “For me right now, I’m like, ‘Yeah, we should probably waste a lot of tokens because that means that we’re trying new things,'” Levie stated. He suggests that if your team isn’t burning through resources, they probably aren’t pushing the boundaries of what is possible with agentic AI models.
Levie’s perspective aligns with a growing sentiment across Silicon Valley regarding token usage. Nvidia CEO Jensen Huang has expressed a similar stance, noting he would be “deeply alarmed” if a highly paid engineer wasn’t maximizing their access to computational power. To foster this culture of exploration, some industry giants like Meta and OpenAI have even introduced leaderboards to celebrate “Tokenmaxxing” among their developers. It’s a shift in perspective, moving away from rigid austerity and toward rapid, experimental deployment.
But the financial implications are impossible to ignore.
Tokens are the basic units of text that large language models consume and produce, and since providers bill for every token in and out, the costs can add up quickly. Engineers are currently forced to balance the complexity of AI agents—which run longer and more intricate prompts—against the reality of finite budgets. “You have to decide, do you want that to be a long-running prompt? Do you have to be a long-running agent?” Levie asked. “Do you want to parallelize that? What is your comfort level of wasted tokens?” He believes these tensions will persist until data center capacity significantly scales up.
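The trade-off Levie describes can be made concrete with a little arithmetic. The sketch below uses hypothetical per-token prices (real rates vary by provider and model) to compare one long-running agent against several shorter parallel runs; the function name and all figures are illustrative, not tied to any particular vendor.

```python
# Hypothetical prices, assumed for illustration only.
INPUT_PRICE_PER_1K = 0.003   # $ per 1,000 input tokens
OUTPUT_PRICE_PER_1K = 0.015  # $ per 1,000 output tokens

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single model call."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# One long-running agent vs. eight shorter parallel runs.
long_run = run_cost(input_tokens=200_000, output_tokens=50_000)
parallel = 8 * run_cost(input_tokens=30_000, output_tokens=8_000)

print(f"long-running agent: ${long_run:.2f}")   # $1.35 at these rates
print(f"8 parallel agents:  ${parallel:.2f}")   # $1.68 at these rates
```

At these made-up rates the parallel strategy costs more in absolute tokens, which is exactly the “comfort level of wasted tokens” question: the extra spend buys speed and more simultaneous experiments.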
Beyond the raw cost, there is a logistical headache brewing for corporate leadership. Levie noted that CFOs and CIOs are “running around with their hair on fire” trying to determine if existing IT infrastructure can handle the chaotic nature of autonomous agents. It’s not just about performance issues; it’s about governance. When agents are hitting systems thousands of times an hour, managing data integrity—like preventing agents from accidentally moving or deleting files simultaneously—becomes a massive challenge. It’s a level of operational complexity that most companies are only beginning to grapple with as they adopt these new technologies at scale.
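To illustrate the data-integrity concern, here is a minimal sketch of serializing file operations so two agents cannot delete the same file at once. It uses an in-process per-path lock; a real multi-agent deployment would need a distributed lock or the storage system’s own concurrency controls. All names here are illustrative assumptions, not Box’s implementation.

```python
import threading
from collections import defaultdict

# One lock per file path; defaultdict creates locks lazily.
_path_locks: defaultdict = defaultdict(threading.Lock)

def guarded_delete(store: dict, path: str) -> bool:
    """Delete `path` from `store`, returning False if it is already gone."""
    with _path_locks[path]:
        if path in store:
            del store[path]
            return True
        return False  # another agent got there first

store = {"/reports/q3.pdf": b"..."}
first = guarded_delete(store, "/reports/q3.pdf")
second = guarded_delete(store, "/reports/q3.pdf")
print(first, second)  # True False
```

The point of the sketch is the check-then-act step: without the lock, two agents could both see the file, and one of them would fail mid-operation or corrupt shared state.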