Inside the AI Executive Assistant Future

For Illia Polosukhin, the future of work isn’t just about faster software—it’s about having a digital team on standby. On any given day, Polosukhin uses a dozen distinct AI agents to handle his workload. One of his favorites acts as a dedicated coach, parsing his Slack messages, Google Drive documents, and meeting transcripts to keep his decision-making sharp. “So it effectively summarizes all of the meeting notes… and provides me with a coaching and executive summary of what happened, what I’m missing, and where decisions are stuck,” he told USA News Hub. He treats these digital tools like a billionaire’s chief of staff, guiding their performance with the prompt, “You’re a billionaire’s chief of staff.”
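The pattern he describes can be sketched in a few lines. This is an illustrative mock-up, not Polosukhin's actual setup: the data sources are hard-coded placeholders, and the assembled request would be sent to any chat-completion-style model.

```python
# Illustrative sketch of the "chief of staff" briefing agent described above.
# All source data here is placeholder text; a real agent would pull it from
# the Slack, Google Drive, and transcription APIs.

SYSTEM_PROMPT = "You're a billionaire's chief of staff."

def build_briefing_request(slack_messages, drive_docs, meeting_transcripts):
    """Assemble the day's sources into a single executive-summary prompt."""
    sections = [
        ("Slack messages", slack_messages),
        ("Google Drive documents", drive_docs),
        ("Meeting transcripts", meeting_transcripts),
    ]
    body = []
    for title, items in sections:
        body.append(f"## {title}")
        body.extend(f"- {item}" for item in items)
    instructions = (
        "Summarize what happened, what I'm missing, "
        "and where decisions are stuck."
    )
    return {
        "system": SYSTEM_PROMPT,
        "user": instructions + "\n\n" + "\n".join(body),
    }

request = build_briefing_request(
    slack_messages=["@eng: launch blocked on security review"],
    drive_docs=["Q3 roadmap draft"],
    meeting_transcripts=["Standup: hiring decision deferred again"],
)
# `request` holds the system and user messages for a chat-style model call.
```

The key design choice is the fixed persona in the system prompt: the same gathered material reads very differently when the model is told to act as a chief of staff rather than a neutral summarizer.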

This isn’t just a gimmick. It is a fundamental shift toward an AI agent future.

Polosukhin, a co-author of the seminal 2017 paper “Attention Is All You Need,” has a long history of predicting these trends. Back when he founded NEAR AI, the idea that humans would simply talk to computers to generate software sounded “pretty ridiculous.” Today, that practice is colloquially known as “vibe coding.” However, with these capabilities comes significant risk. As companies like Anthropic grapple with models capable of exploiting vulnerabilities, Polosukhin warns that we are entering a “cat and mouse” game where systems constantly break what the previous iteration patched. He argues that our current global society—our governments, internet protocols, and institutions—remains wholly unprepared for the eventual reality of artificial general intelligence.

We need a decentralized approach to trust.

Polosukhin’s current focus at NEAR is building a backend security layer that prevents users from being forced to rely on a single, centralized gatekeeper. The prospect of one company controlling an AI agent that holds your login credentials, travel plans, and financial data is, in his eyes, a dangerous proposition. By pushing for open-source, auditable platforms, he hopes to pull the curtain back on the “black box” nature of modern AI. He points to past incidents, such as when xAI’s Grok provided problematic responses, as evidence that these systems can be manipulated. To him, transparency isn’t just a technical requirement; it’s a social necessity for anyone who intends to let an AI agent handle the complexities of daily life.

Still, the technology is far from autonomous. While his agents can successfully aggregate geopolitical news or assist with software development, they often stumble when left entirely to their own devices. “If I just let it go and run and do things, I come back to something that makes no sense,” Polosukhin admitted. Despite the industry hype, human judgment remains the final, essential filter. Until that changes, the vision of a completely hands-off AI agent future remains tethered to the watchful eye of a human operator, supervising these tools to ensure they actually work as intended.