Reclaiming AI: A Democratic Agenda for Shared Prosperity

Who should write the rules for artificial intelligence: a handful of ultra-wealthy founders or the American people? Rep. Ro Khanna posed that question repeatedly this year as he argued for an AI democracy that centers jobs, fairness, and the public interest over concentrated profit. He warned that without new policies, AI could deepen inequality and hand more power to a tiny elite.
Khanna, a Democratic congressman who represents Silicon Valley, drew on local realities to make his case. He noted that firms headquartered in his district account for more than $18 trillion in market capitalization, more than one-quarter of the entire US stock market, and that five of those companies are each worth more than $1 trillion. Those concentrations matter, he said, when deciding who benefits from technological change, and he urged a one-time 5 percent wealth tax on California billionaires, with exclusions for voting shares and illiquid gains. He also described federal legislation he has proposed to raise $4.7 trillion by taxing billionaires and another $2 trillion through corporate measures, and he challenged colleagues to support those plans.
The pace of change, Khanna stressed, has unsettled even AI pioneers. Geoffrey Hinton, the Nobel laureate in physics known as the "godfather of AI," resigned from Google and warned that AI could flood public discourse with misinformation and pose an existential risk, while Stuart Russell now worries that development is "intrinsically unsafe." Khanna pointed to a separate example of industry restraint: after the Department of Defense requested access to Anthropic's Claude for domestic surveillance and autonomous warfare, CEO Dario Amodei said he would not allow the technology to be used for those purposes. Yet Khanna asked what happens when other companies accept defense contracts and when AI tools are used in conflicts such as the one unfolding in Gaza.
Khanna pressed for immediate and coordinated action from lawmakers, unions, faith groups and community organizers to design rules that protect people before technology reshapes work and social life.
Democrats must now offer a clear policy alternative to Donald Trump's posture, Khanna argued, one that connects with independents and responsible Republicans. This is a pivotal contest over whether AI rules will be set by the public interest or by private balance sheets.
He laid out seven practical principles for governing AI that aim to keep humans central. First, people should remain "in the loop," with protections against mass displacement; he cited the 3.6 million truck drivers who could lose work as autonomous vehicles spread. Second, large employers must bargain with workers so that displaced staff gain access to new roles and share in productivity gains. Third, the tax code must stop favoring capital over labor; Khanna cited research showing that companies often pay 5 percent or less in taxes on digital tools versus as much as 30 percent when hiring humans, and he proposed fixes to that imbalance along with an annual data dividend so individuals receive compensation for the data they generate.

Fourth, he proposed a Future Workforce Administration, modeled on bold historical programs and funded by a modest wealth tax on the newly created trillions plus a token tax on business AI that displaces labor. The initiative would fund moon-shot public projects, expand clean energy and biotech, mobilize service programs for towns and schools, and create 1,000 trade schools and tech institutes to prepare workers for roles AI cannot replace. Implemented at scale, such a workforce effort could blunt political resistance by tying AI gains directly to tangible community investments.

Fifth, Khanna called for data centers to benefit their host communities through local jobs, school resources, and the use of renewable energy and dry cooling to protect water supplies. Sixth, he urged action across party lines to stop engagement-driven algorithms from amplifying hateful content, including revising liability protections to allow stronger platform oversight. Seventh, he pressed for enforceable federal guardrails and mandatory third-party verification of advanced AI models rather than voluntary industry self-regulation.
Khanna warned that current trends concentrate economic power. He cited economist Gabriel Zucman's finding that 19 billionaires have amassed $3.3 trillion, equivalent to roughly 10 percent of annual US economic output, and he described Silicon Valley's unique role, where a 15-mile radius around Stanford hosts Apple, Google, Nvidia, Broadcom and Meta. Polling, he noted, reflects public anxiety: a January poll found that more than half of Americans surveyed called the gap between rich and poor "a very big problem," while only 6 percent said it was not a concern, and an April 2025 survey found that, by a nearly two-to-one margin, people expect AI to harm rather than benefit them. Those figures, he argued, underscore why an AI democracy must be built now if trust in institutions is to be restored.
Khanna balanced critique with possibility. He emphasized that the AI revolution could help cure diseases, reduce housing and medical costs, and spur new businesses and factories—if governed to spread benefits. “There will be no surrender to the tech lords. None,” he said, insisting that America reclaim the future. “What there will be is a claiming of AI, and the future, for the American people.” The push for an AI democracy, he concluded, is both a policy agenda and a political campaign to ensure technology serves broad prosperity rather than elite enrichment.