Alex Bores: Why Palantir and OpenAI Fear His Oversight


This article reviews a recent interview with Alex Bores, a former Palantir employee who became a New York State Assembly member. It explores his tech background, his stance on AI regulation, and the political dynamics shaping how democracies approach powerful technologies.

The piece traces his shift from the private sector to the policy arena. It highlights the tension between innovation, civil liberties, and worker protections, as well as the influence of tech donors on regulatory debates.

From activism to AI policy

Alex Bores’ story weaves together labor roots, computer science training, and hands-on work building data tools for government services. He grew up around labor activism and studied industrial and labor relations at Cornell. He also majored in computer science and learned campaign organizing through student activism.

In 2014, he joined Palantir because he thought its platform could help government deliver essential services while safeguarding civil liberties. His early focus was on federal civilian projects—DOJ, CDC, and VA—where he argued that data integration and disciplined data governance created real value, even before the hype around AI.

Bores says the real work was organizing data—linking disparate datasets, ensuring accuracy, and implementing guardrails to prevent misuse. He insists the strongest case for these tools was pragmatic: better services for citizens, not abstract techno-libertarian dreams.

This distinction set the stage for later debates about how AI should be regulated when the technology intersects with immigration enforcement, civil liberties, and public accountability.

Balancing data work with civil liberties

As the Trump administration shifted priorities toward immigration enforcement, Bores recalls growing concerns about Palantir software being used for deportations. He describes internal debates over contract guardrails and says executives refused to prevent ICE’s Enforcement and Removal Operations (ERO) from accessing the platform.

He sees that decision as a pivotal moment in his choice to leave Palantir in 2019. The tension he highlights isn’t just about technology for its own sake—it’s about whether public-data tools should support coercive policy without enough safeguards.

From private sector to policy making

After leaving Palantir, Bores entered politics and helped write the RAISE Act in New York, one of the early state-level AI regulatory efforts. The goal was to set clear standards for transparency, accountability, and oversight as artificial intelligence began to change state services.

This work put him at the crossroads of technology policy and democracy, where the stakes include not only innovation but also democratic legitimacy and worker protections.

His advocacy soon faced partisan and industry headwinds. A super PAC called Leading the Future, funded by tech donors like Palantir co-founder Joe Lonsdale, launched attack ads targeting his stance on regulation.

The episode points to a broader dynamic: powerful industry contributors sometimes try to shield policy from scrutiny, while legislators and advocates push for democratic processes to shape AI governance.

Influence of tech donors and the democratic process

Klein, the interviewer, points out how donors can shape both policy and public perception. The key tension is between industry protectionism and the democratic process that lets diverse voices shape AI regulation.

Bores frames his work as an attempt to balance the enormous possibilities of AI with its risks. He keeps coming back to oversight, transparency, and worker protections as non-negotiables in any responsible regulatory regime.

Implications for AI governance today

So, what does Bores’ journey actually mean for today’s AI regulation debates? A few themes pop up that policymakers and industry folks really shouldn’t ignore:

  • Prioritize data governance and data interoperability to unlock value, but don’t forget about privacy along the way.
  • Embed technical and contractual guardrails to stop mission creep, especially in tricky areas like immigration or public safety.
  • Keep strong democratic oversight in place, and make sure public accountability mechanisms can actually balance out private influence.
  • Protect workers’ rights. People deserve clear info about how AI tools affect jobs and civil liberties—no hiding behind buzzwords.

As AI keeps weaving itself into public services, the need for thoughtful regulation just gets louder. Bores’ story doesn’t ask us to shut the door on innovation, but to shape it—democratically, with practical safeguards, and with a real commitment to civil liberties and workers’ dignity.

Here is the source article for this story: Opinion | Why Are Palantir and OpenAI Scared of Alex Bores?
