LDS Church, Big Tech, and AI: Utah’s Unexpected Political Power

This article explores how Utah is shaping AI policy with a “pro-human” focus, blending innovation with ethical and religious perspectives. LDS Church leaders, state lawmakers, and a fast-growing tech scene are teaming up on safety, accountability, and governance for new AI systems, all while navigating federal intervention and rapid industry adoption.

The piece puts Utah in the spotlight as a testing ground for state-level AI governance and ethical review.

A Pro-Human AI Agenda in Utah

In Utah, senior church figures like Elder Gerrit Gong and other Latter-day Saint leaders have pushed for an AI policy that advances technology without losing sight of moral and social values. Through Organized Intelligence, a faith-driven initiative, the state has hosted conferences focused on AI safety, child protections, and how to evaluate large language models ethically.

These conversations are steering policy debates toward a more balanced approach, aiming to blend innovation with responsibility. Alongside religious leadership, Utah’s policy talks highlight practical safeguards—like child protection rules and careful safety planning for advanced AI systems.

The broader goal is to keep AI behavior fair and reliable. Leaders want to see responsible deployment that protects vulnerable people and keeps public trust in technology strong.

Religious Leadership and Ethical Framing

Utah’s faith leaders argue that chasing profits shouldn’t decide how AI shapes beliefs, relationships, or daily life. The church’s involvement goes further, building a framework for moral reasoning in automated systems.

They’ve even developed a “Faith and Ethics AI Evaluation” tool to assess models’ religious literacy and moral fairness. This ethical lens is meant to balance out technical safety checks, making sure AI aligns with shared human values and community standards.

Policy Developments and State Initiatives

Utah’s legislature, with GOP State Rep. Doug Fiefia at the helm, drafted HB 286. The bill would require frontier AI companies to publish safety and child-protection plans, report incidents, and face civil penalties, and it would offer whistleblower protections.

This move signals a push for more transparency and accountability as AI advances, especially where kids and vulnerable users are involved. Governor Spencer Cox has carved out a “middle path” by promoting AI-driven economic growth but insisting on safeguards.

He’s set up an Office of AI Policy and launched a task force on “pro-human AI.” Beyond big bills, the state is passing narrower laws, like ones making it easier to sue over deepfake images, and investing in internal tools to guide AI use across public systems.

These steps show Utah’s strategy: combine targeted regulation with building up the state’s ability to manage AI responsibly.

Federal Intervention and Political Dynamics

The tug-of-war between state innovation and federal policy got real when the Trump administration stepped in to block Utah’s legislation. They argued state rules might clash with a national AI framework.

A directive against the bill and President Trump’s executive order to override state rules highlighted a bigger fight—state experimentation versus federal priorities. The administration’s AI czar, David Sacks, became a key figure here.

Some conservatives pushed back, saying the federal move undermined states’ ability to shape AI policy for local needs. The episode shows how national debates over AI regulation can shape, and sometimes limit, what states can do.

Utah’s approach—flexible, informed by stakeholders, and rooted in community values—reflects the ongoing struggle over who gets to write the rules for new technologies.

Industry Adoption and Workforce Impacts in Silicon Slopes

Utah’s booming tech sector, dubbed “Silicon Slopes,” isn’t waiting for perfect rules before jumping into AI. State agencies and big employers are already testing out AI tools, like Google Gemini, for internal tasks.

They’re even looking at chatbots to take over some call-center jobs. This quick adoption brings efficiency, but it also sparks worries about job losses and the need for retraining programs to help workers adapt.

Utah is trying to balance rolling out AI with safety and ethical guardrails. The mix of public policy, faith-based values, and industry innovation aims to keep workforce changes transparent, fair, and focused on people.

Economic Potential and Ethical Guardrails

  • Economic growth aligned with safety: Smart policy supports innovation but works to protect children and vulnerable groups.
  • Workforce resilience: Proactive retraining and fair transition plans go hand-in-hand with AI adoption in government and business.
  • Religious and ethical oversight: Tools like the Faith and Ethics AI Evaluation help make sure models respect diverse values and moral fairness.

Moral and Social Implications

Church leaders warn that if profit-driven tech governance takes over, AI could twist society’s moral compass and reshape fundamental beliefs and relationships. They push for ethics-based guidance so technology lifts up humanity, not just narrow interests.

Utah’s approach tries to weave moral thinking into policy, regulation, and how AI actually gets used on the ground. Is it perfect? Probably not. But it’s a real attempt to put people first in the age of AI.

What This Means for Policymakers, Tech Leaders, and the Public

  • Policymakers need to strike a balance between encouraging innovation and maintaining safeguards. They should push for more transparency and accountability, using ethical evaluation tools along with technical standards.
  • Tech leaders ought to talk with faith communities and civil society, not just other tech folks. It’s important to shape products that reflect shared values and actually protect people who might be at risk.
  • The public stands to gain when governments take action early, set up clear safety protocols, and make it easy for anyone to report worries about AI systems. Let’s not make it a maze: people deserve straightforward ways to speak up.

Here is the source article for this story: How The Church of Jesus Christ of Latter-day Saints Found Itself in the Battle Over Big Tech
