65% of Workers Avoid AI Over Moral, Environmental and Privacy Concerns

This post contains affiliate links, and I will be compensated if you make a purchase after clicking on my links, at no cost to you.

The article pulls together results from a CNBC–SurveyMonkey poll of 3,597 U.S. students and workers, taken in mid-April. The findings paint a complicated picture of how people feel about AI.

Plenty of folks are using AI at work and in school to get more done. Still, a big chunk avoids these tools, worried about ethics, privacy, accuracy, or even the environmental impact of massive data centers.

This post takes those findings and tries to break them down into practical takeaways for students, workers, and employers. The perspective comes from decades of experience in tech adoption and workforce development, so there’s some weight behind it.

Key lessons from the poll: hesitation coexists with adoption

Roughly two-thirds of people said they’ve avoided using AI at some point. The reasons are all over the place—moral, environmental, privacy, and practical concerns—making the whole thing a bit of a balancing act.

Students and workers don’t always see eye to eye. Their answers split along different lines, probably because they’re at such different points in their careers.

Environmental considerations stand out among students: About 36% of students pointed to environmental reasons for avoiding AI, while only 19% of workers did. That gap suggests students really notice the energy and resources AI data centers gobble up—think water, land, and electricity.

Moral and ethical concerns play a big role, too. 36% of students and 28% of workers worry about things like plagiarism, creativity taking a hit, or work losing its “human” feel.

These worries hint at bigger questions. How much should AI help us before it starts chipping away at our originality or sense of responsibility?

Privacy and accuracy form a separate cluster of deterrents: 37% of both groups flagged privacy concerns. 37% of students and 26% of workers questioned whether AI’s output is accurate or even useful.

This shows that people want AI to be trustworthy and transparent. They don’t just want answers—they want to know their data’s safe and the info is real.

Some avoid AI because it’s tough to learn (6% students, 8% workers). Others have reasons that didn’t fit the usual boxes (4% students, 5% workers).

AI and the job market: optimism, pessimism, and the demand for skills

From where I sit, the data line up with what’s happening in the job market. Optimism and worry seem to go hand in hand these days.

About two-thirds of students feel pessimistic about job prospects. AI plays a part for 56% of those who feel that way.

65% of students and 53% of workers think AI is snatching up entry-level jobs. That’s a big deal for anyone just starting out.

But here’s the twist: employers want AI skills more than ever. Entry-level roles needing those abilities have nearly doubled in a year.

People who use AI regularly say it pays off. 73% report higher productivity, and 68% say it saves them time.

Looking down the road, 55% of workers expect AI to handle some of their tasks as well as they can.

Turning hesitation into opportunity: guidance for learners and managers

If you’re a learner, the answer isn’t to ditch AI altogether. Instead, focus on building skills that keep ethics, privacy, and accuracy front and center.

Organizations can get ahead by putting clear rules in place and tracking how AI affects both productivity and creativity. It’s not just about plugging in new tech—it’s about making sure it actually helps people work smarter.

For students and workers: building market-relevant AI skills

Want to stay competitive? Pick up project-based learning that shows you know how to use AI responsibly. Learn how to manage data and double-check AI’s work.

Don’t forget those broader skills—data ethics, thinking critically about what AI spits out, and knowing when a human touch matters. If you can show real improvements in productivity or quality, that’s something employers notice.

For employers: responsible AI adoption and workforce development

Organizations should set up governance frameworks that address privacy, fairness, and environmental impact. They also need to invest in employee training. When teams combine AI adoption with clear objectives and ongoing ethics reviews, they can boost productivity without losing trust or breaking compliance.

This mix—skill building alongside responsible deployment—really helps close the gap between concern and capability that the poll uncovered.

The CNBC–SurveyMonkey findings show a world where caution and capability live side by side. Honestly, the best move for both individuals and organizations? Build strong AI literacy and use these tools with transparency and integrity.

After three decades in the field, I’ve noticed that technology thrives when people work with it thoughtfully—not just reacting out of fear.

Here is the source article for this story: 65% of workers have avoided using AI for moral, environmental, privacy or other reasons: CNBC survey
