The article revisits the famous monkey selfie case and its odd, lasting impact on copyright law. It digs into how courts treat works made without direct human authorship.
Those same questions have come roaring back in the age of generative AI. The piece compares the United States and the United Kingdom, and wonders what all this means for creators, policymakers, and the future of creative value.
Origins of the monkey selfie saga
Back in 2011, photographer David Slater set up a camera in the Indonesian jungle. A crested black macaque, just messing around, managed to snap a selfie.
Wikimedia Commons hosted the image and refused Slater’s takedown requests, arguing that a photo taken by a monkey belonged to no one. The US Copyright Office later made its position explicit: works not created by a human can’t be registered.
PETA jumped in and sued Slater on the macaque’s behalf, but the courts tossed the case. Under the Copyright Act, animals lack standing to sue.
Years later, the selfie became a touchstone for new disputes. Computer scientist Stephen Thaler tried to register an image his AI system, DABUS, had generated.
Again, the US Copyright Office refused. Lower courts agreed, and the Supreme Court wouldn’t even hear the case.
Purely AI-created works can’t be copyrighted in the US. That’s the line, at least for now.
US and UK legal landscapes in the AI era
The split between the US and UK is shaping how studios, tech companies, and creators handle AI-assisted work. In the US, human authorship is still required for copyright—even if AI does most of the heavy lifting.
This makes companies think twice before swapping out human labor for AI-generated content. If a work isn’t protected, it’s a risky commercial bet.
The UK, on the other hand, extends copyright to some computer-generated works. The author is the person who made the arrangements necessary for the work’s creation, which could mean designing prompts or curating outputs.
That framework is under review right now. Authorities are still figuring out where to draw the line between machine output and human control.
US approach: human authorship as the cornerstone
In the US, the main rule is clear: copyright protection requires human authorship. The DABUS cases, including Thaler’s push to register AI-generated images, didn’t meet this standard.
So works made purely by algorithms fall outside copyright protection. This affects how media companies value AI-generated content and whether they stick with human-driven storytelling or shift to automated pipelines.
UK framework and ongoing questions about control
The UK takes a different angle. If someone can show they made the right arrangements—picking inputs, curating outputs, or making big creative edits—the work can get copyright protection.
This approach keeps humans central to ownership, even if AI plays a big role. As these rules get re-examined, ongoing cases—especially those with artists heavily prompting and editing AI outputs—will shape how much human involvement is needed to claim ownership.
Takeaways for creators and policy ahead
Some experts say AI works best as a creative collaborator, not as a full replacement for human imagination. Right now, the latest rulings still protect human creativity, which keeps doors open for authorship, licensing, and the revenue that drives artists and organizations to keep pushing boundaries.
But let’s be honest, generative AI is moving fast. Policy needs to keep up, especially as new workflows emerge—where just a handful of human prompts and tweaks can create something pretty valuable and market-ready.
Courts and lawmakers are still figuring out where to draw the line between human and machine authorship. The main idea? Safeguard real human creativity, but let technology help out instead of pushing creators aside.
That ongoing tug-of-war is bound to influence how industries see, license, and profit from AI-assisted art in the coming years. It’s a complicated dance, and honestly, nobody’s totally sure where it’ll land.
Source article: This monkey selfie will protect you from AI slop