The Soft Politics of AI Art
AI artist residencies may play a small part in shaping how we view the emerging technology
Today I published an article in The Verge about AI artist residencies.
At Mila, a leading AI research lab, an artist named Violeta Ayala created an AI-driven jaguar that tells stories of fire and kinship in the Bolivian Amazon. Her piece, Huk, greets exhibition visitors, selects individuals from the crowd, and delivers tailored, voice-generated reflections and stories. The experience is part myth, part machine.
These residencies and the art they enable are ending up in places like the Museum of Modern Art in New York and the Centre Pompidou in Paris. They form some of the public's earliest encounters with AI in everyday media. Those encounters help shape what we see as the natural purpose of AI: which uses are legitimate, exciting, or extractive. That public perception can, in turn, influence how new technologies are governed.
Perhaps a bit self-indulgently, I wanted to write this short post to reflect on the broader context in which the article sits. Some of the questions that came up in my reporting echo long-running themes in academic work on emerging technologies and their governance, and they can help clarify how cultural institutions like these may influence AI governance in the future.
Cultural governance happens first
There’s a well-established idea in science and technology studies that policy and regulation aren’t the first or only ways that new technologies are governed. Instead, we often see ‘cultural governance’ emerge earlier, through things like stories and media representations. These shape how people understand a technology: what it’s for, what it threatens, and who it belongs to.
Sheila Jasanoff and Sang-Hyun Kim’s concept of ‘sociotechnical imaginaries’ captures this well through the idea that communities build collective visions of desirable futures made possible by science and technology. These imaginaries are shaped in film, in fiction, and in galleries and public spaces.
That’s what drew me to this story. Artist residencies might seem peripheral to the AI debate. They’re not developing models or filing lawsuits, but they matter because they shape what feels legitimate and play a role in defining what AI means to people, not just what it does.
Framing as a form of power
In talking with curators, ethicists, and artists for the piece, I kept coming back to the idea of framing: a concept that spans media studies, political theory, and moral psychology. The way a technology is framed early on influences how it’s later debated in courts, in policy, and in public imagination.
AI that appears in a Discord meme might feel unserious, but AI framed as a tool of careful artistic inquiry, displayed at the Museum of Modern Art, might elicit different reactions entirely. That shift can eventually normalize new uses of technology.
One ethicist I spoke to called this “a kind of soft governance.” Another described the artists and institutions hosting these residencies as “norm entrepreneurs,” a term from international relations theory for actors who shift what is culturally acceptable before formal rules exist.
History rhymes
The pattern isn’t new. In 1908, the US Supreme Court ruled that piano rolls, an early music-reproduction technology, weren’t subject to copyright because they weren’t readable by the human eye. The cultural response from musicians and publishers was swift, strong, and negative. A year later, Congress passed the Copyright Act of 1909, establishing new rules for mechanical reproduction.
In this particular case, laws followed sentiment, and sentiment followed framing. Now, artist lawsuits against Stability AI, Midjourney, and others are winding through US courts. In the meantime, institutions like Villa Albertine, SETI, and Max Planck are funding artists to use AI tools, and encouraging them to reflect on authorship, ownership, and labor.
These programs don’t erase many artists’ concerns about generative models: the use of scraped training data, the loss of attribution, and the risk of eroding creative livelihoods. But they do play a role in shaping how audiences perceive AI art.
Of course, these dynamics don’t always follow a clean arc. Cultural perception doesn’t guarantee policy change, and symbolic framing can falter, especially when legal, economic, or political incentives push hard in the opposite direction. This piece isn’t a prediction or an answer, but a snapshot of one ecosystem engaging with a small part of a much bigger question.
What I found most interesting in reporting this piece is that no one I spoke to within these residencies framed them as advocacy. They didn’t claim to be pro- or anti-AI:
“We’re not choosing sides so much as opening space for inquiry,” says Mohamed Bouabdallah, Villa Albertine’s director. “Some residents may critique AI or explore its risks.”
Instead, they emphasized giving artists the space and time to figure out what this technology is to them and to the audiences with whom they’re engaging.
In the years ahead, judges and policymakers will weigh in, but by the time those decisions arrive, many of the symbolic battles may already be won or lost.