We Need a Trust Fund - or Superfund - for AI

AI is everywhere. It is affecting people, society, and nature, and we need to start taking action - for Interior and beyond

High-voltage power lines are needed to accommodate the insatiable energy appetite of data centers for generative AI and similar applications, and they carry costs such as habitat fragmentation and destruction. Photo by Lasse Nystedt / Unsplash

BLUF (Bottom-Line Up-Front): The harms of artificial intelligence (AI), and of generative AI in particular, to people, society, and nature are significant and will impact everyone, including the US Department of the Interior and its mission. We should act proactively rather than wait for disaster.

Let’s dive in.

Impacts to People

A painter’s inspiring art was photographed and posted online a while back. Then a technology firm training a large language model (LLM) scraped that photo, and it was processed along with untold thousands of other such images. Some time later, the painter was not commissioned to provide illustrations for an organization’s website because the organization used that LLM to generate an image for pennies instead. Probably not quite as inspiring, and certainly not as original, but it worked. Maybe it was cheaper for the organization to take this path, but it was bad for the painter, and the world missed out on a piece of art the painter’s mind might have brought into the world.

Our story could also feature a traveler and writer whose experience visiting a faraway place and navigating a new culture was scraped from their blog by the LLM, only for those learnings—their learnings—to be churned out of the LLM by a content farm that generates income for someone else. Someone is getting paid, just not the person with the experience.

Or our example could be a translator or a musician, a freelance journalist or a filmmaker…anyone whose creativity, experience, learning, and works were used to train an AI model that then produces outputs that mimic original, human contributions but are clearly not the same, a distinction with legal implications. These examples are hypotheticals, but they are not fanciful—they’re generalizations of what is happening right now.