We Need a Trust Fund - or Superfund - for AI
AI is everywhere, impacting people, society, and nature, and we need to start taking action, for Interior and beyond
BLUF (Bottom-Line Up-Front): The harms of artificial intelligence (AI), in particular generative AI, to people, society, and nature are significant and will impact everyone, including the US Department of the Interior and its mission. We should be proactive in our action rather than waiting for disaster.
Let’s dive in.
Impacts to People
A painter’s inspiring art was photographed and posted online a while back. Then a technology firm training a large language model (LLM) scraped that photo, where it got processed along with untold thousands of other such images. Some time later, the painter was not commissioned to provide illustrations for an organization’s website because the organization used that LLM to generate an image for pennies instead. Probably not quite as inspiring, and certainly not as original, but it worked. Maybe it was cheaper for the organization to take this path, but it was bad for the painter, and the world missed out on a piece of art their mind might have brought to the world.
Our story could also be a traveler and writer whose experience visiting a faraway place and navigating a new culture was scraped from their blog by the LLM, only for those learnings—their learnings—to be churned out of the LLM by a content farm that generates income for someone else. Someone is getting paid, just not the person with the experience.
Or our example could be a translator or a musician, a freelance journalist or a filmmaker…anyone whose creativity, experience, learning, and works were used to train an AI model that then produces outputs that mimic original, human contributions, but are clearly not the same, with legal implications. These examples are hypotheticals that are not fanciful—they’re generalizations of what is happening right now.
(mis-)Understanding Reality
Somewhere out there is a student who thinks the U.S. Department of Health and Human Services should be a valuable source of information about healthcare. Seems reasonable! But what if that student “learns” a host of completely untrue facts from a report that was written with extensive use of generative AI that hallucinated references and conclusions from research? Who is going to help that student unlearn this false information? Who will stop the further spread of falsehoods arising from this report or the tools that generate the lies?
What about an innocent criminal defendant whose lawyer uses genAI to produce error-riddled legal documents meant to defend them, only for the client to go to prison as a result? Who will pay for the harm to this person? Maybe it’s not a criminal trial, but something like a probate case in Indian Country, where there have been suggestions that AI will be used to address case backlogs. Even if the errors are caught—and many have been—who pays for the cleanup of the lawyers’ sloppy work? Right now, the judicial system does.
Multiply these real-life examples of basic errors and error propagation, perpetuated by generative AI often without any oversight or immediate consequences, by untold millions of cases every day. Think about how much information pollution is being released into our collective understanding of reality. What is the cost of becoming ever more detached from reality because of hallucinations?

Harm to the Planet
It might not be Winnie the Pooh’s Hundred Acre Wood, but what about that woodlot where people used to walk, or deer roamed, or woodpeckers and butterflies flitted about, but that was just cleared for a power transmission line serving a data center a dozen miles away? Maybe the power source is an aging coal-fired plant being forced to stay open in the name of AI; maybe it is a renewable source that won’t be used to power homes. Those details are only marginally relevant when the woods are already gone.
You might have a particular woods—or favorite open space—in mind as you read this, asking more generally, What about AI and the planet? The increase in energy production needed to run the data centers powering genAI models is large and ever-growing. That means impacts to climate, and in the context of current federal priorities for oil and gas production, especially on the federal estate, those impacts are expected to be immense. Water, whether it is used to cool data centers or to manufacture the hardware on which AI models run, is being drawn down at an accelerating rate. GenAI is leading us to a hotter planet with less water for people and nature—plus the local impacts to ecosystems as lands are cleared for data centers, power lines, and power plants.
GenAI and Interior
The costs of genAI to society—the costs to people, the costs to our understanding of reality, the costs to the planet—are likely enormous. They may not at first glance seem relevant to the Department of the Interior, but they are. That’s because the Department has significant responsibilities for:
- energy and minerals, such as the fossil or renewable sources for power and the minerals and materials needed to construct the hardware from boards to power transmission;
- water and wildlife, whether it’s fish and mussels that rely on connected waters, the common species we know around us every day, or the threatened and endangered species whose numbers will only grow as habitats decline;
- science and understanding of the world around us, such as natural hazards (already prone to conspiracy theories), water, ecosystems, geology, engineering, and much more; and
- serving Tribes and their members in areas where AI may make an appearance, whether in trust issues like probate, in education, in climate adaptation, or elsewhere.
Who is going to pay for the extra work that Interior must do to meet its obligations for resources, energy, science, or people? The Fish and Wildlife Service will have more imperiled species to protect and recover. The Bureau of Land Management and Bureau of Ocean Energy Management will have more mining and energy production and transmission to manage. Indian Affairs will have more to do with AI and education, or genAI and Indian arts and crafts. The list goes on. Whether the Department uses AI tools, or is tasked with addressing the costs of AI up-front or cleaning up after the damages are done (if they can be undone), AI will matter to the Department’s future.
So what do we do with this?
First, let’s stipulate right up front that it’s not Interior’s job alone to address the costs of AI in the United States. But the Department will have to be part of the solution. Second, and relatedly, the agency would be foolish to pretend that there is no cost to the people and resources it serves, or that meeting this emerging and growing need won’t mean changing Interior and how it operates. This is significant enterprise risk, and leaders must take steps to treat such risks.
The risks of genAI to people, society, and the planet have led many people to ask: should we ban genAI? There are some convincing arguments that a ban might be the best course of action. But, at least until something catastrophic happens, a ban seems about as likely as a snowball lasting through a summer in Yuma.
And a ban doesn’t acknowledge that there are good and valid applications of AI. Generative AI art might be useful in expressing ideas that haven’t been expressed before, and there is an emerging sector of AI artists—though how society ultimately receives them remains to be seen. (Whether these visual or written arts will have the same breadth of creativity as human-first works, rather than regressing to the mean, seems doubtful, but that’s a discussion for another day.) With human oversight and careful fact-checking, reports and other written creations may be produced more efficiently and effectively. Data analyses may improve sustainability and advance conservation through the careful application of AI. Governments and organizations may be able to provide better service, across languages, to their constituents than would be feasible without AI.
So what’s an alternative? Regulate AI and try to ensure the benefits of AI outweigh the costs? We don’t yet have full estimates of the “Social Cost of AI,” that is, a dollar value of the costs per million tokens generated (akin to the Social Cost of Carbon that prices the cost of each ton of CO2 or other greenhouse gases), but researchers could develop sound estimates. The realities of regulation in the U.S. today argue against this approach, but the Social Cost of AI would surely be a valuable number to know and understand.
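To see what a “Social Cost of AI” could look like in practice, here is a deliberately simple back-of-the-envelope sketch. Every number in it—the per-million-token cost and the token volume—is an invented placeholder for illustration, not an estimate from any study:

```python
# Purely illustrative: all figures below are hypothetical assumptions.
# The Social Cost of Carbon prices each ton of CO2; a Social Cost of AI
# would analogously price each million tokens generated.

def social_cost_of_ai(tokens_generated: int,
                      cost_per_million_tokens: float) -> float:
    """Dollar-valued externality for a given volume of generated tokens."""
    return tokens_generated / 1_000_000 * cost_per_million_tokens

# Suppose researchers settled on $0.50 per million tokens (invented number)
# and a model generated 2 trillion tokens in a year:
annual_cost = social_cost_of_ai(2_000_000_000_000, 0.50)
print(f"${annual_cost:,.0f}")  # → $1,000,000
```

The point of the sketch is only that, once researchers produce a defensible per-token cost, converting usage into damages is simple arithmetic; the hard work is in the estimate itself.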
A Trust Fund for AI
Here’s an approach that may be palatable for helping address the harms of AI: we could establish an AI Trust Fund for America. The Trust Fund could provide compensation in the three main areas of harm: it could compensate the people whose work and imagination create things of value but who are harmed by the unchecked proliferation of genAI; it could fund people and systems to clean up the slop that is being generated and contaminating bona fide human knowledge and understanding of reality; and it could compensate for, or otherwise mitigate, damages to the planet.
We might think of the Trust Fund a little like Superfund, which has been a model for cleaning up hazardous waste sites in the US for decades. But instead of the money being used to remove and remediate hazardous compounds from industrial production or accidents, the hazardous material to be treated by the Trust Fund is the output of LLMs. The analogy isn’t perfect, since Superfund targets specific places and physical or chemical contaminants rather than conceptual and intellectual harms dispersed across the country (and indeed, around the world), but the idea is close enough.
Who would fund the Trust Fund? Revenue would come from both the producers and the users of genAI tools. The first stream would come from the tech giants like OpenAI, Anthropic, Google, and others who develop and provide the technology, with contributions scaled to the size and impacts of their operations. Second, those companies would be responsible for the user-pays portion, collecting and submitting funds from a surcharge that scales with the amount of use by people and businesses.
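As a sketch of how those two revenue streams might combine, here is a toy calculation; the rates and figures are hypothetical assumptions chosen only to make the structure concrete, not proposals:

```python
# Illustrative only: both rates below are invented assumptions.

def producer_levy(annual_revenue: float, rate: float = 0.02) -> float:
    """Levy on a genAI provider, scaled to the size of its operations."""
    return annual_revenue * rate

def user_surcharge(tokens_used: int, per_million: float = 0.10) -> float:
    """Usage-based surcharge collected by providers and remitted to the Fund."""
    return tokens_used / 1_000_000 * per_million

# A hypothetical provider with $1B in annual revenue whose users consumed
# 500 billion tokens would remit both streams together:
total = producer_levy(1_000_000_000) + user_surcharge(500_000_000_000)
print(f"${total:,.0f}")  # → $20,050,000
```

Under this toy structure, the producer levy tracks the scale of the company while the surcharge tracks actual use, so heavy users bear more of the cost than light ones.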
The governance of the Trust Fund would require extensive discussion, but we know some principles. It would need to address the size of the fund and how funding would be distributed each year. To determine the annual revenue requirements—how much damage has been done and must be offset—and the distributions, an equivalent of the Natural Resource Damage Assessment could be valuable. It might use a variety of scientific and economic approaches to estimate the Social Cost of AI, borrowing from the work on the Social Cost of Carbon and other scholarship. The Fund’s governing board would probably have to include stakeholders and partners from sectors with direct and indirect impacts: government, technology companies, scientists (such as from the social, economic, and conservation sciences), unions, employee groups and guilds, and user groups. There are plenty of other questions to be answered before this could become a reality!
The Trust Fund and Interior
What would the Trust Fund mean for Interior? It would probably mean an influx of funding to help the Department address impacts to natural resources like ecosystems and to better manage growing energy demands. The parts of the Department dedicated to science and knowledge, like the US Geological Survey, might need, and now be able, to invest in reducing information pollution that interferes with the mission. Other agencies or organizations might take the lead on offsetting harm to artists, but Interior may have a role too: think of Indian arts and crafts and the impacts if genAI products undercut that market. Resource managers, scientists, and practitioners would probably need to invest time and effort in better data and evidence on resources and their status and trends—the kind of thing that the National Nature Assessment might have provided were it not cancelled. And if genAI tools are used at the Department, then Congress would need to ensure appropriations are sufficient to cover the increased costs.
Let’s Not Wait
We are at or nearing (did we already pass?) a critical juncture in how society approaches the use and expansion of genAI. We can clearly see a host of harmful “side effects” of the technologies for people, society, and the planet. Some people might say we’ll get around to action after something catastrophic happens; that it took events like the Cuyahoga River catching fire, or our national bird being on the doorstep of extinction before we took actions that led to laws like the Clean Water Act or the Endangered Species Act.
But instead, let’s try to learn from the past! There’s no reason we can’t go ahead, begin the conversation, and take the actions that will be needed to balance the costs and benefits of AI for everyone. Let’s figure out how a Trust Fund for AI would work and make it happen. It will matter for people, society, and the planet.

Like this post? Have a bone to pick? Want to discuss? Let us know! memos@nextinterior.org