WRI’s Approach to Responsible Artificial Intelligence

Communities around the world face an increasingly daunting road to a thriving and sustainable future. In this context, artificial intelligence (AI) is often cast as either a hero or a villain. At WRI, we have seen many technology cycles come and go, creating both transformational value and new challenges for those we serve.

We believe that steering toward value and away from harm requires a rigorous and responsible approach to AI innovation. To harness technical progress in favor of people, nature and the climate, we have adopted a set of principles and practices to guide our work.

Responsible AI Principles

We operate by three principles that are grounded in our Institutional Values and commitment to open data, and that have been refined through more than a decade of innovation and product delivery:

1. Be curious: We encourage our staff to design experiments and measure results instead of trusting the hype or avoiding new technologies altogether. We move forward only when the data shows potential for positive impact and manageable risk. Most experiments fail; some succeed and progress to production; all produce valuable learning.

2. Be user centered: We build new technology to solve problems for people, nature and the climate. We do not build technology for novelty’s sake or in ways that create large risks for those we serve. When people and their needs are the central focus, how AI should and should not be used becomes clear.

3. Be accountable: Using AI doesn’t mean absolving humans of responsibility or replacing their efforts entirely. We seek to increase our accountability and environmental stewardship while using AI to augment human capacity, ingenuity and care, instead of automating it away.

Responsible AI Practices

Principles alone are not enough. To put them into action, WRI follows a set of practices embedded in our day-to-day work. These practices ensure that we can harness the promise of AI while avoiding its perils:

1. Evaluate and learn at every stage of product development. We conduct internal evaluations of new AI-based products to assess accuracy, reliability and cost, which in turn inform development. However, this is only the start of the journey: just because something performs well on internal benchmarks doesn’t mean it will create value for users in the real world. After launch, we monitor usage logs and error rates and perform manual spot checks to identify errors or misuse. Once applications are stable, we conduct formal impact evaluations to understand how they perform outside the lab. If a product falls short at any of these stages, we adjust or stop development, as we have already done with more than a dozen pilots.
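
As a simplified illustration of post-launch monitoring (not WRI’s actual tooling), a routine check might roll usage logs up into per-endpoint error rates along these lines; the log schema and alert threshold are assumptions for the example.

```python
import json
from collections import Counter

ERROR_RATE_ALERT = 0.05  # assumed review threshold; tuned per product in practice


def summarize_usage(log_path: str) -> dict:
    """Roll up a JSON-lines usage log, assuming one event per request with
    fields {"endpoint": "...", "status": "ok" or "error"}."""
    totals, errors = Counter(), Counter()
    with open(log_path) as log:
        for line in log:
            event = json.loads(line)
            totals[event["endpoint"]] += 1
            if event["status"] == "error":
                errors[event["endpoint"]] += 1
    return {
        endpoint: {
            "requests": count,
            "error_rate": errors[endpoint] / count,
            "needs_review": errors[endpoint] / count > ERROR_RATE_ALERT,
        }
        for endpoint, count in totals.items()
    }
```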

2. Measure and manage the environmental and financial costs of using AI. We keep a close eye on the resources our AI-powered tools use. For most of our tasks, financial cost is a leading, if incomplete, indicator of environmental cost, with lower bills usually reflecting lower energy and water use. To make our work as efficient as possible, we follow four common practices (see the simplified sketch after this list):

  • All new products ship with compute and cost metering enabled so that we can track environmental and financial costs in real time.
  • We default to the most computationally efficient model that still meets quality targets.
  • We save results for repeat queries, reducing net reliance on AI.
  • We inform external users which applications use AI and encourage them to make focused requests and avoid unnecessary queries.
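
To make the metering and caching practices concrete, here is a minimal sketch, not WRI’s production code, that wraps a generic model-calling function with a running cost meter and a result cache; the model names, per-token prices and call signature are assumptions for the example.

```python
# Minimal sketch: meter spend (a rough proxy for energy use) and cache repeat queries.
# Model names, prices and the call_model signature are illustrative assumptions.

PRICE_PER_1K_TOKENS = {"small-model": 0.0002, "large-model": 0.010}  # assumed prices in USD


class MeteredAssistant:
    def __init__(self, call_model):
        # call_model is any injected function: (model_name, prompt) -> (answer_text, tokens_used)
        self._call_model = call_model
        self._cache = {}          # repeat queries are answered from memory, not the model
        self.total_tokens = 0
        self.total_cost_usd = 0.0

    def ask(self, prompt, model="small-model"):
        # Default to the cheaper, more computationally efficient model tier.
        key = (model, prompt)
        if key in self._cache:
            return self._cache[key]
        answer, tokens = self._call_model(model, prompt)
        self.total_tokens += tokens
        self.total_cost_usd += tokens / 1000 * PRICE_PER_1K_TOKENS[model]
        self._cache[key] = answer
        return answer


# Usage: plug in any model-calling function and read the meters after a batch of requests.
def fake_call(model, prompt):
    return f"[{model}] answer to: {prompt}", len(prompt.split()) * 4

assistant = MeteredAssistant(fake_call)
assistant.ask("Summarize this land-use dataset.")
assistant.ask("Summarize this land-use dataset.")  # served from cache: no new tokens or cost
print(assistant.total_tokens, round(assistant.total_cost_usd, 6))
```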

As an organization at the cutting edge of climate policy, we are no stranger to trade-offs around carbon-intensive but mission-critical activities, like travel and using commercial office space. As we launch more AI-based products in the coming months and years, we plan to expand reporting on our AI use via our Sustainability at WRI page.

3. Clarify methods and product maturity for external users. AI is an experimental technology that requires real-world testing to make progress. However, this work is not without risk. To balance the need for experimentation and acceleration with responsible safeguards, we aim to clearly communicate two things to our users: First, we clarify when AI is used and provide documentation of the methods, underlying architecture and key design choices. Second, we clearly label our products based on their level of maturity:

Stage | What It Means | User Guidance
Experimental | Meets baseline quality, cost and user testing benchmarks, but is not yet stable and edge cases may be untested. | Try it with extreme care and let us know what you think. We want to learn from your experience with the tool and will actively consider user feedback in development.
Beta | Exceeds baseline quality, cost and user testing criteria and is roughly stable quarter-to-quarter. | Test integrating the tool into real-world processes, but with active monitoring.
General Release | Exceeds baseline quality, cost and user testing criteria and is roughly stable year-to-year. | Integrate the tool into operational workflows while maintaining regular monitoring.

4. Set clear rules of the road for internal users. In addition to developing AI-based products, WRI encourages staff to use AI tools responsibly to support their research, engagement and communications work. We have developed institute-wide guidelines to manage this usage. Two ground rules apply to every use case:

  • Keep a human in the loop to review outputs.
  • Do not put sensitive, confidential or private information into prompts.

Day-to-day work follows a simple traffic-light framework: “Green” tasks like improving the readability of text or documenting code are permitted and encouraged. “Yellow” tasks like internal literature reviews or translations are permitted, but only if the user has the expertise to assess the quality of the outputs and fully reviews them. “Red” tasks, like generating images or writing entire communication products, are strongly discouraged. We will update this guidance regularly as AI capabilities evolve.

5. Build human capacity and community. We formed an Applied AI Group within our Data Lab to set guidelines, lead AI experiments, and support product development and evaluation. While this is only the seed of a larger, whole-of-WRI effort, dedicated technical capacity and expertise are critical, especially for organizations seeking to both leverage AI internally and produce AI-driven products. At the same time, building our own capacity alone is not enough. We work with expert partners like the Patrick J. McGovern Foundation, Google.org and the Bezos Earth Fund, as well as technical friends like Development Seed, Fenris, the Pew Research Center and many more, who help us learn and deliver.
