How can recent advancements in artificial intelligence (AI) be leveraged by philanthropies? What does it take to implement these technologies responsibly within decision-making processes?
In the Stanford Social Innovation Review article “Could AI Speak on Behalf of Future Humans?” (February 5, 2024), Konstantin Scheuermann and Angela Aristidou discuss opportunities to leverage advancements in AI to include more voices in decision-making processes.
The authors explain that traditional collective decision-making is valuable for aggregating a multiplicity of perspectives, but it has limitations. It often focuses on including those directly impacted by a problem in the short term and, as a result, fails to include those who will be most impacted by the decision in the long term.
To address this gap, the authors advocate for including the “AI Voice” in collective decision-making processes as a way to make these missing voices heard and to consider how future generations might be impacted by today’s decisions. They explain that advancements in AI, specifically generative AI, have allowed outputs to be generated in new forms, including audio, video, and text. These AI-generated outputs can serve a productive function in decision-making processes by expanding who is included or how decisions are mediated. Given that advancements in AI are constantly evolving, they view the “AI Voice” not as one specific AI product or tool but rather as the outputs of the systems available to us.
A few key takeaways from the article:
- Many private sector organizations have already embraced the “AI Voice” in decision-making processes (e.g., Salesforce’s use of Einstein AI), but its use in the social and nonprofit sectors remains limited.
- The majority of generative AI tools focus on either (1) generating new data within the parameters of a specific training set or (2) making predictions based on a training set. As a result, subject-matter experts need to participate in decisions alongside the “AI Voice” to ensure complex and imaginative ideas are included.
- The “AI Voice” may play four roles within decision-making processes. First, it can act as a discussion Facilitator, developing discussion agendas that bring in new perspectives or helping participants stay on topic. Second, it may serve as a Consultant that analyzes or synthesizes data to support decision-making but does not have decision-making power. Third, as an Optimizer, it analyzes all of the stakeholders’ information and proposes pathways forward. Lastly, the Collaborator role allows the “AI Voice” to both participate in deliberations and make decisions.
- Using the “AI Voice” responsibly in decision-making processes requires “stakeholder and domain expert involvement, transparency, and AI literacy.” The authors emphasize the need to represent more stakeholders in training data and the critical role of domain experts in evaluating the accuracy and appropriateness of outputs. They call for transparency not only in the data used to train the AI system but also in how and where the “AI Voice” is used in decisions. Additionally, they stress the importance of both AI and data literacy so that decision makers understand the limitations of the “AI Voice” and know when outputs need to be validated.
The authors conclude by discussing how leveraging the “AI Voice” responsibly in decision-making processes can “create a more equitable collective future where future humans and nature can better thrive.”
Learn more by reading the full article here.