Generative Artificial Intelligence (Generative AI) encompasses a range of models that can create new content based on their training data. Generative Pre-trained Transformers (GPTs), for example, enable computers to perform tasks that mimic certain human capabilities, such as interpreting language and solving problems.
Generative AI has long been in development, but it wasn't until 2022 that it became widely used by the public. Generative AI tools such as ChatGPT, Bard, and Claude have attracted enormous attention and continue to be used across several sectors, including by governments and nonprofit organizations. However, there has been minimal research on the role of philanthropy in the generative AI era.
What does generative AI mean for philanthropy? How can funders accelerate the opportunities of generative AI for society while minimizing the risks?
In the article “10 Ways Funders Can Address Generative AI,” published September 28, 2023, in the Stanford Social Innovation Review, Kelly Born explains that the majority of charitable foundations currently funding AI initiatives have focused on understanding the opportunities and risks AI brings to societies over the long term. However, few philanthropies are funding actors currently working to advance the ethical use of generative AI or to minimize its present-day risks.
Born then provides 10 suggestions for funders to support the ethical use of generative AI in the United States. These include:
- Government leaders need additional guidance and protective measures as they adopt AI in their work. Many public servants are already using AI in government functions, and more work is needed to ensure these tools are implemented responsibly.
- Government and nonprofit actors need training on how best to leverage AI in their work. Most AI training programs focus on the private sector; tailored programs are needed not only for government use of AI in general but also for specific workstreams (e.g., how AI can affect health or climate change).
- Much of the data and infrastructure on AI and its usage remains siloed or is not retained over the long term. Greater transparency and data access around AI use are essential if governments and societies are to understand and mitigate its risks. More research is also needed to enable transparent technical infrastructure.
- Philanthropies play an important role in advocating for new AI research, particularly research on its risks. They need not only to fund AI research themselves but also to press AI labs to fund research on emerging risks.
- Multi-stakeholder collaboration is key to advancing responsible AI ecosystems. Institutions are needed that can bring together AI labs, governments, and the public, and that can implement new approaches to mitigating the potential risks of specific AI developments.
- The public should be engaged as key stakeholders in AI decisions through participatory methods. More best practices are needed for public engagement and ethical AI decision making.
- The majority of the AI industry is housed in the private sector. More AI models are needed for public purposes.
- Governments struggle to keep pace with rapid technological advances in the private sector. First, new standards and methods are needed to effectively assess the ethical implications of AI technologies. Second, governments need more external experts across sectors whom they can rapidly turn to when weighing AI-related decisions. Third, academic institutions need to update their curricula to cover the ethical implications of AI development.
- Laws and regulations need to be updated to account for the risks AI poses to society.
- The story of AI needs to change: “The most upstream problem of all is the question of how we, as a society, view the role of technology in our lives. How do we tell the story of generative AI?”
Born concludes by discussing the importance of a proactive (rather than reactive) approach to regulating AI and how advocacy can make an impact. She emphasizes the value of “a clear vision and narrative to help Americans understand and determine the kind of economy and society we want to transition toward.”
Read the full article HERE.