Institutionalizing Ethics in AI Through Broader Impact Requirements

Originally published in Nature Machine Intelligence, February 2021.

Turning principles into practice is one of the most pressing challenges of artificial intelligence (AI) governance. In this Perspective, we reflect on a governance initiative by one of the world’s largest AI conferences. In 2020, the Conference on Neural Information Processing Systems (NeurIPS) introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research. Drawing insights from similar governance initiatives, including institutional review boards (IRBs) and impact requirements for funding applications, we investigate the risks, challenges and potential benefits of such an initiative. Among the challenges, we list a lack of recognized best practice and procedural transparency, researcher opportunity costs, institutional and social pressures, cognitive biases and the inherently difficult nature of the task. The potential benefits, on the other hand, include improved anticipation and identification of impacts, better communication with policy and governance experts, and a general strengthening of the norms around responsible research. To maximize the chance of success, we recommend measures to increase transparency, improve guidance, create incentives to engage earnestly with the process, and facilitate public deliberation on the requirement’s merits and future. Perhaps the most important contributions of this analysis are the insights we can gain regarding effective community-based governance and the role and responsibility of the AI research community more broadly.

Growing concerns about the ethical, societal, environmental and economic impacts of AI have led to a wealth of governance initiatives. In addition to traditional regulatory approaches, complementary forms of governance can help to address these challenges [1]. One such form is community-based technology governance, or ‘governance from within’ [2]. Here, measures to influence research based on societal considerations develop from within the scientific community and are implemented at the community level. A recent initiative of this kind comes from one of the world’s largest AI conferences, NeurIPS. In early 2020, the committee announced a new submission requirement: submitting authors must now include a statement that addresses the broader impacts of their research, including its ‘ethical aspects and future societal consequences’ [3]. The requirement has triggered mixed reactions from the AI research community, with discussions about its purpose and effectiveness emerging on social media and elsewhere [4]. Although few deny that there is a real need to identify and address the ethical and societal challenges of AI, the diversity of reactions illustrates that there is little consensus on the right approach, or on what the responsibilities of individual researchers or the research community (including conferences) should be in the process [5,6]. It also highlights the need for further discussion of the purpose, implementation and effects of the NeurIPS requirement and similar governance measures.

To continue reading this article, see the original publication in Nature Machine Intelligence.
