Abstract
Artificial intelligence (AI) is recognised as a strategically important technology that can contribute to a wide array of societal and economic benefits. However, it is also a technology that may present serious challenges and have unintended consequences. Within this context, trust in AI is recognised as a key prerequisite for the broader uptake of this technology in society. It is therefore vital that AI products, services and systems are developed and implemented responsibly, safely and ethically.
Through a literature review, a crowdsourcing exercise and interviews with experts, we aimed to examine evidence on the use of labelling initiatives and schemes, codes of conduct and other voluntary, self-regulatory mechanisms for the ethical and safe development of AI applications. We draw out a set of common themes, highlight notable divergences between these mechanisms, and outline anticipated opportunities and challenges associated with developing and implementing them. We also offer a series of topics for further consideration to best balance these opportunities and challenges. These topics present a set of key learnings that stakeholders can take forward to understand the potential implications for future action when designing and implementing voluntary, self-regulatory mechanisms. The analysis is intended to stimulate further discussion and debate across stakeholders as applications of AI continue to multiply across the globe, particularly in light of the European Commission's recently published draft proposal for AI regulation.
| Original language | English |
|---|---|
| Place of Publication | California, United States |
| Publisher | RAND Corporation |
| Commissioning body | RAND Corporation |
| Number of pages | 136 |
| Publication status | Published - 2022 |