Abstract
This paper uses the lenses of law and ethics to identify limitations in existing regulatory approaches to artificial intelligence (AI). Regulation alone is an incomplete solution to the ethical challenges posed by AI. Instead, businesses should take the initiative and observe company- and industry-specific self-regulation. In particular, businesses should apply corporate social responsibility (CSR) principles in anticipatory compliance with future AI regulation. CSR provides a framework capable of addressing challenges inherent to AI, such as bias, ethics and privacy. This paper discusses the rapid emergence of AI and lays out the regulatory landscape, including the multiple regulatory gaps that span information asymmetry, jurisdiction, cross-border regulation, enforcement, data control, black-box AI, risk classification, accountability and regulatory overreach. After explaining the theoretical underpinnings of CSR, the paper identifies four families of CSR theories and three core CSR concepts found across all four families: transparency, accountability and sustainability. It then discusses how CSR principles can enhance the European Union Artificial Intelligence Act and proposes that businesses adopt a proactive, CSR-oriented approach to AI policy and practice.