AI in Law & Policy Quillbee Expert Commentary August 2 2023

AI in Law and Policy – Safety & Security

The White House announced in late July that it had received voluntary commitments from Amazon, Google, Meta, OpenAI, and other tech giants regarding principles of safety, security, and trust in emerging technologies, including AI. You can find the press release here. The announcement comes ahead of the Senate AI Insight Forums scheduled for later this year, an effort by lawmakers to harness the acumen of industry experts in developing a future regulatory framework that Senate Majority Leader Chuck Schumer and Senate Democrats are calling the SAFE Innovation Framework. Leader Schumer’s press release is here.

Both announcements reflect growing awareness of the potential for AI platforms to exhibit bias, to generate content that could be used to further unlawful discrimination, and to infringe protected intellectual property, a possibility that has already resulted in litigation against OpenAI, Stability AI, and others. Beyond the usual expense and frustration associated with class actions, each successive lawsuit, even if none produces a landmark ruling, increases the risk that a defendant AI company will be forced to disclose information about its datasets and training methods that it would much prefer to keep out of the public eye. The litigation is unlikely to subside until Congress, or perhaps the Supreme Court, acts; in the meantime, AI is expected to be a major topic at the ABA’s annual meeting in August.

A principal drawback of both the President’s and the Senate’s current proposals is that they depend in large part on AI companies to self-regulate, a practice with a decidedly spotty history. Over the next several months, scrutiny is likely to fall not only on the risks associated with rapid advances in AI itself, but equally on questions of regulatory competence and the trustworthiness of those who control emerging AI technology.
