AI in Law & Policy | Quillbee Expert Commentary | Nov 15, 2023

Risks of LLMs, Along with Other Generative AI, and the FTC's Mandate

In early November the Federal Trade Commission submitted a comment to the United States Copyright Office sounding alarms that are likely familiar to readers. What's noteworthy about the filing isn't the copyright issues at stake in generative AI, since those are already being litigated, but rather the intersection between the risks posed by LLMs and other generative AI and the FTC's mandate. As models improve and proliferate, malicious actors will gain better tools for deceptive and predatory practices, and businesses will face new temptations to use AI for unfair competition. These risks extend far beyond intellectual property, and particularly when it comes to protecting consumers and vulnerable businesses, the attention of another federal agency is a welcome sign.
As expected, on October 30 the Biden administration released an executive order directing executive agencies to regulate the use of AI in the federal bureaucracy and setting forth principles for government oversight of the technologies. However, as ABC reports, the order is notably lacking in teeth. Responsible practices within the government are certainly welcome, but what's needed is strong regulation of the industry itself. As with any burgeoning industry, a powerful, early regulatory response may be the best weapon the public has to forestall regulatory capture. With a technology as uniquely vulnerable to weaponization against individuals as generative AI, the necessity of avoiding regulatory capture is impossible to overstate, as the alternative is an industry perfectly positioned to run roughshod over fundamental rights to privacy. You can find the order here.
