AI in Law and Policy - December 13, 2023

What was anticipated to be the final round of negotiations to cement the EU's AI Act ground to a halt on December 7, when negotiators failed to reach agreement after 24 hours of talks. Representatives of the member states, the European Commission, and the European Parliament broke to allow negotiators and staff to sleep, with talks expected to resume on Friday, December 8, Reuters reports.

At issue is whether AI companies should be permitted to self-regulate, a move that would benefit smaller, Europe-based AI companies such as Mistral and Aleph Alpha, according to the member states (France, Germany, and Italy) promoting the proposal. I have mentioned this notion previously, and I remain of the mind that such a policy could have disastrous consequences, as self-regulation, either by design or as a result of lax oversight, has a very spotty history. Frequently, a self-regulated industry is just an unregulated industry.

In late November, researchers from Google DeepMind and a number of universities published a paper disclosing a vulnerability and means of attack against ChatGPT by which one could, prior to a recent patch, cause ChatGPT to disclose its training data verbatim. The authors discuss the paper here, and you can find the full paper here. The discovery of this relatively simple exploit is notable for litigants in ongoing litigation against OpenAI because it demonstrates that OpenAI's models not only generate material substantially similar to the published content on which they were trained, but have in fact memorized that content, and it is readily, trivially retrievable.
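The paper's central claim, that the model reproduces training data verbatim rather than merely producing similar text, is the kind of thing that can be checked mechanically by comparing model output against a reference corpus. A minimal sketch of that idea follows; the corpus, the sample outputs, and the 8-word matching threshold are my own illustrative assumptions, not the paper's actual data or methodology.

```python
# Sketch: flag a model output as apparently memorized when it shares a
# sufficiently long verbatim word sequence (an n-gram) with a known corpus.
# All inputs below are invented for illustration.

def ngrams(text: str, n: int) -> set:
    """Return the set of word-level n-grams in `text`."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def appears_memorized(output: str, corpus: list, n: int = 8) -> bool:
    """True if `output` reproduces any n-word span verbatim from `corpus`."""
    out_grams = ngrams(output, n)
    return any(out_grams & ngrams(doc, n) for doc in corpus)

corpus = ["the quick brown fox jumps over the lazy dog near the river bank"]
verbatim = "she said the quick brown fox jumps over the lazy dog near them"
novel = "a slow red fox walked under an alert cat beside the stream"

print(appears_memorized(verbatim, corpus))  # True: shares an 8-word span
print(appears_memorized(novel, corpus))     # False: no verbatim overlap
```

Exact n-gram matching of this kind is deliberately conservative: it only catches word-for-word reproduction, which is precisely what distinguishes memorization from the "substantially similar" outputs already at issue in the litigation.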

That ChatGPT can be made to regurgitate copyrighted material and personally identifiable information has, of course, previously been demonstrated, but this vulnerability shows that a determined user can obtain training data at several orders of magnitude greater frequency than previously shown. If the case for regulation were not already compelling, this vulnerability reveals a tremendous gulf between OpenAI's stated commitment to data privacy and reality.
