The senators weren’t happy with the “seemingly minimal” protections against fraud and cybercrime in Meta’s AI model.
Two United States senators have questioned Meta chief executive Mark Zuckerberg over the tech giant’s “leaked” artificial intelligence model, LLaMA, which they claim is potentially “dangerous” and could be used for “criminal tasks.”
In a June 6 letter, U.S. Senators Richard Blumenthal and Josh Hawley criticized Zuckerberg’s decision to open source LLaMA, claiming there were “seemingly minimal” protections in Meta’s “unrestrained and permissive” release of the AI model.
Meta released its advanced AI model, LLaMA, w/seemingly little consideration & safeguards against misuse–a real risk of fraud, privacy intrusions & cybercrime. Sen. Hawley & I are writing to Meta on the steps being taken to assess & prevent the abuse of LLaMA & other AI models.
While the senators acknowledged the benefits of open-source software, they concluded that Meta’s “lack of thorough, public consideration of the ramifications of its foreseeable widespread dissemination” was ultimately a “disservice to the public.”
LLaMA was initially given a limited online release to researchers but was leaked in full by a user of the imageboard site 4chan in late February, the senators noted.
Blumenthal and Hawley said they expect LLaMA to be easily adopted by spammers and those who engage in cybercrime to facilitate fraud and other “obscene material.”
The two contrasted LLaMA with OpenAI’s ChatGPT-4 and Google’s Bard, two closed-source models, to highlight how easily the former can generate abusive material.
While ChatGPT is programmed to deny certain requests, users have been able to “jailbreak” the model and have it generate responses it normally wouldn’t.
In the letter, the senators asked Zuckerberg whether any risk assessments were conducted prior to LLaMA’s release, what Meta has done to prevent or mitigate damage since the leak and how Meta utilizes its users’ personal data for AI research, among other requests.
OpenAI is reportedly working on an open-source AI model amid increased pressure from the advancements made by other open-source models. Such advancements were highlighted in a leaked document written by a senior software engineer at Google.
Open-sourcing the code for an AI model enables others to modify it to serve a particular purpose and allows outside developers to make contributions of their own.