Meta Pushes Back On EU’s AI Code Of Practice

Meta has officially declined to sign the European Union’s voluntary AI Code of Practice, warning that the framework introduces significant legal risks for companies building open-source models.

According to Joel Kaplan, Meta’s VP of Global Policy, the company supports the broader goals of the EU’s AI Act but believes the current Code could “inadvertently harm open innovation” and make it harder for responsible developers to release open models like Llama 3.

Kaplan emphasized that Meta has been transparent and proactive in its AI safety efforts, sharing model details and evaluations with the community.

However, he noted that certain elements of the Code, such as obligations around disclosure and risk mitigation, could introduce legal uncertainty, particularly for open-source releases.

While Meta remains committed to working with EU regulators, it will not sign a framework that, in its view, undermines the very innovation it aims to protect.

Subscribe to Vavoza Insider to access the latest business and marketing insights, news, and trends daily. 🗞️
